The FBI Says How It ‘Legally’ Pinpointed Silk Road’s Server (wired.com)
226 points by nikcub on Sept 6, 2014 | 157 comments

The theory I posted last year on Stack Exchange [0]:

> The above environment variables were being dumped in the source of the login page on Silk Road. They contained the real IP address of the server.


Second - the FBI's position that "fiddling" with "miscellaneous" input characters on the login page is not illegal access is good news - it means the next person charged under the CFAA for "exceeding authorization" by fuzzing will have a precedent to cite.

edit: To add a third point: If you are hosting a Tor hidden service, do it inside a virtual machine. Put it behind a gateway that acts as an isolating proxy. Clients/users should do the same, as it also protects against malware attacks (better yet, freeze a VM snapshot and restore it each time you need it). Whonix[1] does this - although it is easy to set up yourself (I use OpenBSD as the gateway; it's much slimmer than Whonix, with a whole lot less going on).

[0] http://security.stackexchange.com/a/43280

[1] http://www.whonix.org
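For what it's worth, the gateway policy described above can be sketched as a pf ruleset. This is a hypothetical minimal example - the interface names, VM address, and SOCKS port are assumptions for illustration, not a tested configuration:

```
# /etc/pf.conf on the OpenBSD gateway (hypothetical sketch)
ext_if = "em0"        # public interface
int_if = "em1"        # private net shared with the service VM
vm     = "10.0.0.2"   # the hidden-service VM

block all                                          # default deny, both ways
# the VM may reach only the Tor SOCKS port on the gateway itself
pass in on $int_if proto tcp from $vm to ($int_if) port 9050
# only the tor daemon may originate traffic on the public side
pass out on $ext_if proto tcp user _tor keep state
```

The effect is that a compromised VM cannot speak to the clearnet directly; anything it emits has to enter Tor on the gateway.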

> If you are hosting a Tor hidden service, do it inside a virtual machine. Put it behind a gateway that acts as an isolating proxy.

While this is good advice and will help you in the event of your server accidentally leaking debugging/config information, it won't help if someone is able to get the hidden service to make a network request. For example, if they can get it to send an email or check if a page is online, then that email will obviously go through the gateway. And obviously if someone gains arbitrary code execution, they can just Google "my IP" and see the externally facing IP.

Some other useful advice: if you're setting up a Tor hidden service, make sure the HTTP server only accepts requests from the Tor network. The FBI claims that when they put the $_SERVER['SERVER_ADDR'] IP in their browser they got the SR home page right on port 80, which is extremely poor security on SR's part. Someone running a distributed scan of all web servers on the Internet could have found it on their own within a few weeks or less. And in fact, this is becoming more common as a technique to identify origin servers hidden behind reverse proxies and CDNs.
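A minimal sketch of that advice (paths and the loopback port are illustrative): map the hidden service's virtual port 80 to a listener bound to 127.0.0.1 only, so nothing answers on the public interface.

```
# /etc/tor/torrc (hypothetical)
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080

# nginx server block (hypothetical): listen on loopback only,
# never on 0.0.0.0, so Internet-wide scans find nothing on port 80
server {
    listen 127.0.0.1:8080;
    root   /var/www/site;
}
```

An Internet-wide scan like the one described would then see a closed port rather than the SR home page.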

That is why you have it segmented:

> Put it behind a gateway that acts as an isolating proxy.

You don't NAT - you forward the port required through the gateway machine and to the virtual machine. Either terminate Tor on the gateway and forward the web port, or terminate Tor on the web server and forward the Tor traffic on the gateway.

In that case nothing can request out from the web server. If your server needs to make requests, such as getting the latest bitcoin price - you do that on another server and run a queue that will pull the data over.

That's correct. Most of these guys are getting caught because they didn't believe they would eventually have to face an enemy with so many resources (FBI/NSA).

I believe that avoiding detection online can be done if you are very strict about your policies.

I don't believe DPR had the knowledge to understand the hows and whys of the setup you're describing, unlike skilled engineers[1] who design multi-layered approaches.

[1] http://www.daemonology.net/blog/2014-04-09-tarsnap-no-heartb...

> they didn't believe that will have to eventually face an enemy with so many resources (FBI/NSA). I believe that avoiding detection online can be done if you are very strict about your policies.
I think the NSA has sufficient resources and incentive to mount a sybil/correlation attack on the Tor network.

> run a queue that will pull the data over.

Are there standards for these kinds of general pseudo-devices in virtual machines? (Or would you use the cut & paste buffer, for example?)

To my knowledge, secure sites use(d) hardware methods to guarantee a one-way channel (snip half an Ethernet cable, override peer/auto-detection, and then use UDP).

You'd host the queue service on a separate machine on the private network and communicate with it using either HTTP or standard TCP (all firewalled to restrict everything else)

So you have your gateway virtual machine which has one interface to the public web and one interface to the internal network. On that network you have two or more machines:

a) is the web server, which can't make any requests out to anywhere except other machines on the same local network. It can receive Tor or HTTP traffic in from the gateway.

b) other servers hosting local services, which can't make any requests out at all but can receive queries from the web server on their http/queue port. A database server for your web application would also sit on a machine like this.

You can put b) on yet another separate network behind the web server if they don't need to route out over Tor (or even if they do).

You're slicing the tasks up and isolating them on separate machines to minimize the attack surface. Someone breaking into the web server will not be able to make any queries out to the web or to Tor, and will only be able to query the local service servers in b), such as the queue.

RabbitMQ has an HTTP interface, 0MQ uses TCP.
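The pull pattern described above can be sketched with nothing but the standard library. This is a hypothetical toy (names, port, and the price value are made up): a "price service" on the private network caches the latest value, and the web server - which has no outbound Internet access - polls it over plain TCP.

```python
import socket
import threading

def serve_price(host, port, price, ready):
    """Hypothetical price service on the private network: answers one
    connection with the latest cached value, then exits (toy version)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    ready.set()                      # signal that we are accepting
    conn, _ = srv.accept()
    conn.sendall(price.encode())
    conn.close()
    srv.close()

def pull_price(host, port):
    """What the web server runs: a one-shot pull from the queue box.
    No request ever leaves the private network."""
    with socket.create_connection((host, port), timeout=5) as s:
        return s.recv(64).decode()

if __name__ == "__main__":
    ready = threading.Event()
    t = threading.Thread(target=serve_price,
                         args=("127.0.0.1", 9555, "482.10", ready))
    t.start()
    ready.wait()
    print(pull_price("127.0.0.1", 9555))  # prints the cached price
    t.join()
```

In a real deployment the service would sit on the b)-tier machine and the firewall would only permit the web server to open that one port.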

The other way to do it is to isolate every machine from every other machine and implement a SOA over Tor.

Taking the example of your web server needing to know the latest bitcoin price, you would implement an API on another hidden service and restrict access to it from only your web server. The web server would then have to be allowed to make queries out over Tor, but this can remain on the Tor network and can again be restricted to only your hidden 'proxy' services.

Most people associate Tor and hidden services with sites being slow and down all the time, but that is more a function of the level of experience in the field. Tor hidden sites have a different threat model so the ideal architecture is different to most common network architectures.

I personally like the idea of building a machine that terminates Tor (exclusively) into VMs, allowing no other outbound (non-whitelisted) traffic.

The biggest and most obvious threat to a lot of these hidden services are application-layer attacks.

You should not be using platforms that have the ability to do vast inspection of their runtime environment or make arbitrary outbound requests.

I'm not necessarily a fan of the "SOA-over-Tor" approach for something like a Bitcoin price: the explicitly-whitelisted bitcoin-price-checker-service communicating over a small-surface API (0MQ, Rabbit (albeit this is a bit of a larger attack surface, internally)) to another VM that has externally-terminated Tor-only outbound internet access is probably easier to work with.

I should spin up a CoreOS distribution with all guest VM outbound access turned off and try out host-level Tor termination.

True, this would be much more secure. My eyes glazed over the "isolating proxy" part.

By the way, upon re-reading the FBI's full report, it looks like they may not have found the IP address through the environment variable leak; rather, some of its resources (like its CAPTCHA image) were actually being served from its clearnet IP address.

If your server needs to make requests, can/should it be doing them through tor, and would that be sufficient protection?

What do you mean by "terminate"? I know it means "end", but I'm not getting the gist of what you mean here.

I fail to see how this was not an illegal hacking attempt by the FBI. They want us to believe they just had some goons sit at a desk all day and type crap into the auth box until an IP address somehow popped out?

That makes zero sense.

You egress filter everything that isn't tor, allowing any outbound connections other than tor will cause leaks.

>And obviously if someone gains arbitrary code execution, they can just Google "my IP" and see the externally facing IP.

This is what firewalls are for

No, that's not necessary. The point is that the web server does not have Internet access: it can only access Tor (its network interface is a virtual one that can only access Tor). Any Google search for "my IP" will return whatismyipaddress.com and this site will report the IP of the Tor exit node.
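A hypothetical egress policy for such a box, assuming the tor daemon runs as a dedicated user (the username varies by distro; "debian-tor" here is an assumption):

```
# Default: nothing leaves the machine
iptables -P OUTPUT DROP
# Loopback stays open (tor's SOCKS port lives there)
iptables -A OUTPUT -o lo -j ACCEPT
# Only the tor daemon's own connections may leave
iptables -A OUTPUT -m owner --uid-owner debian-tor -j ACCEPT
# Everything else (DNS, NTP, a "what is my IP" lookup) is dropped
```

With rules like these, even code running as root inside the VM has no route to the clearnet other than through tor itself.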

I suppose running a service on IPv6 only would make it a lot harder to track down.

Only if you're trying to scan the entire address space.

For an attack like the one used here (where they found the IP) then there is no difference.

You could also start by using something more secure than PHP.

I realize that a significant portion of the internet runs on PHP, but if you are going to do something as high-risk as SR, you should use more robust tools.

Almost every HTTP server, language, and web framework has at one point had a vulnerability that exposes configuration or environment variables.

The fundamental problem is that no matter what language and server combination you choose, it was never written for hostile environments. Exposing an IP address of an internal server is considered a very low risk vulnerability (as compared to say RCE), so very little work is put into auditing for such risks.

What would you suggest? For all its faults, PHP has seen a lot of hardening effort over its lifetime, and its security flaws are well known and can be compensated for.

A lengthy hardening effort and well-known flaws to compensate for does not sound at all like a secure platform to me.

But on the other hand, it is better than having unknown security properties like, e.g., Ruby or JavaScript on Node.js. While Node is built on the venerable V8 engine, which has strong security roots, its core libraries and dependencies are less well explored. Plus, JavaScript is generally a terrible language for secure programming.

I'd rather have ample documentation on how to harden my PHP application than no documentation on how to harden my Node application. Security through obscurity is no security at all. Plus, many of the mitigation strategies are simply rules like "don't use mysql_query" or "use htmlentities with ENT_QUOTES and UTF-8 to escape your output", both of which can be built into a framework. See: laravel.

[edit] downvoting is much easier than formulating a response, isn't it?

You argue that PHP is better than using Ruby or JavaScript. Now I could perhaps weigh in on that somewhat dubious claim, but I'd much rather ask: why are you restricting the pool to PHP, Ruby, and JavaScript?

Why not Python, or Go, or even Haskell? There are many languages other than the three you mention which have much better reputations for secure web programming.

You likely got downvoted because you presented a false choice to back up your argument.

I chose the languages above mainly due to their popularity outside the sphere of Silicon Valley. You are right that I should have included Python in this list, but Go and Haskell are still only popular with a very restricted audience, and I haven't used them myself. They may well be better for secure programming; they just didn't come to mind when I was thinking of examples.

Personally, I would go with Python.

* The language itself has a pretty good security track record

* The more popular application frameworks (Django, Flask, et al.) seem to be doing pretty well

* Process isolation techniques are well-known

Do you know if torbrowserbundle or tails etc uses any kind of virtualization to isolate things?

It's an offhand observation I've made before, e.g. https://github.com/paulczar/docker-torbrowser/issues/2

Tor should run in a VM or container, the browser should run in a container that can only talk to Tor, and the only other way out is a dumb screen-scraping display for the browser. Make sense?

The preferred way is to use Tails on a DVD, since the entire point of a privacy live DVD is to leave no local evidence and run in memory only.

A separate $40 ARM box, running some kind of open firewall software (OpenBSD, m0n0wall, pfSense) that blocks all outgoing traffic that isn't Tor, is also preferable to trusting, say, VirtualBox NAT, which likely leaks the host IP to the guest and your VM state to the host.

A bonus to having a dedicated system for whatever secret squirrel things you're doing is that it's a physical barrier protecting you from yourself, like when Sabu accidentally logged into IRC in the clear. It's also a good idea to build your own Tails from source and hardcode a gigantic password into the startup scripts. This is the "I'm too drunk to be doing this" protection, should you ever be wasted off your ass and think it's a great idea to go on IRC and taunt the feds, or check your sales PMs for the booters and carberp mods you're peddling on not_legaL_hax4sale.ru forums. After 10 tries and failing, you'll either pass out or hulk-smash the box; either way you will not wake up in jail.

The problem here is that the $40 ARM boxes only have one Ethernet port, unless USB Ethernet is good enough for you.

I've considered putting pfSense on ARM.

> unless USB Ethernet is good enough

Well, most (all) of the $40 ARM boxes I've seen run the Ethernet port off the USB bus... so a USB Ethernet dongle should suffice.

(the USB bandwidth should likely still be faster than your internet connection, but depending on which ARM chip you have, that may become your bottleneck in processing the packets... the BeagleBone Black seems like it would suffice, though; the RPi might be a bottleneck)

Tor Browser doesn't even sandbox tabs[0], let alone virtualize.

Check out Qubes, a project that virtualizes each app:


Doing that today with Docker containers would be interesting.

[0] This is one of the main reasons why there are efforts to move it to Chromium

How would Docker help here at all? Docker does not provide security like you think it does... it's about application portability...

Docker isn't what provides the security; Docker provides the format for the container. The isolation comes from LXC, VirtualBox, VMWare, Xen, etc., and Docker provides the abstraction layer on top - which is extremely useful.

The timing is right since OS X Yosemite will include a hypervisor - so for the first time ever, all major operating systems will support it, and we have Docker to make it all portable. No more kernel module installation, which is what has always made these types of solutions hard to roll out to consumers and non-techs.

> OS X Yosemite will include a Hypervisor

Is that a type 1 hypervisor that can run a linux VM image/container without vbox/parallels/fusion, as if the VM/container was an "app"?

It looks like it is first a low-level hypervisor kernel support layer, and then a library that provides virtualized CPU, network, graphics, IO, etc. (the equivalent of this layer in Microsoft's Hyper-V is called Enlightened I/O, IIRC).

If you look at the architecture diagrams for vbox/vmware etc. you'll see they all have their own implementations of each library to do this on OS X. This is where the performance difference comes from. With Hypervisor.framework and each of the virtualization firms porting their platforms to it, it means this performance difference mostly disappears and the vendors can focus on improving the higher layers.

When Microsoft did this in Windows, the ecosystem exploded. One difference is that Microsoft built and released tools and apps at the higher layers, such as application virtualization with App-V (where they compete with Citrix and VMWare). It doesn't look like Apple will do that, yet - but what it does mean is that someone can port or implement a common virtualization layer on OS X and bring better, cheaper, more accessible and open app virtualization to OS X (with great performance).

It is a pretty big deal with potentially a lot of interesting opportunities and applications. There isn't enough information available about it yet, though (or I simply haven't found it) - someone with more familiarity with the tech and virtualization in general will likely dump info about it and what it would mean when we get more information about it and are closer to 10.10 going gold (the virtualization co's porting their platforms to support 10.10 will likely be an issue).

edit: To add, the good thing about building hypervisors into the OS, having an abstraction layer on top of that, and then on the other end having Docker abstract the tech, is that suddenly everything becomes portable and developers can focus on the more value-add features (since core virtualization tech becomes commoditized).

Sounds promising!

Apple would need to document the virtual device interfaces so that Linux/BSD in-guest drivers can be developed by the open-source community. Until then, only OSX-on-OSX would work.

Hopefully the Apple hypervisor can be disabled, e.g. if someone wants to run OS X on VMware on Windows on Mac hardware, or OS X on KVM on Linux on Mac hardware.

Tails answers this question on their site [0]

Their general opinion is that this makes Tails less secure, as you now have to trust both the host and the virtualization software.

Provided that you trust Tails itself. The expected way to run it is off a live CD. This way the trusted OS is only ever in RAM, and if you use a non-rewritable disc you can also be assured that the Tails disc itself cannot be modified after its creation.

Tails handles the 'only talking to Tor' via iptables. Unless I am mistaken Tails' firewall will not allow clearnet connections.

[0] https://tails.boum.org/doc/advanced_topics/virtualization/in...

Tails has an 'unsafe browser' so you can log into wifi gateways and then start the regular browser. If I were a federal agent needing to decloak Tails users, I would probably target that script that disables the firewall, which any unprivileged user can run. If you don't need the script, it's probably a good idea to build Tails without it.

This is not the same question.

The question is, inside the tails itself, should apps like the browser and your mail and chat programs be isolated from each other i.e. placed in "jails", "containers", "zones" or whatever?

Security in depth.

> Tor should run in a VM or container

Tails: https://tails.boum.org/

I was just thinking about the second point you bring up. Isn't this still an illegal search because they purposefully circumvented digital security?

Most people replying here will be as sarcastic as I was - but there is a serious question here which I'm certain Ulbricht's defense will ask and challenge. The motions filed leading up to this FBI filing have all been about cornering the FBI into revealing this information specifically so they could challenge it (knowing the information wasn't obtained with a warrant).

On that note it seems like his lawyers are performing well, they have challenged absolutely everything in pre-trial motions and there have been some interesting rulings (they've lost almost all of them).

Warrantless surveillance of foreign servers plus the domestic "general warrants"[0], "bitcoin isn't money"[1], his hacking charge under the CFAA[2] and now a likely challenge to the server evidence that was found by "manipulating inputs" (which is interesting to contrast to Ulbricht's own hacking charge).

Both the motion to dismiss and the judge's ruling are interesting reads:



I'd be interested to know what an experienced lawyer or trial specialist thinks of how it has been going so far. I have yet to find a good opinion piece on the topic.

[0] http://freeross.org/feds-silk-road-investigation-broke-priva...

[1] http://www.wired.com/2014/07/silkroad-bitcoin-isnt-money/

[2] http://arstechnica.com/tech-policy/2014/07/judge-denies-silk...

I have several criminal defense attorneys I speak with on a regular basis and they always have interesting takes on high profile cases.

They basically said his attorneys are doing the right thing by going after the warrant stuff. It's pretty standard in legal circles: go at the base of their argument. If you can get the original warrant thrown out, it all goes. The only problem is they suspect this is the only real strategy his defense team has right now.

Once the judge rules on this - which my attorney friends believe will be in the FBI's favor - his defense team won't have much to pivot on. All the evidence is now compounded and will stick. Which, as the article pointed out, is pretty bad for him. It will also confirm his identity as 'Dread Pirate', which a lot of the case hinges on. Once the FBI gets that point nailed down, the rest is pretty academic.

The only thing that would make this interesting is if the judge rules against the FBI, and thus the majority of the FBI's case is destroyed. Both of my friends figured the FBI would drop that case and simply let the federal case on conspiracy to commit murder and hiring a hitman run its course.

Also, keep in mind there are still three other admins that have been indicted. If they get any of those guys to give them more information about Ulbricht, it's all downhill from there. They might not even need most of the stuff they entered as evidence if they can get one of these three to roll over on Ulbricht.

Pretty much any way you slice it - he's screwed.

Wouldn't circumvention be more accurate if they had successfully brute forced a login? Clicking "View Source" on the error page doesn't sound like circumvention.

The question is whether the usage was authorized not whether it was successful.

This seems like a poor defense choice. In Australia, a number of civil rights automatically disappear if you were in the midst of committing a crime while trying to claim them.

This was ultimately to prevent stuff like burglars falling through skylights, then suing the owner of the house they were burgling in civil court while facing criminal prosecution.

I'd be staggered if it were possible to make a relatively weak "authorization" claim stick, seeing as how the police do have the right to investigate things left "in plain view" - which, you could argue, a web server page that you accidentally sent compromising fuzz data to could fall under.

A court would also have little trouble allowing such a thing, since it's a narrow interpretation that doesn't legalize it for the ordinary citizen (though IMO I think it probably should be).

"a number civil rights in Australia automatically disappear if you were in the midst of committing a crime while trying to claim them."

That doesn't make sense to me. You haven't "committed a crime" until you've been convicted.

You seem to be saying that you lose the rights merely by being suspected of a crime.

If that is the case, those rights don't actually exist in the first place.

In the US, they have successfully prosecuted people for "fiddling" just like the FBI did here.

If you've been charged with a crime, you can't press a civil suit against the victim, essentially.

The point here is somewhat similar: trying to sue the FBI for unauthorized access to a server would hinge on the relative standing of that law compared to much more serious offences (i.e. conspiracy to murder being the big one) - since the case would have to come from DPR against the FBI, and would thus be subject I suspect to similar tests of standing.

Other people have made the wider point more thoroughly as well - you'd really struggle to prove wrongdoing when all that was acquired was an IP address.

Example of somebody prosecuted strictly for fiddling?

To answer my own question, I guess you could say weev, but I think his troubles really began when he made the pivot from fiddling to mass scraping. I think it's harder to argue the FBI's access was unauthorized when what they were looking at was the "access is denied" page.

Well, warrants let them bypass physical security; why would digital security be exempt?

The claim is that they did this before getting a warrant.

Which they could easily claim as being incidental to gathering the evidence needed to obtain the probable cause that is mandatory for a warrant.

The CFAA talks about "unauthorized access", not about any "unwanted" interaction with a computer. Though I think the FBI will simply claim that the steps used at that stage of the investigation were implicitly permitted by the fact that it was conducted for a lawful government purpose (there's some fancy legal term for this, but I forget what it is), and wasn't otherwise forbidden to the government since it didn't involve a search or seizure.

If accepted, that would mean that J. Random Hacker doesn't get to mess around with websites just because the FBI got to, even in situations short of CFAA violations.

I can't imagine there was anything between the FBI and the site that established a context for them to be attempting to access a user account on the system that they had not created. So I hope they are relying on some investigative authority and not some hair-splitting definition of "unauthorized".

edit: I guess they could have been providing incorrect passwords for an account that they created. I agree that is easier to call unwanted.

Reading the Krebs story on this same topic (http://krebsonsecurity.com/2014/09/dread-pirate-sunk-by-leak...), it occurs to me that FBI may not have needed to try to authenticate at all.

Basically all the analyst said he did was to load the SR website far enough until the captcha popped up, and notice in the wireshark (or equivalent) logs that a non-Tor IP address was reached from the SR website.

If the analyst tooled around on the website after that, then even if the court were to call foul on that subsequent access, the public IP address wasn't derived from tooling around so it (and the evidence derived from that) wouldn't be at risk either.

This seems plausible, yet there must've been thousands of attempts of people doing SQL injections/looking at their logs?

If I remember correctly, an error page leaked an IP in 2012/2013. Someone had to realize the captcha was leaking at some point? Consider me confused ;)

As to the second point: this "fiddling" with "miscellaneous" input characters could also be interpreted to cover brute-forcing login attempts. I'm not sure that's a good precedent...

If it was "miscellaneous" input characters, it probably wasn't brute-forcing login attempts. Brute-forcing logins would use mostly alphanumeric input characters, not "miscellaneous" input characters.

The word "miscellaneous" to me implies things like quotes, backticks, and similar non-alphanumeric characters. "Fiddling" with them would be attempting to find somewhere that didn't quote an input value correctly: they were attempting to find something like an SQL injection.
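To illustrate why those characters matter (a toy example, not SR's actual code): a naive query builder that interpolates user input changes meaning as soon as a stray quote appears, which is exactly what a fuzzer is probing for.

```python
def naive_query(username):
    # UNSAFE by design: string interpolation instead of a
    # parameterized query, to show the failure mode
    return f"SELECT * FROM users WHERE name = '{username}'"

print(naive_query("alice"))
# -> SELECT * FROM users WHERE name = 'alice'
print(naive_query("' OR '1'='1"))
# -> SELECT * FROM users WHERE name = '' OR '1'='1'
```

The second result is a syntactically valid query that matches every row; the server's differing responses to such inputs are what tell the fuzzer where the quoting is broken.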

My understanding is that they attempted a login a few times. User: dpr. Password: ababa, dada, bobo, etc. This in turn triggers the web app to display a captcha after too many failures. The img src for the captcha revealed the IP.

It wasn't much of a "brute force" attack, it wasn't SQL injection (though it's possible they were poking at that too), but just the simple question, what happens if we try to login five times with "miscellaneous" passwords? Hey, look, a captcha! I wonder what server the image comes from...
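A toy sketch of the suspected captcha leak (hypothetical names; the address is a documentation IP, not SR's): the difference between a helper that emits a fully qualified URL built from the server's own address and one that emits a relative path.

```python
def captcha_src(server_addr, absolute=False):
    """Build the img src for the captcha. A helper that reads the
    server's own address (as PHP's $_SERVER['SERVER_ADDR'] would)
    leaks the clearnet IP into every visitor's page source."""
    if absolute:
        return f"http://{server_addr}/captcha.php?id=123"  # leaks the IP
    return "/captcha.php?id=123"   # resolved against the .onion hostname

print(captcha_src("203.0.113.7", absolute=True))
# -> http://203.0.113.7/captcha.php?id=123  (visible in View Source)
print(captcha_src("203.0.113.7"))
# -> /captcha.php?id=123  (no address disclosed)
```

Anyone who views the page source, or watches where their browser fetches the image from, sees the first form immediately.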

Trying a series of passwords is unauthorised access though, surely? Like opening a barrel lock, you try a few positions because that gives up the code eventually, it's not brute force but it's not authorised access by a long shot.

Deciding which way the decision should go must be causing quite a few hours of concentrated legal consideration - there are downsides in both directions for the government.

I don't know. Trying a series of passwords until I get it right is basically my standard login procedure. Guess I have to get to three felonies a day somehow.

Breaking into your own house is not a crime.

What would cause the captcha's image to leak the ip but a regular image on the tor website not?

Something as simple as using 3rd-party code for the captcha with a config file that said "host name goes here". Would you be at all surprised to find this in a WordPress plugin or similar?

This is purely guessing, but checking regular images (right size, format, ...) will be the "day-to-day" business of looking after a website. And more often than not the administrator will therefore "Open image in new tab..." and become aware of an incorrect image path.

The captchas, on the other hand, might have been using an existing piece of software. Remember: these captcha images have to be autogenerated by a script which, as a convenience to the user, might have used some kind of mechanism to determine "fully qualified" URLs. And this slipped below the radar, as it's a feature used much less often, and hence likely to receive much less scrutiny.

I think it's pretty likely that these kinds of information leaks can happen when you deal with a larger codebase or system. Hence following the advice of some other HN users, who recommend a strictly firewalled system for this kind of use-case, looks like a prudent thing to do.

The site was misconfigured WRT serving the captcha image, resulting in the image link pointing directly to the SR server rather than being routed through Tor.

If there is an external server that is vulnerable to a brute force attack, it is trivial to exploit it anonymously.

People who get caught trying to brute force servers (do people even get caught for this???) are the lowest hanging fruit and are the ones least harmful to society.

My point is precedent doesn't really matter, because realistically, you won't have anyone to actually prosecute except for the 13 year old "hacker" who had no idea what they were doing.

>Second - the FBI considering "fiddling" with "miscellaneous" input characters into the login page not illegal access is good - it means the next person charged under the CFAA for "exceeding authorization" by fuzzing will have a precedent to cite.

I was wondering about this as well. If this kind of thing is not done pursuant to a search warrant and is in violation of the site's TOS, is it legal for them to do? What happens if I put a blanket ban in the TOS on use of the site by anyone acting as an agent of any law enforcement agency? Certainly with a warrant it would hold up, but without one, it may not.

Interesting legal question, I guess only time will tell.

> What happens if I put a blanket ban in the TOS on use of the site by anyone acting as an agent of any law enforcement agency? Certainly with a warrant, it would hold up , but without one, it may not.

Actually, I'm not so sure you don't have that backwards. If a police officer shows up at your door without a warrant, you can tell him "hey, no cops allowed in my house!" and make him leave. But if he or she has a warrant, you can deny them all you want; they're legally entitled to come in and look for stuff.

With a login page, it would work roughly the same way. "No cops allowed" might work, but not once a judge gives the police a right to be where you don't particularly want them.

I think the point is: all of that was done WITHOUT a warrant...

>it means the next person charged under the CFAA for "exceeding authorization" by fuzzing will have a precedent to cite.

Unless Ulbricht's lawyers successfully argue that fuzzing constitutes an illegal search. The law respects precedent, and there is more precedent which finds that fuzzing leads to illegal access.

Isn't there a major PRNG state issue with freezing and reusing snapshots?

Yes there is. I ignore it on the desktop (which isn't good)[0] and run a script to reseed, but on the server/gateway I'd completely avoid freezing.

To make it worse, I never install the VM tools on any machine, because I've always felt they are all crappy and insecure (and make VM detection easy). Most of the VM drivers try to fix this problem, but it would be best fixed with virtualization hooks where commands are run in the guest by the host whenever you freeze/unfreeze.

[0] This is me understanding the pros/cons of each approach and weighing them up: I'd rather have a known-good snapshot to start from each time I use it (and deal with side effects such as time sync, RNG seed, handling updates, etc.) than have a long-running machine.

Does the VirtIO-RNG functionality provided by qemu help here? Linux will use it as a hardware RNG.

> they typed “miscellaneous” strings of characters into the login page’s entry fields

So this is legal when the FBI does it, but when someone else does essentially the same thing on an AT&T server it's identity fraud and conspiracy to access a computer without authorization.

Yes, just like hitting people with your car while texting, or shooting them in the back while they are on the ground handcuffed.

Assuming the FBI had a warrant, they had a right to do whatever they wanted to SR's server, no?

I think weev's sentence was utter bullshit, but you can't equate the two things. SR was a drug marketplace, AT&T is not. This is like arguing police shouldn't be able to pick the lock of your drug safehouse door because you aren't allowed to pick other people's locks.

A warrant entitles you to search a specific place for a specific thing. It isn't a letter of marque. Courts can't order the FBI to go Dirty Harry up the Internet if it brings down that damn drug website. That's illegal.

Of course, who watches the watchers? The US government already imprisons people for life without trial or charge, interrogates using the same torture techniques the Soviets used on captured Japanese scientists, and surveils the diplomats of its own allies. It might be, and you know, probably is committing industrial espionage.

At some point you have to raise your head up, see the forest amongst the trees, and just admit that the rule of law is dead in the world. The power asymmetries are too vast, and the people who rule your life know it.

Sure you can equate the two cases. Someone with power decided weev and Ross would be removed from the board and so they were. Let that be a lesson to anyone who could piss off the truly powerful.

> A warrant entitles you to search a specific place for a specific thing.

Not all warrants are the same, but I've seen them do a lot of damage (broken-down doors, torn up mattresses, general mayhem) in the course of executing a warrant. This seems in spirit with that.

That's the thing - the FBI didn't have a warrant. They're trying to argue that they should be allowed to use the information anyway, because it wasn't 'hacking,' it was 'entering miscellaneous strings.'

On a related note, why the hell didn't they get a warrant? I doubt it would have taken long.

That does change things, I agree.

However, to play devil's advocate a bit: the FBI essentially saw the output of the PHP call `print_r($_SERVER)`. The only thing that's actually sensitive in there is the server's IP address and hostname. This is not usually considered sensitive information. If it is to be believed that is as far as they went before getting a warrant (and I don't know if that's the case or not), then obtaining the IP address would allow them to actually serve a court order to the hosting provider. In that sense it could be seen as non-invasive and purely conducive to their investigation.

But I agree there should not be a double standard. I think what weev did was not illegal, and what the FBI did here was not illegal, personally.

I remember reading rumors that this happened to some random user when the site went a bit wrong and they posted the information to the SR forums. I think it then got deleted fast.

Could that have been the source of the ip leak?

A search warrant isn't a license for law enforcement to break the law; it's a license to lawfully conduct a search and seizure based on probable cause of the commission of a crime. Warrants exist because of the Fourth Amendment, which /specifically/ deals with searches and seizures; warrants have nothing to do with a nonexistent general rule preventing law enforcement from breaking any law. Law enforcement can and does break the law all the time to conduct sting operations regarding drugs, prostitution, child porn, hitmen, etc.

For instance, the DEA can buy drugs in order to trace and later apprehend drug suppliers. They don't need a warrant to do this (though information gained from such a sting operation may later lead to a judge issuing a search warrant).

I'm quite sure more drug deals have been and are arranged through AT&T's network than the Silk Road.

Completely irrelevant. Many more drug deals have probably been arranged through email than through Silk Road too. SR's express purpose was to provide a marketplace for drug vendors to sell to drug buyers.

My point is that both entities are drug marketplaces, thus they should not be treated differently when it comes to warrants and other legalities.

oh, so a warrant is a free pass to do whatever you want?

apparently "no-knock" warrants that get issued like mints at a dance allow SWAT teams to flashbang a baby and no one faces criminal negligence charges.

I guess regular warrants will let law enforcement really make us grab our ankles and do whatever they want to us.

SR being a drug marketplace and AT&T "not being a drug marketplace" are irrelevant. Law enforcement is bound by the EXACT SAME LAWS AS EVERYONE ELSE.

Apparently you like to confuse law and justice with "who's got which legislatures on their payroll and can put people down for accessing their stuff without permission"

Didn't they do the "miscellaneous strings" before they got a warrant?

Are you saying that they are/should be allowed to pick the lock of that drug house without a warrant? That's the relevant issue.

They weren't lock picking they were just putting miscellaneous bits of metal in his door lock.

That's very much the point. The FBI can make an argument that they weren't trying to gain access (since a real attempt to gain access would have looked different from "entering miscellaneous strings"). They were trying to see what the system did when authorization attempts failed. As far as I know, there's no law against that. It's similar to going through a suspect's trash.

If they weren't trying to gain access, what were they trying to do? Buy drugs? I don't buy the trash analogy.

Umm... since when is guessing passwords not an attempt to gain access?

Well, there is a general principle that police can do things that ordinary citizens cannot. But, subject to rules, not indiscriminately. For instance, they can enter a property if they have a warrant, whereas if someone else does this, it is break and enter. If the FBI established "probable cause" that the server is used for crime, they could have gotten a warrant. I can't imagine it would have been difficult, so why wouldn't they have taken the legal precaution?

> so why wouldn't they have taken the legal precaution?

That seems to be the golden question here.

You're assuming that weev actually ended up in prison for CFAA violations instead of acting like a dick in court.

Please elaborate or link to more on this. I haven't heard this perspective before.

Couldn't find any particularly good articles, but basically he did everything he could to annoy the judge. http://www.vice.com/read/lulz-and-leg-irons-in-the-courtroom... http://www.nbcnews.com/id/51232207/ns/technology_and_science...

This strongly suggests that anything hosted on Tor should be done through some sort of (ideally plug-n-play, hardened) NAT+hypervisor/container system such that the service itself can never, ever know its real external IP. Such a system could potentially also act as a legal defense in a situation such as this.

While this is certainly better than not doing so, if one is going to run such a hidden service in flagrant violation of the law, it seems prudent to take all feasible precautions--and maybe some unfeasible ones, too.

1) Run the HTTP server in a guest VM to reduce likelihood of hardware identifier leakage (hardware MACs, HD serials, DMI data, CPUID, etc.)

2) Physically separate HTTP server and Tor client, and restrict communication between them to a simple packetized high-speed serial interface.

2a) Consider an inline filter on this link that watches for private keys, etc. and kills communication upon detection, and adds random latency to packets to reduce bandwidth of timing channels.

2b) Physically isolate these systems as much as possible: power line filters, electro-optical couplers for the comm link, etc.

3) Stub out all Tor crypto operations to an HSM; keep the onion key and do all operations on the onion key on that HSM.

4) Make friends in the criminal underground, because you're probably going to prison eventually, anyway ;).
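As a concrete sketch of the isolating-gateway side of (2), a pf ruleset along these lines forces the service VM to speak only to the gateway's Tor SOCKS port (interface names and the choice of pf are assumptions; adapt to your own setup):

```
# pf.conf sketch for the isolating gateway (OpenBSD)
guest_if = "vio1"          # interface facing the hidden-service VM
tor_port = "9050"          # Tor SOCKS listener on the gateway

set skip on lo
block all                  # default deny, both directions
# Guest may reach only the gateway's Tor SOCKS port (DNS goes via SOCKS too)
pass in on $guest_if proto tcp to ($guest_if) port $tor_port
# Only the gateway's own Tor daemon talks to the outside world
pass out on egress proto tcp
```

With a default-deny policy like this, even code execution inside the service VM can't make a direct clearnet request to learn its external IP.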

Regarding point 2a ("adds random latency to packets to reduce bandwidth of timing channels"):

There's actually a "better" method. Hash every packet, and add latency not only randomly, but also, for each field in the packet, add latency based on the hash of that field. (In actuality you want a random salt, so, HMAC or similar)

That way, assuming they cannot figure out the salt, you actually actively prevent a large chunk of timing attacks. You're adding deterministic variation that hopefully swamps the server-side variation, deterministic being the key word. You cannot get around it by repeating measurements, as the delay stays the same.

(Alternatively, keep each packet so that exactly 100ms or whatever has passed between getting the packet in and sending the response, silently discarding any packets that take >100ms to process.)
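A minimal sketch of that keyed, deterministic jitter (the salt value and the 50 ms bound are assumptions):

```python
import hmac
import hashlib

SALT = b"secret-per-deployment-salt"  # generated once, kept private
MAX_JITTER_MS = 50.0

def deterministic_delay_ms(packet: bytes) -> float:
    """Map a packet to a delay that is random-looking but repeatable.

    Because the delay is a keyed function of the packet contents, an
    attacker cannot average it away by replaying the same request:
    identical packets always receive the identical delay."""
    digest = hmac.new(SALT, packet, hashlib.sha256).digest()
    # Interpret the first 8 bytes as a uniform value in [0, 1).
    frac = int.from_bytes(digest[:8], "big") / 2**64
    return frac * MAX_JITTER_MS
```

The gateway would hold each packet for `deterministic_delay_ms(packet)` before forwarding, swamping the server-side timing variation with noise that repetition cannot remove.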

Honestly, I'm not so sure about (4). I feel like when I read about criminals that were captured, they all made avoidable mistakes. I have to wonder if there are, in fact, many criminals that are never captured because they are simply better at it.

Sure, but it only takes one avoidable mistake to bring it all crashing down. Want to bet the rest of your life on being perfect? In any case, it was a (half) joke.

But yes, it reminds me of the hilarity of FBI's characterization of PLA Unit 61398 as some super-scary 'master hackers': real 'master hackers' don't get caught.

(4) is a terrible idea. At the very least you'll need to do it without letting them know what you do, without letting them know that you do something, and without even letting them think that you might do something.

Associating with other criminals is a great way to get ratted out. It is also a great way to put yourself on the radar of law enforcement in the first place, so even if you don't hint to your new 'friends' that you are up to something it is still risky.

Here is a better idea: keep it as white collar as possible (no hitmen) and pray for minimum security.

It depends on the crime.

Historically, someone who murders a stranger (ie, serial killers) is nearly always caught because one victim gets away and tells the police. http://www.wired.com/2011/04/mf_billjames/all/ Otherwise, the cops are generally stuck with their habits of sniffing around friends and acquaintances of the victim (which by definition are doing the non-stranger murders).

This will be slightly off-topic, but what the hell. There is an old saying about the level of criminal sophistication.

"Petty criminals break the law. Bigger criminals skirt the law. The big bosses ignore the law. And the biggest criminals of all - they write the law."


A system can only reveal information it knows, so to defend against this "attack vector" when building a secure system, you must ensure that the software knows as little about the system as possible.

There are various ways that don't require NAT, it's more a matter of ensuring the software doesn't need the information it shouldn't know. And both FreeBSD and Linux have tools to lock down a process so it can't find these out (Capsicum and/or SELinux).
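One concrete illustration of the kind of lookup such a lock-down has to render useless (a sketch; the TEST-NET destination address is arbitrary):

```python
import socket

def guess_local_ip() -> str:
    """The classic trick for discovering one's own outward-facing address.

    A UDP connect() transmits no packets; it just asks the kernel which
    source address it would pick to reach the given destination."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 53))   # TEST-NET-1; nothing is sent
        return s.getsockname()[0]
    except OSError:
        return "0.0.0.0"               # no route at all (fully isolated)
    finally:
        s.close()
```

Behind a proper isolating gateway, this returns only an internal RFC 1918 address that reveals nothing about where the machine actually sits.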

That's the premise behind Whonix (https://www.whonix.org). It adds a little overhead running 2 VMs, but it's worth the extra 256MB of memory for the peace of mind.

This is obviously not a sufficient defense on its own, but my first reaction was that the HTTPD should have been bound to localhost and proxied through Tor; that way, it wouldn't know its public IP and couldn't be accessed over the clearnet (although evidently this was a "feature").

What do you folks think?
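For the "bound to localhost, reachable only via Tor" setup, the standard pieces look roughly like this (paths are illustrative):

```
# torrc: publish the loopback-only server as a hidden service
HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:8080
```

```nginx
# nginx: listen on loopback only -- never on a public interface
server {
    listen 127.0.0.1:8080;
    root /var/www/hidden;   # illustrative path
}
```

With this arrangement the web server never binds a public address, and a clearnet scan of the host's IP finds nothing on port 80.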

It seems really unlikely to me that they would have set up this system the incredibly dumb way.

Ross Ulbricht was an economist who knew some PHP, not a super-hacker.

Curious why they didn't include this description in the original indictment. I don't read a lot of indictment documents, but pretty much all that I have read laid out the steps law enforcement took to ascertain that a crime had taken place and that the person they were indicting was the person they believe committed it. I could see them requesting that the description remain under seal to keep other careless server maintainers from learning how to protect their identities, but that isn't what happened here.

It wasn't an issue until it was raised by the defence.

Hmm refreshing my memory here, this [1] is the search warrant for SABU (of Anonymous) and the claim is that the FBI didn't need a warrant for Ulbricht (which is where this stuff would have been outlined). At least I can't find such a warrant with basic searching techniques. So I'm guessing had they gotten a warrant (does Iceland need such things?), they would have presumably said all of this in the warrant.

[1] http://www.scribd.com/doc/197510285/a

Even if true it could be a parallel construction.

I don't think anyone here is even remotely doubting that this is anything BUT parallel construction.

While that may or may not be true (which is completely in the spirit of parallel construction, and it fscks with your mind), we ARE talking about the guy who registered an account to advertise his illegal drug marketplace using his personal email address. That part is fully visible and timestamped, and can not in any plausible way have been planted. With someone so full of himself, pretty much anything is possible.

I was looking for this sentiment before I made my own post.

There is absolutely no reason to think the FBI did their work the difficult-but-legal way when they already get rich feeds from NSA. This is parallel construction all the way. I hope the defense figures out how to make a giant stink about that.

Maybe the IP leaked because the "lawful hackers" made it leak...

The full docket of the USA v. Ulbricht case is available at:



His accuser signed his name to the criminal complaint already, if you'd read the indictment.

I think that given the revelations of parallel construction in general, the burden should be on the prosecution to prove that the NSA et al. were not involved. An illegal search/seizure could very well have been what gave the FBI the idea to find the IP address in this way.

Given that the NSA is able to collect most internet traffic, and they've been sharing info with other agencies, I would think that most evidence against any defendant could be thrown out. Yes, that would be ludicrous, and that's exactly why the NSA needs to be reformed.

The hard thing with parallel-constructed evidence is proving that this is how it was done.

I doubt whether a judge will throw out evidence unless you can prove that it is a parallel construction, and that this was illegal.

> the burden should be on the prosecution to prove that the NSA et al. were not involved

You can't prove a negative.

They should be obligated to testify to the truth under oath. It's not proof but gets their position into court records.

And they will. The trial isn't over. Somebody will get on the stand, be sworn in, and then explain the actions he took to reveal the server's IP.

If the site was accessible through an IP, could scanning ports and looking for a particular item on the homepage have identified it?

Yep! Sounds like a pretty big problem. With 10,000 scanning processes each taking 5 seconds per IP, it'd take less than a month to scan the entire IPv4 space. It should be easy for a powerful adversary to scale this up and finish in a day or two. It'd take more work if all ports need to be scanned, but restricting to known hosting blocks could shave a bit off, as could eliminating well-known good sites in a first pass.
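The arithmetic, spelled out (a back-of-the-envelope sketch using the figures above):

```python
TOTAL_IPV4 = 2**32            # ignoring reserved space for simplicity
PROCESSES = 10_000
SECONDS_PER_IP = 5            # each probe: connect, fetch "/", compare

ips_per_second = PROCESSES / SECONDS_PER_IP   # 2,000 IPs/s in aggregate
days = TOTAL_IPV4 / ips_per_second / 86_400
print(round(days))            # about 25 days: "less than a month"
```

Scaling up is linear: ten times the processes (or a faster per-IP check, as zmap-style scanners achieve) brings the full sweep down to a couple of days.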

It would surprise me if the FBI isn't already doing this for all known onion sites, just to have the info around.

At any rate, sounds like a real basic OpSec 101 failure on The Silk Road. Not that hidden services are that safe in the first place: the Tor folks have written about how hidden services are quite vulnerable to some attacks. Not something I'd want to rely on to save me from a 30-year sentence.

Edit: If someone had to run a hidden service, it might make sense to set up another onion site (on another Tor instance) and expose that on port 80, as part of a misinformation tactic. Reading several incidents, it seems attackers will use circumstantial evidence to help narrow down the possibilities. By intentionally leaking things (like offhand comments about the weather, using reports from another city) one might be able to gain a few more bits of anonymity back. Of course, someone doing such a thing wouldn't set up their illegal site on a public IP in the first place.

It's my understanding that most OpSec failures are real basic OpSec 101 failures. Not because, faced with the question, it's hard to get it right - but because faced with so many questions it is hard to get all of them right, and screwing one up is frequently enough.

Yes but no. All you would have had was a captcha image; you have no way of knowing it was the Silk Road captcha image. If you scan the internet, you will find millions of captcha images.

The FBI went the other way: they had the SR login page with an unknown non-Tor IP address and asked "what is at this IP address?" Since it was a captcha image, and captcha images have to be sourced by the web site doing the captcha verification, it strongly implied the IP address was the SR server. As it turned out, it was.
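The confirmation step reduces to comparing page bodies fetched two ways; a hedged sketch (the function names are mine, not from any report):

```python
import hashlib

def body_fingerprint(body: bytes) -> str:
    """Hash a page body so two fetches can be compared byte-for-byte."""
    return hashlib.sha256(body).hexdigest()

def same_origin(onion_body: bytes, clearnet_body: bytes) -> bool:
    # If a candidate clearnet IP serves an identical login page
    # (captcha markup and all), it is very likely the hidden
    # service's origin server.
    return body_fingerprint(onion_body) == body_fingerprint(clearnet_body)
```

In practice one fetch goes through a local Tor SOCKS proxy to the .onion address and the other goes directly to the candidate IP; a matching fingerprint is the "as it turned out, it was" moment.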

As a co-worker said, "When you live in an egg shell, small cracks are deadly."

The web server probably wasn't visible on the public internet, but apparently the machine itself was.

"And when they entered that IP address directly into a browser, the Silk Road’s CAPTCHA prompt appeared, the garbled-letter image designed to prevent spam bots from entering the site."

Ouch. In retrospect, seems like they could have almost brute-forced it, if they had a narrow enough block of IP addresses they thought it was in. Fetch the root page on all the IP addresses looking for something that looks like SR.

You could probably do that for the entire internet, assuming you had their levels of resources.

You could do that in the comfort of your own home, with a decent Internet connection (for example with https://zmap.io/)

Isn't that what the NSA does?

Woops, missed that line.

I know a lot of "software" solutions are being discussed in this thread for greater anonymity, but I would personally opt for a more secure physical solution, such as hosting the server somewhere that doesn't require you to give away any of your information.

I know this may sound silly, but there are tons of USB-stick-sized Linux machines that you can plug in virtually anywhere. I'm thinking of public spaces such as public wifi networks. The Tor address can be relayed to other locations if it ever gets breached, and Tor hidden services work great even if you're behind a firewall.

This seems a lot more sensible than opting for a system where your name is somehow attached to a server.

How could you run a popular website via the upload pipe supplied by a public wifi? There's no way that's practical.

From /r/darknetmarkets (so grain of salt is needed):

> yeah it was about 6-9 months before it shut down if I remember correctly, it was pretty highly upvoted and a lot of chatter about it on the SR forums. though I think there were at least a few bugs that they had to shut down for once they realized that their asses were on the line.


So yeah, apparently this was an actual bug.

But, but, but... "Even a rookie would know better, and DPR is no rookie."


Literally everything about DPR screams rookie. From the fact that he had to ask around on public web forums to get Tor set up, to his PHP skillz.

I know this is in hindsight, but you had to be pretty deep in the reality distortion field not to see this coming.

Was that "parallel construction" from the FBI to hide its NSA source (intelligence laundering)?


"fiddling" with "miscellaneous" input characters could be confirmed with server logs. i wondering if the defence will pursue this.

So fiddling with a website to obtain information not intended for you is legal now? Because if that's so then awesome. Tell Weev's lawyers.

I don't think laws apply retroactively.

That's not a new law, just inconsistent application.

The article says the FBI located the server by entering "miscellaneous" input into the Silk Road login page, which at some point disclosed an IP address. I wonder if they got the web server to throw a 500 or similar error that rendered some debug output.
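If that's what happened, the anti-pattern would look something like this (a Python sketch; the actual SR stack was PHP, where the equivalent leak is a `print_r($_SERVER)` dump in an error page):

```python
import os
import traceback

def handle_request(process):
    """A debug-mode error handler of the sort that can doom a hidden service."""
    try:
        return process()
    except Exception:
        # Anti-pattern: echoing server state back to the client on error
        # exposes exactly the fields (SERVER_ADDR and friends) that
        # deanonymize the host. Never do this in production.
        lines = ["500 Internal Server Error", traceback.format_exc()]
        lines += [f"{k}={v}" for k, v in os.environ.items()]
        return "\n".join(lines)
```

Frameworks usually gate this behind a debug flag; leaving that flag on in production is all it takes for "miscellaneous strings" to surface the real IP.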

Parallel Construction

Does the Tor Browser allow you to access things on the Internet? Surely it shouldn't, in which case the CAPTCHA should have failed to load for a high percentage of users.

The Tor Browser will let you access things on the Internet... using Tor.

You know, it's quite likely. It feels kinda flimsy, but without more evidence from the other side yet, it could have been that easy.

It seems fabricated at best.

SR was up for a long time... and I'm sure made a lot of enemies... not to mention bots trying to pound on it... So it seems very unlikely that something as basic as not escaping user input strings would bring them down...




Somehow "fiddling with inputs" and "noticing an IP", while innocently phrased, sounds unlikely. It's just so hard to know whether this is parallel construction. The FBI's history with false testimony (i.e. COINTELPRO) is more than damning.

The 'not in the US, not subject to US law' argument is a common trick: it is how the NSA skirts laws about monitoring US citizens (Snowden's leaked documents repeatedly showed the NSA's partners trading bulk information on US communications for their own), how the CIA and FBI are able to stop Americans and journalists abroad, and how intelligence agencies can propagandize their own citizens while retaining plausible deniability.

The CIA is authorized to, and in the past decade and a half has heavily decided to, plant stories in international media wire drops that US journalists subscribe to and use as a source of information. The CIA also works with partners overseas and private enterprises to engineer foreign stories that the US media will follow and use for reports (Lincoln Group, the Zarqawi PsyOp, etc.). Inevitably, some of the material released to the Voice of America and other international propaganda outlets makes it back to the US media.

Furthermore information is now able to be targeted in very sophisticated ways (i.e. MINERVA, USAID Cuba Twitter program) for the spread of ideas and support (or dissidence) via 'social contagions', all without ever having to directly propagandize any citizens. These sorts of programs are the ultimate and extreme irony of the Soft Power philosophy.

I find it unfortunate that the Constitution grants rights restricting search and seizure (and others), but that our representatives do not recognize these, broadly and de facto, as rights worth enforcing outside a strictly domestic interpretation. Are these "rights" or "Rights" if our representation allows and even encourages others to infringe on them?


This is just the cover-story.

Prove it.

Sorry citizen, that's classified.

