Analyzing the FBI’s Explanation of How They Located Silk Road (nikcub.com)
319 points by nikcub on Sept 7, 2014 | 121 comments

Came across a discussion on Twitter pointing out reddit users who apparently discovered IP addresses in error messages printed in the HTTP response:

26 Mar 2013: http://www.reddit.com/r/SilkRoad/comments/1b1lvy/warning_the...

26 Mar 2013: http://www.reddit.com/r/Bitcoin/comments/1b1n7y/warning_to_s...

03 May 2013: http://www.reddit.com/r/SilkRoad/comments/1dmznd/should_we_b...

Wow, I love how those threads are full of people who insist the OP is full of shit and that Silk Road doesn't even have a public IP address.

Maybe we'd have fewer incidents like these if keyboard jockeys were a bit more teachable. Even super 1337 h4xx0r5 make mistakes, often obvious ones. Software is complicated. Let this be a lesson to any of us who may dismiss a correction too quickly.

Yes. They all tend to talk in certainties about something that they are only speculating about:

  > you would have to go out of your way to expose
  > your public IP via Tor.

  > However, the hidden service you connect to,
  > would be the load balancer. This service would
  > have zero knowledge of its "external" ip
  > address, as it would be running in a VM
  > pointing to a port on the loopback device.
(because obviously nothing else is possible)

  > - Server is almost 100% certainly running in a
  > VM and doesn't have a public IP.
  > - The entire harddrive for the server (which
  > includes all the the information such as
  > private messages and addresses) is almost 100%
  > encrypted.
  > This is the motherfucking SilkRoad. They've
  > been operating so successfully for so long
  > because they're not amateurs. OP is a liar.
... etc ...

the funny part is: the people i know that used the service (who were far down towards 'non' on the technical spectrum) complained regularly about its terrible design, stability and security, and were under the impression it was administered by parties that had little idea what they were doing. they theorized it was a big scheme to go Gox* on them once enough coins were present in the system.

it's interesting that these people, who have no idea what an ip or load balancer or VM is, were able to pick up on this. i would guess that the h4xx0r5 actually spent little time using the service, or let the thought of their own necks being exposed taint their assessment.

* yeah, i know no one really knows exactly what happened, but it's fun to poke fun.

Assuming the reddit posters are correct, this means the PHP app knew the public IP. Isn't the only way for it to know that for someone to have entered it into the web server configuration?

No, there are several $_SERVER variables available to all PHP applications. The two relevant ones in this case would be HTTP_HOST and SERVER_ADDR; most likely the author was leaking one of those in his application.

But a web server does not know its external IP; that is the whole confusion. Don't the PHP docs say:

'SERVER_NAME' The name of the server host under which the current script is executing. If the script is running on a virtual host, this will be the value defined for that virtual host.

'HTTP_HOST' Contents of the Host: header from the current request, if there is one.

1.) You're assuming that the script was running on a vhost; we don't know that.

2.) Even if the script is running in a vhost, if a listening port is specified but not an IP address, the SERVER_NAME will be the IP of whatever interface the HTTP connection came in on.
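That interface-address behavior is easy to demonstrate outside PHP. A minimal Python sketch (loopback stands in for a public interface here; on a box listening on all interfaces, the same call would report whichever interface the request arrived on):

```python
import socket
import threading

def serve_once(srv):
    conn, _ = srv.accept()
    # The address the kernel reports for the accepted connection is the
    # interface the request came in on, analogous to SERVER_ADDR when no
    # explicit IP is configured for the vhost.
    conn.sendall(conn.getsockname()[0].encode())
    conn.close()

srv = socket.socket()
srv.bind(("0.0.0.0", 0))   # listen on all interfaces, like many default configs
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,)).start()

cli = socket.create_connection(("127.0.0.1", port))
leaked = cli.recv(64).decode()
cli.close()
srv.close()
print(leaked)  # the interface address the connection arrived on
```

Connecting over loopback reports 127.0.0.1; a connection arriving on a public interface would report the public address, which is exactly the class of leak being discussed.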

Whoever wrote this article either very poorly understands network traffic or is being intentionally obtuse to create doubt. He states that there is no way that a public IP would show up in a packet capture over a Tor network. This is blatantly false and idiotic to think.

HTTP servers, caching proxies, TLS terminators and load balancers have a habit of putting extra HTTP headers in the HTTP response to make the path of a request more clear. It's very possible that under some error condition one of them included an X-FORWARDED-FOR header or something similar that included the IP. This is all application layer data and can contain IPs.
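As a sketch of what that looks like in a capture, here is a hand-built HTTP response (all header values are invented; 203.0.113.7 is a documentation address) being scanned for leaked IPs at the application layer:

```python
import re

# A captured HTTP response, as a sniffer would see it at the
# application layer (header values invented for illustration):
raw = (b"HTTP/1.1 500 Internal Server Error\r\n"
       b"Server: nginx\r\n"
       b"X-Forwarded-For: 203.0.113.7\r\n"
       b"Content-Length: 0\r\n"
       b"\r\n")

# Drop the status line, keep the header lines, flag any containing an IP:
header_lines = raw.split(b"\r\n\r\n")[0].decode().split("\r\n")[1:]
ips = [h for h in header_lines
       if re.search(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", h)]
print(ips)  # ['X-Forwarded-For: 203.0.113.7']
```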

Even ignoring headers, they could have triggered an error page that just dumps all PHP server variables, one of which includes local IP addresses.
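A hedged illustration of that failure mode, with a hypothetical Python error handler standing in for a PHP page that ends in var_dump($_SERVER); the variable names echo PHP's and every value is made up:

```python
# Hypothetical "dump everything" debug handler, the analogue of ending
# a PHP error page with var_dump($_SERVER). All values are invented.
def debug_error_page(server_vars):
    return "\n".join(f"{k} = {v}" for k, v in sorted(server_vars.items()))

server_vars = {
    "HTTP_HOST": "example.onion",    # what the visitor asked for
    "SERVER_ADDR": "203.0.113.7",    # the box's real interface address
    "SERVER_PROTOCOL": "HTTP/1.1",
}
page = debug_error_page(server_vars)
print(page)  # every variable, including the real address, goes to the client
```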

Edit: It's infuriating that this article author doesn't understand that a packet sniffer can see contents of the HTTP protocol.

He states that there is no way that a public IP would show up in a packet capture over a Tor network.

No, he does not. He says that there's no way a public IP would show up at layers 3 or 4 of the OSI model. The author takes the position that the IP leak must have come from layer 7 (quoting from the article):

[...] and it is at the application layer that the FBI uncovered the IP address.

Depending on how one interprets the FBI's wording, this impossibility contradicts their account of how they discovered the IP. Let's look at what the FBI said:

Upon examining the individual packets of data being sent back from the website,3 [sic] we noticed that the headers of some of the packets reflected a certain IP address not associated with any known Tor node as the source of the packets.

Note that they're talking about headers of the packets, not HTTP headers. This is the FBI, though, so it's possible they confused "packets" with "HTTP requests". Taking their words at face value, though, the FBI seem to say that they got the IP address from the IP headers on packets received from Silk Road, or at least traffic generated by their browser on behalf of Silk Road. That's what the author argues is impossible.
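For reference, taking "headers of the packets" literally means reading a fixed field of the IPv4 header. A small Python sketch with a hand-built header (the addresses are from the TEST-NET documentation ranges, not real ones):

```python
import socket
import struct

def ipv4_source(packet: bytes) -> str:
    # In an IPv4 header the source address occupies bytes 12-15, so a
    # literal reading of "the headers of the packets" means this field.
    return socket.inet_ntoa(packet[12:16])

# A hand-built 20-byte IPv4 header (version/IHL 0x45, TTL 64, proto TCP):
hdr = struct.pack("!BBHHHBBH4s4s",
                  0x45, 0, 20, 0, 0, 64, 6, 0,
                  socket.inet_aton("203.0.113.7"),    # source address
                  socket.inet_aton("198.51.100.9"))   # destination address
print(ipv4_source(hdr))  # 203.0.113.7
```

Over a working Tor circuit, that field on packets reaching the client holds the exit relay's address, which is why the author calls a layer-3 leak impossible.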

Nobody, least of all the author, disputes that Silk Road could have leaked its IP in HTTP headers, or via some other misconfiguration. But if they did, why didn't the FBI just say that? Why claim that they leaked it in the IP layer?

Two possibilities come to mind. The first is that the FBI got the IP from the application layer, as everyone believes is possible, and misattributed the leak to the IP layer due to terminology confusion. The second is that they got the IP from another source, attempted to deceptively misattribute the source, and accidentally picked a fake source the leak could not have come from. If the second option is the case, we're looking at an example of parallel construction.

Edit: Grammar. Gotta keep up appearances.

Note that they're talking about headers of the packets, not HTTP headers.

HTTP headers are in the packets, and I could see myself very very very easily writing "of the packets" in lieu of "in the packets". When proofreading, I'd be much more likely to catch the extra "3" in that same sentence, than to catch the s/in/of/ as introducing confusion.

My gut reaction to that is skepticism.

I've done basic switch configuration, looked at packets, had to troubleshoot MTU and MRU misconfiguration in commercial switches, etc.

Conceptually stuff at that layer is a long ways from the actual content.

It's like pointing to a guy walking his dog at the park after noticing a parked car with its lights on, and saying "there's the owner in the car" vs "there's the owner of the car".

It feels, to me, like those sorts of linguistic mistakes must be exceedingly rare?

Or maybe the person writing up the statement wasn't the person who generated the notes and they just had a transcription error. I'd buy that I guess.

> It's like pointing to a guy walking his dog at the park after noticing a parked car with its lights on, and saying "there's the owner in the car" vs "there's the owner of the car".

So here's a bit of nuance for me. I can parse "the headers of the packets" in one of two ways:

1) For each packet, the header of said packet (clearly referring to the IP layer header)

2) For each packet, the headers of said packet (ambiguous)

"The header of a packet" is relatively unambiguous, and akin to your analogy. I would grant this kind of mistake to be rare, although I'm not sure we'd agree on how rare. Certainly not impossibly rare - more common than those spurious 3s in my own writing, and yet we see one of those quite clearly.

"The headers of a packet" is more ambiguous. A packet has only one header. The header has options, so that could be what's being referred to. Or it could be more loosely referring to metadata, or headers of another protocol contained within the packet data.

I could see myself using this kind of loose terminology quite easily. I'm not a network engineer, I don't do switch configuration, I don't have to get down and dirty with IP layer issues with MTU and MRU misconfigurations, so it's quite possible I'm more prone to this kind of mistake than you are.

I'm just a programmer who's occasionally asked to dabble with layer 7 networking stuff, and mostly interact with this when I e.g. fire up Wireshark to debug why my Amazon S3 authentication isn't working, only to discover Unity's WWW class ignored the (HTTP) headers I wanted to send on iOS (explaining why it worked perfectly fine on Windows!)

You must not have used a packet capture tool like Wireshark, then. Try it: a packet capture of HTTP traffic will be parsed, and you can click through all of the headers of both the individual packets and the HTTP traffic. The difference inside the program is just one layer of clicking. Either way you are extracting the IP from headers presented to you in the packet capture.

Your gut reaction is to lean towards conspiracy rather than a simple typo/transposition in the mind of the FBI agent who wrote the report?

That's awful snarky, and misses the point entirely.

My gut reaction is that if I, as a civilian, were to ask an FBI agent for details about a case, I'm as likely to receive half-truths and lies by omission as anything at all though. So if you want to be pedantic I guess so.

I did point out that I'd believe it could just be a transcription error by someone unfamiliar with the subject matter though.

The point here is that this is nothing more than a simple writing mistake on the part of the FBI. Pretending like there's some conspiracy or parallel construction taking place here is just nonsense and the folks suggesting it are doing so out of irrational fear more so than any actual evidence.

Your gut reaction seems to place you in the camp of the conspiracy theorists. That should raise alarm bells for you.

> The point here is that this is nothing more than a simple writing mistake on the part of the FBI.

You say that like it's fact. But you're assuming.

I thought it was noteworthy that I can't recall ever seeing that specific error in the wild. Maybe I just missed it. Maybe it's a lot more common than I imagine. Maybe not. I don't know. Once it was brought to my attention I thought it was noteworthy though.

> Pretending

I'm not pretending anything. I gave an additional explanation I felt might be plausible. I'm sure there are others. I just doubt the plausibility of the one brought forward.

> irrational fear

Tomato tomato. There's no evidence it was a writing mistake, but you're presenting it as fact.

I guess it all boils down to this: Given domestic spying revelations I think you're extremely naive if you think it's irrational to think that the FBI might be telling less than the whole truth.

Your gut reaction puts you in the camp of people who would volunteer information to a LEO. Which I think should raise a lot more alarm bells than me shrugging my shoulders and suggesting I'd need actual evidence before taking the FBI at its word.

The very idea that you would is blissfully naive IMO. But good luck with that.

And if you have a point to make, a little less ad-hom and snark (aka Reddittude) would be appreciated. My original comment was a few wonderings-out-loud, including posing a couple of questions for discussion. Nothing in there was intended to be snarky. If you want to discuss further, it would be nice if you could approach the conversation with the same level of basic respect.

Sidestepping your bruised ego entirely, the fact that you haven't seen someone mistake an IP header for an HTTP header is a terrible standard with which to judge whether or not such a mistake is possible.

This isn't really about you, it's about showing the other folks reading this how nuts you are, and luckily you're doing that for me.

That's a weird argument (that I never made). Though I have to wonder about that. IPv4 headers and HTTP headers don't much look the same IME (feel free to hit up Wikipedia if you're unfamiliar).

As far as "how nuts you are" goes: you seem to be taking this way too seriously.

"My gut reaction is to be skeptical" somehow equates to, in your words:

  * Nuts
  * Conspiracy Theorist
  * Nonsense
  * Irrational
Expressing skepticism that the FBI would "tell the whole truth and nothing but", despite high-ranking intelligence officials outright lying to Congress, and the history of the FBI being a bit checkered... That, you think, should raise alarm bells. Skepticism.

If that's all it takes to be a conspiracy theorist in your book, well sign me up I guess.

The outright dismissal that it might just be possible you don't have all the facts (and I'm certainly not claiming I do)... That's an idea you dismiss outright. I have to wonder how you get there. Because if anything here seems irrational, that's what I'd put my finger on. But maybe that's just me.

It's entirely possible to leak addresses at layer 3, via ICMP. Redirects and Type 3 unreachables intentionally encode additional IP headers inside the data segment. Additionally, ICMP processing goes up to the control plane, not just a data/routing/forwarding plane. It's incredibly common to get back valid ICMP responses sourced from RFC1918/3330 space on the internet. The router (or whatever) responds with the "private" management address as the source. I would not be surprised in the least to have a box leak "private" or unintentional data via ICMP.
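A sketch of the first point: a Type 3 unreachable carries the original IPv4 header in its payload, so any addresses in that embedded header ride back to whoever probes. This Python snippet builds and parses such a message using made-up RFC1918/documentation addresses:

```python
import socket
import struct

def embedded_ipv4_source(icmp: bytes) -> str:
    # ICMP Destination Unreachable layout: type (3), code, checksum,
    # 4 unused bytes, then the ORIGINAL IPv4 header plus 8 data bytes.
    assert icmp[0] == 3, "not a Destination Unreachable message"
    inner = icmp[8:]                       # the embedded IPv4 header
    return socket.inet_ntoa(inner[12:16])  # its source address field

# Build a fake embedded IPv4 header whose source is an RFC1918 address:
inner = struct.pack("!BBHHHBBH4s4s",
                    0x45, 0, 28, 0, 0, 64, 1, 0,
                    socket.inet_aton("10.0.0.5"),       # "private" source
                    socket.inet_aton("198.51.100.9"))
msg = struct.pack("!BBHI", 3, 1, 0, 0) + inner
print(embedded_ipv4_source(msg))  # 10.0.0.5
```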

So we have a pretty good guess of how they did it, and while their affidavit doesn't give a lot of important details, ultimately they were just interacting with a web service in the way it is supposed to be interacted with - by sending it http requests - and the buggy site revealed itself to them because the guy who wrote it wasn't a very good web developer.

I can understand why they wouldn't want to give more details than they have to; it makes the prosecution's work harder. But I and many others have been arguing for years that there's nothing illegal about sending messages to a server and looking at the responses, if you aren't trying to damage anything. I can't see any reason why it would be less legal for the FBI to do it.

"...they were just interacting with a web service in the way it is supposed to be interacted with - by sending it http requests"

That line of reasoning sounds a lot like the Andrew 'Weev' Auernheimer case [0], where he gathered AT&T customers' email addresses by interacting with a server simply by 'sending it HTTP requests.' The FBI made its position clear on that case, prosecuting Weev for "conspiracy to gain unauthorized access to computers" and ultimately getting the guy sentenced to three years in prison.

The overarching circumstances are clearly different but undeniably parallel. It seems curious to me that the FBI could use the same sort of apparently 'criminal' tactics (by their own precedent) as legal grounds in their case against DPR.

[0] http://www.huffingtonpost.com/2013/03/18/andrew-weev-auernhe...

The slight difference is that the FBI is investigating the source of the Silk Road (not illegal), vs. trying to obtain private information of folks from AT&T (illegal).

There is no difference if they did not have a warrant. The FBI has the ability to break into your house, but if they do that without having a warrant they can't use anything they found against you.

I wouldn't call this breaking in, more like surveillance.

IMO a better analogy for "just sending packets" would be entering a store and looking around, or sending a package through the mail.

Why? Because it could EITHER be totally-legal or blatantly-illegal, depending on the time, place, manner, and other details.

Maybe they entered during business-hours, or maybe they broke in at 3AM. Maybe the package was a gift, maybe it was an explosive.

And with some very obvious exceptions, warrants are required to conduct surveillance that violates certain privacy and property rights.

The thing is I don't believe the court will find this to be violating any laws.

I think you're probably right, but the argument up-thread was that the justice system is hypocritical regarding the permissibility of actions taken by the government compared to individual people.

Yeah, I think the key here is that Silk Road's IP address is public information, especially since they were able to connect to it directly, whereas AT&T users' private information is private.

Are you also surprised that the FBI can shoot people dead, while most people would get prosecuted for that?

"Are you also surprised that the FBI can shoot people dead, while most people would get prosecuted for that?"

Hopefully the FBI won't be shooting people dead as a means of preparing a legal case - that would be a pretty frightening 'ends justify the means' mindset. The crux of the matter is due process [0].

Granted, I'm not a lawyer, nor am I well versed in legal processes. From a purely lay perspective, I'm just seeing the FBI call tactic A a crime right before employing the same tactic A to indict someone else, and it smells like hypocrisy to me (honestly, more likely mistreatment of Weev than of DPR).

[0] http://en.wikipedia.org/wiki/Due_process [edit] add quote from parent comment

> Hopefully the FBI won't be shooting people dead as a means of preparing a legal case

It's certainly a novel approach to jury selection. Probably would be popular amongst a few State Bars.

That would certainly be a very dire voire.

Congratulations! You have been nominated for worst pun on HN 2014. An awards dinner in Anchorage is scheduled for December and Jon Stewart will be compering!


We agree, but one reason we'd want more details about the FBI going to greater lengths to trigger the bug is that when the court confirmed that was legal for the FBI to do, it could add precedent that it's legal for others to go to those greater lengths too (legal precedent, or just general consensus, which affects judges' decisions even when it's not a legal precedent).

Whatever it is that the FBI did, the agency should be held to exactly the same standard as a private individual interacting with a company's online presence. Have private individuals ever been prosecuted for exploiting security loopholes to extract data from a public site? If so, then government agencies should be held to the same standard, and any data obtained in that way should be deemed inadmissible as evidence.

I do not feel that it is a good idea to further encourage or embed a precedent in this direction. We've had too many cases going in that direction already. I would far rather see cases establish a precedent that, if no harm is otherwise caused, then the act of confirming a security flaw is not unlawful in its own right.

My point is that, for the purposes of justice, the exact same standards should apply to government agents and private individuals in the matter of admissibility of evidence. Meaning that if they want to relax the standard for the government, then security analysts should also benefit.

All of these analyses of how they might have located the server are great. I'm pleased to see folks trying to prove or disprove that the FBI is not trying some parallel construction here.

But what worries me is that all of this will have to be eventually dumbed down to an explanation that a jury of average joes can understand. The increasingly technical complexity of evidence in some of these cases worries me that our system doesn't have an adequate way to deal with it.

They probably won't argue this in front of a jury since it's a pre-trial motion to challenge the evidence the FBI collected. No idea how a judge will understand this; I guess they bring in experts to testify as to whether the FBI's claim is possible or not.

I share that concern in general (and in particular with regard to software patent cases, which often cover highly technical material and where the jury has no context about the novelty of the patented inventions), but I think this case is pretty much a slam dunk however you approach it. Regardless of how they obtained the IP, they got to the box to find it running Silk Road code, they got to the box's owner to find him holding Silk Road bitcoins, etc.

This isn't a case that hinges on a technical nuance. It will be practically impossible for DPR to convince a jury that there is any reasonable doubt as to his guilt.

If the evidence was obtained using illegal methods, all that evidence can't be presented at trial and the jurors can't take that information into account when deciding guilty or not. There are many people who are acquitted for things they obviously did, but for which there is no formal legal way of proving it.

If the evidence was obtained using illegal methods, all that evidence can't be presented

Isn't that for the judge to decide? Judge decides what can be presented to the jury, and jury renders the verdict, right?

Then the argument doesn't need to be dumbed down for the jury. They won't see it at all.

Although the argument will still probably have to be dumbed down for the judge.

Nope. A judge cannot decide to admit illegally obtained evidence, no matter what it is.

OK but he gets to decide if it was illegally obtained.

This is decided in pretrial hearings. There have been entire cases based on the admissibility of evidence. If the judge rules it is or is not admissible, there is some manner of recourse or escalation for the usurped party. IANAL YMMV BBQ

Except that if they obtained the IP illegally, and the defense can prove it, all the evidence obtained based on that is thrown out. Which is why they're pursuing it so vigorously: as you've noted, it's the code they found on the box and the bitcoins they found by finding the box's owner that pin him so severely.

Wouldn't the only evidence obtained through that hack be the actual IP and related data?

I mean, the actual box was presumably seized after a proper warrant for searching a specific physical location. The question is would the fact that his stuff was searched be considered "fruit of the poisoned tree" since the servers were identified through that IP or as "inevitable discovery" since if he's a suspect, it's standard procedure to search his stuff anyways. But IANAL so I'm not sure how it works in USA.

> But what worries me is that all of this will have to be eventually dumbed down to an explanation that a jury of average joes can understand.

Not necessarily. They could give up the right to a jury trial and go for a bench/judge trial. That's often done in complex or technical cases (the reasoning being that, despite the pro-state biases, better to appeal to an experienced and known-intelligent person like a judge than to risk the prejudices of the common man).

<i>The increasingly technical complexity of evidence in some of these cases worries me that our system doesn't have an adequate way to deal with it.</i>

Indeed. Then, "any sufficiently advanced technology is indistinguishable from magic", and you have some ingredients for a witch-hunt.

Honestly I have no worries about this or any other "technical" issue like financial fraud cases. If a cop can learn enough to catch someone then it is possible to explain that to a nurse or an electrician.

Jurors don't switch off because they are learning how to move money between banks or break into servers - they switch off because it takes three months to present a case.

> If a cop can learn enough to catch someone then it is possible to explain that to a nurse or an electrician.

You think a cop did this? No... the FBI has technical personnel. Just like any corporation, they have different departments to fulfill different needs.

Not quite what I meant - no I don't think cops were writing SQLi but they learnt enough to laugh at Little Bobby Tables, just as they can learn enough to follow the money through 17 bank accounts.

This is the equivalent of asking whether the FBI is allowed to search my trash if the trash is in a trashcan on my property, or on the sidewalk, being pulled by a street cleaner, or knocked over by a fox. We can make it as complicated as we like, but the law needs to draw some lines somewhere and stick to them.

Is the constitutional issue here that exploiting a security hole (SQL injection, remote execution to dump the $_SERVER variable, what have you) constitutes an illegal search?

If so, is there any guidance from case law about the difference between some sort of legal poking and prodding vs illegal hacking?

I wonder if this could be considered a case of exigent circumstances? [0] The exigent circumstances exception can be invoked "to forestall the imminent escape of a suspect, or destruction of evidence". Arguably, that's what this was -- Silk Road had a security flaw that could be fixed at any moment, and the FBI may have felt they needed to act quickly to obtain evidence before the security flaw was fixed.

[0] https://en.wikipedia.org/wiki/Exigent_circumstance_in_United...

Well, they were also committing criminal activity in "plain view" so to speak.

In general, law enforcement must have a warrant in order to do something that would otherwise break the law. For example, a cop entering private property without a warrant or probable cause is nothing more than a trespasser. These agents likely violated the CFAA by doing what they did. Because they didn't have a warrant, IMO it probably constitutes an illegal search, but the courts may well have a very different opinion. Regardless of which side loses this argument, this will likely find its way to the appellate courts and possibly the Supreme Court, as it seems to be an unanswered question of great importance.

From the CFAA:

(f) This section does not prohibit any lawfully authorized investigative, protective, or intelligence activity of a law enforcement agency of the United States, a State, or a political subdivision of a State, or of an intelligence agency of the United States.

I'm not sure what "lawfully authorized" means there, but my current belief is that one form of it is permission from the Attorney General.

But I wonder if computer hacking (which this likely would fall under) can be "lawfully authorized" outside the confines of a search warrant. The Attorney General could order the FBI to break into the computers of a hedge fund suspected of insider trading, but that clearly wouldn't be lawful unless authorized via search warrant. This is very similar.

Well, I don't think they broke into the computer. But I guess my point was more that the CFAA doesn't have any answers, you have to figure out the scope of the authority possessed by the Department of Justice.

+1. I wasn't aware this clause existed in the CFAA.

If they're simply tweaking the request vars to access unauthorised information, how does this compare with Aaron Swartz's crime?

But if it's a server not in the US? Does it get the same legal protections?

In the past simply typing in random URLs has been deemed illegally circumventing a site's security. Like most things it's fine when the government does it but you'll get jailtime if you do it to one of their friends.

it can probably set a precedent that will free all scapegoat script kiddies currently in outsourced prisons.

I think it's more likely that by "header" the FBI meant an HTTP response header, not an IPv4 header.

Legally I don't know if we should allow "fuzzing" without a warrant. They've put people in jail for doing less.

Weev was put in prison for wayyyy less than what the FBI did.

I don't think Weev should have been tried criminally at all, but I kind of disagree with this purely from a technical standpoint.

The FBI discovered a flaw that let them see the IP address of a web server.

Weev discovered a flaw that let him see private information of AT&T customers.

Customer information is generally considered much more sensitive than an IP address.

Privacy is relative. In Weev's case, AT&T considered the information of its users to be sensitive, private information, while, in SR's case, the IP address is very sensitive, private information, considering that Tor was used to hide it.

And the judge who released him from prison expressed skepticism about that.

Regardless, two wrongs...

Yes, but that's not why he was released. Those charges are still untested, he was let off the hook due to the venue.

I'm suspicious of anyone who seems to have trouble understanding the FBI's claims. It's clear to me that what the FBI is claiming is that some part of the CAPTCHA was linked to the server directly, rather than its Tor address.

From the article:

Even in the hypothetical case where – for some unrealistic reason – the Silk Road hidden site was including an image on an external server by referencing its IP address or hostname, the agents would still observe this traffic as having come from Tor. There is no magic way that the traffic from a real IP embedded within the HTML of a hidden service would find its way directly to a client without passing over the Tor network and through Tor nodes. Were this the case, it would be a huge vulnerability in Tor, as it would allow the administrator of a hidden site to uncover visitors by including an element that is served directly to the client over clearnet (thankfully it isn’t and this doesn’t work – try it).

I don't think they are presenting it accurately. I think what they are indicating is it was something like this:

    Login page
I believe $code_to_embed_captcha was returning something along the lines of http://REAL_SERVER_IP/captcha.jpg instead of http://ONION_ADDRESS/captcha.jpg

This wouldn't allow you to identify users; the request for captcha.jpg is still routed through Tor. However, it does reveal the true IP of the server.
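If that guess is right, the leak would have been visible in the page source itself. A sketch of the kind of check that would catch it (the HTML and the address are invented for illustration; 203.0.113.7 is a documentation address):

```python
import re

# Invented page body: the captcha referenced by raw IP instead of the
# .onion hostname.
html = '<form><img src="http://203.0.113.7/captcha.jpg"></form>'

# Flag any src/href host that is not a .onion address:
hosts = re.findall(r'(?:src|href)="https?://([^/"]+)', html)
leaks = [h for h in hosts if not h.endswith(".onion")]
print(leaks)  # ['203.0.113.7']
```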

This is my guess as well. That or the captcha had a query parameter like ?nonce=1234&redirect_url=$HOSTNAME/login

Yeah, except that as he said the Silk Road site was pretty heavily scrutinized and if it was that simple other people would have spotted it.

We just barely have some solid ground protecting phones you have on-person from unwarranted search, and, AFAIK, there are no real laws protecting your rights when you have data residing on third-party servers in foreign countries.

I'm definitely not a lawyer, but the server had an unknown provenance and was physically located in Iceland, and that probably makes for some fuzzy ground at best as to whether anything the FBI did to gain some sort of access to the physical server, even if it was made possible by parallel construction, would even be illegal.

Look, this is a great chance to see if anything is amiss. We can simply directly test our hypotheses:

Start up the Silk Road server with a Tor circuit in front of it (isolated from the main network). Then have the FBI demonstrate how to get the server to cough up its IP. It's easy to prove who's telling the truth here.

not really, it could be parallel construction: they found the flaw after having seized the server.

Well, I can't say much about the reality of the FBI approach; y'all seem to have that better covered.

Instead, I'm led to ask at what point will people stop using "myself" as a formal version of "me"? It isn't, it actually has a completely different usage.

In this case: "the SR Server was located by myself and another member of the CY-2". Here 'by myself' means 'on my own', not 'by me', so it directly confuses the meaning of this sentence. To me the obvious meaning is that they both work independently and found the same thing.

About the only other time you can use '-self' is in the reflexive, as in "I did this to myself", "you do this to yourself". I will never do something to yourself, and you will never do something to myself.

So, I know this is completely out of place, but if I can get one person to not do this in the future, I will make my life a bit happier. Also if I can persuade that small group of nameless people (you know who you are) that "could care less" actually means the opposite of what they think it does; and that "alot" is not a word.

I'm all for the English language evolving and expanding, but let's weed out the bad ideas which only increase confusion and bring nothing good. And seriously, "alot"? "Lot" is a three letter word, it's not hard to spell. As for "a" ...

I understand your confusion, but this is a common usage case for the nominative first person of "me" in formal speech. This is one of a gajillion special case word choice situations. This particular sentence structure (Direct object, past perfect verb, subject) suggests "myself". However, the writer should have put in the other person's name before "myself", as is common practice, leading to confusion.

Consider: "the SR Server was located by another member of the CY-2 and myself".

I'm not claiming that "me" wouldn't parse, but that using the current rules properly would not cause confusion in the first place. Someone is trying to sound intelligent, while putting their name first so as to emphasize their contribution.

So, I'm definitely not confused :)

I do appreciate it's very common in what I'd call pseudo-formal speech, mostly used when trying to impress someone one perceives to be of higher social station. Estate agents do it a lot when talking to clients. And so it does have the "well, lots of people do it" support.

However, if you go back to the roots (Latin), "myself" is specifically the reflexive form of the first person. When the first person is the object or the dative, it is "me/to me". So to have correct agreement with the verb, your example should end in "and me".

While your example is easy enough to parse, it doesn't actually follow the rules - though maybe the common practice. And if you rearrange the two object nouns (i.e. back to the original) the sentence loses clarity. Using "me" never loses clarity, and the ordering of object nouns shouldn't matter beyond providing emphasis.

Basically, that example is ok in Perl, but not in Java ;)

And ultimately, using the reflexive form incorrectly never makes a sentence clearer, and it's longer to write and say.

Anyway, this is so far off the OP's topic, which is actually much more important & interesting, I do apologise to him/her.

The case you cite, "the SR server was located by myself and another member..." is not ambiguous at all. For "by myself" to take on the "on my own" meaning, the sentence would need "I" as its subject:

"I located the SR server by myself.", not "The SR server was located by myself", which I don't think anyone, even those using this objectionable form, would produce.

I understand and sympathize with your distaste for the expression: it goes against previous rules of formal written English, and it signals the style adopted by people speaking for organizations attempting to convey power and control, which goes against the anti-authoritarian ethos of HN. But these sorts of judgments are usually motivated by deeply-held feelings and then rationalized as pleas for clarity and precision in language.

If they obtained a warrant, is fuzzing/hacking Silk Road the equivalent of kicking down a door during a raid?

They didn't obtain a warrant.

Probably... though I have an easier time imagining someone picking a lock or, in the case of fuzzing, using a bump key.

If I ever set up a hidden service I'll make sure it lives on a box that has no public IP whatsoever. Preferably a physical box.

Unfortunately, the technology you need to do that is not yet finished: https://trac.torproject.org/projects/tor/ticket/9498

ETA: I should have said "one possible technology"; there may be others, but I am pretty sure that an IP-less Tor node requires that you play the Tor bridge game and stream the Tor protocol over a non-IP link. I have had a prototype of this design running, but have yet to get it to a point I consider robust.

praptak didn't say IP-less, but no public IP. If the server has an IP in the 192.168.1.X range, that doesn't tell an attacker much, supposedly.

This looks like it's unrelated.


All of the stuff that I expect to leak or get hacked lives on box A in a private network. The second host in this network would be the Tor gateway B, hardened, stripped down of any unnecessary stuff. Its only function would be to route traffic to/from A into Tor. Obviously it would have to be connected to the public internet but at least it would have much less software that can be hacked.

As an additional security measure, B could live behind a firewall that only lets through connections to Tor nodes. (Yup, it might be a PITA to set up.)

Not that big a PITA, I don't think. People already use Tor exit node lists to block activity from Tor exit nodes, you're kind of just looking for the inverse.

Shouldn't be hard to create a script to generate e.g. iptables rules that permit connections to Tor public relays.
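Something along these lines, perhaps. A hedged sketch that just emits iptables rule strings from a relay list (in practice you'd pull the list from the Tor consensus; the addresses below are RFC 5737 placeholders, and the rules are printed rather than applied):

```python
# Sketch: generate a default-deny outbound policy for the Tor gateway,
# whitelisting only known Tor relay address:port pairs. The relay list
# here is hypothetical placeholder data, not real relays.

relays = [
    ("198.51.100.10", 9001),
    ("203.0.113.42", 443),
]

def iptables_rules(relays):
    # Default-deny everything outbound, then punch holes per relay.
    rules = ["iptables -P OUTPUT DROP"]
    for addr, port in relays:
        rules.append(
            f"iptables -A OUTPUT -d {addr} -p tcp --dport {port} -j ACCEPT"
        )
    return rules

for rule in iptables_rules(relays):
    print(rule)
```

You'd want to regenerate this regularly as the consensus changes, which is part of why the next comment is right that the benefit is marginal: anyone can get a relay into that list.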

The security benefit is a bit marginal, though. The public relay list is basically world-writable.

Entire operating systems like Whonix already exist to deliver that kind of security.

We're really talking about two different kinds of "security" here. There's blocking access to anything but a Tor gateway (including in the case of a compromise at the kernel level), and blocking access by the Tor gateway to anything but Tor relays.

But that raises another point I'd neglected. On the Tor gateway, you can block access to the public internet by anything other than the Tor process.

The security benefit against maliciousness is iffy, since the most likely route of compromise for a Tor gateway that's been set up with even a little care is the Tor process, but it certainly reduces the chance of accidents.

How do you actually target the Tor gateway machine though? You can attack a Hidden Service, because there must be a webserver that responds to requests. The Tor gateway just passes traffic from the network to your client device.

There's a lot of work that goes on both before and during that "just".

Bugs can exist even in code most of us would consider trivial. The Tor daemon is definitely not trivial, and the project itself points out that it needs work![1]

Here[2] is "just" container.c from the Tor project. It's one very small part of the project that does fairly simple things any professional developer ought not have much trouble understanding.

It's almost 1600 lines. Are you telling me you can guarantee there are no bugs of potential security significance in that file?

They also want to make Tor even more multithreaded than it already is[3]. Threaded code never has bugs, right?

Any software is a potential attack vector. Tor is no different.

[1] https://www.torproject.org/getinvolved/volunteer.html.en#tor...

[2] https://gitweb.torproject.org/tor.git/blob/HEAD:/src/common/...

[3] https://www.torproject.org/getinvolved/volunteer.html.en#use...

To be fair: 1600 lines is relatively small.

You could probably prove it correct against anything but business-level errors.

How do you actually target the Tor gateway machine though?

Not the OP, and I don't know much about Tor's protocol, but at a guess one might start with deliberately malformed layers of the onion, so you route through a Tor entry node, the outer onion layer is removed, then the packet is forwarded to a Tor relay.

The Tor relay then unwraps the next layer of the onion, but the data within that layer is deliberately malformed to trigger a bug in the Tor code running on that relay.

The simplest case is trivial. You place the hidden service box (HSB) and the Tor gateway (TG) on private IP space, and deny the HSB all connectivity to anything except the TG.

Ideally you actually place the HSB and TG in different network segments so packets from the HSB must pass through a router to get to the TG and vice-versa, ensuring a compromised HSB has no possibility of sniffing any incriminating broadcast traffic to/from the TG.

Then you shill your service with a pseudonym connected to your real name, so they find you anyway.

While I agree that the FBI's explanation as they gave it is impossible, I see 2 possible explanations for how their explanation would make sense. I don't know why they would gloss over these details, but:

1. They could have meant simply viewing all the HTTP response headers with something like Chrome's dev tools, Firebug, or possibly a proxy like Burp Suite.

2. They may have set up Tor to essentially use no hops and use only their own host as an exit node. That way they would be able to directly observe traffic from a clearweb IP address in a PCAP. Or they could set just a single hop and specify another host they control as an exit node. This seems way overboard just to find an exposure like this, but it's possible that the FBI or NSA have such test environments to probe for other Tor-related vulnerabilities and misconfigurations.

The thing that struck me as the most odd about this article was the assertion that keeping a hidden service securely hidden must be easy, because the Tor website's wiki article on it is so short. I think it's clear that there isn't that much proven experience out there in keeping a hidden service secure against a determined and well-funded adversary, and what experience there is probably hasn't made it to the Tor wiki page for various reasons. Just the last HN article on this generated dozens of comments with various best practices for securing such a service that clearly weren't followed by Silk Road, and we're mostly amateurs at this.

The guy was not a good developer and they caught him because of it. It really does appear to be that simple. No parallel construction needed when you're dumping server vars to the public.

Slightly off topic: could you please increase the font size on your site? It's actually below 12pt. The blockquotes are more readable than the body of the post.

I only noticed today, after somebody sent me a screenshot, that I'd been viewing my own website zoomed in at two levels. I updated the stylesheet and bumped the size up earlier, but it might not come through without a hard refresh because of crazy caching™.

It is 17px for me, but most browsers support zooming into a page, which will make text easier to read on almost any site (though it could make it more difficult if accidentally zoomed out).

Every browser has supported this since forever

Why is using var_dump debugging bad practice? Or is he insinuating that using var_dump while testing on a live site is bad practice?

With XDebug and proper breakpoints (or HHVM's debugger) I don't see any need to do var_dump debugging these days. Far more powerful to have an XDebug client like Codebug drop you into a REPL at your breakpoint so you can explore what's gone wrong.

I think he implies that both are bad practice, but for the record I don't think there's anything wrong with var_dump debugging. Sure, a real interactive debugger is nicer, but it can be a pain to set up with PHP (I know, having set these up several times). Print-based debugging is fine as long as you don't expose sensitive information to end users.
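The "don't expose it" part is the whole trick: gate the dump on an explicit flag so diagnostics can never reach end users by default. A minimal Python sketch of that rule (the APP_DEBUG flag name is made up for illustration):

```python
# Sketch: print-style debugging that is safe by default. Diagnostics are
# emitted only when an explicit environment flag is set, so nothing like
# PHP's $_SERVER dump ever lands in a production response.
import os
import pprint

def debug_dump(data):
    # Returns True if it actually dumped, False if suppressed.
    if os.environ.get("APP_DEBUG") == "1":
        pprint.pprint(data)
        return True
    return False

# With APP_DEBUG unset (the production default), this is a silent no-op.
debug_dump({"REMOTE_ADDR": "198.51.100.7"})
```

The failure mode in the Silk Road case was exactly the inverse of this: debug output with no gate at all, dumped straight into the HTTP response.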

Difficult to set up? I disagree: Xdebug with connect-back is amazingly easy to set up in a Linux VM! On Windows, yes, it's hard, but on Linux and OS X it's very simple nowadays. I should write a blog post on how to do it (having done it dozens of times over the past 6 months).

Yeah, I'm pretty certain that he means using var_dump on a live server (especially of $_SERVER) is incredibly dumb.

It's definitely bad practice on a live server, and frankly it's a pretty painful way to debug PHP, at least without Xdebug.

He is saying using `var_dump` on a live server is a bad practice.

I'm also pretty sure that was print_r not var_dump.

1. Get warrant to track someone ID'ed as the recipient of a USPS drug shipment.

2. Exploit a client Tor misconfig.

3. Repeat 1-2 until you have a net of Tor users so wide that the only node left is the service.

Nothing will keep you from getting hacked by a determined government, but using Apache/PHP isn't exactly trying to make their life difficult. But at least they didn't use Ruby on Rails. :P

So you are saying Rails might make intruders' lives less difficult? Of course, the _implementation_ is what would be more or less secure, but I'm a little curious what you meant.
