Most security research does not in fact work this way. Consider, for instance, virtually any memory corruption vulnerability; while it was once straightforward (in the 90s) to work out an exploit "blind", today, researchers virtually always have their targets "up on blocks", connected to specialized debugging tools.
I am a little surprised that we are only now hearing about high-profile researchers getting dinged for actively scanning for actual vulnerabilities in other people's deployed systems. It has pretty much always been unlawful to do that.†
(These are descriptive comments, not normative ones. My take on unauthorized testing of systems in production is complicated, but does not mirror that of the CFAA).
It's for this reason that you should be especially appreciative of firms, like Google and Facebook, that post public bug bounties and research pages --- those firms are essentially granting permission for anonymous researchers to test their systems. They don't have to do that. Without those notices, they have the force of law available to prevent people from conducting those tests.
(Background, for what it's worth: full time vulnerability researcher, started in '94.)
† Caveat: it does depend on the vulnerability you're testing for. There are a number of flaws you could test for that would be very difficult to make a case out of. But testing deployed systems without authorization is always risky.
However, one thing has always crossed my mind: since the legal definition of authorization is still very fuzzy, what stops a third party from going after a researcher, even though the company who owns the server which was technically hacked has no interest in filing any complaint against the researcher?
To clarify my question: the recent Brazilian law regarding computer hacking establishes that only the owner of the hacked computer can file a complaint against the attacker, and legal proceedings can commence only after such a complaint has been filed. Does it work the same way in the U.S.? My understanding of American law is very weak, but I know that, for some crimes, the victim does not have a say, i.e., the state will prosecute regardless of the victim's will.
After a US law enforcement agency has been notified of a complaint by the victim of a crime, they forward it to a prosecutor. At this point the victim can no longer drop the charges; the only person who can drop the case is the prosecuting attorney. Prosecutors occasionally do drop cases that no longer make sense to pursue. But prosecutors don't get 'cybercrime' cases very often, and those cases often make headlines, especially these days, so I doubt many would voluntarily pass up that opportunity for their résumés and work the usual murder or drug trials instead.
Expensive attorneys and ambitious prosecutors, each trying to spin half-truths to more or less ignorant judges and juries. It makes me wonder if some of these servants of justice are forgetting that, their specific roles aside, their common goal is to reach an honest conclusion about whether someone actually did something wrong. That implies everyone making an effort to understand in what ways the related actions are harmful, and how that harm balances against fundamental freedoms.
That sounds a lot like ignorance.
The facts of the case: someone broke into a computer system without permission.
The inability to interpret those facts in the light of what a security researcher does isn't a result of different values, but a lack of knowledge of the context. People who don't know how computers or the internet work are open to being told whatever story the prosecution decides to spin.
Edit: I think the ignorance is actually made clear by the example in the GP. Imagine some good Samaritan is walking past a jewelry store after closing time. They notice that the front door is ajar, and upon testing they find that the alarm doesn't go off when they enter the store. So they call the owners and wait in the store until the owner can get there and make sure the store is secure.
Do you think it's likely that this person would be prosecuted? Or, if they were, that the prosecutors and judge would throw the book at them to "make an example"? People understand that scenario and are likely to treat it with leniency in a way that they don't understand the equivalent scenario in computing.
P.S., Always a pleasure to be slapped down by tptacek :)
In increasing levels of seriousness:
1. The person is walking by the store and, in the course of their everyday activity, sees that the door is ajar; they then contact the owner. This seems fine to me.
2. The person is walking by the store, sees the door ajar, and then, altering their normal activities, decides to actively test the door to see if they can break into the store; they can, and then contact the owner. This seems dodgy to me.
3. The person chooses to visit each jewellery store in town to see if any have a door ajar. This definitely seems inappropriate.
The reason I come down opposed to the person in the second example is two-fold.
Firstly, ignoring intent, where do you draw the line on an acceptable level of 'break the security' activity?
- Thinking that the door is ajar and pushing on it?
- Seeing that the lock is vulnerable and picking it?
- Finding a ground floor window and breaking through it with a brick?
The resolution I choose is that if you have gone out of your way to subvert the security of my stuff without my consent then you have crossed the line. Gray is black.
Second, I don't care about your intent. Every security system will break at some point, and so I view the existence of doors and locks as mainly being about roughly outlining the boundaries that I expect to be respected. If I want to improve my security then I'll hire someone to advise me on how to do it. If I come home tonight to find a stranger who has broken into my house in order to prove that it's possible then (1) I already know, and (2) they have just caused the harm which they are nominally trying to protect me against.
But most likely a security researcher will fire off some multiple of a thousand probes to see if the door is open. Collateral damage is likely. This is not what is happening in your jewelry store door case.
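To make the "multiple of a thousand probes" concrete: a mass scan is, at bottom, one TCP connection attempt repeated across many hosts and ports. A minimal sketch in Python (the function names are my own; running a loop like this against hosts you don't control is exactly the conduct under discussion):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """One 'probe': attempt a TCP handshake and report whether it succeeded."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

def scan(hosts, ports, timeout: float = 0.5):
    """A 'scan' is nothing more than this probe repeated across many targets."""
    return [(h, p) for h in hosts for p in ports if is_port_open(h, p, timeout)]
```

Each probe is a full connection the target's servers must service, which is why collateral damage on fragile production systems is a real concern.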
As for the every-other-month theft of giant numbers of credit cards or passwords: these things can be prevented by the folks in charge paying attention to the alarms going off in the back.
What if no private data were actually accessed, say because the researcher only compromised his own account?
Or the case where he hacked a device that he bought, violating the manufacturer's acceptable use policy?
Or the case where someone automated the retrieval of data that he already had legal access to, like, if I recall correctly, Aaron Swartz?
All these examples are unique and would fail any physical-world analogies, so they should be examined and judged differently, by people who do give a shit, are willing to make the effort to understand their unique aspects, and are actually able to. I'm not sure that's the case.
My general point is about how we found ourselves in a system where servants of justice, like prosecutors, appear to treat their job "just like any job" (at least in cases they might consider abstract: "hacking", with less clear and direct effects than "murder"), where they can put their careers first and ignore any consequences to others. Or where someone has to bear enormous defense costs to stand a chance, or be coerced into pleading guilty or abstaining from exercising what should be his right, out of fear of finding himself involved in such a situation.
The prevailing norm is that property rights are sacrosanct, and any invasion of those rights is considered suspicious and explanations about benevolent intent are disbelieved. There is no general right to "tinker" with other peoples' property without permission, for fun, for research, or for any other reason. We are not a society that requires security measures to be effective in order to serve as a signal to keep out. A velvet rope is as effective as a steel door for the purposes of signaling that access is not allowed.
This is not a matter of prosecutors putting their careers ahead of the spirit of the law. It's about hackers not understanding that we're a society that requires you to keep your hands to yourself.
NB: I have a beef with the CFAA, but it's not with the spirit of the law, but rather the fact that criminal penalties under the CFAA are totally out of line with those in analogous physical scenarios. The standards for trespass on digital networks shouldn't be higher than the standards for trespass in the physical world. But juries can't do anything about this problem, and judges really can't either. It's Congress's problem for putting the felony escalation provision in there.
"Lanier said that after finding severe vulnerabilities in an unnamed “embedded device marketed towards children” and reporting them to the manufacturer, he received calls from lawyers threatening him with action.
As is often the case with CFAA things when they go to court, the lawyers and even sometimes the technical people or business people don't understand what it is you actually did. There were claims that we were 'hacking into their systems'.
The threat of a CFAA prosecution forced Lanier and his team to walk away from the research."
The CFAA is vague and over-broad, you won't get any disagreement from me on that. Applying it in a case involving a device you bought and own is totally inconsistent with traditional norms of private property. But those are edge cases. The actual prosecutions people get up in arms about aren't edge cases. They pertain to conduct that clearly violates the norms of trespassing on private property, and hackers justify their actions by saying that those norms shouldn't apply to digital networks. Juries, unsurprisingly, don't buy that. So hackers and the broader tech community call them "ignorant."
For you, someone finding a vulnerability in software that provides a network service, hosted on a server he doesn't own, is clearly trespassing on private property, even if he only accesses his own account's data; but finding a vulnerability in software that comes bundled on a device he bought is not.
For Sony, let's say, both constitute violations of its property: it's Sony's software; Sony owns it and doesn't care whether the carrier is its server or the device it just sold you. In both cases it only gives you permission to use its software in a certain way, which excludes any sort of hacking.
Maybe the reason many draw the line at the medium is that it is easier to visually compare a computer network to physical property than a device you have bought (but which carries data you don't own)?
But is it the physical ownership of the medium that carries the data that matters, or the ownership of the actual data being accessed? If it's the medium, why, when the really important thing the owner cares to protect is, in almost all cases, the data?
Not trying to argue, just expressing some questions that I think are tricky and deserve more thought than they get. In any case, I think physical and digital property analogies can only take us so far, so I try to steer clear of them.
What we are talking about in this thread is the supposed criminalization of security research. If you're trying to get someone to take the other side of the argument that security research is needlessly legally risky, you're probably not going to find many takers. There is a world of difference, however, between being sued and being imprisoned.
That could be said for racist laws just as well (e.g., Jim Crow stuff).
Even if they don't have to "understand his values", they should be made to, and the law is bad in this regard.
Hence, I don't see the point in pointing out the status quo and what privileges they have in a neutral manner. It seems like apologism to me.
The fact that juries judge things from a certain perspective says nothing about whether they ought to do so or not.
Usually these cases go to special DA investigation units who invest huge sums of money to determine who was involved and to what extent (think full-blown computer forensic investigations to gather evidence). Even when the cases make absolutely no sense to pursue, they will persist for the sole outcome of recovering some of these costs. I know firsthand; it's borderline extortion.
While what you describe may be the sad reality, it makes zero sense. If a legit researcher, especially one who's being transparent about it, researches any domestic system, then that's got to be better than the Iranians, Russians, or Chinese doing it (which they do anyway).
But hey, what do we know anyway. There's probably some benefit that makes it preferable for a foreign party to uncover our vulnerabilities without our knowledge.
* Security testing is extremely disruptive to production systems, most especially if those systems haven't been hardened in any way. Security testers are not as a rule good at predicting how their tests can screw up a production system.
* No matter how much effort you put into a security program (Google and Facebook put a lot of effort into it; more than most people on HN can imagine), attackers still find stuff. So there's not a lot of intellectual coherence to the idea that open, no-holds-barred research applies a meaningful selective pressure.
* It can be very difficult to distinguish genuinely malicious attacks from "security research", and malicious attacks are already extraordinarily difficult to prosecute.
I'm not saying that the rules as they exist under the CFAA today make perfect sense.
When "researchers" then flip around and talk to the press and don't follow responsible disclosure, what you're dealing with really is a hacking attempt. You're walking up to the doors and windows of a business and jiggling them to see if they're open, and taking notes on what kind of locks they're using and how they could be bypassed, without any kind of approval from the business owner. Then you're turning around and damaging the business by talking to the press about it.
Back when I was more interested in computer security (roughly '94 just like tptacek), I knew that scanning systems that I didn't own without permission would get me in trouble. We seem to have devolved a bit in our collective maturity where we think we can just fly the flag of "security researcher" and that this gives us permission to initiate what look just like attacks on systems.
If you don't own a system and don't have permission for it then don't attack it, and don't put the government in the position of trying to discriminate between a foreign government launching attacks and a "security researcher" with pure motives... And don't be too shocked if the government and legal institutions have issues in distinguishing between those two cases and throw you in jail for 15+ years. The way to avoid that outcome is not to do it. Only attack and probe systems that you own or have permissions to attack and probe. Just because you're a "security researcher" who is egotistical enough to think you can save the internet from itself, that doesn't mean you're going to get treated differently from a foreign national with less pure motives. Stay away from shit that isn't yours (and the security of the entire internet is not your sole responsibility).
(Don't get me wrong; any prison time for good-faith vulnerability research, no matter how negligent or ill-advised the research is, seems like a travesty).
Justice isn't the machine language of a computer, with deterministic outcomes given its inputs. You're asking humans to determine your motives, which will necessarily be subjective. And I'm not willing to put my freedom at the risk of someone else's subjective determination. When "security researchers" do grey-hat hacking, they shouldn't be too shocked if they're arrested and charged with those kinds of crimes, because they're asking too much of the legal system.
And that doesn't mean it's 'right'; I'm totally against that kind of penalty. Even though I think it's wrong to test vulnerabilities and then turn around and go to the press, I see a huge difference, and I think the penalties should be closer to a slap on the wrist (a fine, and 30 days in jail / community service kind of penalty, not 15+ years in prison).
But I'm not going to put myself into the position of making a judge and the legal system make those kinds of distinctions. What is so important about being able to do that kind of grey hat hacking that you're willing to put your own freedom into that level of jeopardy?
That's scary right there. If you're deploying something you know has vulnerabilities you have bigger problems than losing sleep at 3am. Same for operating something you know is vulnerable. You (collective, not you, personally) totally deserve to get up at 3am. It's grossly irresponsible, because what you probably don't already know is how that harmless XSS vuln you know about is really a leaf in a 7-level deep threat tree that results in information disclosure. I can just imagine that such a cavalier attitude is how the Sony PSN network got owned.
My point stands. Attack from Iran or probe from a researcher (your points in your following paragraph noted and notwithstanding)?
"...If you don't own a system and don't have permission for it then don't attack it."
That's loud and clear, for sure.
Everything has vulnerabilities.
Does everything have known vulnerabilities that are not actively being worked on?
But, someone asks, what if the business is really really important? Then that's all the more reason to not mess with it.
All large companies have vulnerabilities. There's always work that needs to get done, which always gets triaged according to impact; then people who ideally should have 40-hour work weeks have to start patching code; then it needs to get QA tested to prove that rolling it out won't break everything else; and all that takes time.
And I have worked for companies that took security seriously and for companies that had laughable security practices. In either situation, having 'help' from external 'security researchers' was not useful. Where companies were run competently, it just means you cause people to scramble and push solutions before they're ready. Where companies were not run competently, it just causes people to scramble and does nothing to affect the underlying shittiness of the company. You are not going to be able to fix shitty companies. It's not your job to stop future Sony PSN networks from getting hacked; you can't do that, and you should stop thinking you can, and stop using that as justification for your own actions.
"...especially if those systems haven't been hardened..."
Well that's just it, isn't it? If the system hasn't been hardened then it wouldn't hold anything of interest and therefore wouldn't be targeted by either friendly researchers or malicious adversaries.
If a system holds value it should be appropriately secured. That must include dealing with attacks as part of business as usual.
As for meaningful selective pressure: well, then why bother with bug bounties? Even Microsoft, the only organisation at that level with a published SDL [edit: security development lifecycle], offers them now.
SDL ref. http://msdn.microsoft.com/en-us/library/windows/desktop/cc30...
I've had my rant. Will shut up now.
If we're analogizing, an exterminator seeing rat droppings in your restaurant and offering to solve your problem rather than letting the department of health deal with it, is a slightly more realistic example.
A more legit exterminator would agree to come past while the customers were not there.
The CFAA requires access without authorization or exceeding authorized access. Presumably you are an authorized user of your own systems.
It is possible that some vendors may try to use end-user license agreements to further restrict what actions can be taken with their software (even in cases where you've purchased it and installed it on your own system).
I believe (and would love to be corrected by a lawyer) that even those cases would be pursued civilly, and still not under the CFAA.
This is one of the reasons why, when providing penetration testing/application testing training, we always took great pains to drill into trainees' heads never to use any of those techniques on systems you do not own. No poking around on your bank's website, etc.
If you knowingly access a system that you do not have authorization for, the owner of the system might not care (or might not notice), but under the CFAA, they can file charges against you.
Reasonable people may disagree what constitutes "exceeding authorized access" (where reasonable people might be your attorney and a prosecutor).
I mean, once you've been sentenced under the CFAA, you might as well have a shootout with the police or kill some people; it makes no difference. Hell, the extra charges won't make much of a difference: you're still facing life.
Does that make sense to anybody?
What they do need, though, is an exception for researchers, and you can define a researcher as anybody who discloses the vulnerability to the owner of the vulnerable system before publishing it publicly. A security researcher would be required to publicly disclose the results of his research in order to be considered a researcher.
A regular hacker cannot claim to be a security researcher, since hackers never disclose the vulnerabilities they find to the owner of the system, even if they sometimes share them publicly with other hackers. It is not in their interest to let the owner of the vulnerable system know they have a problem.
Is this setting up a precedent?
> HD Moore, creator of the ethical hacking tool Metasploit and chief research officer of security consultancy Rapid7, told the Guardian he had been warned by US law enforcement last year over a scanning project called Critical.IO, which he started in 2012.
Brits might confuse "warning" with what's known in Britain as a "police caution", which is an extra-judicial criminal sanction, judged summarily by police, and is also referred to as a "formal warning". Such warnings become part of one's criminal record in the UK and affect things like employment, as they are in effect a criminal conviction (as I understand it, although the UK describes them as "not a criminal conviction but an admission of guilt [after being accused by the police]", which I view as an irrelevant distinction). There is no such system under federal law in the United States. A UK reader might rationally assume "police cautions" are just called "police warnings" or "US law enforcement warnings" in the US. Police cautions are not something most people in the US know about, and most would probably be outraged to learn of their existence. (In effect, the police say you admitted to a crime, so they go around telling everyone who asks that you're a criminal. Such as potential employers and landlords.)
At least, to me, that's the implication of the statement.
> ...judged summarily by police...
> In order to safeguard the offender's interests, the
> following conditions must be met before a caution can be
> administered:
> * there must be evidence of guilt sufficient to give
>   a realistic prospect of conviction;
> * the offender must admit the offence;
> * the offender must understand the significance of a
>   caution and give informed consent to being cautioned.
I wonder if people are so busy rushing to do things online that they don't want to pay the cost of strong security, so they let themselves be vulnerable and need laws to protect them. As a few people have said, foreign government hackers aren't bound by such laws, and even they can't get into many sites.
If we stop seeing hackers as guilty people to blame, and think of them as an unavoidable natural presence on the internet, just like data corruption or power failures, then we won't need laws; instead we'll need safety standards and licenses for IT workers, just as we do for, say, gas plumbers.
Every day, spammers "hack" my web forum by solving the captcha. I don't want to find them and send them to prison. I want to build better defenses to prevent them doing it.
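In the spirit of "better defenses" rather than prosecution: the usual first defense for a forum is throttling abusive clients instead of identifying them. A minimal per-IP rate-limiter sketch (class name and limits are made up for illustration):

```python
import time
from collections import defaultdict, deque
from typing import Optional

class RateLimiter:
    """Allow at most `limit` actions per `window` seconds per client IP."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._hits = defaultdict(deque)  # ip -> timestamps of recent actions

    def allow(self, ip: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self._hits[ip]
        while q and now - q[0] > self.window:  # forget actions outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject this post
        q.append(now)
        return True
```

Calling `allow(ip)` before accepting each forum post quietly absorbs a spammer's volume without any need to find or punish anyone.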
Is it akin to going to a bank during normal business hours and using lawful powers of observation, i.e., implicitly authorized? Or is it akin to breaking into the bank after it's closed, or otherwise violating some implicit lack of authorization, e.g., going somewhere off-limits, such as trying to secretly enter the vault?
Because I think you'll recognize the inherent danger of allowing people to try willy-nilly to break into banks to "test that they are actually fulfilling their promise".
Over the last few weeks I've been wondering when the scale flips and general-purpose computing dies outright. Things that were once considered foregone conclusions about tech are turning out to be accidents of the fact that adoption starts with individuals. How long can tech empowering people continue to outrun the old-school powers using tech to empower themselves?
I hear that kind of talk a lot, usually about taxes and government programs. It seems incredibly depressing, for one thing. It's fundamentally saying that you can never win, just delay the inevitable loss.
Fortunately, it doesn't seem to be true, whether it's taxes or computers. Computers might be getting squeezed a bit now, but there have been far worse periods, followed by better. Go back in time to, I don't know, 1990. You want an OS for your PC? Sure, Windows or DOS? You want a wide-area network connection of some kind? We have a variety of choices for you, ranging from the local phone company to the local phone company, or even the local phone company.
Remember when you had to be careful never to tell the phone company that your second line was for dialup internet, because they'd charge you more if they found out you were going to use it with a modem? Remember when you had to worry that they'd figure it out anyway from your usage patterns, or that they'd just cut you off regardless because you were tying up a line for hours and hours every day?
I don't want to tell you not to fight. Certainly, there are plenty of problems right now, and it's well worth fighting. But we should realize that there are many ways that it can be and has been worse, and that the ratchet really does go both ways when people want it to.
Who would actually oppose fixing that? Is it purely a lack of understanding the issue on the part of legislators?
I actually do not have a problem with the CFAA's statutory prohibitions on unauthorized access. They seem eminently sensible to me. Don't mess with systems that don't belong to you.
I do think the CFAA has a grave and dangerous flaw: its sentencing makes absolutely no sense. I generally do not believe that computer crimes should have sentences that scale with the iterator in a "for()" loop. In the cases where sentences could reasonably scale with the magnitude of the attack, the meaningful scaling factor should (and I think typically does, in a sane reading of the law) come from some other crime charged along with the CFAA.
"Don't mess with systems that don't belong to you" worked much better in 1980 when typical computers cost a million dollars and were only expected to be used by the employees of the bank or government that owned them, because in that context you know you're authorized when you file a W2 and are issued a security badge.
Once you put systems on the internet for access by the general public it changes everything. "Mess with systems that don't belong to you" is practically the definition of The Cloud. The defining question is no longer who is authorized, because everybody is authorized, so the question becomes what everybody is authorized to do.
The problem is that nobody has any idea what that means in practice. All we can do is make some wild guesses -- maybe SQL injection against random servers of unsuspecting third parties is unauthorized access whereas typing "google.com" into a web browser without prior written permission from Google, Inc. is not. But what about changing your useragent string to Googlebot? What if that will bypass a paywall? What if that will bypass a paywall, but you're a web spider like the real Googlebot? What if you demonstrate a buffer overrun against the web host you use in order to prove their breach of a contract to keep the server patched? Can you charge a journalist for reading a company's internal documents when the company made its intranet server accessible to the internet without any authentication?
The answers to these questions depend primarily on which judge is deciding the case. Which is ridiculous, and the hallmark of a bad piece of legislation.
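To be concrete about how thin the "useragent string" line mentioned above is: the change in question is a single request header. A sketch using Python's stdlib (the Googlebot string is the one Google publishes for its crawler; whether a site treats sending it as "unauthorized" is exactly the open question):

```python
import urllib.request

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def spoofed_request(url: str, user_agent: str = GOOGLEBOT_UA) -> urllib.request.Request:
    """Build a request whose only 'hack' is a self-reported identity string."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})

# Usage would be: urllib.request.urlopen(spoofed_request("https://example.com/"))
```

Nothing here bypasses any technical control; the server simply chooses to serve different content based on a string the client is free to set.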
He was released on appeal over a jurisdictional issue, not over the statute or a misapplication of the law.
This is actually why we don't know anything from that case. District court rulings aren't binding on other courts and the appellate court apparently threw out the case without ruling on the CFAA, so there was no precedent created either way.
But if the appellate court had ruled the same way as the district court and created that precedent, I don't think you could reasonably describe that as an improvement in the CFAA situation.
The whole thing about unauthorized access, I'm not sure about. If you get burglarized in a worse part of town because you did not lock your front door, is this your fault or the criminal's? Ultimately the buck stops with you; you would look very stupid arguing that a stranger walked in off the street and pinched your laptop, and worse yet if you left your laptop on your front lawn.
Just because my door is unlocked, or my digital property is unsecured, you do NOT have permission, and you should not assume you can gain access. That is the scummiest argument I've heard in quite some time. You do not have permission to steal something just because it's super convenient to do so, regardless of whether it is physical or digital.
Such rationale is the rationale of a lowlife. "The front door was unlocked so its their fault I stole from them." "If they didn't want me to steal their lawnchair, they shouldn't have left it unchained on their porch." Nothing is inexcusable with that line of thinking. "If she didn't want to get raped, she shouldn't have been all alone in the middle of the night in a dark alleyway." "If he didn't want to get brutally assaulted, he shouldn't have left such a stupid comment on HN."
We never call up the owner of a web server and ask them for permission to browse their site. We just connect to port 80 or 443 and go to town. This is universally accepted as authorized use.
Now, say you're running a vulnerable sshd such that if you send just the right bytes, it'll log you in as root without the password. I imagine most will say that this is unauthorized use.
But what's the difference really? In both cases, you're asking the server to do something, and then it does it. In the real world, we have various things to look for. Private dwellings are off limits without an invitation. Elsewhere, a lock means you don't go in, even if it would be trivial to defeat. Or just a sign that says you should stay out. It's not so clear with computers.
People have been convicted of a crime for taking a public URL and chopping off the last component and getting a directory listing from the server. To one side, the fact that you had to edit the URL and the fact that the directory listing wasn't what the rest of the site was like was enough to establish that as "unauthorized". To the other side, the guy just asked the server, "Can I have what's located here?" And the server replied, "Yep, sure, here you go."
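Mechanically, the "edit the URL" conduct in those cases is a one-line string operation. A sketch (function name and URL are my own, for illustration):

```python
import posixpath
from urllib.parse import urlsplit, urlunsplit

def parent_url(url: str) -> str:
    """Chop off the last path component: the 'edit' at issue in such cases."""
    parts = urlsplit(url)
    parent = posixpath.dirname(parts.path.rstrip("/"))
    # Rebuild the URL so it points at the enclosing directory.
    return urlunsplit((parts.scheme, parts.netloc, parent + "/", "", ""))
```

Requesting the resulting URL is an ordinary GET; whether the server answering it constitutes "authorization" is the disputed part, not the mechanics.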
A few weeks ago, there was a story here about a blackjack player who cheated a casino out of a bunch of money. He asked for a dealer who spoke Mandarin. His confederate then asked the dealer in Mandarin to turn certain cards upside down for luck. Normally this would be fine, but the cards at this particular casino weren't quite symmetric on the back, so they could tell them apart. The request would be suspicious, but they used a language the bosses couldn't understand, so they didn't realize what was going on.
In the end, the casino sued the guy for hefty damages. And yet all he did was ask for something and then receive what he asked for.
In many ways, servers are like that dealer. You talk to it in a weird language that the owner can't understand (or he can, but he doesn't listen in on everything) and sometimes you can ask it for something the owner would refuse, but the server/dealer says yes.
So while it's clear that walking off with somebody's laptop just because they left the front door open is wrong, it's much less clear to me where you draw the line with networked computers, and it doesn't look like others have a particularly clear idea either. Given that fundamental lack of clarity, I don't think it's completely unreasonable to characterize these guys as locating spots where access is authorized (and thus legal) but shouldn't be, rather than locating spots where unauthorized access can be gained.
The real problem is that a lock on a door is more obvious than a URL scheme. The government is saying that entering a 7-11 that is unlocked, but walking in backwards, is criminal trespass because that's not what the 7-11 intended for the customer to do. That's nonsense. Implicit authorization in physical property is just so much more straightforward, and the government is trying to maliciously take advantage of the lack of common sense on what is unauthorized, helped along by a Congress that willfully authorizes such action.
And I like your server/dealer analogy. The question is whether or not a computer is an agent of its owner, and whether its decisions, right or wrong, can be relied upon in business dealings as the actions of its owner.
So what is the digital equivalent of a lock on a door? Must the law explicitly say a lock on a door signifies lack of authorization to enter? Is walking into a 7-11 store backwards implicitly unauthorized?
I should mention that the player involved, Phil Ivey, is probably the most famous poker player in the world. According to Wikipedia, "Ivey is regarded by numerous poker observers and contemporaries as the best all-round player in the world today...his other nickname is 'The Tiger Woods of Poker'." 
Hence my analogy stands.
Going around trying to open everyone's doors is a similar analogy for some other security research. And while it's not as clear-cut, in fact arguably not a commonly cognizable crime, it certainly is suspicious, and it's reasonable for law enforcement to investigate such activity.
And I see that the door is significantly ajar (one can see valuables through the open door)
And the house appears to be empty,
And the doorway is flush against the sidewalk, where I am walking by on my way somewhere else (the door opens inwards and is not in my way)
If I knock on the door (holding it so as to not make it swing inwards further and hit the wall) and ask if anyone is there,
And receiving no response, close the door,
I should be punished?!?
If I see someone injured and unconscious on a sidewalk, should I just walk around them in order to avoid infringing on their personal space?
What if I have relevant medical experience?
Am I to let them lie there?
If someone (a stranger) is unconscious from drinking alcohol to excess, and is lying on their back, am I to refrain from turning them on their side, and instead allow them to choke on their own vomit and die, so as to avoid running afoul of laws intended to protect against pickpockets?
If someone has a problem and is in danger of significant loss, but is unaware of it, and I am unable to inform them of it, but I am able to easily lessen the danger, at no cost to them or any other person, through an interaction that bears some similarity to some action that would be reasonable to forbid due to the harm it causes,
Should I not help that person simply due to that similarity?
It's possible that I misunderstood what was said somewhat. I'm not sure.