Bought and returned set of WiFi home security cameras; can now watch new owner (reddit.com)
454 points by tshtf on June 19, 2016 | 150 comments

If something like this happens to you - where you inadvertently gain unauthorized access to something - I'd be careful. Under the CFAA you can be charged criminally, and the penalties are severe.

So, for example, if the OP were to casually drop a few photos the camera took and a badly worded warning in the new owner's mailbox, trying to help, the 'victim' could report it to the police and an inexperienced DA might try to bag their first cyber prosecution.

I'd definitely not contact the customer. Contact the vendor by email instead, and immediately remove your own access to the system. That way you have it on record, and you can mention in the email that you revoked your own access right away.

The CFAA is a blunt and clumsy instrument that tends to injure bystanders.

Here's an extract from the CFAA:

Whoever having knowingly accessed a computer without authorization or exceeding authorized access, and by means of such conduct having obtained information that has been determined by the United States Government pursuant to an Executive order or statute to require protection against unauthorized disclosure for reasons of national defense or foreign relations, or any restricted data, as defined in paragraph y. of section 11 of the Atomic Energy Act of 1954, with reason to believe that such information so obtained could be used to the injury of the United States, or to the advantage of any foreign nation willfully communicates, delivers, transmits, or causes to be communicated, delivered, or transmitted, or attempts to communicate, deliver, transmit or cause to be communicated, delivered, or transmitted the same to any person not entitled to receive it, or willfully retains the same and fails to deliver it to the officer or employee of the United States entitled to receive it;


> The CFAA is a blunt and clumsy instrument that tends to injure bystanders.

I feel you. It's because of the CFAA (enacted in 1986) that I was thrown in jail on a felony charge for rick-rolling my school.

Uuuuh. Could you add more details to this story?

Long story short: I used a rainbow table against the Windows XP SAM file to get the password that they used globally, including the login credentials for modifying content in the CMS for their website. As a joke, we threw Rick Astley up on the main page. The next day, they brought the site down, scoured the logs for my IP address, and pressed felony charges against me (made possible by the CFAA). It's all good now, though; after a lot of explaining, the judge eventually dropped the case.
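The precomputed-lookup idea behind that attack can be sketched in a few lines. This is a toy illustration, not a real rainbow table (those store hash *chains* with reduction functions to trade space for lookup time), and MD5 merely stands in for the LM/NTLM hashes Windows XP kept in the SAM file:

```python
import hashlib

def build_table(wordlist):
    """Precompute a hash -> plaintext map for every candidate password."""
    return {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

def crack(target_hash, table):
    """Reverse a hash instantly, but only if it was precomputed."""
    return table.get(target_hash)

# Invented wordlist for illustration.
candidates = ["letmein", "password1", "school2008"]
table = build_table(candidates)

stolen = hashlib.md5(b"password1").hexdigest()  # hash lifted from the SAM file
print(crack(stolen, table))  # password1
```

The whole attack is a dictionary lookup; the expensive hashing work is done once, up front, which is why weak globally reused passwords fall so quickly.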

Sending someone a link that turns out to go to Rick Astley is rick-rolling. Cracking a password, accessing a CMS, and defacing a site is slightly more than just "rick-rolling my school".

Yes, it probably should have resulted in a suspension, not a felony charge. No, it's not as benign as you implied.

While I admire the original hijinks, I think you have a point. There's a difference between showing someone a funny picture, and picking the lock to their house so you can leave a copy of it in their underwear drawer.

> No, it's not as benign as you implied.

Yes it is. He didn't hurt anyone or even mess with their files in a naughty way. I was so happy to get away from school and this kind of bullshit.

He purposely went out of his way to crack a password for the purpose of gaining unauthorized access to a system that wasn't his.

That's exactly the kind of thing the CFAA was created for.

I agree with the GP that he was harmless, and doesn't deserve anything terribly serious as punishment. But what he did is a lot more than using simple HTML injection to add a rick roll to something.

It sounds like what he did had exactly the same effect as "using simple HTML injection to add a rick roll to something".

'But he achieved it using leet hacker skills' should not be a factor in determining the nature of the charge or potential sentence. That's along the lines of making a big deal out of someone using a "Subversion" system to access and maintain code.

By "using simple HTML injection" I'm thinking of a form where they don't sanitize input so you could put a <video> tag in the 'name' field and suddenly the video would appear on the page.
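The unsanitized-form scenario described above can be sketched like this; the function names and page snippet are illustrative, not from any particular framework:

```python
import html

def render_unsafe(name):
    """Interpolates user input straight into the page: injectable."""
    return f"<p>Signed by: {name}</p>"

def render_safe(name):
    """Escapes user input so tags become inert text."""
    return f"<p>Signed by: {html.escape(name)}</p>"

# A 'name' that is actually markup, as in the <video>-tag example.
payload = '<video src="rickroll.mp4" autoplay></video>'

print(render_unsafe(payload))  # the video tag lands in the page verbatim
print(render_safe(payload))    # rendered as harmless text
```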

Getting access to the administrator account is pretty different.

Not for a high school student. School networks tend to be very insecure, and the students tend to see them as just another resource in their education.

> He didn't hurt anyone or even mess with their files in a naughty way.

I've had people deface sites. It's stressful, upsetting, and a lot of work. You can't really trust a compromised server, and school IT is fairly unlikely to have great processes for it.

> I was so happy to get away from school and this kind of bullshit.

The real world isn't likely to look any more fondly on this sort of behavior.

The IT dept has no idea if he messed with their files in a naughty way. They will have to spend x hours and y moneys on checking now.

While finding the problem should never be punished if it's responsibly disclosed, exploiting it (publicly in this manner) should be.

I escalated access to my school district's webserver and accidentally took it down. For some reason I was trying to close the hole I had used; instead, I default denied all access. Whoops. I explained what happened to a teacher, he called in the IT group, my change was reverted and nothing was ever spoken about it again.

I also hacked the online grade review site, which was trivially hackable (javascript password protection, wtf), told a teacher, nothing happened to fix it (or to damage me) and I dropped the issue.
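The reason "javascript password protection" is trivially hackable is that the secret ships to the browser inside the page source, so "cracking" it is just reading it. A sketch, with an invented page snippet standing in for the grade site:

```python
import re

# Hypothetical page source: the expected password is embedded in the
# client-side check, visible to anyone who views source.
PAGE = """
<script>
  function check() {
    if (document.getElementById('pw').value == 'grades2008') {
      window.location = 'grades.html';
    }
  }
</script>
"""

def extract_password(page_source):
    """The 'attack' amounts to reading the page source."""
    match = re.search(r"value == '([^']+)'", page_source)
    return match.group(1) if match else None

print(extract_password(PAGE))  # grades2008
```

Nothing is bypassed here in any meaningful sense; the check ran on hardware the visitor controls, so it never protected anything.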

Both of those were done in an attempt to be altruistic, but technically both were completely illegal. Since no administrator ever found out, no cops were ever called and no prosecutorial discretion was in play, but I wouldn't want to bet that I'd have been "let off", which is sort of terrifying.

Out of curiosity, could you expand on the story a bit? I've recently undergone a very similar experience. The administrators at my school, alongside the IT department, freaked out and were considering legal action since they didn't really understand what had happened; however, I was able to talk them down from it. The entire situation has made me curious about other people's experiences.

You vandalised property and cost the school money.

You obviously don't deserve prison time, but you didn't get it either. It shouldn't have gotten as far as it did though.

But do you think the person/IT team responsible for the website was not harmed? You get that they are human and possibly came under an enormous amount of stress?

They may have been disciplined for what happened, either officially or off the books.

There would have been a lot of hours work post mortem trying to fix the problem. This is money on a possibly tight IT budget. If the people who run the website care, which many do, this also is stressful.

I'm not saying don't prank, or even don't do this prank. Pranks come at a cost, but they also make the world a better place. But be self-aware about your own actions. You hacked and got caught; that really has nothing to do with the CFAA.

> You obviously don't deserve prison time, but you didn't get it either.

Because they decided not to press charges, not because the CFAA makes any sense.

> But do you think the person/IT team responsible for the website was not harmed? You get that they are human and possibly came under an enormous amount of stress?

There is a difference between inconvenience and harm. The vast majority of the "harm" caused in cases like this is a result of ridiculous overreactions to anything that involves a computer.

Imagine the analog version of this case. Some kid sitting in the main office sees the teacher enter the combination to the filing cabinet, and then the kid unlocks it and sticks in a picture of Rick Astley. If that happened, would we still be hearing about a "post mortem" (as if the severity were on the level of a human fatality) and the harm and stress the kid caused? Does a law allowing that kid to be prosecuted for a felony really make any sense?

I was thinking the analog version may be someone breaking into your house and leaving a picture of Rick Astley on your kitchen bench. You come home, find out someone has been in your home, but not what they've done in there. You don't know that it was your neighbour playing a prank or if it was some criminal that's put up little IP cameras all around your house until you do some investigation. Until you come to some conclusion you could be very worried and/or paranoid. Granted, this is the extreme version of the analogy.

Jail would be an overreaction for your neighbour, but you'd want that as an option for some creep breaking in, right? It sounds like in the parent poster's case everything worked out more or less OK.

> I was thinking the analog version may be someone breaking into your house and leaving a picture of Rick Astley on your kitchen bench.

It was a school, not a home.

(It's also a notable contrast that when people are arguing for bad laws the fear is that criminals will invade your privacy, but when people are arguing for mass surveillance then it's all "nothing to hide" as if organized crime getting access to the surveillance apparatus isn't somewhere north of probable.)

> You don't know that it was your neighbour playing a prank or if it was some criminal that's put up little IP cameras all around your house until you do some investigation. Until you come to some conclusion you could be very worried and/or paranoid.

The criminal who is secretly trying to hide IP cameras in your house is going to leave you a picture of Rick Astley?

The problem with this theory in general is that it has nothing to do with any wrongdoing. Suppose your neighbor sees what kind of locks you have on your door and proceeds to pick them in front of you in ten seconds, then advises you to use better locks.

Now you're in exactly the same situation. You just learned that that person or anybody else with amateur-level lockpicking skills could have been in your house at any point after you installed those locks. If you're a paranoid person then that fact is going to distress you and you're going to search your house for IP cameras, but the source of that distress is the bad locks, not the person who brings them to your attention.

> Jail would be an over reaction for your neighbour, but you'd want that as an option for some creep breaking in, right?

That's the whole problem. You need to find something to distinguish those situations and codify it into law, instead of having a law so broad that it covers both and then having to rely on prosecutorial discretion. Whether you go to jail and for how long needs to depend more on what you did than how much the prosecutor likes you, or we're living in a police state where anybody can be imprisoned at will.

There is a reason we don't just have a single law that says "you must do the right thing; penalty up to life in prison" and then let prosecutors decide what the right thing is.

> It was a school, not a home.

It was someone's network that they're in charge of securing.

> The criminal who is secretly trying to hide IP cameras in your house is going to leave you a picture of Rick Astley?

See anything from Anonymous-like hacking groups. Leaving troll notes behind isn't all that uncommon in network breaches. The point is you just don't know until you investigate.

> Suppose your neighbor sees what kind of locks you have on your door and proceeds to pick them in front of you in ten seconds

Or picks them while you're out and leaves a note saying "your locks suck". You discover it's your neighbour after checking your cameras. The OP did say they traced his IP to discover who it was, this wasn't some white hat pen test.

> That's the whole problem. You need to find something to distinguish those situations and codify it into law, instead of having a law so broad that it covers both and then having to rely on prosecutorial discretion.

Great point, I'm on board. But there's a lot to cover that isn't just "what harm did you cause once you were in?". There's potentially time and money (resources) that law enforcement spend investigating. Resources that the company spends investigating. If the breach is public, stock prices could be impacted. IP could be discovered - whether or not it is disseminated, sometimes you just can't know.

All of this because you wanted some lulz and to see if you could? How about stay out of the network you aren't meant to be in. Go into pen testing if you find that work so rewarding and fun.

Punishment isn't the only reason we have laws. Deterrence is also key.

> There's potentially time and money (resources) that law enforcement spend investigating. Resources that the company spends investigating. If the breach is public, stock prices could be impacted. IP could be discovered - whether or not it is disseminated, sometimes you just can't know.

Which is necessary because of the vulnerability, not because of the breach. If bad people could have gotten into your network and that is something you care to spend resources investigating then you need to do it regardless of whether the person who notified you of the vulnerability trolled you with it or not. Whether they troll you is independent of whether they steal your secrets; you can have either without the other or both or neither.

> All of this because you wanted some lulz and to see if you could? How about stay out of the network you aren't meant to be in.

It isn't a question of right and wrong, it's a question of proportionality. If you troll somebody you deserve to be chastised and given detention or community service, not thrown in prison.

> Deterrence is also key.

I'm not sure deterrence is working in your favor here. If your network is insecure and you get trolled then you look stupid and fix it and give the kids detention. If your network is insecure and you deter the trolls then it takes another year before someone who is harder to deter breaks in and then you get arrested because the people who broke in were using your servers to distribute child pornography.

I'll take the Rickroll please.

Not to mention all of the doors and windows appear to be undamaged. You have no idea how they got in, and now you have to dig around in case a tunnel has been dug into the basement, there's a secret hatch on the roof, or they've somehow had a spare key cut - and how did they get access to the key?

If you applied the penal system to kids, all kids would end up in jail; they spend their time fighting each other and saying nasty things to each other.

Good god. I hope you are doing better.

If a cloud provider mistakenly provides access to someone else's content, the end user has not violated the CFAA. I have not heard of anyone charged for this alone. Not to mention, I'm not sure the CFAA protects private computers/webcams in any case, only "protected computers".

Additionally, I'm not sure if this would be a violation of the Wiretap Act as an "interception" either, even if it was intentionally used to spy on the new owner of the device. Federal law is somewhat lacking in this regard.

Not that I'm arguing for criminal charges, but if you exploit an oversight in the design of a webcam to take pictures of someone else's house, you aren't a "bystander."

Reloading a page isn't exploiting it though. That's normal use.

With a bar that low it becomes a game of you reading the minds of the implementors. "Oh, maybe they didn't think that anyone would bookmark a URL, etc. Maybe they want me to go to page X, which may or may not link to this anymore...?"

It'd be better for everyone (judges) to assume that they meant to make the service entirely unprotected. Then innocent page-reloaders wouldn't go to jail and people could sue the company for their marketing lies, etc.

Since CFAA defines no strict-liability crimes, simply reloading a page is extraordinarily unlikely to get you charged.

That's gamed though. It's supposed to be criminal intent but prosecutors routinely bend it to be intent to perform any action during which a crime may have happened. (I did not intend to trespass on your land, though of course I did intend to go for the walk in the first place...)

I can intentionally call your documents department and "guess a URL" without breaking the law, but if I do it on your webserver that same legal intent is turned around and used as proof of criminality. That leaves me guessing: "well, there's a document there in the public folder, but maybe they don't want me to see it."

Of course, this is all about malicious and inconsistent prosecution. For you and I it's a theoretical game, but Weev could go back to jail for relying on convention to guess that index.html is public.

We need an actual technological bar otherwise our security industry becomes an ass-covering exercise.

Yes, I do see some need for a technical bar. That's why I've said before that for 'unauthorized access' to a computer system you (should) need to knowingly access a protected system in a way not permitted by the rights granted to you by the computer system, or by deliberate deception of either the computer systems or people.

So there needs to be an actual lock on the door that you've done something to bypass, whether that be manipulating the computers or lying to someone (or thing) to gain access.

No similar provision exists for unauthorized access to property. Opening an unlocked window (or door, or curtain) and climbing through it is an offense with a name: breaking and entering.

Yes, but a computer has programming to enforce the rules and it can (and should) reasonably be expected to enforce those intentions accurately.

Another way of putting this is that they are using the computer to express their intention to authorize (or not authorize) access by means of the programming. It should, as a matter of good public policy, be on them to get this right. That's why my standard would require material deception--that is, but for the intentional deception, access would not have been granted.

Anything less simply sets too low a bar and excuses incompetence. This is bad public policy because it allows people to stumble into felonies while excusing all kinds of negligence on the part of those people who were supposed to be protecting things. Otherwise we have an "I know it when I see it" standard for which parts of a site are okay to interact with and reasonable minds can (and frequently do) disagree over the particulars. My standard would move this rule to determining statements of fact--did they know or should they have known they were deceiving this computer system/person in order to gain access? It also deliberately prevents people from shifting the blame for negligently configuring access to their computers.

I think we both know the widespread public harm caused by networks of hacked computers and we both know that, unlike the real world, essentially every computer's locks and windows are tested many times a day. Leaving things open is clearly negligent in my view and I've seen far too many clients of mine leave vulnerabilities open longer than is justifiable, contrary to my advice. I mean, I still see PCI audits reporting POODLE, which is just sad.

Now, inasmuch as you're telling me that the law doesn't and isn't likely to see things my way, sadly, I have to agree with you there.

People can be civilly liable for negligence in their own security without changing the fact that other people are criminally liable for exploiting that negligence. I see this argument in virtually every thread about computer security and the law, and it never makes any sense to me.

But that's the thing, there's no clear definition of 'exploiting' in the law right now, just a fuzzy mess that judges are supposed to sort out on some ad hoc basis. If you want to go back to physical property law, it'd be more like 'trespass to chattels' anyhow, which is not a good basis to decide these things.

The fact that users are supposed to just guess about what access sites have or have not authorized, with felony charges for anyone who gets it wrong, does not make sense to me when they have the means to express their rules for authorization in code.

That's why I say you should have to intentionally deceive those rules (or their people) to get in.

I don't understand the logic here at all. I can't make sense of it. I can be civilly, or even criminally, liable for negligently protecting property that other people rely on. But the person who abuses my negligence is also fully liable. Liability simply isn't zero sum.

We interact with computers in a very different way from how we interact with real world property. There are no clear property lines and no clear boundaries. Even when liability isn't zero sum, I think we've both seen companies blame the hackers fully and use that as a fig leaf for their own negligence. I really haven't seen companies punished beyond a few cost of doing business fines.

The idea that we should be deliberately vague about where the boundaries are and let people stumble into felonies doesn't make any sense to me. The idea that we should let someone write that into a thousand page ToS also doesn't sit well with me. I'd rather it be a question of fact.

Take the case about modifying the URL. It's normal to be able to type any URL I like; why should it be a felony if I try other IDs? I'm simply making a request; it's up to them to decide what access I should or should not have, otherwise even googlebot may end up a felon. The fact is that much of the web is, and always has been, open by default. Anonymous FTP is normal; if you want authentication, you should configure it. The idea that someone could be a felon because they were somehow supposed to know that your misconfigured FTP server wasn't supposed to let them in is simply unreasonable, and it only works out because prosecutions are rare.

That's why I want a proper boundary. If you're not deliberately hacking someone or social engineering someone and if the only thing you do is to report the bug you found to responsible parties (the site owner/operator or government) I'm not willing to charge someone with a felony for modifying a URL or logging into anonymous FTP or whatever else like that.
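The URL-modification case above is what the security world calls an insecure direct object reference: the server hands back whatever record the URL names. A minimal sketch, with invented data and handler names, contrasting a server that trusts the ID in the URL with one that checks it against the session:

```python
# Hypothetical account store, keyed by customer ID.
RECORDS = {101: "alice's account", 102: "bob's account"}

def handle_request_unchecked(url):
    """Server trusts the ID in the URL outright: editing the URL works."""
    record_id = int(url.rsplit("/", 1)[-1])
    return RECORDS.get(record_id, "not found")

def handle_request_checked(url, logged_in_as):
    """Server compares the requested ID against the session's identity."""
    record_id = int(url.rsplit("/", 1)[-1])
    if record_id != logged_in_as:
        return "403 forbidden"
    return RECORDS.get(record_id, "not found")

print(handle_request_unchecked("/customers/102"))                   # bob's account
print(handle_request_checked("/customers/102", logged_in_as=101))   # 403 forbidden
```

In the first handler the requester really is "simply making a request"; whether that request should succeed is a decision only the second handler bothers to make.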

You're saying the same thing you said earlier. This doesn't clarify anything for me.

If you want to advocate for liability for software security negligence, I won't argue --- at least, not on a moral basis (I think "be careful what you ask for" but whatever).

But I do not see what any of this has to do with liability for intruders.

It's more a social than a legal problem on that front. If they can say "X was prosecuted for hacking us" it doesn't make them look as incompetent to the public at large as if they have to admit that a random person on the internet could see obvious flaws in their setup. You can see it as another way of encouraging responsible reporting, not unlike one of the goals of bug bounties.

I think people would start seeing more social costs for running businesses negligently if they couldn't point to iffy hacking prosecutions to justify themselves.

There's no liability because they aren't intruders, they're requesters.

If all it takes to get something is to request it, without any interaction and thus without any fraud, then it's public.

What's the difference between me calling a phone number and asking for a company's financials before they're publicly released, and checking the probable URL before they're publicly released?

The first isn't a crime, why is the second?

I think you replied to the wrong comment. Maybe you meant to reply to one upthread?


I mean that you keep presuming that we're discussing intruders which implies guilt, but when viewed in a more realistic context, as requesters, your comments about liability aren't relevant.

Nobody is at fault for simply asking for a document in the real world, and nobody should be at fault for it online.

It's a little discourteous to jump into the middle of someone else's discussion and attempt to alter the premise. Reply somewhere else if you'd like to have a different discussion. Thanks!

Hah, HN police. It's my thread, look up. But I magnanimously grant you the right to post in it. You're welcome!

You're the one trying to move the goalposts and alter premises. You reframe everything in a violent physical metaphor even though you've been around long enough to know that it's the least useful thing to compare information to. I can't copy a house by looking at it so it's a crappy analogy for a website.

There's a world of difference between asking for a document and opening a door and taking it.

This comment confirms my suspicion that discussing this issue with you is likely to be unproductive for both of us.

But you aren't opening an unlocked window (or door, or curtain). You're asking the property manager to open it for you, and they do.

Also breaking and entering!

Breaking and entering implies there is no authorization but here we have the property manager saying yes.

Also, breaking and entering is generally a misdemeanor rather than a felony.

And on top of that, there is no entering. People love to paint bad abstractions on top of computers, but fundamentally all there ever can be is you asking the remote computer to do something and it deciding whether to do it or not. That doesn't look anything like B&E or trespass. The best meatspace analogy is social engineering. But there is no generic law against manipulating people and the laws against specific types of manipulation are highly context-dependent because so is the harm.

Where are you getting this "misdemeanor" thing from?

Burglary is a felony, and it's defined in most places simply by entering a building or vehicle without permission and with illicit intent (that intent need not be "to steal stuff").

> Where are you getting this "misdemeanor" thing from?


Breaking and entering is generally listed as a misdemeanor. Misdemeanors result in jail sentences of less than one year. However, breaking and entering is often associated with the felony crime of burglary. Burglary is usually defined as "breaking and entering with the intent to commit a felony while on the premises".

The difference between burglary and B&E of "intent to commit a felony" is the sort of thing that justifies the felony charges. I don't see that requirement in the CFAA.

Nope, same deal with the CFAA, which is why the CFAA cross-charging with the Wire Fraud statute is so problematic.

The CFAA doesn't have "intent to commit a felony"; it has "committed in furtherance of any criminal or tortious act in violation of the Constitution or laws of the United States or of any State", which is much broader. It allows felony penalties without felonious intent.

And as you point out, even that limitation is basically swallowed by the fact that the Wire Fraud statute covers very similar conduct to the CFAA, so they can charge you with computer fraud in furtherance of wire fraud.

The physical-building metaphors are yours, and you keep falling back on them. The whole point of a standards-compliant web server is to speak to standards-compliant web clients. When it's run without a password, on a public IP, at a standard port, its very existence implies you're allowed to speak to it.

Asking a human for a document isn't a crime, so it shouldn't be when you ask a computer.

Fraudulently claiming to be someone else to get a document is a crime, and that's how it should be on a computer.

This is another super popular argument about computer security, and it falls apart almost instantly under scrutiny. By your logic, I can dump a SQL database full of credit card numbers due to an SQLI in a GET handler, because, after all, the software components are just doing what they're meant to do!

No, at least in my standard, that would be material deception. You said that was your name/address/whatever, but it's actually an SQL command.

I suppose you'll ask what if someone's name really is Bobby Tables, but then I submit that a real name either can't be tailored to your database, or that changing their name with intent to exploit you counts as social engineering, and I don't consider that case worth optimizing for.
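The "material deception" distinction maps cleanly onto how SQL injection is actually fixed: string interpolation lets the input rewrite the query, while a parameterized query treats the same bytes as plain data. A self-contained sketch with an invented schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-...'), ('bob', '5500-...')")

# Claimed to be a name; actually a fragment of SQL.
payload = "nobody' OR '1'='1"

# Interpolation: the injected OR clause dumps every row.
dumped = conn.execute(
    f"SELECT card FROM users WHERE name = '{payload}'"
).fetchall()

# Parameterization: the same input is compared as a literal name.
safe = conn.execute(
    "SELECT card FROM users WHERE name = ?", (payload,)
).fetchall()

print(len(dumped), len(safe))  # 2 0
```

With the placeholder, even a customer genuinely named `Robert'); DROP TABLE users;--` is just an unusual string; the deception only "works" when the application confuses data with code.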

How is that any different from "I can reasonably infer that my customer ID is 101, but I told you it was 102 in a URL to see the account information for a different customer"?

These issues are less fraught than people seem to think they are. We walk through commercial spaces all the time that are full of unlocked, unmarked doors, and it rarely takes us more than a moment to realize when we've gone somewhere we weren't meant to be. It's not a new problem.

The difference is the ultimate usefulness and workability of the system.

Let's say I spray-paint people's medical records on the sidewalks in the park. It's not your responsibility to avoid the park despite anything you may see. A public area by definition carries no expectation of you having to wonder if you were meant to be there.

Similarly, if I leave sensitive information all over some publicly accessible URLs it shouldn't be your obligation to avoid those URLs, even gratuitously. As it's not your obligation to plug your ears when your neighbors fight, etc. You are still obliged to not break the law even if supplied with the means. But that means the act of blackmailing is illegal, not the listening.

The current application of the law requires reading the mind of the viewer/listener/requester. That would punish you for picking up my keys from the sidewalk just because you don't play well to the jury.

tl;dr: The proper law would only penalize actual crimes, not theoretical ones.

Yeah, but we don't normally hammer people that open the wrong unmarked door by mistake in a long hall full of unmarked doors. It's not hard to typo a URL. It's not unreasonable to connect to anonymous FTP--they choose to authorize you and you have no good reason to second-guess them.

Now, yes, when you do that 100 times with a script, you can argue that they're doing something bad... which is why I also advocate a safe-harbor for people who report the bug only to the site owners or the government. If someone is making a reasonable effort to help you fix your bad security, we shouldn't be treating them like a bad guy.

No, we hammer them for what they do with those URLs; for instance: long IRC conversations about how they're going to sell the identities of everyone who bought an iPad to spammers, or suchlike.

Assuming that was just a joke and they didn't actually do that and that they did report it to the site owners or a government agency that could reasonably be assumed to oversee it, I wouldn't hammer them.

If they actually do that or take steps that would make a reasonable person conclude that they actually were on their way to do that before they were caught? Then yes, hammer away.

I'm more concerned with there being a brighter line for what is and isn't unauthorized access, and a safe harbor so people can't hammer anyone who simply makes them look bad by pointing out that their fly is open.

If you can fit facts of some old case to show that they were guilty in a principled way, I won't argue. I may or may not agree with the specific reading of the facts, but at least I would find that to be a principled disagreement.

My point in this exercise is to shift things from an exercise in how people feel about some particular intrusion or a person's motives (things not likely to reach broad consensus) to an exercise where we debate the specific facts of the case, which reasonable people should be able to agree upon given enough information, no matter how they feel about the parties in any particular case.

Criminal law doesn't work that way. Most of it is motivated very powerfully by the inferred motives of the accused. Google mens rea.

I specifically require material deception as an overt act that establishes some level of mens rea for that very reason, but my construction is meant to frame discussion of their motives around overt acts, rather than attempts at mind reading, which are far more susceptible to personal biases of every kind. Having elements of the crime that demonstrate criminal intent is not, in fact, a construction unknown to law in general. A crime like shoplifting might require both concealing the merchandise and attempting to exit the store. You can see that the act of concealment gives information about motive in a way subject to fact-based inquiry, rather than having to attempt to read someone's mind. And people can come to a fact-based conclusion about this, as the jurors have to say whether this person actually concealed and removed the merchandise, rather than saying this kid looks like a good guy, or that kid looks like a thief, and deciding a motive from that.

So I think you'll find that the better-constructed laws require overt acts that inform us about motive in many (but not all) cases, rather than mere inferences about motive. Though I certainly agree that there are reasonable facts which one can use to infer mens rea from. Good intent can also be inferred--that's why I specified a safe harbor for people who were trying to simply do the right thing and report a bug they'd found. Someone who reports a problem to the site owners in good faith is simply not someone I think we should be hitting with criminal penalties in general, though this might be weighed against other things like attempts to ransom the bug or sending DROP DATABASE injections and whatnot which could wipe out any inference of good intent.

That said, I've always said this was my view of how the law ought to work. Implicit in that is that the law does not, in fact, work that way, so I honestly can't disagree with you here--the law certainly doesn't work my way, and this is all my own thinking on how it ought to work in my own view. So inasmuch as you're saying the law doesn't work this way, I completely agree.

Scraping sites for email addresses isn't illegal or we'd arrest Google. It's the spam that's actionable. So intent to scrape email addresses to give to spammers doesn't constitute mens rea.

There has to be 1) believable intent to 2) commit an actual crime.

Your reasoning is circular without that - "he had bad intent so whatever he was doing was illegal which is how we can say his intent was bad rather than simply unsavory, etc."

There's no valid precedent for simply requesting a document being illegal, nor ultimately a security benefit from it being so.

Non sequitur.

I think he's just saying you need an actus reus to go with your mens rea.

That's the wrong part of the CFAA. I doubt the CFAA even applies.

>..accessed a computer without authorization... AND having obtained information .... determined ... to require protection against ... disclosure for reasons of national defense ... [AND]... willfully communicates ... to any person not entitled to receive it...

Unless the camera is in the control room of a nuclear power plant, that part doesn't criminalize taking a pic via someone else's webcam.

The CFAA applies here. His right to access the cloud service legally expired the moment he returned the camera. I wouldn't want to see such a prosecution, but there is almost no question that he has technically violated the CFAA by accessing the cloud service after his right to access it was legally terminated.

His access to the account was not legally terminated - it is still his account, it is active, and it's okay to use that account - if he had other linked cameras, that would be the point to use them.

However, the issue is that his account shows data also from a device that isn't his (anymore). He has access to the cloud service, but simply the set of valid devices should be empty, and isn't.

Why did the cloud provider not revoke his access once his account was expired? Is not the cloud provider also guilty here?

Under the CFAA, the burden isn't on the service provider to block access. They can be as incompetent as they'd like. It's up to the person accessing the service to not exceed their access rights - and in this case he had no access rights.

Simply put, the guy that bought the cameras acquired access rights to view the camera stream through the cloud service with his purchase. When he returned them, those rights expired. By logging back into the service and viewing the stream from cameras he knew he had returned, he exceeded his access rights.

I really think we need to get them to make a more sensible definition of 'unauthorized access'. I've posted a few times before about how I think we should have defined it -


No, and they never will be because we've decided that a group hug is the only cyber-defense you have a professional obligation to perform.

If you send me a paper investor-relations document with a document ID of 7, I'm not committing a crime by calling your documents department and asking for documents 1-6 and 8. If they give them to me, I now even have a legal right to that copy of the document - such that you can't compel me to return them.

But the USG put Weev in jail for accessing a series of incrementally numbered public URLs. It was theoretically so bad as to warrant jail, and yet so meaningless that they didn't even tell customers about it or reprimand a sysadmin. No security people were fired for incompetence.

So no, there's no technical obligation for the service provider. If the cleaners unplug your servers while cleaning, cyber-attack! If someone scrapes your website, cyber-attack!

To bring the physical world to parity: when banks get robbed, we now recommend hanging dream-catchers around to make everyone feel better. /s

If I, as owner of house number 7 on Honest Drive, give you permission to pop in and take a copy of the leaflet I wanted to give you, you don't automatically get a right to wander into houses 1-6 and 8 on the same road. If the doors to those houses aren't locked, it doesn't mean you have permission to enter. There's no requirement for technical barriers on your wandering around, particularly when I give you specific instructions on where to go and get what I have given you permission to get, in order to make explicit your lack of permission to go elsewhere.

Right. But, if I knock on the door of houses 1-6 and 8, and they let me in (or hand me a letter through the post slot), I have permission to enter (or take the information on that letter).

A request to a URL is just that - a request. What level of access a request grants should be the responsibility of the grantor.

Please don't invoke the tired and ridiculous "URL:physical address" metaphor. A moment's thought is all that's required to see how dumb it is, every single time it comes up. In real life, one host sending another host a packet is nothing like physically breaking into a house. Houses have many purposes other than the distribution of leaflets. In fact, those physical entities actually designed to distribute leaflets are much more like internet hosts than houses are. No one ever got thrown in FPMITAP for taking one of each of the hundreds of different tourist brochures at a highway rest stop.

An employee is, AFAIK, also liable if he deletes customer data after he's been let go but his credentials hadn't been revoked yet.

Shouldn't that be essentially the same thing?

No, because accidentally loading the page at a cached URL that just happens to show you something sensitive takes no intent. Logging into work to delete data to hurt them shows malicious intent.

But in today's legal climate, both people are dirty hackers despite neither actually performing any hacking.

Nope. Being insecure isn't a crime.

These types of exceedingly invasive products need to have their damages tested in courts. After a few lawsuits and payouts, the liability will begin to increase, and that will force companies to adapt/improve or go under.

The problem is our entire generation doesn't care about privacy. They willingly hand over everything about them to an app and care not a single drop that their government spies on them without a warrant.

> The problem is our entire generation doesn't care about privacy.

Yup -- law follows culture, not the other way around. IMHO this (cultural priorities) is at the root of other ills too, e.g. educational system and criminal justice system brokenness. I think most people genuinely do want the right thing but just aren't aware of the long-term consequences of the current approach.

Recently I thought that it would be cool if I bought a bathroom scale that would sync with my iPhone so I could keep tabs on my exercise effects.

I bought a clever-looking one and took it home, and was dismayed to find out the only way to get it to sync was to create a "cloud account" which would supposedly allow me to "check my progress from anywhere".

I returned that one to the store and bought another - same requirement: Cloud account needed to activate. Took that back. Decided it was easier to just type the number into my phone.

It's hard for me to understand how there are so many people oblivious to / OK with the constant surveillance that goes on in their lives.

One motivation for this from the vendor's point of view is that a cloud account means no NAT headaches=fewer customer service calls. I know, you don't need to talk to your scale when you're not home, but even setting up LAN-only access is beyond most civilians.

Security and privacy awareness are not wholly absent, and awareness, including among the young, can be high. The awareness is, however, highly uneven, and is quite problematic especially in how it's reflected among commercial enterprises and law.

As I've been saying for quite some time: Data are liability.

This is simply the home-security edition.

The other part to this is what can they do about it. Kids these days see riots responded to with military force. Cops kill people on sight and get off unpunished. People get imprisoned over enumerating URLs and we're trying to extradite a person to charge them with treason for revealing that the government is indeed spying on everyone.

I don't think kids these days are naive, I think they just see that trying to fight this stuff can realistically get them life in prison and/or killed.

I've yet to hear of reporting product defects or opting out of Facebook leading to an armed occupation.

(Though security researchers might want to take care in how they identify and report vulnerabilities.)

It's not so much that it doesn't care, but that it has trouble converting a physical concept (close the door and you are in a private place) to the digital realm.

Instead of feeling appalled the NSA may be watching their intimate moments, many Americans are titillated and enthused.

I have a handful of D-Link cameras, and plan to buy more.

D-Link offers some sort of cloud service, but I've never used it. I keep the cameras segregated onto a separate Wifi network that can't access the internet, and they work just fine in that configuration. The cameras have built-in HTTP servers and present what they see as an MJPEG stream. I use 'motion' running on a machine to handle motion detection, recording, etc. I use a VPN server to handle my remote access needs.
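For anyone curious about the plumbing here: an MJPEG stream is just JPEG images sent one after another, so the camera's HTTP feed can be split into frames without any vendor SDK. A minimal sketch (this keys off the universal JPEG start/end markers rather than any particular camera's multipart boundary string, which varies by vendor):

```python
# Minimal sketch: split a buffered MJPEG byte stream into JPEG frames.
# Every JPEG begins with the marker 0xFFD8 and ends with 0xFFD9, so we
# can extract frames without knowing the camera's multipart boundary.

def split_mjpeg(data: bytes) -> list[bytes]:
    """Return all complete JPEG frames contained in `data`."""
    frames = []
    start = data.find(b"\xff\xd8")
    while start != -1:
        end = data.find(b"\xff\xd9", start + 2)
        if end == -1:
            break  # trailing frame is incomplete; wait for more data
        frames.append(data[start:end + 2])
        start = data.find(b"\xff\xd8", end + 2)
    return frames
```

In practice 'motion' does all of this (and the detection) for you; the snippet is only meant to show that nothing cloud-shaped is required to consume these cameras.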

I get everything that the cloud stuff offers, but all hosted locally.

What's described in the article scares me, which is why I've set things up the way I have. Even if the cameras were used (they weren't) and tied to someone else's account, they can't send anything back to the cloud service.

If it's hosted locally, what stops an intruder from stealing or destroying your server once they break in?

This is the real rub. As soon as remote storage is a requirement, you start to need some kind of service provider to do it. This service provider could be anything from a purpose-built service to a generic storage service (think S3, Google Drive, DropBox, etc), to a co-located server, to a VPS, to a machine sitting at a friend's house.

No system will be perfect though. What if the internet connection goes down? What if the power goes out? What if the provider for the remote storage goes down?

They also need a large amount of storage space and to define how to handle retention (do I keep all footage for a week? etc).

Nest Aware charges $10–30/mo for this (https://store.nest.com/product/security/camera).

The amount of storage depends on the camera's resolution and frame rate.

One of my cameras records 640x480 at 1 fps, saving movies at 7 fps when motion is detected. I end up using 1GB to 1.5GB per camera per day, depending on how much motion is being detected. The files saved are 7 fps videos for each event, 1 fps videos for each hour (24 videos), plus the event videos all stitched into one video.

For my purposes, this is sufficient. Other folks might want 720p or 1080p video, have more cameras, etc, and have correspondingly larger storage requirements.
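As a quick sanity check on capacity planning, the per-camera figures above extrapolate like this (the 1-1.5 GB/day rate is the measured number from this setup, not a general rule):

```python
# Back-of-the-envelope storage planning for a multi-camera setup,
# using the quoted per-camera rate of 1-1.5 GB/day at 640x480, 1 fps.

def yearly_storage_tb(cameras: int, gb_per_camera_day: float) -> float:
    """Decimal terabytes of footage per year, before any retention pruning."""
    return cameras * gb_per_camera_day * 365 / 1000

low = yearly_storage_tb(4, 1.0)   # ~1.46 TB/year for 4 cameras
high = yearly_storage_tb(4, 1.5)  # ~2.19 TB/year for 4 cameras
```

Which is why some retention policy (e.g. keep event clips for a month, hourly summaries for a week) matters more than raw disk size.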

What kind of retention do you implement considering those storage requirements? I mean, a simple 4 camera setup would be 4-6GB/day which is ~1.5-2.2TB/year.

Locally doesn't mean it has to be stored on the same physical premises; thus physical access to the cameras does not necessarily mean physical access to the storage machines/devices. Great work GP, congrats for taking extra steps.

Is there any open-source/documentation that is accessible to more people (not just network experts) on how to do this kind of setup?

There are really three separate problems that need to be solved:

* Remote access to your home network

* Recording and storage of video, possibly with motion detection and alerting.

* Remote storage of video/events (optional).

Each problem is "sort of" solved:

Most routers have some kind of VPN server built in. My Asus router supports PPTP, which isn't very secure, out of the box. I think some routers are starting to support OpenVPN, but without some easy wizard to set it up and distribute the profiles and certificates it would probably be beyond the average user.

A lot of NAS devices come with software to record from IP cameras. My QNAP has it, but I have no idea how good it is as I've never tried it. I know that to use more than two cameras they want you to buy extra licenses.

A lot of NAS devices can also sync folders up to various cloud storage providers. This could solve the optional remote storage requirement.

As for making it all work together, that's another story. I'm not aware of any kind of easy-to-follow HOWTO for a user whose goal is "access my cameras remotely without sending everything to the cloud".

The ubiquiti unifi video server (formerly aircam?) is ridiculously easy to set up on a local x86-64 linux system. They give away the software free for use with their IP cameras. Or it need not be physically local, the server could be on the other end of a VPN tunnel from the layer2 LAN that physically contains multiple security cameras.

"I'm not mistaken, anyone could get the serial number off your cameras and link them to their online account, to watch and record your every move without your permission."

There's a name for a hacking strategy where you mass-purchase products, modify them or acquire relevant information, then resell or return them. "Catch and release" comes to mind, but I can't find any references.

You can also do this with PG&E accounts. Based on my conversations with them about it, it appears to be a feature.

I set up an online account

The title is missing an important fact: these are not traditional network cameras, they're ones that apparently stream video into the cloud.

Those cameras that do not "phone home" to a cloud service don't have this problem; the ones that you can set up with a username/password and then connect directly to from the network. Ironically it's the cheap no-name ones that usually work like this, as the company just sells the hardware and isn't one to bother with their own set of servers/accounts/etc.

IMHO these cameras that do rely on a third-party service are to be avoided, since what happens to that service is completely out of your control.

People thought the Shadowrun authors were off their meds when they made gear lose stat points from being offline, and now here we are...

Buzzwords combined with profit motive produce some worrying outcomes.

The cameras you're discussing are not very safe for the layman either; you wanna be sure you have a properly-configured perimeter firewall before you use them and that they don't open any ports with UPnP. A cursory glance at shodan will reveal many such cameras that are happily streaming their images out to the open internet.

UPnP in its initial form, getting media devices to talk to each other and exchange data, was fine. But how the heck did it end up being about punching holes in firewalls?!

At the very least, they go out of business and now your expensive hardware stops working.

HN readers: Do you think the engineers knew?

I ask because I've worked on various products, and single units change hands between engineers constantly. Phones for testing, accounts with shared dev passwords, the actual hardware, all kinds of test units get spun up and passed around, even on crappy products where the engineers' imaginations are the only QA.

Surely one engineer set up a camera, passed it along to another engineer, who set up the camera and encountered this error?

There are lots of classes of error that can hide in a product, but this feels like one that it's nearly impossible not to hit.

If it's really as simple as knowing a serial number to get access to a camera, then yes, I'm sure an engineer conjured this corner case in their head.

Probably some PM told them to ignore it so the product could ship on time.

Probably still an "issue" in their project tracker just waiting... one day... as it gets pushed further and further down the list of tasks.

Props to Dropcam/Nest for solving this problem.

My brother gave me his Dropcam after setting it up for himself, and I had to prove my identity and he had to prove his to get them to move the camera to my account. It was a hassle at the time, but I was glad to know that they at least had decent security.

I reported 768-bit DHE on one of Nest's servers to Google security around mid-2015. Does anyone remember the tweets by @NestSupport on Twitter around this time (there was also https://bugzilla.mozilla.org/show_bug.cgi?id=1170833)? It wasn't long after that they had to hire a VP of security (when Alphabet was formed, I think).

Here are some of the old tweets, actually:





(Notice that they eventually posted a clear screenshot showing the problem and there was still not much of a response)

I still have the old emails from Google Security in my mailbox. June 2, 2015 was when I received the first "received" email. June 3, 2015 is when I received the "triaged" email. June 4, 2015 is when I received the "filed a bug" email. You can see from the Bugzilla bug that it was fixed by June 5, 2015.

Nest really went down hard after the acquisition. They were a company who built a cool thermostat. That is all. Everyone who didn't work on engineering the thermostat seems incompetent, especially management. The UI which took them like a year to do once they bought Dropcam is much worse than the old Dropcam site. They never came through for Dropcam Pro buyers who they promised 1080p recording to (the hardware can do it, but they never made fixing it any kind of priority). They then go and slap those users and early adopters in the face by releasing the Nest cam with the same hardware with 1080p enabled.

The connected smoke detector is useless, since it's only useful in emergencies, and an app-connected thing which runs complex firmware is the absolute last thing I'd trust to save my life. There's a reason why sprinkler heads to put out fires are dead simple.

They did nothing with an unlimited budget for 2 years: http://arstechnica.com/gadgets/2016/06/nests-time-at-alphabe...

I've tried finding a camera that has a server that can encrypt traffic, and I can't. It'd be nice to have access from outside of my network but I don't trust it. It really took me by surprise how bad at security these things are. I guess I could set up some kind of vpn but I assumed when I bought it I could enable ssl or something.

Possibly overkill, but you could set up some kind of home VPN to remote in securely.

This is probably the only way to be sure about security in this scenario.

Put IP cameras in their own VLAN on a network that also contains network DVR software. Access it via VPN tunnel and/or https. For unauthorized access to an individual camera somebody would need to be on the same layer 2 broadcast domain as the camera, local on site. Following that principle, if an adversary has physical access to a device it's likely pwned anyways.

Maybe the Apple HomeKit ones are better - at least it'd be a big PR problem for Apple if they aren't.

Systems that provide an online account tied to a physical device have to be carefully designed for transfer of ownership scenarios, and it sounds like they didn't do the work here, or else something went wrong and the resulting error state is unfortunate.

Frankly I suspect the devs never even contemplated a transfer-of-ownership scenario. The whole idea seems more and more foreign, or perhaps quaint, to the people involved in tech these days. Tech is treated as something disposable, not something durable to transfer from person to person.

You can more than likely pick up the serial through the web-admin panel that these cameras expose on the local network.

God forbid they have a wireless AP with the serial number somehow encoded in the SSID.

How is it that these companies still don't give security a passing concern?

> How is it that these companies still don't give security a passing concern?

Lack of lawsuits. The kind that bankrupt companies and set binding precedents for everyone else.

That's why "surveillance cameras" describes these products better than "security cameras" does, especially if they're cloud-connected.

I had the same problem with a WD home server. I returned it when it wouldn't do what it was supposed to do. Later, I started receiving emails from the server as it kept me up-to-date on its status.

Until people start demanding security, and become willing to pay for it, the IoT is going to be positively defined by this kind of nonsense. That, or some kind of legislative action I guess, but that seems like pure fantasy.

That's like saying "until people start demanding safety on cars and become willing to pay for it there will always be fatalities". Sure part of the blame is on the consumer, but maybe the company shouldn't be selling cameras that are inherently insecure.

These types of things typically play out with lawsuits which increase liability for the producers. The problem is that it's (currently) difficult to prove damages when it's only privacy.

For most consumers security is a barrier to usability and we all know which one is more important to the product team.

That's exactly what happened with safety in automobiles; remember Ralph Nader?

> That's like saying "until people start demanding safety on cars and become willing to pay for it there will always be fatalities"

Uh, that's how it sorta works -- for better and for worse.


"On March 22, 1966, GM President James Roche was forced to appear before a United States Senate subcommittee, and to apologize to Nader for the company's campaign of harassment and intimidation. Nader later successfully sued GM for excessive invasion of privacy. It was the money from this case that allowed him to lobby for consumer rights, leading to the creation of the U.S. Environmental Protection Agency and the Clean Air Act, among other things."

We need a standardized protocol/set of protocols first - something that it's easy for manufacturers to adhere to which can handle:

* Pushing common actions in a standardized way (e.g. turn on light, flip channel on TV, raise thermostat to 28 degrees).

* Sending / receiving streams of data via UDP.

* Service registration / discovery & authentication.

* Encryption.

* Upgrading firmware.

And which has a diverse set of servers which can talk these protocols.

This is already starting to happen, for example with HomeKit, but unfortunately instead of all the large manufacturers recognizing that working together would benefit everyone, they each try to create their own proprietary ecosystems with a race to the bottom.
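To make the "common actions" item concrete, the core of such a standard could be as small as a shared message schema that every vendor's bridge speaks. A hypothetical sketch follows; every field name and action verb here is invented for illustration, not taken from HomeKit or any real standard:

```python
# Hypothetical illustration of a standardized device-action message.
# This is not a real protocol; field names and verbs are made up.
import json
from dataclasses import dataclass, asdict


@dataclass
class DeviceAction:
    device_id: str   # stable identifier assigned at pairing time
    action: str      # e.g. "set_power", "set_temperature", "flip_channel"
    params: dict     # action-specific arguments

    def to_wire(self) -> bytes:
        """Serialize for transport (JSON here; CBOR would be more compact)."""
        return json.dumps(asdict(self)).encode()

    @classmethod
    def from_wire(cls, raw: bytes) -> "DeviceAction":
        return cls(**json.loads(raw))


# Round-trip example: raise the thermostat to 28 degrees.
msg = DeviceAction("thermostat-1", "set_temperature", {"celsius": 28})
assert DeviceAction.from_wire(msg.to_wire()) == msg
```

The hard parts (discovery, authentication, encryption, firmware signing) sit underneath a schema like this, of course, which is exactly why a shared standard beats N proprietary ecosystems each reinventing them.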

Maybe a protocol / standard that has all the security / best practices baked into it?

Not a single one, but people need to be better educated on their options. How about: here are the options where you can actually have control of your devices, as under property rights.

I do, however, also think the GPLv3 sphere needs to offer better network GUIs for the non-CLI heroes.

I'm not sure how easy it would be to communicate, and insulate from marketeering. "Steel roll cage" or "Airbags" and "Seatbelts" are relatively simple concepts. I don't know that even something as common as "SSH" would mean much to the average person, and by the time they learned, any given term or technique would probably have been supplanted by another, newer, better one.

I guess the devops team can view all of them

...I guess they've curated a set of good-looking and sometimes-not-completely-dressed camera users whom they view more often than the rest of their customers.

Seen this same method applied to used equipment for sale, especially if it was stolen.

Basically, someone steals a laptop, wipes it, reinstalls the OS with backdoors, sells the laptop for cash, exploits backdoor access to own other devices, exploits owned devices, etc.

Take it one step further. Someone has a target that they are trying to acquire (company website access). So they run a fake contest where the prize is a laptop. The laptop that they ship to the "winner" is backdoored as you have described.

Right, a hacker might even target a website that's known to be visited by the targets, hack it, use it for drive-by download attacks, and use the contest-win backdoored device (laptop, iPad, drive, etc.) to cherry-pick any targets that have not been compromised.

This is a general class of problems that is only going to get bigger.

When I returned my lease car, I had to have a bit of a think about what might be sync'd from my phone via Bluetooth with it, and what functionality existed to erase that. The answers didn't make me feel great.

The fun pastime of buying old HDDs off eBay and carving deleted files off them to see what might be kicking about is going to get a whole lot more interesting as our everything-connected society moves forward.

What's with the "cloud" security systems? Why don't they just provide hardware where you store the information locally?

Ignoring the privacy implications mentioned here, and that you essentially pay monthly/yearly for storage: if your ISP has an outage, your security system becomes useless. It is also a weak point for smarter thieves (just make sure that Internet access is cut).

NETGEAR has previously informed our resellers that retailers are not to resell cameras which have been returned. The Arlo camera system in this instance was resold without our authorization. When setting up a previously owned camera it is advised that all Arlo cameras be reset from the original base station, which will clear connection with any previously existing account. The configuration for the camera needs to be cleared as the settings may contain associated account information of the previous owner. NETGEAR is aware of this concern and takes the security of our customers seriously.

Additionally, NETGEAR has tested for various scenarios in which unauthorized access to an Arlo video might be possible (including using randomized serial numbers). From the testing we have conducted, NETGEAR has not seen a possible scenario where an unauthenticated user plugs in random serial numbers and has unauthorized access to a video stream.

The Arlo camera system is secured by design and has been tested by independent auditors and security researchers. NETGEAR also conducts bug bounty programs to further ensure the security of Arlo customers' video streams and other NETGEAR products.

Yet people still recoil as if in horror when I try to explain that this is one of the core reasons why GPLv3 is so important. Look, we've lost the hardware-freedom wars so far, but we still have software, and we can work on improving our hardware side as we progress.

One of the common arguments I hear in response is, "But open source doesn't pay, and therefore doesn't innovate as much."

While the lack of funds coming in isn't ignorable, innovation is always happening in the FOSS space, often surpassing the proprietary alternatives, often falling far behind as well. It still gives you the power to control your own systems, which is the freedom you can choose not to give up.

The only way you surrender your freedom is voluntarily.

Wow. You know the situation is bad when you are actually better off implementing your own security as a bunch of Arduinos with webcam shields on the LAN and a server with a feature phone in the closet.

LOL, just look at this vigilant little bastard :p http://www.arducam.com/arducam-porting-raspberry-pi/arducam-... No one is sneaking up on that without leaving a mugshot.

Yep, but what does the average consumer do?

I can't tell, but it doesn't seem like the OP reset the devices before returning them. Isn't this his or her fault then? Like having nude selfies on a phone and returning it without wiping the phone to factory defaults?

FWIW, I recently started using the Samsung network camera sold by Costco (SNH-V6414BN), after various homebrew and RPi solutions over the years. It has an on-camera password that is set as part of the WiFi pairing process, so it is not open to this kind of attack. This password is separate from the cloud account credentials, so provided you don't ask the web site or mobile app to retain it (optional), without that password the camera content can't be accessed remotely (of course the firmware could be compromised, and I don't know if the password is adequately protected from eavesdropping).

Holy shit. Never buying off the shelf consumer grade security equipment now.

Sounds like the security part is sorely lacking. That and someone needs to get a life.
