So for example, if the OP were to casually drop a few photos the camera took and a badly worded warning in their mailbox trying to help, the 'victim' could report it to the police and an inexperienced DA might try to bag their first cyber prosecution.
I'd definitely not contact the customer. Contact the vendor by email instead and immediately remove your own access to the system. That way you have it on record (the email), and you can mention in the email that you immediately revoked your own access.
The CFAA is a blunt and clumsy instrument that tends to injure bystanders.
Here's an extract from the CFAA:
> Whoever having knowingly accessed a computer without authorization or exceeding authorized access, and by means of such conduct having obtained information that has been determined by the United States Government pursuant to an Executive order or statute to require protection against unauthorized disclosure for reasons of national defense or foreign relations, or any restricted data, as defined in paragraph y. of section 11 of the Atomic Energy Act of 1954, with reason to believe that such information so obtained could be used to the injury of the United States, or to the advantage of any foreign nation willfully communicates, delivers, transmits, or causes to be communicated, delivered, or transmitted, or attempts to communicate, deliver, transmit or cause to be communicated, delivered, or transmitted the same to any person not entitled to receive it, or willfully retains the same and fails to deliver it to the officer or employee of the United States entitled to receive it;
I feel you. It's because of 1984's CFAA law that I was thrown in jail with a felony charge for rick-rolling my school.
Yes, it probably should have resulted in a suspension, not a felony charge. No, it's not as benign as you implied.
Yes it is. He didn't hurt anyone or even mess with their files in a naughty way. I was so happy to get away from school and this kind of bullshit.
That's exactly the kind of thing the CFAA was created for.
I agree with the GP that he was harmless, and doesn't deserve anything terribly serious as punishment. But what he did is a lot more than using simple HTML injection to add a rick roll to something.
'But he achieved it using leet hacker skills' should not be a factor in determining the nature of the charge or potential sentence. That's along the lines of making a big deal out of someone using a "Subversion" system to access and maintain code.
Getting access to the administrator account is pretty different.
I've had people deface sites. It's stressful, upsetting, and a lot of work. You can't really trust a compromised server, and school IT is fairly unlikely to have great processes for it.
> I was so happy to get away from school and this kind of bullshit.
The real world isn't likely to look any more fondly on this sort of behavior.
While finding the problem should never be punished if it's responsibly disclosed, exploiting it (publicly in this manner) should be.
Both of those were done in an attempt to be altruistic, but technically both were completely illegal. Since no administrator ever found out, no cops were ever called and no prosecutorial discretion was in play; still, I wouldn't want to bet I'd have been "let off", which is sort of terrifying.
You obviously didn't deserve prison time, and you didn't get it either. It shouldn't have gotten as far as it did, though.
But do you think the person/IT responsible for the website was not harmed? You get that they are human and possibly went through an enormous amount of stress?
They may have been disciplined for what happened, either officially or off the books.
There would have been a lot of hours of work post mortem trying to fix the problem. This is money out of a possibly tight IT budget. If the people who run the website care, which many do, this is also stressful.
I'm not saying don't prank, or even don't do this particular prank. Pranks come at a cost, but they also make the world a better place. But be self-aware of your own actions. You hacked and got caught; it really has nothing to do with the CFAA.
Because they decided not to press charges, not because the CFAA makes any sense.
> But do you think the person/IT responsible for the website was not harmed? You get that they are human and possibly went through an enormous amount of stress?
There is a difference between inconvenience and harm. The vast majority of the "harm" caused in cases like this is a result of ridiculous overreactions to anything that involves a computer.
Imagine the analog version of this case. Some kid sitting in the main office sees the teacher enter the combination to the filing cabinet and then the kid unlocks it and sticks in a picture of Rick Astley. If that happened, would we still be hearing about a "post mortem" (as if serious level = human fatality) and the harm and stress the kid caused? Does a law allowing that kid to be prosecuted for a felony really make any sense?
Jail would be an overreaction for your neighbour, but you'd want that as an option for some creep breaking in, right? It sounds like in the parent post's case everything worked out more or less OK.
It was a school, not a home.
(It's also a notable contrast that when people are arguing for bad laws the fear is that criminals will invade your privacy, but when people are arguing for mass surveillance then it's all "nothing to hide" as if organized crime getting access to the surveillance apparatus isn't somewhere north of probable.)
> You don't know that it was your neighbour playing a prank or if it was some criminal that's put up little IP cameras all around your house until you do some investigation. Until you come to some conclusion you could be very worried and/or paranoid.
The criminal who is secretly trying to hide IP cameras in your house is going to leave you a picture of Rick Astley?
The problem with this theory in general is that it has nothing to do with any wrongdoing. Suppose your neighbor sees what kind of locks you have on your door and proceeds to pick them in front of you in ten seconds, then advises you to use better locks.
Now you're in exactly the same situation. You just learned that that person or anybody else with amateur-level lockpicking skills could have been in your house at any point after you installed those locks. If you're a paranoid person then that fact is going to distress you and you're going to search your house for IP cameras, but the source of that distress is the bad locks, not the person who brings them to your attention.
> Jail would be an over reaction for your neighbour, but you'd want that as an option for some creep breaking in, right?
That's the whole problem. You need to find something to distinguish those situations and codify it into law, instead of having a law so broad that it covers both and then having to rely on prosecutorial discretion. Whether you go to jail and for how long needs to depend more on what you did than how much the prosecutor likes you, or we're living in a police state where anybody can be imprisoned at will.
There is a reason we don't just have a single law that says "you must do the right thing; penalty up to life in prison" and then let prosecutors decide what the right thing is.
It was someone's network that they're in charge of securing.
> The criminal who is secretly trying to hide IP cameras in your house is going to leave you a picture of Rick Astley?
See anything from Anonymous-like hacking groups. Leaving troll notes behind isn't all that uncommon in network breaches. The point is you just don't know until you investigate.
> Suppose your neighbor sees what kind of locks you have on your door and proceeds to pick them in front of you in ten seconds
Or picks them while you're out and leaves a note saying "your locks suck". You discover it's your neighbour after checking your cameras. The OP did say they traced his IP to discover who it was, this wasn't some white hat pen test.
> That's the whole problem. You need to find something to distinguish those situations and codify it into law, instead of having a law so broad that it covers both and then having to rely on prosecutorial discretion.
Great point, I'm on board. But there's a lot to cover that isn't just "what harm did you cause once you were in?". There's potentially time and money (resources) that law enforcement spend investigating. Resources that the company spends investigating. If the breach is public, stock prices could be impacted. IP could be discovered - whether or not it is disseminated, sometimes you just can't know.
All of this because you wanted some lulz and to see if you could? How about stay out of the network you aren't meant to be in. Go into pen testing if you find that work so rewarding and fun.
Punishment isn't the only reason we have laws. Deterrence is also key.
Which is necessary because of the vulnerability, not because of the breach. If bad people could have gotten into your network and that is something you care to spend resources investigating then you need to do it regardless of whether the person who notified you of the vulnerability trolled you with it or not. Whether they troll you is independent of whether they steal your secrets; you can have either without the other or both or neither.
> All of this because you wanted some lulz and to see if you could? How about stay out of the network you aren't meant to be in.
It isn't a question of right and wrong, it's a question of proportionality. If you troll somebody you deserve to be chastised and given detention or community service, not thrown in prison.
> Deterrence is also key.
I'm not sure deterrence is working in your favor here. If your network is insecure and you get trolled then you look stupid and fix it and give the kids detention. If your network is insecure and you deter the trolls then it takes another year before someone who is harder to deter breaks in and then you get arrested because the people who broke in were using your servers to distribute child pornography.
I'll take the Rickroll please.
Additionally, I'm not sure if this would be a violation of the Wiretap Act as an "interception" either, even if it was intentionally used to spy on the new owner of the device. Federal law is somewhat lacking in this regard.
With a bar that low it becomes a game of you reading the minds of the implementors. "Oh, maybe they didn't think that anyone would bookmark a URL, etc. Maybe they want me to go to page X, which may or may not link to this anymore...?"
It'd be better for everyone (judges) to assume that they meant to make the service entirely unprotected. Then innocent page-reloaders wouldn't go to jail and people could sue the company for their marketing lies, etc.
I can intentionally call your documents department and "guess a URL" without breaking the law, but if I do it on your webserver that same legal intent is turned around and used as proof of criminality. Leaving me with guessing "well, there's a document there in the public folder but maybe they don't want me to see it."
Of course, this is all about malicious and inconsistent prosecution. For you and I it's a theoretical game, but Weev could go back to jail for relying on convention to guess that index.html is public.
We need an actual technological bar otherwise our security industry becomes an ass-covering exercise.
So there needs to be an actual lock on the door that you've done something to bypass, whether that be manipulating the computers or lying to someone (or thing) to gain access.
Another way of putting this is that they are using the computer to express their intention to authorize (or not authorize) access by means of the programming. It should, as a matter of good public policy, be on them to get this right. That's why my standard would require material deception--that is, but for the intentional deception, access would not have been granted.
Anything less simply sets too low a bar and excuses incompetence. This is bad public policy because it allows people to stumble into felonies while excusing all kinds of negligence on the part of those people who were supposed to be protecting things. Otherwise we have an "I know it when I see it" standard for which parts of a site are okay to interact with and reasonable minds can (and frequently do) disagree over the particulars. My standard would move this rule to determining statements of fact--did they know or should they have known they were deceiving this computer system/person in order to gain access? It also deliberately prevents people from shifting the blame for negligently configuring access to their computers.
I think we both know the widespread public harm caused by networks of hacked computers and we both know that, unlike the real world, essentially every computer's locks and windows are tested many times a day. Leaving things open is clearly negligent in my view and I've seen far too many clients of mine leave vulnerabilities open longer than is justifiable, contrary to my advice. I mean, I still see PCI audits reporting POODLE, which is just sad.
Now, inasmuch as you're telling me that the law doesn't and isn't likely to see things my way, sadly, I have to agree with you there.
The fact that users are supposed to just guess about what access sites have or have not authorized, with felony charges for anyone who gets it wrong, does not make sense to me when they have the means to express their rules for authorization in code.
That's why I say you should have to intentionally deceive those rules (or their people) to get in.
The idea that we should be deliberately vague about where the boundaries are and let people stumble into felonies doesn't make any sense to me. The idea that we should let someone write that into a thousand page ToS also doesn't sit well with me. I'd rather it be a question of fact.
Take the case about modifying the URL. It's normal to be able to type any URL I like. Why should it be a felony if I try other IDs? I'm simply making a request; it's up to them to decide what access I should or should not have. Otherwise even googlebot may end up a felon. The fact is that much of the web is and always has been open by default. Anonymous FTP is normal... if you want authentication, you should configure that. The idea that someone could be a felon because they were somehow supposed to know that your misconfigured FTP server wasn't supposed to let them in is simply unreasonable, and it only works out because prosecutions are rare.
That's why I want a proper boundary. If you're not deliberately hacking someone or social engineering someone and if the only thing you do is to report the bug you found to responsible parties (the site owner/operator or government) I'm not willing to charge someone with a felony for modifying a URL or logging into anonymous FTP or whatever else like that.
If you want to advocate for liability for software security negligence, I won't argue --- at least, not on a moral basis (I think "be careful what you ask for" but whatever).
But I do not see what any of this has to do with liability for intruders.
I think people would start seeing more social costs for running businesses negligently if they couldn't point to iffy hacking prosecutions to justify themselves.
If all it takes to get something is to request it, without any interaction and thus without any fraud, then it's public.
What's the difference between me calling a phone number and asking for a company's financials before they're publicly released, and checking the probable URL before they're publicly released?
The first isn't a crime, why is the second?
I mean that you keep presuming that we're discussing intruders which implies guilt, but when viewed in a more realistic context, as requesters, your comments about liability aren't relevant.
Nobody is at fault for simply asking for a document in the real world, and nobody should be online either.
You're the one trying to move the goalposts and alter premises. You reframe everything in a violent physical metaphor even though you've been around long enough to know that it's the least useful thing to compare information to. I can't copy a house by looking at it so it's a crappy analogy for a website.
There's a world of difference between asking for a document and opening a door and taking it.
Also, breaking and entering is generally a misdemeanor rather than a felony.
And on top of that, there is no entering. People love to paint bad abstractions on top of computers, but fundamentally all there ever can be is you asking the remote computer to do something and it deciding whether to do it or not. That doesn't look anything like B&E or trespass. The best meatspace analogy is social engineering. But there is no generic law against manipulating people and the laws against specific types of manipulation are highly context-dependent because so is the harm.
Burglary is a felony, and it's defined in most places simply by entering a building or vehicle without permission and with illicit intent (that intent need not be "to steal stuff").
Breaking and entering is generally listed as a misdemeanor. Misdemeanors result in jail sentences of less than one year. However, breaking and entering is often associated with the felony crime of burglary. Burglary is usually defined as "breaking and entering with the intent to commit a felony while on the premises".
The difference between burglary and B&E of "intent to commit a felony" is the sort of thing that justifies the felony charges. I don't see that requirement in the CFAA.
And as you point out, even that limitation is basically swallowed by the fact that the Wire Fraud statute covers very similar conduct to the CFAA, so they can charge you with computer fraud in furtherance of wire fraud.
Asking a human for a document isn't a crime, so it shouldn't be when you ask a computer.
Fraudulently claiming to be someone else to get a document is a crime, and that's how it should be on a computer.
I suppose you'll ask what if someone's name really is Bobby Tables, but then I submit that it either can't be tailored to your database, or that their intent when changing their name to exploit you counts as social engineering and I don't consider it worth optimizing for.
These issues are less fraught than people seem to think they are. We walk through commercial spaces all the time that are full of unlocked, unmarked doors, and it rarely takes us more than a moment to realize when we've gone somewhere we weren't meant to be. It's not a new problem.
Let's say I spray-paint people's medical records on the sidewalks in the park. It's not your responsibility to avoid the park despite anything you may see. A public area by definition carries no expectation of you having to wonder if you were meant to be there.
Similarly, if I leave sensitive information all over some publicly accessible URLs it shouldn't be your obligation to avoid those URLs, even gratuitously. As it's not your obligation to plug your ears when your neighbors fight, etc. You are still obliged to not break the law even if supplied with the means. But that means the act of blackmailing is illegal, not the listening.
The current application of the current law requires reading the mind of the viewer/listener/requester. That would punish you for picking up my keys from the sidewalk just because you don't play well to the jury.
tldr; The proper law would only penalize actual crime, not theoretical crimes.
Now, yes, when you do that 100 times with a script, you can argue that they're doing something bad... which is why I also advocate a safe-harbor for people who report the bug only to the site owners or the government. If someone is making a reasonable effort to help you fix your bad security, we shouldn't be treating them like a bad guy.
If they actually do that or take steps that would make a reasonable person conclude that they actually were on their way to do that before they were caught? Then yes, hammer away.
I'm more concerned with there being a brighter line for what is and isn't unauthorized access, and a safe harbor so people can't hammer anyone who simply makes them look bad by pointing out that their fly is open.
If you can fit facts of some old case to show that they were guilty in a principled way, I won't argue. I may or may not agree with the specific reading of the facts, but at least I would find that to be a principled disagreement.
My point in this exercise is to shift things from an exercise in how people feel about some particular intrusion or a person's motives, which are things not likely to reach broad consensus, to an exercise where we debate the specific facts of the case, which reasonable people should be able to agree upon given enough information, no matter how they feel about the parties of any particular case.
So I think you'll find that the better-constructed laws require overt acts that inform us about motive in many (but not all) cases, rather than mere inferences about motive. Though I certainly agree that there are reasonable facts which one can use to infer mens rea from. Good intent can also be inferred--that's why I specified a safe harbor for people who were trying to simply do the right thing and report a bug they'd found. Someone who reports a problem to the site owners in good faith is simply not someone I think we should be hitting with criminal penalties in general, though this might be weighed against other things like attempts to ransom the bug or sending DROP DATABASE injections and whatnot which could wipe out any inference of good intent.
That said, I've always said this was my view of how the law ought to work. Implicit in that is that the law does not, in fact, work that way, so I honestly can't disagree with you here--the law certainly doesn't work my way, and this is all my own thinking on how it ought to work in my own view. So inasmuch as you're saying the law doesn't work this way, I completely agree.
There has to be 1) believable intent to 2) commit an actual crime.
Your reasoning is circular without that - "he had bad intent so whatever he was doing was illegal which is how we can say his intent was bad rather than simply unsavory, etc."
There's no valid precedent for simply requesting a document being illegal, nor ultimately a security benefit from it being so.
>..accessed a computer without authorization... AND having obtained information .... determined ... to require protection against ... disclosure for reasons of national defense ... [AND]... willfully communicates ... to any person not entitled to receive it...
Unless in the control room of a nuclear power plant, that part doesn't criminalize taking a pic via someone else's webcam.
However, the issue is that his account also shows data from a device that isn't his (anymore). He has access to the cloud service, but the set of valid devices should simply be empty, and isn't.
Simply put, the guy that bought the cameras acquired access rights to view the camera stream through the cloud service with his purchase. When he returned them, those rights expired. By logging back into the service and viewing the stream from cameras he knew he had returned, he exceeded his access rights.
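In other words, the fix is a single authorization check that the service apparently isn't performing: before serving a stream, confirm the device is still bound to the requesting account, and remove devices from that set on return or transfer. A minimal sketch (the function and device names are hypothetical, not the vendor's actual API):

```python
# Hypothetical sketch of the check a camera cloud service should perform:
# only serve a stream if the device is still in the requester's account.
def can_view_stream(account_devices, device_id):
    """Authorization check: a returned or transferred device must be
    removed from account_devices, after which this denies access."""
    return device_id in account_devices

# Buyer owns camera "cam42", then returns it to the store.
devices = {"cam42"}
print(can_view_stream(devices, "cam42"))  # True (while owned)

devices.discard("cam42")  # what *should* happen when the unit is returned
print(can_view_stream(devices, "cam42"))  # False
```

The bug described in the article amounts to the service skipping the `discard` step, so the old account keeps seeing the new owner's stream.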
If you send me a paper investor-relations document with a document ID of 7, I'm not committing a crime by calling your documents department and asking for documents 1-6 and 8. If they give them to me, I now even have a legal right to that copy of the document - such that you can't compel me to return them.
But the USG put Weev in jail for accessing a series of incrementally numbered public URLs. It was theoretically so bad as to warrant jail, and yet so meaningless that they didn't even tell customers about it or reprimand a sysadmin. No security people were fired for incompetence.
So no, there's no technical obligation for the service provider. If the cleaners unplug your servers while cleaning, cyber-attack! If someone scrapes your website, cyber-attack!
To bring the physical world to parity: when banks get robbed, we now recommend hanging dream-catchers around to make everyone feel better. /s
A request to a URL is just that - a request. What level of access a request grants should be the responsibility of the grantor.
Shouldn't that be essentially the same thing?
But in today's legal climate, both people are dirty hackers despite neither actually performing any hacking.
The problem is our entire generation doesn't care about privacy. They willingly hand over everything about them to an app and care not a single drop that their government spies on them without a warrant.
Yup -- law follows culture, not the other way around. IMHO this (cultural priorities) is at the root of other ills too, e.g. educational system and criminal justice system brokenness. I think most people genuinely do want the right thing but just aren't aware of the long-term consequences of the current approach.
I bought a clever-looking one and took it home, and was dismayed to find out the only way to get it to sync was to create a "cloud account" which would supposedly allow me to "check my progress from anywhere".
I returned that one to the store and bought another - same requirement: Cloud account needed to activate. Took that back. Decided it was easier to just type the number into my phone.
It's hard for me to understand how so many people are oblivious to / OK with the constant surveillance that goes on in their lives.
As I've been saying for quite some time: Data are liability.
This is simply the home-security edition.
I don't think kids these days are naive, I think they just see that trying to fight this stuff can realistically get them life in prison and/or killed.
(Though security researchers might want to take care in how they identify and report vulnerabilities.)
D-Link offers some sort of cloud service, but I've never used it. I keep the cameras segregated onto a separate Wifi network that can't access the internet, and they work just fine in that configuration. The cameras have built-in HTTP servers and present what they see as an MJPEG stream. I use 'motion' running on a machine to handle motion detection, recording, etc. I use a VPN server to handle my remote access needs.
I get everything that the cloud stuff offers, but all hosted locally.
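As a rough illustration of the local approach: an MJPEG stream is just a sequence of JPEG images delimited by JPEG start/end markers inside a multipart HTTP response, so splitting it into frames needs no cloud service at all. A sketch (the multipart boundary and "frame" bytes here are made up for demonstration; tools like 'motion' handle all of this for you):

```python
def extract_jpeg_frames(stream_bytes):
    """Split a multipart MJPEG byte stream into individual JPEG frames.

    JPEG frames start with 0xFFD8 and end with 0xFFD9; we scan for
    those markers rather than trusting the multipart boundary headers.
    """
    frames = []
    start = 0
    while True:
        soi = stream_bytes.find(b"\xff\xd8", start)    # start-of-image marker
        if soi == -1:
            break
        eoi = stream_bytes.find(b"\xff\xd9", soi + 2)  # end-of-image marker
        if eoi == -1:
            break
        frames.append(stream_bytes[soi:eoi + 2])
        start = eoi + 2
    return frames

# Simulated stream: two tiny fake "frames" separated by a multipart boundary.
fake = (b"--boundary\r\nContent-Type: image/jpeg\r\n\r\n"
        b"\xff\xd8AAAA\xff\xd9"
        b"\r\n--boundary\r\nContent-Type: image/jpeg\r\n\r\n"
        b"\xff\xd8BBBB\xff\xd9\r\n")
print(len(extract_jpeg_frames(fake)))  # 2
```

Point being: everything needed to record and process the video exists on your own LAN, which is why the segregated-network setup loses nothing.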
What's described in the article scares me, which is why I've set things up the way I have. Even if the cameras were used (they weren't) and tied to someone else's account, they can't send anything back to the cloud service.
No system will be perfect though. What if the internet connection goes down? What if the power goes out? What if the provider for the remote storage goes down?
Nest Aware charges $10–30/mo for this (https://store.nest.com/product/security/camera).
One of my cameras records 640x480 at 1 fps, saving movies at 7 fps when motion is detected. I end up using 1GB to 1.5GB per camera per day, depending on how much motion is being detected. The files saved are 7 fps videos for each event, 1 fps videos for each hour (24 videos), plus the event videos all stitched into one video.
For my purposes, this is sufficient. Other folks might want 720p or 1080p video, have more cameras, etc, and have correspondingly larger storage requirements.
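Those figures make the sizing math easy to run for your own setup. A back-of-envelope sketch (the per-day figure is from the comment above; the camera count and retention period are made-up example numbers):

```python
# Back-of-envelope storage estimate for a locally hosted camera archive.
gb_per_camera_per_day = 1.5   # worst case cited above (640x480 @ 1 fps,
                              # motion-triggered 7 fps clips)
cameras = 4                   # hypothetical household setup
retention_days = 30           # hypothetical retention window

total_gb = gb_per_camera_per_day * cameras * retention_days
print(total_gb)  # 180.0
```

So even a multi-camera setup with a month of retention fits comfortably on a single consumer hard drive at these resolutions; 720p/1080p scales the numbers up accordingly.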
Is there any open-source/documentation that is accessible to more people (not just network experts) on how to do this kind of setup?
* Remote access to your home network
* Recording and storage of video, possibly with motion detection and alerting.
* Remote storage of video/events (optional).
Each problem is "sort of" solved:
Most routers have some kind of VPN server built in. My Asus router supports PPTP out of the box, which isn't very secure. I think some routers are starting to support OpenVPN, but without an easy wizard to set it up and distribute the profiles and certificates it would probably be beyond the average user.
A lot of NAS devices come with software to record from IP cameras. My QNAP has it, but I have no idea how good it is as I've never tried it. I know that to use more than two cameras they want you to buy extra licenses.
A lot of NAS devices can also sync folders up to various cloud storage providers. This could solve the optional remote storage requirement.
As for making it all work together, that's another story. I'm not aware of any kind of easy-to-follow HOWTO for a user whose goal is "access my cameras remotely without sending everything to the cloud".
There's a name for a hacking strategy where you mass-purchase products, modify them or acquire relevant information, then resell or return them. "Catch and release" comes to mind, but I can't find any references.
The title is missing an important fact: these are not traditional network cameras, they're ones that apparently stream video into the cloud.
Those cameras that do not "phone home" to a cloud service don't have this problem; the ones that you can set up with a username/password and then connect directly to from the network. Ironically it's the cheap no-name ones that usually work like this, as the company just sells the hardware and isn't one to bother with their own set of servers/accounts/etc.
IMHO these cameras that do rely on a third-party service are to be avoided, since what happens to that service is completely out of your control.
Buzzwords combined with profit motive produce some worrying outcomes.
I ask because I've worked on various products, and single units change hands between engineers constantly. Phones for testing, accounts with shared dev passwords, the actual hardware, all kinds of test units get spun up and passed around, even on crappy products where the engineers' imaginations are the only QA.
Surely one engineer set up a camera, passed it along to another engineer, who set up the camera and encountered this error?
There are lots of classes of error that can hide in a product, but this feels like one that it's nearly impossible not to hit.
My brother gave me his Dropcam after setting it up for himself, and I had to prove my identity and he had to prove his to get them to move the camera to my account. It was a hassle at the time, but I was glad to know that they at least had decent security.
(Notice that they eventually posted a clear screenshot showing the problem and there was still not much of a response)
I still have the old emails from Google Security in my mailbox. June 2, 2015 was when I received the first "received" email. June 3, 2015 is when I received the "triaged" email. June 4, 2015 is when I received the "filed a bug" email. You can see from the Bugzilla bug that it was fixed by June 5, 2015.
The connected smoke detector is useless, since it's only useful in emergencies, and an app-connected thing which runs complex firmware is the absolute last thing I'd trust to save my life. There's a reason why sprinkler heads to put out fires are dead simple.
They did nothing with an unlimited budget for 2 years: http://arstechnica.com/gadgets/2016/06/nests-time-at-alphabe...
God forbid they have a wireless AP with the serial number somehow encoded in the SSID.
How is it that these companies still don't give security a passing concern?
Lack of lawsuits. The kind that bankrupt companies and set binding precedents for everyone else.
These types of things typically play out with lawsuits which increase liability for the producers. The problem is that it's (currently) difficult to prove damages when it's only privacy.
Uh, that's how it sorta works -- for better and for worse.
"On March 22, 1966, GM President James Roche was forced to appear before a United States Senate subcommittee, and to apologize to Nader for the company's campaign of harassment and intimidation. Nader later successfully sued GM for excessive invasion of privacy. It was the money from this case that allowed him to lobby for consumer rights, leading to the creation of the U.S. Environmental Protection Agency and the Clean Air Act, among other things."
* Pushing common actions in a standardized way (e.g. turn on light, flip channel on TV, raise thermostat to 28 degrees).
* Sending / receiving streams of data via UDP.
* Service registration / discovery & authentication.
* Upgrading firmware.
And which has a diverse set of servers which can talk these protocols.
I do, however, also think the GPLv3 sphere needs to offer better network GUIs for the non-CLI heroes.
Basically, someone steals a laptop, wipes it, reinstalls the OS with backdoors, sells the laptop for cash, exploits backdoor access to own other devices, exploits owned devices, etc.
When I returned my lease car I had to have a bit of a think about what might be sync'd from my phone via bluetooth with it, and what functionality existed to erase that. The answers didn't make me feel great.
The fun pastime of buying old HDDs off eBay and carving deleted files off them to see what might be kicking about is going to get a whole lot more interesting as we move toward an everything-connected society.
Ignoring the privacy implications mentioned here, and that you essentially pay monthly/yearly for storage: if your ISP has an outage, your security system becomes useless. It's also a weak point for smarter thieves (they just have to make sure your Internet access is cut).
The Arlo camera system is secured by design and has been tested by independent auditors and security researchers. NETGEAR also conducts bug bounty programs to further ensure the security of Arlo customer’s video streams and other NETGEAR products.
One of the common arguments I hear in response is, "But open source doesn't pay, and therefore doesn't innovate as much."
While the lack of funding isn't ignorable, innovation is always happening in the FOSS space, often surpassing the proprietary alternatives, often falling far behind as well. It still gives you the power to control your own systems, which is the freedom you can choose not to give up.
The only way you surrender your freedom is voluntarily.
LOL, just look at this vigilant little bastard :p http://www.arducam.com/arducam-porting-raspberry-pi/arducam-... No one is sneaking up on that without leaving a mugshot.