Alexa shows private Wyze Cam feed to stranger [video] (youtube.com)
215 points by pravda on June 1, 2019 | 81 comments



Wyze was all over this from the get-go. It has already been patched (took about 48 hours): https://reddit.com/r/wyzecam/comments/bvis0f/psa_asking_alex...


> What happened — The incorrect camera being shown was once owned by the same customer. After the customer deleted this camera, there was a bug where we did not completely clean up the user-device association for Alexa viewing. This resulted in the customer being able to view the camera from their Alexa device even though he does not own this camera anymore. Because the bug is related to device deletion from Alexa’s system, this bug ONLY impacts cameras that were Alexa enabled and then transferred to another account.
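
To make the failure mode concrete, here's a minimal sketch of a cloud-side device-deletion flow (hypothetical names, not Wyze's actual code); the bug they describe corresponds to doing the first step but forgetting the second:

    # Hypothetical Python sketch of a "delete camera" handler. The bug Wyze
    # describes maps to removing the device record but leaving the Alexa-facing
    # user-device association behind.
    cameras = {"cam-123": {"owner": "alice"}}   # device registry
    alexa_links = {("alice", "cam-123")}        # user-device links exposed to Alexa

    def delete_camera(user_id: str, camera_id: str) -> None:
        cam = cameras.get(camera_id)
        if cam is None or cam["owner"] != user_id:
            raise PermissionError("camera not owned by this account")
        # Step 1: remove the camera from the owner's account.
        del cameras[camera_id]
        # Step 2: revoke integrations granted on the owner's behalf. Skipping
        # this leaves the old owner able to stream via Alexa after a transfer.
        alexa_links.discard((user_id, camera_id))

    delete_camera("alice", "cam-123")
    assert ("alice", "cam-123") not in alexa_links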


What a PR disaster. It only takes one incident like this to tank a product. Testing is so important for IoT stuff, but it usually gets shipped out way before it's ready.


This happened to someone with a Netgear Arlo [1] a while ago. Netgear responded to the user that resale isn't 'supposed' to occur and so didn't consider it an issue previously. Nevertheless they claim to have been working on a fix.

Outside of various geeks though I'm not sure that word of most vulnerabilities with internet connected cameras reaches a wider audience.

[1] https://www.reddit.com/r/privacy/comments/4ortwb/i_bought_an...


I would bet that reselling a camera was treated as a corner case and the ticket was probably sitting on the backlog somewhere.


Or just dismissed outright. It's very common to see FAANG employees on HN labeling user concerns "edge cases" and blowing them off.


Is it really that common? FAANG have millions to billions of users/customers. Can you at least cite some examples, please?

If you're FAANG-like and have 100M users an edge case that only affects 1% of customers still affects 1M people. Something that affects 0.1% still hits 100,000. Even at 0.01% you've still infuriated a small town's worth of people.

I'd hope that the average FAANG employee is smart/aware enough to realise that even edge and corner cases represent significant problems, even if all you care about is PR. (Although worth thinking about what might happen if you're Netflix and all these people decide to cancel subscriptions.)

(Incidentally, there's one plausible and recent example I'm aware of: issues with Macbook Pro butterfly keyboards.)


> Although worth thinking about what might happen if you're Netflix and all these people decide to cancel subscriptions.

Probably not much. The thing is, from business POV, deprioritizing edge cases can be a rational choice. It depends on how much the impact of an edge case is amplified. At the very base, if you piss off 0.01% of your users, you lose 0.01% of your revenue, which is not much and can be ignored. However, if that 0.01% can get a Twitter shitstorm rolling to the point it reaches mainstream media, suddenly the damage may be far greater.

WRT. examples - any time myself or someone else mentions that a piece of software is designed to prevent productive/efficient use of it, or loses such a feature, you can see responses that amount to "power users are small minority, why would one care about them". The annoying thing is (besides showing the disrespect to other people's time), this is a self-fulfilling prophecy. If you don't leave room for user to get better at using your software, they will never get better at using it and you won't have power users.


That's true w.r.t. power user examples, and I agree that it's infuriating, but OTOH I tend to assume the people making such comments are just parroting ideas. I'm certainly not convinced they're all FAANG employees, as suggested by the comment I was responding to.


This angers me... edge cases are where people get cut... and badly! If anything, they should have more attention paid to them, not less. Incentive structures inside these orgs are poorly designed to account for this, however. Trouble is, I don't know a good way to modify them to do so without introducing even worse perverse incentives.


This is definitely a corner case, and the number of such corner cases quickly balloons beyond what anyone can keep track of.

I agree this is an egregious miss, but it's absolutely understandable from an engineer's perspective.

I’m sure no one is happy with this outcome.


Upper/executive management certainly were happy... of course, until it bit the company in the ass. Engineering time not spent handling what would happen when a customer becomes not a customer is development/engineering time spent making features for sales to tout instead of making sure the product was safe for people they no longer thought they had a reason to care about. Reputational risk is really poorly handled because it's such a nebulous problem - and now, hopefully they'll see a massive drop in new customer acquisition and existing customer usage or they'll have absolutely no reason to care at all.


Had a similar experience with an old set-top-box: a Boxee.

The new owner couldn’t add Netflix to it until the old owner removed their “sync”, even though the old owner wasn’t a subscriber anymore!


Reselling might not be common, but returning is, at least in Europe. You could install the camera but then decide to return it.


Eh, I think most people are desensitized. These Wyze cams are dirt cheap, small, look nice; I don't think it's going to impact them at all. Higher end products or things like IoT door locks, maybe.


And rightfully so! Privacy is a right of utmost importance and for a surveillance product should be a cornerstone of its design.


It should, but I doubt it will have any significant impact, unfortunately.

It pays to not give a shit about security to get the product out faster. See e.g. WhatsApp. They had monthly breaches but still got big, and then cleaned up their act.

Had they tried to do it right from the start, a more aggressive and reckless competitor might have eaten their lunch.


People have been trained to accept ‘perfect security doesn’t exist’. Most people think they have nothing to hide anyway. They don’t care.


This is why you should never buy/sell electronics used for sensitive stuff like this second hand. Cases like this might seem bad but are still quite isolated and unlikely, imagine if the seller deliberately implanted malware, or buyer trying to get past your 2FA.


Or companies should be required to have a "factory reset" and warrant its removal of all customer data and associations (with heavy minimum compensations for failure).
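
A rough sketch of what a "factory reset that warrants cleanup" could look like at the firmware level (all names here are illustrative, not any vendor's real API): the reset has to wipe local state and tell the backend to drop every account association.

    import shutil
    from pathlib import Path

    class StubCloud:
        # Stand-in for a vendor backend client; the method name is hypothetical.
        def revoke_all_associations(self, device_id: str) -> None:
            print(f"revoked all account/skill associations for {device_id}")

    def factory_reset(device_id: str, data_dir: Path, cloud: StubCloud) -> None:
        # 1. Wipe local customer data: credentials, Wi-Fi config, cached clips.
        if data_dir.exists():
            shutil.rmtree(data_dir)
        data_dir.mkdir(parents=True)
        # 2. Sever every account/integration link on the backend, so a later
        #    buyer can't be seen by (or see) the previous owner.
        cloud.revoke_all_associations(device_id)

    factory_reset("cam-123", Path("/tmp/cam-123-data"), StubCloud())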


But the malware could be installed in the hardware.


That's a separate issue, surely. The subject is manufacturers not properly clearing user data and saying the mitigation is "just don't sell it".


So someone deleted their data and, surprise surprise, it wasn't really deleted? And it only got noticed because the tool stuffed up by using it. What's the bigger issue: the stuff-up, or the fact Google didn't honour his deletion properly?


> or the fact Google didn’t honour his deletion properly?

Alexa is an Amazon product. And in this case it's not clear that Amazon could have known the device was transferred. They provide the authentication layer, but the authorization is, as I understand it, in the app.


Alexa smart home skills, such as the one used here, need to be notified explicitly when a device is removed from your backend, otherwise they’ll just carry on as usual.

However, it looks to me that there were two levels of failure here. The first is not notifying Alexa of the device removal, but the second is some clearly broken authorisation code. When Alexa requests a video stream it includes an OAuth token issued when the user first set up the skill, which apparently Wyze weren’t checking was still valid for the camera in question before serving up a stream.
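
To make that concrete, here's a minimal sketch of the check that appears to have been missing (illustrative names only, not Wyze's or Amazon's actual code): before serving a stream for an Alexa directive, re-verify that the token's account still owns the requested camera.

    # Alexa presents an OAuth access token issued at account-linking time; the
    # skill backend should re-check, on every request, that the token's account
    # still owns the camera rather than trusting a link created in the past.
    tokens = {"tok-alexa-1": "alice"}    # access token -> account
    ownership = {"cam-123": "bob"}       # camera -> current owner

    def handle_stream_request(access_token: str, camera_id: str) -> str:
        account = tokens.get(access_token)
        if account is None:
            raise PermissionError("invalid or expired token")
        if ownership.get(camera_id) != account:   # the missing check
            raise PermissionError("camera no longer associated with this account")
        return f"rtsp://example.invalid/streams/{camera_id}"  # placeholder URL

    try:
        handle_stream_request("tok-alexa-1", "cam-123")  # alice's token, bob's camera
    except PermissionError as e:
        print("denied:", e)                              # fails closed, as it should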


> And in this case it's not clear that Amazon could have known the device was transferred.

Very fair statement. It's been a while but I've done a couple of these[0]. One comes to mind. The product was an internet connected home automation platform that predated the term "IoT". They had APIs with OAuth and their own system for managing layers of device pairing. It's a "hub"-like product; it could pair with devices for control and be controlled by things it is paired with.

In our case, we didn't have to think about this specific problem because their product handled that (and there was no alternative but to let it), so the service we wrote did little more than pass the data from point A(lexa) to point C(ustomer API) and back, through point B(loat), which did nothing but alter the inputs/outputs according to what "A" or "C" insisted on. It wasn't that the problem "couldn't happen", it's that it couldn't happen in the code I was responsible for[1]. "C", as you stated, provided authorization -- it was the authority and the only component in the chain with enough information to cause/detect/remediate this sort of problem.

With the additional context of there being a former/current ownership link at play here, it's a crazy common kind of bug -- everyone who's been on the "user" end of technology has experienced it[2]. I'm not surprised it happened. Quite disappointed.

I mean, this isn't hindsight. Do designers of IoT cameras[3] think that their customers are going to be anything less than mobs-and-pitchforks enraged if unauthorized users are able to view it? Sure, it matters a little bit if it's broadcasting the interior of a bedroom versus a rarely changing picture of the doormat, but that's 10% knocked off the mob. It's still a mob.

Handling ownership transfer correctly for privacy-sensitive devices (pretty much anything, anymore) should be the thought shortly after "let's connect a camera to the malware-net". Further, it's not an entirely unsolved problem (even if the existing solutions are imperfect). You're always at the mercy of your users, but you can make the process active rather than passive.

I purchased a product that had the user side of this handled pretty well (easily the more difficult of the two problems -- "they'll make a better idiot"). It was nothing exotic -- it simply applied "default deny" in a manner that fit the problem. IIRC, it was a SmartThings Hub that I purchased from my brother-in-law, who had connected it once, used it for a month, disconnected it and tossed it in a box. I did the hardware factory reset (he couldn't remember the password) and expected the usual routine you'd get with a router.

No dice -- after the factory reset, it defaulted to denying access until a device-specific code was entered, which is printed on a card stuffed in the manual, neither of which I had. I had to call their support line and provide information about the previous registered owner as well as a few values from the bottom of the device (guessing they're used to calculate the activation code). It was pretty solid as far as minimizing accidental disclosures like this -- even if the user forgets to factory-reset, as a hub, the worst it's going to reveal is a password-protected interface that's protecting PII about the previous owner and configuration for a bunch of devices they owned, which likely wouldn't work even if they were all sold together to the same person. The previous owner could control the hub with their password, which would allow them to ... change their PII and see error messages when they try to control the irrelevant devices it's now configured for?

That process isn't good enough for a camera: if it's wired, or the previous/current owner used open wireless, it could be plugged in, DHCP itself an address, and become available to the previous owner through their cloud account.

Unfortunately, that extra step is costly. If the activation code is generated in a cryptographically sound manner (some combination of a hash of the values I had to provide signed with a private key by an HSM?), that's going to be added software development/infrastructure costs. Then there's handling idiot customers (like me) which will require some way to authenticate me -- and you're probably going to end up needing a call center to handle all of them, because all they see is that the damn thing they bought isn't working.
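
For what it's worth, the scheme I'm guessing at above fits in a few lines -- derive the per-device code by keying a MAC over identifiers printed on the unit, with the secret standing in for a key an HSM would hold. Purely illustrative; I have no idea what SmartThings actually does:

    import base64, hashlib, hmac

    MANUFACTURER_KEY = b"factory-secret"   # in practice this lives in an HSM

    def activation_code(serial: str, mac_address: str) -> str:
        # Only the key holder (the manufacturer) can compute this code from
        # the identifiers printed on the device and its packaging.
        msg = f"{serial}:{mac_address}".encode()
        digest = hmac.new(MANUFACTURER_KEY, msg, hashlib.sha256).digest()
        # Truncate to something a support rep can read over the phone.
        return base64.b32encode(digest[:5]).decode()

    print(activation_code("STH-0042-AB", "aa:bb:cc:dd:ee:ff"))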

But that's just it -- (other than us) at no point had it crossed the customer's mind that the camera could have a privacy "surprise". They see "Camera" and "For my home" and assume any product sold at a major retailer must get the basic feature of "don't accidentally expose the customer's genitals" right[5] (joking, though Amazon's Echo Show is often marketed sitting where your bedroom alarm clock would be). Even more fun is that the customer is far more likely to encounter that activation problem than they are a "genital disclosure" problem, and they're unlikely to understand or appreciate that the little bit of grief is for a "good reason".

The business side will be able to calculate the costs of implementing activation but will struggle with the cost of not implementing it. It also leaves room for a gamble. Many things we aim to prevent are things that never happen -- the software isn't interesting enough to attract research into attacking it, or the right set of "bad things" didn't happen to discover a vulnerability accidentally. The circumstances are then: (1) pay a lot of money for something that might not happen, or (2) save the money, cross our fingers. Sometimes the money's not there and (2) is the only option. But for a product like this, (1) is the only viable choice -- a single high-enough-profile incident for a small enough company is enough to eliminate the company, and if the company is successful, it sells more devices, which exceeds the preventative capability of crossed fingers.

[0] I inherited the voice applications that my company developed for others' commercial products - all but one extremely large deployments.

[1] That's not passing the buck. We didn't have their code, access to their code, or any expectation that we could get it if we asked nicely. They gave us documentation to cover the calls that needed to be made to make it work.

[2] And when it's presets on the used car you just bought, it's no problem.

[3] While writing this, I looked over at my desk. Apparently I am ... well ... my parents are/will be Wyze owners. I purchased them a camera to provide a view of Lake Huron from the second floor bay window. Haven't used it, but it had the features I wanted (security wasn't one of them; it's meant for sharing and I'm isolating it with the rest of the IoT gremlins), they're inexpensive as heck but don't look tacky or cheap.

[4] I'm going from memory - I purchased this years ago.

[5] Two Mandatory Features for IoT cameras: (1) Records video the customer can watch on any internet-connected device. (2) Ensures the customer's genitals appear only on authorized devices. The importance of that second feature cannot be overstated. Might want to do (2) first -- I'd, personally, rather own a camera that took no video at all than one that was good at sharing videos all by itself.


So let's say an account at Wyze "owned" your device, now they can see your private cams.

This sounds like a backdoor.


This is an odd accusation. How else are they supposed to implement access control? It's more of a front door than a back door. It's like being shocked that Facebook servers have access to your private albums. Where else are they supposed to keep that data?


Everything involved could be encrypted with the keys simply re-randomised when somebody else takes ownership. The manufacturer doesn't need those keys, only the owner.

The way they did it was easier, but as we see also more prone to screw-ups. But it was not the only, or best, way to do it.
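
A toy sketch of the "keys re-randomised on transfer" idea (illustrative only, no particular vendor implied): the stream key is rotated whenever ownership changes, so anything the previous owner or a stale integration still holds simply stops working.

    import secrets

    class Camera:
        def __init__(self, owner: str):
            self.owner = owner
            self.stream_key = secrets.token_bytes(32)

        def transfer_to(self, new_owner: str) -> None:
            self.owner = new_owner
            self.stream_key = secrets.token_bytes(32)   # old key is now useless

        def open_stream(self, presented_key: bytes) -> str:
            if not secrets.compare_digest(presented_key, self.stream_key):
                raise PermissionError("stale or wrong key")
            return "stream opened"

    cam = Camera("seller")
    old_key = cam.stream_key
    cam.transfer_to("buyer")
    try:
        cam.open_stream(old_key)    # the previous owner's key no longer works
    except PermissionError as e:
        print(e)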


I’m all for strong encryption, but I want to point out that user experience with strong encryption also leads to customer dissatisfaction (from what I have heard). It seems to me most companies give themselves a golden key in part so they can solve user issues. Most people aren’t demanding strong encryption but they do want easy app setup. I mostly avoid those devices because they aren’t from privacy focused companies. If I do webcams I’d put them on an isolated WiFi network and use a VPN to view data remotely, but that’s not something any normal consumer wants to do.


I doubt the vast majority of users want strong enough security and encryption that it stops companies from being able to debug and fix issues.


Heh... I'm glad they resolved the issue quickly, but this seems like bad security design. Whenever a camera is associated with a new account, it should get totally new encryption keys so the old account is locked out regardless of any leftover information hanging around in Alexa. Not great to hear from a company with thousands of cameras streaming 24/7.


This was my thought. Someone didn’t design this system with security and privacy in mind at the outset.


Definitely agreed on designing something like a per-account encryption key mechanism to force an error instead of a privacy violation.

GDPR requires this sort of "security and privacy by design" practice, but it's going to be a long time before there's any real widespread knowledge of how to do it, because it's actually really hard to do. I know developers who don't really understand that this principle goes beyond just 2fa and password best practices.


Funny, I was looking into Wyze recently for some basic front yard security. I'll never enable an indoor security camera if it's internet connected, but recently the idea of a front door outside system seems reasonable.

In addition to this incident, anyone have any bad experiences with Wyze?

Sidenote, anyone know if you can disable sound on Wyze? My installation would likely be indoor pointing outdoor for simplicity, but I don't want it to record audio inside.


Saying that Wyzecam was 'all over this' reminds me of Chris Rock: https://www.youtube.com/watch?v=jkxB15nXRvM


"Wyze was all over this from the get-go."

Sorry - this is a regular software bug. They can't just say 'hey, we patched that bug'.

If they were 'all over it', it never would have happened.

People's private lives are obviously very important to them, these companies should be super conservative like banks, not willy nilly like startups. Unfortunately, I don't think the business would exist as such, the 'real cost of operational overhead' of doing '5 nines' secure instead of '3 nines secure' might be too much for the business model to sustain.

Not in my home.


> Unfortunately, I don't think the business would exist as such, the 'real cost of operational overhead' of doing '5 nines' secure instead of '3 nines secure' might be too much for the business model to sustain.

Business models are not sapient beings; they don't have some special right to exist. If a business can't be both done right and profitable, it simply shouldn't exist.


" If a business can't be both done right and profitable, it simply shouldn't exist."

Which is my point.

If they can't provide the level of security necessary (which might be excessively expensive) they should not exist.

Sharing of one's personal, intimate moments at home may very well rise above the level of privacy related to banking and medical records.

If my banking records or medical records were somehow leaked, I couldn't care less, really. I know nobody else will care. But if private photos of myself or my girlfriend leaked, I'd hope that the offenders went to prison. Certainly if an individual did that, it would be criminal.


If you are a swe, be mindful of what you say. You have your own bugs. They have their own. Wyze is a small company that is trying to stay afloat, and kudos to them for fixing it. I am sure they will have more bugs going forward, but each one will make them stronger.

Cheer for the improvements instead of blaming them for bugs.

Even non-swe people have bugs in the work they do. Banks make mistakes all the time.


[flagged]


The tone of your response is enough to tell me what I am dealing with.

Basically no one should trust you with anything critical (privacy, health, security), which is funny, because that is like 98% of software. Good to know.

Banks do make mistakes. On a micro scale: errors in transfers, account closures for the wrong reason, identity-theft-related problems. On a macro scale: exchanges going down, large banks going offline nationwide.

I am wary of people who claim they are against bugs. It happens; get over it.


I think they deserve some points for responding immediately, patching, and explaining transparently. Not enough points for me to buy one were I to get security cameras, but credit where it's due.


Watching this video turned on my Alexa when he started talking to Alexa.

Feels like a kind of SQL Injection ("voice injection attack"?).


It's more like having an unauthenticated API open to the network, where in this case the network is sound waves in your local space. The idea that anyone is using voice for privileged operations ("buy X", change my calendar, etc.) is horrifying to me.


> anyone is using voice for privileged operations

I remember when people started broadcasting to Siri for people who had their headphones plugged in.


I'm always startled to find out that people with very young kids have these in their house. Presumably there's some way of preventing a 3-year-old from running up a $50k bill?


Alexa purchasing can be configured with a voice PIN.

I believe purchasing by voice does have to be enabled initially through the app as well. https://www.amazon.com/gp/help/customer/display.html?nodeId=...

Edit: I looked in my Alexa app and there is also a voice recognition option, so you can use it to only allow purchasing via recognized voice patterns and require a PIN for anything else.


I didn't turn on the purchasing option. Seems pointless anyway, I always want to comparison shop things.


>Feels like a kind of SQL Injection ("voice injection attack"?).

We used to take advantage of this on conference calls, where one of the participants was on speaker-phone and had an Alexa.

"Alex, play 'Never going to give you up' by Rick Astley"

Hopefully, people start waking up to this attack surface as it gets taken advantage of more, because it's a very dangerous "gotcha".

Consider, for example, saying, "OK, Google, show me my last messages," during a conference call, in which case Google will read the messages aloud.

Fun times...


That’s been a thing since the start of voice assistants. I’ve even seen local TV ads do it to try get the viewers Google Home to activate, and mine has reacted to TV shows and YouTube channels before.


There was an episode of 30 Rock where Jack pitches essentially an Alexa-powered TV, and the joke was TV shows controlling the TV itself. Never thought of Tina Fey as a sci-fi writer, but here we are.

I've been meaning to turn off voice detection on my phone because I'm tired of Google reacting to my conversations (and worried enough about Google as it is).


It is a fun way to mess with your friends who use speaker phone to talk to you while at home.


You got that right: After I got an Echo Dot, my daughter (35, married with a 3-year-old son) in Pittsburgh started saying "Alexa, buy diapers" whenever we were on speaker. Alexa would reply something to the effect of "Diapers added to list," and my daughter would laugh so hard. Drove me nuts, to the point where I unplugged my Echo Dot.


And they deserve no less for recording you, presumably without permission. Illegal in many states.



There was a story a while back about a reporter on the TV news that purposely said something to Alexa to show people how the devices can be activated remotely...


As someone who has worked with Chinese-made IoT devices, I have seen this problem many times. The issue is a bad architectural design of the system, where a camera is "bound" to a user account: even if the user returns that camera, it's still bound to their account, and adding it to another account afterwards doesn't disassociate it from the other user. It's just stupid things like this that are not well thought out that pop up in the multitude of cheaply made Chinese IoT products that have flooded the market.

Believe it or not, this is the smallest security flaw in some of these systems/devices...
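
The fix for the binding pattern is not complicated either; a rough sketch (illustrative, not any vendor's code) is to make the bind operation exclusive, so claiming a device for a new account atomically severs the previous association instead of stacking a second one:

    # Toy device registry: binding is exclusive, so re-binding a returned or
    # resold camera evicts the previous account instead of keeping both attached.
    bindings = {}    # device_id -> currently bound account

    def revoke_access(account: str, device_id: str) -> None:
        print(f"revoking {account}'s sessions and tokens for {device_id}")

    def bind_device(device_id: str, account: str) -> None:
        previous = bindings.get(device_id)
        if previous is not None and previous != account:
            revoke_access(previous, device_id)   # drop streams, tokens, push rules
        bindings[device_id] = account

    bind_device("cam-7", "first-owner")
    bind_device("cam-7", "second-owner")   # first-owner is evicted, not kept around
    assert bindings["cam-7"] == "second-owner"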


Which ones do you recommend using?


Not sure what I would recommend for non-technical people, but if you are a bit tech savvy, rolling your own is very robust nowadays. Buy cameras that don't need to be set up using an app (I prefer DHCP/Ethernet with a web console), block them from going out at the firewall level, stop all services like mDNS/uPnP on the cams, install Blue Iris or BlueCherry DVR on a $500 laptop with an 8TB USB drive, install the iOS/Android apps on your phone, open the right firewall ports, and now you can remotely monitor and watch your house from anywhere, safely.

I have 30 days of footage for 16 cameras recording to a 4TB HDD. The entire setup cost me around $3k over 7 years (cameras were more expensive in 2013).

There is absolutely no way I will ever trust a camera that needs me to enter my home wifi password using an app, which then sends the plaintext password to a .cn domain to generate a QR code that the camera scans to configure itself (looking at you, MECO Wifi IP cams). Way too many people around the world are falling for these cheap cams.


Since most people lack your technical know-how or patience or budget to implement such a setup, do you have recommendations for alternative consumer solutions on the market that do it right?


So would you go with something like https://github.com/ccrisan/motioneyeos/wiki with a Pi+camera+enclosure?

What hardware do you recommend?


Are there any camera systems that are internet-enabled (for remote viewing + storage) but use end-to-end encryption for video feeds?

It doesn't seem like too much trouble to have to pair the cameras to your mobile devices on the local network.

If I were an IoT camera company I wouldn't want to have the responsibility of securing video footage of the inside of people's homes.
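
The basic shape isn't hard to sketch, at least. Assuming the third-party `cryptography` package and a key established during local pairing (all illustrative), the cloud only ever stores and forwards ciphertext it cannot read:

    from cryptography.fernet import Fernet   # third-party: pip install cryptography

    # Key generated during local pairing; never leaves the camera + paired phones.
    pairing_key = Fernet.generate_key()
    camera = Fernet(pairing_key)
    phone = Fernet(pairing_key)

    cloud_inbox = []                         # the relay only sees opaque blobs

    frame = b"...raw video frame bytes..."
    cloud_inbox.append(camera.encrypt(frame))       # camera uploads ciphertext
    received = phone.decrypt(cloud_inbox.pop(0))    # paired phone decrypts locally
    assert received == frame

The hard parts are distributing the key to additional devices and recovering footage when the phone is lost, which is presumably why vendors prefer holding the keys themselves.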


Unfortunately these cameras and other such devices are far too underpowered even for the tasks they're given a lot of the time, let alone encrypting media.

They should be connected to a hub machine with more power to do the encryption and uploading there.


I use a Xiaomi Dafang camera with custom firmware. They run on my local network with no access to the WAN. I can view them by tunnelling to a Raspberry Pi and forwarding ports. It works well but I have no solution for recording, yet.


Why you'd want to fill your house with [internet connected] webcams is beyond me, let alone cams in your kids' rooms. People are weird.


This is the first question that comes into my mind.

However I had to scroll down pretty far to find it here. It doesn't look better over at reddit.

No wonder nobody cares about the public-space surveillance overload anymore. How could you, if your house is being monitored, audio and video, by some company?

This is really frightening.


> However I had to scroll down pretty far to find it here. It doesn't look better over at reddit.

Almost everyone who notices how ridiculously idiotic current IoT is has already spoken about it dozens of times here, on Reddit, or elsewhere. At some point people simply get tired of making the same point over and over and over again.


The kids' rooms are pretty easy. Baby monitors have been a thing for a long time. Making them internet connected just means it's easier for the average person to get it to work on their phone (which may be on a different wifi, in a room just out of wifi range, using an AP with client isolation turned on, ...), which means that many more companies opt to build internet-connected than local-only stuff.

The house? For many, it acts as an alarm system. I also assume that the police will react better to "there's two people robbing my home, I can see them on my camera" than "my alarm app told me a window was broken".


I used to do CCTV installs and configs, and it seemed the wealthier folks would want cameras inside their house, particularly in their master bedroom.

Other folks simply wanted the exterior perimeter of their house under surveillance.

This is how the Hulk Hogan sex tape was created.


What is the risk? Someone is going to watch me jerk off?

Serious question.


Why is there no end-to-end encryption for such IoT devices? Does it mean the central service managing all the devices has access to all the video streams?


Apparently -- it's the Panopticon :(


DRM from the consumer perspective?


I understand it being an edge case in testing, but I think there is a design issue here with "forever authorization". Once the device is paired with Alexa, I don't see why Alexa shouldn't still need to get authorization each time it accesses the device. It would seem like once the device went to another account, Alexa would attempt authorization and then fail.

Authorization is at the root of a lot of security problems today. I don't think it should be a corner case; it should be in the initial design of most things (internet of). Calls to banks, use of CC, and now IoT.

I read the resolution from Wyze and, while I'm happy they patched device associations, I was looking for the term "authorization" and didn't find it. I wish this term had more weight and meaning in the practical use cases we see as consumers. And I wish it was in the initial meetings for us tech devs.
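
As a sketch of what I mean (my own assumptions, not how Alexa or Wyze actually work): grants could carry an expiry and a device scope, and every access would re-check both against current ownership, so a transferred camera fails closed at the next request.

    import time

    grants = {}                  # (integration, device_id) -> {"account", "expires"}
    owners = {"cam-9": "alice"}  # current ownership

    def issue_grant(integration: str, device_id: str, account: str, ttl: int = 3600) -> None:
        grants[(integration, device_id)] = {"account": account, "expires": time.time() + ttl}

    def authorize(integration: str, device_id: str) -> bool:
        grant = grants.get((integration, device_id))
        if grant is None or grant["expires"] < time.time():
            return False                                   # no grant, or it lapsed
        return owners.get(device_id) == grant["account"]   # re-check ownership each time

    issue_grant("alexa", "cam-9", "alice")
    print(authorize("alexa", "cam-9"))   # True while alice still owns the camera
    owners["cam-9"] = "bob"              # the camera changes hands
    print(authorize("alexa", "cam-9"))   # False: the stale grant no longer authorizes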


I wonder how trivial it would now be for some secret gov org to get a live feed of so many areas if they were so inclined.


They weren’t hacked so much as badly transferred ownership. Basically someone (Alexa or Wyze) didn’t invalidate security tokens after the device was sold and changed hands.


Surely was not the first time a camera was sold second hand.


I know, but if that's trivial so is giving skeleton keys for all your network to the government for a fat contract (Amazon)... or they could just take it like they did with Cisco.


They just have to ask through secret letters... they don't need to try any hacks...


Horrifying.


WARNING: If you watch the video your Alexa devices will respond!



