CloudPets teddy bears leaked and ransomed, exposing kids' voice messages (troyhunt.com)
500 points by 0x0 on Feb 27, 2017 | hide | past | web | favorite | 169 comments



For anyone who is coming straight to the comments before reading the article: the details are even worse than the headline suggests.

Not only was a huge amount of information exposed through a public, unauthenticated MongoDB instance, and not only did CloudPets ignore multiple security researchers' attempts to alert them to the problem, but the database was actually held for ransom multiple times without customers being alerted to the breach.


This is _insane_. My daughter got a surprise cloudpet for her birthday from a distant relative. The app you have to use with the cloudpet is also filled with ads, some of which are of adult nature. This company is sleazy as hell. I hope they get sued out of existence.


They basically failed out of existence before this even happened (the article includes details on their share price sliding to nothing earlier last year), which is probably one reason they didn't bother telling customers about it. This is probably the best example I've ever seen of the dangers of trying to keep a service running once the company behind it has gone under.


Why did they fail aside from security issues?


just a guess: their product is stupid


From what I've seen, a lot of those MongoDB ransomware attacks actually just delete the data and leave a ransom note in the hope of getting free bitcoin. So in a sense they've done some good by removing it from the internet.


A guy I work with did a presentation on this product; he is big into reverse engineering Bluetooth devices. I can assure you the toys themselves are just as insecure as their infrastructure apparently is.

Seeing it light up and say "destroy all humans" was pretty funny, more so because there is pretty much zero authentication on them, so you could do it from your mobile from anywhere, and the mic can turn on and record without any authentication at all.

sigh internet of things


The "S" in IoT stands for Security.


I took a grad course last semester where one of the groups analyzed a Nest cam and the other analyzed the Mother sensor device. Both were surprisingly quite secure, especially the Mother, which had security features all the way down the stack.


One of my professors worked on the key exchange protocol [0], used in Nest. When discussing that particular point, he was very complimentary of Google's security practices, especially when it comes to Nest. [0] https://blogs.ncl.ac.uk/security/2015/07/28/j-pake-built-int...


There's security features, and then there's assurance. Features without assurance of correctness are often bypassed. I'm curious what was in those devices you talk about, down the stack.


Some of us do give a shit about security. It's just a shame that it feels like we are the exception to the rule.


We do because we realize what the lack of it entails. So will the general public, eventually. And the only way to get there is if more cases like this start happening. It's a shame they have to learn the hard way but there's no other way. That, or we as an industry act up (in ways I can't even fathom).


The fact that you allude to it suggests you can fathom it in some way. Maybe you don't want to but clearly bad actors can exploit insecure systems and that's especially easy from the inside.


Sure I can, but they're all unrealistic so no point mentioning them. We could for example start boycotting companies that don't take security seriously. But I'm afraid we'd end up with a very, very long list.

Publicity can work wonders. You end up with sensitive data for kids in the wild, possibly in the hands of perverts; nothing could work better than that at raising awareness among the general public. It's harsh, but it fucking works. So we'll stick with it for now, unless someone comes up with a better idea.


Lawsuits for criminal negligence against the CEOs of the companies themselves would be a damn good start. Their business practices are why these problems happen. They cut every corner they can find, put business school grads in charge of deciding schedules and resource allocation to engineering, and make sure that if an engineer says 'we need more time and testing' that any low-level manager can tell them business goals come first.


The problem is that it generates awareness based on raw emotion, and of course that always leads to rational, measured decision making. /s

On the other hand, I don't know any better idea either.


Computer/network security will never be important until governments start regulating this stuff through specialized agencies. It's the opposite of profitable to care, so businesses who do care are disadvantaged.


> Computer/network security will never be important until governments start regulating this stuff through specialized agencies.

Well, I wouldn't say never; it just needs some people determined to keep it at the core of the team. Certs are free to low-cost depending on the type you want. The compute needed for "security" is minimal (heck, we can even do root CA validation on the ESP8266 these days).

But this isn't directly connected to the internet; it goes via a Bluetooth connection. Issues like this are down to lax security practices.

> It's the opposite of profitable to care.

How much profit does it cut to not put your MongoDB instance internet-facing? Firewalling off port 27017 and enabling auth shouldn't cut into their profits too much.
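For concreteness, here is a minimal sketch of those two fixes in a mongod.conf (the keys are MongoDB's standard YAML configuration options; the addresses are placeholders):

```yaml
# mongod.conf: bind only to internal interfaces and require authentication.
net:
  bindIp: 127.0.0.1   # or a private subnet address; never 0.0.0.0 on a public box
  port: 27017
security:
  authorization: enabled  # clients must authenticate before reading any data
```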

EDIT: Slapping signature creation/checking on the content URLs shouldn't eat into profits either. This breach had nothing to do with the toy itself; it was server-side.
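A hedged sketch of that kind of URL signature check, using only Python's stdlib hmac; the key, path layout, and parameter names here are made up for illustration, not CloudPets' actual scheme:

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical key; lives only on the server

def sign_url(path: str, expires: int) -> str:
    """Append an expiry timestamp and an HMAC so the URL can't be guessed or reused forever."""
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def check_url(path: str, expires: int, sig: str, now: int) -> bool:
    """Reject expired links and anything whose signature doesn't match."""
    if now > expires:
        return False
    msg = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

With something like this in front of the recordings, knowing (or enumerating) a file path is no longer enough to fetch it.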


>How much profit does it cut to not put your mongoDB instance internet facing?

Wrong question. The question is how much profit it cuts to hire an engineer who knows not to make your mongoDB instance Internet-facing, and to empower that engineer enough that they can tell the CEO the product is not ready to launch and that they can't just open public access to the develop/test environment. And it's not even a matter of profit. It's a matter of pride. Engineers are seen as typists and nerds, low-level functionaries. They're the ones who don't understand the divine wisdom of "don't let the perfect be the enemy of the good enough."

You wonder why companies are so stupendously desperate for H1B visas and why so many job listings are looking for 2 years experience and no more? It's because they don't WANT knowledgeable staff. Those tend to be expensive, and problematic.


And they talk in real life to their kids, and not through a toy.


I can see a novelty use that would quickly die off. If you want to talk to your kids via a teddy bear instead of a phone or Skype, then that is up to you. The demo for the toy shows distant relatives using it to talk to their kids/grandkids.

But the purpose of the device has nothing to do with the security of the device.


I like Apple's approach, where HomeKit certification requires that the device use some form of secure transport to communicate with iOS.


Which totally would not have helped in this case: using https would still have left the DB exposed.


It doesn't help with the server-side data leak, but at least you can't connect to it and make it say 'destroy all humans'.


That's not necessarily the case. TLS protects the connection, but by default it does not provide client authentication. I also see a lot of instances where certificate checking has been disabled, so that the client just ignores a MitM attack. So with TLS it would seem more secure at first glance, but given the implementation blunders here I wouldn't expect any real improvement.
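To make that blunder concrete: in Python's stdlib ssl module, the difference between a verifying client and the "ignores MitM" kind is a couple of context settings (a sketch of the configuration only, not a full client):

```python
import ssl

# A properly configured client context: it verifies the server's certificate
# chain and checks the hostname, so a MitM with a self-signed cert is rejected.
good = ssl.create_default_context()
assert good.verify_mode == ssl.CERT_REQUIRED
assert good.check_hostname is True

# The common blunder: verification switched off "to make the errors go away".
# The handshake still encrypts, but the client will happily talk to anyone.
bad = ssl.create_default_context()
bad.check_hostname = False          # must be disabled before dropping verify_mode
bad.verify_mode = ssl.CERT_NONE
```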


Yeah that's true, I'm totally assuming 'competently implemented TLS' when I say it would protect the connection.


Hahaha, I spent a good 3 minutes looking for where the S was, until the joke hit me.


shouldn't that be 'SH' ?


'Security Hardening'.


"Security Hampered Internet of Things"


Meanwhile police in a murder case are preparing to take Amazon to court for Echo records. On the privacy front, there's just no saving people, but the IoT brings the magic of invading privacy together with furnishing botnets with millions of new bots!

We're screwed coming and going, and the vast majority still look at you like a woodland hermit if you suggest that you shouldn't have anything listening to you in your home.


I wonder how much infrastructure is really required to properly support Alexa like capabilities for an individual. Does Amazon really need all of our recordings on their hardware in their data centers? Is it conceivable that we could own that hardware as well?

I realize that training data is important and I assume the recorded data gets used for that purpose, but does Amazon need to keep it forever? How long do they need it? Can I own and possess the hardware, and pass off only the learning?


But even if they say that they're not storing or sending it, how feasible is it to verify that fact?


I don't mean to suggest this is something Amazon would bring to market using an Echo. I mean do you really need Amazon/Apple/Microsoft/Google levels of hardware to run a voice interface for an individual's digital assistant?

Is it feasible to install hardware with the capability of Alexa in the standard user's home?

I think the answer to that is probably "yes", more or less. Some things may be a bit harder to do but I don't see why I need a huge black box in the sky to parse my voice or do some geolocation.


Very feasible -- don't plug it into the Internet.


At which point you have a brick with a microphone, and a pretty blue light, right?


I think the blue lights only show when it's connected. You would get pretty red lights instead.


"I'm having trouble understanding you right now."


I have some ideas for a more strict but more user friendly household firewall device and corresponding UI, if something really really needs the Internet.

But your lights, TV, etc. don't live in an Amazon datacenter.


You're right of course, but an interesting side note: in the murder case I mentioned, the police ended up looking at the suspect's smart water meter logs... which showed the use of about 140 gallons of water in the middle of the night. So no, it doesn't all live in an Amazon datacenter, but these days it might live somewhere.


Not if it is configured to connect to a server in your home with all the data it needs to function.


Oh of course, but I wonder if by the time you'd created that system, you'd feel the result was worth it? Is what Echo offers really so valuable you'd go through the trouble? I wouldn't.


I don't own an Echo because I don't feel the convenience of such a device outweighs the obvious privacy implications. I would be more likely to use one if it were located within my home, but at that point the limiting factor would probably be cost. This is why such a device would have to provide additional functionality.

In a world where it is just expected that you have a personal cloud server in your home Alexa becomes an "app" on that server and the Echo continues life as a very nice speaker/microphone device you can place in a visible area in order to interact with that server.


Although I would only consider this developer-ready at the moment, open efforts like this have promise; at least you have a choice of back end, if any: https://www.kickstarter.com/projects/seeed/respeaker-an-open...


It requires almost nothing. The training process requires large amounts of test data. But once the system is trained, the actual set of weights needed is tiny and running the recognition itself is cake. The whole reason all voice recognition is server-based is purely for business reasons, to lock people into their service, to provide new sources of consumer data to mine and sell, etc.


What's wrong with the police requesting Echo records? Surely the Echo records requests you make to it. No different than Google recording your search history. And it's pretty reasonable for the police to want to look at that in a murder case. And they got a warrant. I don't see anything sinister in this case at all.


I'll be putting out our blog post about this first thing tomorrow (we had it ready to go for next week, but I think now's a good time to add some fuel to the fire). Essentially the toy uses Bluetooth LE very insecurely and it has a speaker and a microphone. Guess what happens next?

Edit: Demo of the CloudPets functionality using Web Bluetooth https://github.com/pdjstone/cloudpets-web-bluetooth/


Reading and fully comprehending the full contents and implications of https://twitter.com/internetofshit should be required for anyone who is thinking about making an IOT type device.


I do agree that lots of IoT products have terrible security, but is having insecure bluetooth or the likes really a terrible thing for most of these types of products?

I understand that this leak is related to mongodb... and that is terrible, but mostly referring to your bluetooth example.

I mean, take Bluetooth headphones: they are notoriously insecure, but the range in which eavesdropping could take place is pretty small, and for most of us you would just be eavesdropping on our annoying music. Seems reasonable that they save bandwidth on secure transmission of data for higher audio quality. That said, I could see an argument the other way, but I'm sure there are more examples where it doesn't seem like a big deal. It would be interesting to hear from someone who thinks I'm dead wrong.


> Seems reasonable that they save bandwidth on secure transmission of data for higher audio quality.

Encrypting a compressed audio stream does not add to the bandwidth, aside from the initial key negotiation.

Furthermore, the bandwidth required for audio of a quality that's indiscernible from the original is negligible when compared to the bandwidth of Bluetooth radios. Ridiculously good audio is 320 kbps, and Bluetooth is easily good for 25 Mbps.
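Rough numbers back this up. Assuming an AEAD cipher in the AES-GCM style, with a 16-byte tag and 12-byte nonce per (say) 1 KB audio frame, the per-frame overhead is a few percent, and the encrypted stream is still a tiny fraction of the radio's capacity:

```python
# Fixed per-frame cost of an AEAD cipher: authentication tag plus nonce.
FRAME_BYTES = 1024
TAG_BYTES, NONCE_BYTES = 16, 12
overhead = (TAG_BYTES + NONCE_BYTES) / FRAME_BYTES  # about 2.7% extra bandwidth

# The headroom argument: 320 kbps audio against a ~25 Mbps radio link.
audio_kbps = 320 * (1 + overhead)   # about 329 kbps after encryption
link_kbps = 25_000
print(f"{overhead:.1%} overhead; {audio_kbps / link_kbps:.1%} of the link")
```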

I suppose you could argue that the battery power used to perform this computation is the limiting factor, but a good embedded DSP used for recording and transmission typically has tiny power requirements and hardware encryption routines that don't significantly change the power budget of the device, compared to keeping a blue LED blinking or powering an earbud speaker.

No, let's be honest here. The actual limiting factor is the engineering time and money that goes into developing these devices as quickly and cheaply as possible.


Yeah, I was thinking the bandwidth limitation would be on the CPU because of data decryption... your points are valid even with that, though. It's not a great example. I still think that varying degrees of security are fine with these types of things.


If your threat model for your Bluetooth keyboard doesn't involve, say, an abusive spouse sniffing traffic to see if you're reaching out for help, your threat model is probably biased in favour of wankery like the NSA and not real threats ordinary people face.


> The Germans had a good point: kids' toys which record their voices and send the recordings up to the web pose some serious privacy risks. It's not that the risks are particularly any different to the ones you and I face every day with the volumes of data we produce and place online (and if you merely have a modern phone, that's precisely what you're doing), it's that our tolerances are very different when kids are involved

It's a bit paradoxical. There are far fewer things a kid can say that can get them in trouble than an adult. Even the most oppressive regime will not hold what a 4-year-old says against them. The need for privacy should, if anything, be less for a kid than for an adult.

What it means is that violations of privacy are creepy, period. We try to rationalise it by arguing that we get something out of it, but when dealing with our kids, we stop believing our own bullshit and it just becomes purely creepy...


First, it's not just about "get [them] in trouble". Think about ten years later. Do we want adversaries to have logs of children's conversations?

Also, it's not just recordings. Once an adversary has account access, they can talk to children. I can't imagine that being a good thing.


Additionally, what benefit do we have to gain by preserving these recordings? The whole thing seems massively risky for no reason other than to make a few bucks.


> The whole thing seems massively risky for no reason other than to make a few bucks.

That's, unfortunately, the very reason why most of IoT stuff exists.


People kept devices which allowed strangers to talk to their children sitting in their house, often in the children's bedroom for nearly a century and it wasn't a major problem. The vast majority of child abuse (like 95+%) is committed by parents or close family members. The danger of strangers is overblown and you shouldn't have to harp on that to get people concerned about companies unnecessarily snarfing up every bit of data about everyone of every age.


Yeah, I'm not worried about my kid saying things that will get him in trouble. However... he repeats literally everything that he hears, sometimes verbatim. Sometimes hours or days later. To be honest, it's really creepy at times. Plus, he doesn't really have a filter, so he'll talk about everything he sees at school or on the playground, just chattering away all day to himself.

So I'm worried about my kid saying things that could get other people into trouble.


A common anecdote from East Germany is that teachers would ask children what the "sandman" looks like (an evening TV show for children). The seemingly harmless answer then revealed whether their parents secretly watched imperialist West-German television. So yeah, children are pretty good at implicating other people.

(No real source, but a random German article that quotes this anecdote: http://www.badische-zeitung.de/panorama/der-freundliche-herr...)


More innocuous example:

I remember when I was a kid in school, the police came by for a visit to educate us on alcohol responsibility, touching on ethanol/methanol dangers since moonshine was a thing. They showed a distiller (which, "if clearly meant for distilling spirits", is illegal to own), and some kid blurted out "my dad has one of those!" immediately upon seeing it.

The teacher tried to claim "teacher-parent confidentiality" when asked to identify the kid/parents, as if that's a thing, but I don't think they took it seriously enough to warrant further action.

Nevertheless; yeah, kids say everything.


This was common in all Eastern Bloc countries. In Poland under Russian occupation UB/SB https://en.wikipedia.org/wiki/Służba_Bezpieczeństwa "secret police responsible for internal and external intelligence and counterintelligence to fight underground movements and the influence of the Catholic Church" would send its agents to schools to befriend children and try to get them to rat on the parents.


Exactly. And give a kid any kind of recording device, and chances are they'll also end up recording you at times you wouldn't expect to be recorded.


There's always the possibility that the toy overhears adults.


I think it's possibly a bit more that we rationalize it as an adult because we can make a choice to give up the privacy or not. For a child they haven't developed mentally yet to understand that choice. That said I agree that the child has less potential for revealing information.


Even calling it a choice is rationalizing the loss of privacy. Most services are a binary choice of giving up privacy or not using the service. Some services can be done without, but many are required to operate in a modern society.


When will these companies be held liable for breaches like this? The time for feigned ignorance is over; this is negligence at best, outright greedy indifference at worst. There are no more excuses.


Okay, first of all:

>the average parent.. is technically literate enough to know the wifi password but not savvy enough to understand how the "magic" of daddy talking to the kids through the bear (and vice versa) actually works [or] that every one of those recordings... is stored as an audio file on the web.

If it is not considered amazingly stupid, or at least ignorant to not understand that the magic talking bear has a computer in it, and that if the computer wants the wifi password it probably uses the internet, and that if the entire purpose of the device is to make recordings available to you over the internet... then I despair. My sympathy for people who buy these sorts of products is wearing thin. But, in this particular instance...

>our tolerances are very different when kids are involved

Interesting. Why? The data is much less valuable:

>One little girl who sounded about the same age as my own 4-year old daughter left a message to her parents: "Hello mommy and daddy, I love you so much." Another one has her singing a short song, others have precisely the sorts of messages you'd expect a young child to share with her parents.

Hardly identity thief material.


> If it is not considered amazingly stupid, or at least ignorant to not understand that the magic talking bear has a computer in it, and that if the computer wants the wifi password it probably uses the internet, and that if the entire purpose of the device is to make recordings available to you over the internet... then I despair.

I think you vastly overestimate the degree to which non-technical consumers understand computers, wifi, the internet, email, web sites, apps on their phone, and the differences and boundaries between any of those.


> Interesting. Why? The data is much less valuable:

Because while we can make an informed decision about putting our own data into such a service, weighing up the risks and benefits, a four year old cannot - a parent is making that decision for them, and when you are making such a decision on behalf of someone else it behooves you to act more conservatively than when deciding on your own behalf.


> > our tolerances are very different when kids are involved
>
> Interesting. Why? The data is much less valuable

It's the why-do-I-care-about-my-privacy argument - but it's even more personal now, because it's not just you, it's your kids.

There's always that extra creep factor when it comes to children.


> Hardly identity thief material.

True, but potentially very dangerous material in other ways. It's not hard to imagine kidnappers piecing together stolen audio clips to create fake messages as part of a ransom attempt. Or scammers creating audio clips to scare parents and extract money. A large bank of audio clips from a child could be used against that child's family in all sorts of ways, especially if the parents don't know the clips were stolen to begin with.


I don't understand. If I got a call in my daughter's voice saying "Help! I'm being held for ransom! Send all the bitcoins!" And then I call her phone and she answers or she walks in the door having gotten home from school, how is anyone going to collect on that?


I might watch too many spy movies, but maybe they wait for a time she will be away from her phone, like the camping trip she has been talking about for weeks, or a school field trip that can easily be learned from conversations about class, or sports competitions googled from team names. These are all things likely to come up in regular, routine conversation.

The real question is why you wouldn't already be terrified about having a microphone in your house that is open to the internet.


In your contrived scenario meant to defeat the premise, no, it won't work. However there are basically limitless ways this data could potentially be exploited. The point is they don't need to even do it now, it could happen any time. Data doesn't just go away.


If we assume that you can actually scare the parents into paying a ransom, in the end the impact is... a lot of stress + financial loss. And this assumes that the parents can't get in contact with the kid, the police can't get in contact with the kid and the scammers have enough savvy to accept untraceable money. All of which points to this being more of a movie plot than something that will happen in reality.

And even if this were a credible threat, logically we should be more concerned about direct financial theft since it has the same impact, but is far less elaborate (but still far less common than other types of cybercrime).


One of the typical scams in Russia is a message to the parents: "mom, I'm out of money on my phone, please drop $20 on this number" or "dad, I scratched someone's car, need $2000 right now".

In Germany it works with grandparents. They get calls from someone impersonating their grandchild in trouble. This works because in many families grandparents live separately and sometimes don't have much contact with their grandchildren (apart from knowing that they exist). This threat is so real that there are police posters about it in community centers.

So this works over text messages or phone calls without much sophisticated mimicking. I can only imagine a whole new spin on this when AI/DL tech becomes a commodity.

So I quite disagree with the "movie plot" estimation of the threat.


Do you know how many calls you can get in a year from scammers pretending to have kidnapped members of your family in some developing countries? The only real difference between that and this is the added tech-savviness of obtaining real recordings instead of bad acting, which some scammers are actually likely to have by virtue of purchasing this data the way they currently purchase phone numbers, names, and CC numbers. The police in those places won't bother going after a phone scammer who only took money (say, in the form of a non-cryptocurrency digital cash transfer that was quickly exchanged for cash), even if the criminal is fairly traceable in theory. When the rule of law is such that only about 2% of murders ever get investigated, this sort of thing just becomes the background noise of un-safety.


Or worse, they could train a neural network to mimic the child's voice and create a fake message to send to the police alleging child abuse, with a ransom note at the end - in the child's voice.


You don't even need it. Adobe's working on a product that allows you to make a voice say whatever you want, using 20 min worth of voice samples. [1](https://arstechnica.com/information-technology/2016/11/adobe...)


Their ability to do harm with Flash has been limited so they had to branch out.

Really though this is pretty cool, when not used for nefarious purposes.


Of course, when somebody releases a proof of concept, it'll be called RansomBear.


BearBleed.


I was gonna say "CloudBear", after Cloudflare, but it's basically already called that.

Maybe just avoid anything with "cloud" in the name?


> Maybe just avoid anything with "cloud" in the name?

I do exactly that. I treat the word "cloud" in a product name (or feature list) as a huge negative indicator and steer clear.


Moving into the future only makes an audio bank more dangerous, with technologies like Adobe VoCo that require only a modest amount of recordings (~20 minutes, IIRC) to synthesize speech in the child's voice.


How about the kids who don't leave cutesy messages and say disturbing or threatening things? How about the parent who sits on the thing and says something?

Voice data was once safe in its obscurity... now I have a $2 app on my phone that can do decent voice transcription.

It's just one more thing to worry about.


> Hardly identity thief material.

Audio messages can be used to train a system that can then mimic the voice of the child, almost indistinguishably from the original. AI of this kind will be a commodity (i.e. easily accessible to criminals) pretty soon, if not today.


A device connecting to your WiFi could just be talking to devices on your network, not necessarily be sending all of your data to the Internet.


Sure - they _could_, but I've got lightbulbs and power switches that "helpfully" connect to some unbranded Chinese "cloud" service - without any normal-user way to even know about it, never mind turn it off.

I suspect some of it is so I've got the amazingly useful (nb: may not be useful at all) feature of being able to turn my lounge room lights on and off from my phone while not at home.

Cynical-me suspects it's also probably a pretty good way to ensure forced just-out-of-warranty failures...

Pessimist-me assumes the Russians, the Chinese, Mossad, and some kid at the local hackerspace have all pwned the Chinese cloud infrastructure and are using backdoor root shells on light globes to mine my subversive t-shirt purchase history, and they're all cutting each other's throats with price discounting as they sell it all as "business intelligence" to my car insurance company and the CBP...


Damn! Had to stop and double check the settings on my router firewall after reading Pessimist-You.


Wouldn't you say that as a parent it is your obligation to protect the child's privacy? The threat model doesn't even matter; there will be one eventually. All data can be used and combined (now or in the future). Is it that hard to imagine a future where recordings of a child can be used to recreate the voice of the same person as an adult? Hardly. I find a "where's the harm" attitude towards privacy/data collection very troubling... doubly so if you are making that decision for someone else who can't protect themselves yet. Ethically it's probably a bigger problem than having such a lax attitude about your own privacy (which is perfectly fine/freedom of choice).

And yes I also rant and rave about parents who post pictures of their children everywhere.


Extreme example:

Someone steals the recording saying "Hello mommy and daddy, I love you so much."

They then manage to contact you, reporting that they have kidnapped your children. They play you the recording to prove the children are in their custody and demand an immediate ransom payout.

Highly prone to error, not very likely to work, incredibly evil, and likely to end with the perpetrator in jail; but, unfortunately, the sort of thing a desperate criminal might try, and even more unfortunately, it only needs to succeed once for someone to consider it a viable tactic.

I know this is stupidly unlikely occurrence, but extrapolate it with a bit more sophistication and you can start to see why this is actually quite nasty identity theft material.


Is there a fine for this? Some sort of punishment? Companies need to be taking security seriously, we are all paying the price.

Internet-of-Shit will remain exactly that until neglecting security is a substantial threat to the bottom line of a company.

They ignored multiple warnings? Got hacked multiple times? This is negligence, and this company should be fined out of business.


Judging from other comments, it seems they're on the way out anyway. But the question of fines etc. is interesting...


The corporation might be, but this seems like the level of gross criminal negligence that a person should be held liable for.


I'm not necessarily disagreeing, but that seems fundamentally opposed to the way an LLC works.


Failing at business is one thing, doing people-hostile things should be another. I'm all for reducing the personal risk of doing business for entrepreneurs, but for antisocial actions, there should be personal consequences.


"CloudPets can send and receive messages from anywhere in the world! Buy Now".[1] They delivered on that, all right.

If you want one, they're now available for the low, low price of only $3.[2] Including WiFi.

[1] https://cloudpets.com/ [2] https://www.hollar.com/products/as-seen-on-tv-cloudpet-dog


$3 is a great price for a stuffed animal, not to mention IoT BT/Wifi platform.


Apart from the total disaster these kinds of incidents are, they serve a valuable purpose: material to educate my children about security. It is surprising how quickly my 9-year-old daughter picks up the message, especially from stories like these.


My 7-year-old son is rapidly becoming far more hostile to everything from ads to privacy invasions, because they simply make up a far bigger part of his life than they do for me.

I wonder how children learning about these things from such a young age will play out once they're grown up.


IoTTDWAMBTPCHGOOBOABS

'Internet of Things That Don't Work Anymore Because The Company That Made Them Has Gone Out Of Business, Oh, And Because Security'


Who's the goto "freedom/privacy marketing" organization (EFF seems to be legal only)? This is an excellent propaganda for freedom opportunity. It involves a creepy invasion of privacy targeting children. Needs to be used in a massive campaign against (insecure) IoT ASAP.


Privacy international.


Should we call this PetsBleed?


No. CloudBleed was insult enough to the "Bleed" suffix. This is taking it too far.


I had a bear like that (not CloudPets, but looks like an exact clone). Thankfully, it was only used by my daughter with my supervision, so I know exactly what has been said. Unless the mic was enabled remotely, that is.

I assumed that the security issues might be bad, but placing the voice on unsecured Mongo facing public Internet is beyond shit.

Thankfully, I disabled the bear a long time ago. But now I worry about my NetAtmo station, which contains an always-listening microphone to "measure the noise pollution". Yeah, right.


"Always listening" is meaningless marketing drivel.

Anything with a microphone might always be listening, and you probably can't (easily) verify whether it's on-demand or not ;-)


As an interesting side note, this also seems to be built on top of the self-hosted Parse Node.js server, based on the schema provided.


Until executives of tech companies are convicted for criminal negligence, this will never improve. The current accepted business practices in tech are abominable and criminally reckless. If a company building housing were as negligent (hiring unqualified cheap talent, ignoring engineers' reports about needing more resources or time in deference to business goals, etc.), its executives would be tried as criminals and face time in prison.

Put some CEOs in handcuffs, lock them in a cage like an animal, and see how quickly companies actually start doing crazy things like mentioning the word 'security' in job listings for software engineers or system administrators or even doing the unthinkable, hiring experienced expensive engineers.


For goodness's sake. Stop connecting things to the Internet.


> As you can see by loading the image, all that's required to access the file is the path which is returned by the app every time my profile is loaded.

How else would you do it?


Put it behind the webapp's authentication and access control layer - so only logged in users with relevant connection/permission to the requested image can get it.


Well, you're correct that a valid address would be needed to access the content. But if an API is telling you to access content Z, it's easy to say "here is a signature that's only valid for the next X minutes" (X being something you base on your user base), instead of letting anyone who knows the file name (which they can get from dumping your db) access the file without that extra layer of protection.

Disable indexing on S3 and you need to know the direct link. Dumping the table gets you those links, but you still need the secret your app shares with S3 to create the signature required to download from S3. I.e., they need to know Z and X, and X only gets issued to people who are logged in and expires after a time. Knowing Z alone gets you nothing.

I do the same to protect the S3 bucket our cache pulls from against direct access (the cache has a shared secret with S3 to pull from; the cache also has another signing process for the client to use when downloading from the cache). A client signature for file Z is not valid for client+1. We just use a cache in between the client and S3 because we managed to get a deal that's cheaper for us than direct access to S3, but the same could be done for direct client-to-S3 access.
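The expiring-signature scheme described above can be sketched with nothing but stdlib HMAC. In practice you'd use S3's presigned URLs, but the idea is the same; the secret, TTL, and URL layout here are all made up for illustration:

```python
import hmac
import hashlib
import time
from urllib.parse import urlencode

SECRET = b"app-and-s3-shared-secret"  # hypothetical secret shared between app and storage

def sign_url(path: str, ttl_seconds: int = 300) -> str:
    """Return path?expires=...&sig=..., valid for ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    msg = f"{path}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'sig': sig})}"

def verify(path: str, expires: int, sig: str) -> bool:
    """Reject expired links and links whose signature doesn't match."""
    if time.time() > expires:
        return False
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Knowing the path (Z) alone is useless; you also need a fresh signature (X) that only the app hands out to logged-in users.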



Authorisation?


Companies have to get more involved in actually encrypting their data before entering it into the database. For every web app I create, especially when sensitive information is involved, I try to encrypt as much data as possible. With all the leaks and hacks, it only makes sense to add some encryption in there.


The real issue is, companies should not touch, and especially not store data unless absolutely necessary, and only store it for as long as it's needed and not longer.

There is a German word for that. Datensparsamkeit.

https://martinfowler.com/bliki/Datensparsamkeit.html

Actually, and probably for the first time ever, I completely agree with Fowler:

"Datensparsamkeit isn't just about bad people stealing data, it's also about your relationship with the primary company themselves. The default attitude at the moment is that any data you generate is not just freely usable by the capturer but furthermore becomes their valuable commercial property. Privacy advocates, including me, think this assumption needs to be changed. Companies should only capture what they need and the burden of demonstrating need should fall on them. In addition, of course, they must be completely transparent about what they capture, what they store, and who they share their data with."

This, I believe, needs to be enforced by regulations, worldwide. Businesses won't do it themselves, because it's a clear case of conflicting social and monetary interests.


> Actually, and probably for the first time ever, I completely agree with Fowler

Upvoting you just for that. <3


Encryption does nothing if you can just query the decrypted data.

Other than giving a false sense of security.

Even if the database is encrypted, something has to access the decrypted data and is vulnerable to attack.


If you do the decryption in your business logic, and store the key separately (Amazon KMS for example), then if somebody hacks the database directly the data is (slightly more) secure.

Agreed though. Encryption is only a stopgap and the DB should never have been public in the first place.


I'm working on an idea in the security space, that focusing on data breaches and attempting to identify them early. Keen to validate the idea, so if any fellow startups or businesses are interested, I'd love to talk and see what people think. Email is in my profile.


I don't want to be sophomoric or immature, but I just wanted to point out that a company whose initials are CP makes teddy bears for children.

And now audio of children has been hacked, exposing kids' voices. The future is here, and it's weird.


So it could all have been avoided if they'd made it unnecessary to identify oneself and paired the app with the toy via decent public-key-encrypted communication. I think the toy is a good idea, it just had a shit implementation.


Can someone explain to me why a teddy bear needs to be connected to the internet?

Especially in this fashion?

Why can't we just have a BT connection between the device and the phone and IAPs in the phone if they must?


Non-engineer here...

What is the significance of this being Mongo vs any other poorly/unsecured DB?


I don't know that it's Mongo-specific, it's more that newer storage engines, in an effort to be user-friendly, shoot to be essentially zero-configuration out-of-the-box. You install it, run the daemon and can immediately connect, so you get that positive feedback that the engine is easy to work with. This typically means there's no username/password required and it's listening on its port for connections from anywhere (inside and outside whatever firewall you have). So you start it, connect to see that it works and think, "I should really secure this like it says in passing in the docs, but let me try a couple more things first". All of a sudden it's three weeks later, your MVP is ready to launch and the username/password has been forgotten.

Other stores may let you get away with setting up "root/root" or "admin/password", but at least they have forced you to think about setting up some security. It's a trade-off, but it's a crappy one IMHO. There's no risk to Mongo, et al: they told you to set up security but didn't force you because they want you to pick their product.
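For MongoDB specifically, the two settings being discussed live in the config file; a sketch like the following (option names per the mongod.conf YAML format) keeps the daemon off the public internet and forces authentication:

```yaml
# mongod.conf (YAML format)
net:
  bindIp: 127.0.0.1        # listen on loopback only, not all interfaces
  port: 27017
security:
  authorization: enabled   # require users to authenticate before reading data
```

Either line alone would likely have prevented this breach: an internet-wide scan can't even reach a loopback-bound instance, and an authenticated one returns nothing without credentials.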


Looking at the stock price, the whole company is in a state of disarray.

Massive negligence.


IoT should die a swift and permanent death.

Alas, that won't happen.


I'd love to see the INTERNET of Things be replaced by the INTRANET of Things.

Remote access can be handled through a VPN, so there's no need for a remote server. I'm assuming that the device in question has computing hardware that's at least on par with a $9 CHIP.

What's really needed is for secure and easy to set up VPNs (to connect back to your home network) to become a thing, then the remote access problems are taken care of. After this, each IoT device's app just needs to look for the device and possibly give the user a gentle VPN reminder if it can't find it.

Of course, a VPN introduces a lot of extra work for the user. Even the steps to connect/disconnect from the VPN add enough friction that some people won't bother.


So as a rough straw man sketch of how such a thing could work:

1. Consumer grade routers include a secure VPN endpoint. Whenever the router connects, it registers its internet-facing address with some vendor-specific DNS service under a name unique to that router but persistent at least until the router is factory-reset.

2. Devices on the local WiFi network can request a VPN access token. Optionally this requires a separate password set in the router, or pressing a physical button on the router a la WPS. As part of provisioning the token, the vendor-specific DNS name is also provided to the device. The provisioning process requires connecting back to a listening socket on the client device.

3. Devices (eg your mobile phone / tablet) provisioned with a VPN access token can then connect back in to your local network remotely. Each VPN access token is time-limited, configurable on the router but generally something in the range of 7 to 60 days. After the token expires you must connect back locally to the local network to renew it - renewal is blocked over the VPN connection itself.

4. The router interface can be used to list and manually revoke access tokens.

5. The client device can automatically connect to the VPN, eg when requested by an app for one of these IoT devices. On operating systems like Android and iOS, access to the VPN should be restricted to a specific granted permission.
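Steps 2-4 above amount to a small token store on the router with expiry, renewal, and revocation. A minimal sketch (the class name, 30-day default, and method names are all invented for illustration):

```python
import secrets
import time

TOKEN_TTL = 30 * 24 * 3600  # 30 days, within the suggested 7-60 day range

class RouterTokenStore:
    def __init__(self):
        self._tokens = {}  # token -> (device_name, expires_at)

    def provision(self, device_name: str) -> str:
        """Step 2: issue a token to a device on the local network."""
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (device_name, time.time() + TOKEN_TTL)
        return token

    def is_valid(self, token: str) -> bool:
        """Step 3: accept a VPN connection only for live, unexpired tokens."""
        entry = self._tokens.get(token)
        return entry is not None and time.time() < entry[1]

    def renew(self, token: str, via_vpn: bool) -> bool:
        """Step 3: renewal is blocked over the VPN connection itself."""
        if via_vpn or token not in self._tokens:
            return False
        name, _ = self._tokens[token]
        self._tokens[token] = (name, time.time() + TOKEN_TTL)
        return True

    def revoke(self, token: str) -> None:
        """Step 4: manual revocation from the router UI."""
        self._tokens.pop(token, None)
```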


I like this idea.

I honestly think most of the pieces are there. My old router, an ASUS RT-AC56U, has an OpenVPN server built in. It also supports dynamic DNS through an Asus-provided service. iOS (and probably Android) supports VPN-on-demand.

This is basically all of the infrastructure needed to do what you suggest.

The only piece missing is the easy-to-use provisioning/management piece.


It's not totally secure, but why not just a physical button that enables a bluetooth device that transfers a token?

I think you could even have a BT pin, so it would require a little security (eg, neighbors don't have your pin). It should be relatively straightforward to have a BT profile for "token authority".

It certainly would be reasonably easy to use on most devices: just press button and connect to the token device.


I think it makes more sense to use the WiFi radio, if for no other reason than adding a BT transceiver to the BOM is probably a nonstarter.


The VPN does introduce a lot of complication, what if we had publicly routable IP addresses from any network in the world and then just used default deny policies on our firewalls to secure networks and encrypted protocols to secure data?

Someone should really get to work on developing such a technology stack. /s

Sarcasm aside, I completely agree with you and as soon as someone offers iPhone/Siri level functionality in a simple package I think people will eat it up. I know the whole "personal cloud" thing is not new but as people realize the implications of putting all their data in the hands of complete strangers I think the market for such a device will take off.


Apple is in a good position to do this. Unfortunately, they keep paring down their product line and I could reasonably see them dropping the Airport products.

Maybe the OpenVPN guys can do it? They've got clients for every platform and seem to be present in some consumer-grade routers. Infrastructure-wise, iOS has on-demand VPN capability and I'm sure Android does too.

All the pieces are there, the only thing it needs (as though it's a simple problem - it's not) is someone to wrap it up in a slick and easy-to-use interface.


There are a lot of details that have to be done right. Backups in the cloud still make a lot of sense but there need to be serious guarantees on the security of the backed up data. Decentralized backups could be a solution to this but come with their own problems like can you trust your cousin and brother in law to run servers as reliably as Amazon?

I would love to see some of the features of iCloud moved into an Airport type device with expandable storage and modular hardware that I can simply swap out when it fails. I realize Siri level capabilities would take more hardware than the typical router contains but I feel like a Mac mini may even have the necessary horsepower to do the amount of cloud computing my iPhone requires in a day.

The hard parts are creating the map data to begin with and training the voice recognition but once those are complete why can't I just run them on local hardware?


I wouldn't advocate getting rid of "the cloud" in general, but I'd advocate rationalizing it.

For your examples:

- Backups - Agreed, these make sense and need the appropriate security guarantees.

- Siri - I can't think of anything it does that requires the voice recognition stuff to operate in the cloud. If this could all be done on-device, but with the ability to reach out to the cloud as required that would be cool.

- Maps - I'm torn. On one hand, it could be a local thing, but on the other hand there is a LOT of value added by it living in the cloud. Whenever my bus is moving slow, my first instinct is to pull up Google Maps and see where the accident is. It's shockingly accurate.

And some other stuff:

- File syncing/sharing (like Dropbox and friends) - Doesn't NEED to be a cloud service, this could be as simple as a USB hard drive attached to a router or as complex as a 12-bay NAS. What I'd love to see in this space is a universal API that app vendors can use - no more dealing with some apps that are Dropbox only when I want to use SugarSync or OneDrive etc. Then the storage provider would just provide an app that implements that API and everything that wants to store files in the cloud could use it.

- Email - I think what we have for email these days is really a great example of how things should work. Don't want to invest a lot of time and effort? Sign up for Gmail. You can use the web, or you can use a choice of native clients easily. Willing to put in the work? Buy a domain, get yourself a Digital Ocean droplet (or colocate a box - your choice!), and run your own.


You're right of course, the cloud certainly has great advantages and isn't going anywhere. I still struggle to find an application where my personal data needs to be sent to a cloud service to provide the level of convenience we have today.

I would rather none of my map usage or geolocation data ever went to the cloud. Yes, Google does aggregate a lot of valuable information but that could be consumed by personal devices directly instead of giving Google the ability to combine our travel habits with our eating habits and our browsing habits.

Bus arrival data isn't big or complex, Google just aggregates it which is why you go to maps but they don't actually put a GPS/Cellular device on every bus (excluding android phones >_>). They aggregate location data from the bus operator sources. I don't need to know where every bus in the United States is at every moment like Google does. I just need to know when my bus is reaching a stop near me. My home server could easily hit the same services Google does to get arrival data per stop or even just stay up to date on all the routes in my area or city.

Bus arrival data is public so there's no reason for me to store it locally but my usage of that data is personal and is something I want to own and control end to end.


Agreed.

A bear that records voices and gives remote access should not need to store data on a server. Store the data in the bear. That's the way these types of bears have always been. The only thing new here is remote access...

Storing my kid's private voice recordings on your server is just plain creepy even if you don't leave it wide open.


Sure! But in this case, part of the functionality was that friends and family could send voice messages to the bear, which are then approved by the parent in app, before being pushed to the bear.

Based on how well the company is doing, it seems like this isn't really functionality that is desired, but it does sound like the justification for storing (some) messages is reasonable.


The bear could have something as powerful as a $9 CHIP inside that could handle all of the storage/playback/approval/etc needs. The only thing missing is the remote access, which should be solved at a different place in the network.


I know HN frowns on this kind of thing, but your username gives me hope that one of the CloudPets has gone sentient and is leading a revolution for IoT security. "Don't let what happened to me happen to you[r stuffed animal]!"


A VPN could create a false sense of security. After all the device is still untrusted, and will need to connect to the internet even just to do security updates.

We have good security measures for connecting to servers (which is what IoT devices are) so why reinvent the wheel? Why not require devices to have normal TLS certificates and map the internal IP address to a subdomain of the manufacturer. That way browsers can access the device using CORS, and the normal XSS protections will apply. Authenticate and authorise using a well known standard like OpenID, OAuth or JWT.


What's really needed is an actual un-fucked-with consumer-grade encryption standard that is headless, touchless, and on forever with no off switch.

Alas, that won't happen either.


Seriously? That's a fairly aggressive comment to just throw out there without any backing arguments.

You really can't think of anything valuable about hooking up small devices/sensors to the internet? Do you really believe the potential for stronger security is so low that it's not worth investigating?

I work at an IoT company and we take security far more seriously than some would say is necessary or even reasonable. We're not the only ones out there, you just don't hear about us because our stuff works and therefore doesn't make the news. Just like you don't hear about all the miles an automated car drives safely.


Just like you don't hear about all the miles an automated car drives safely.

Even if a self-driving car drives perfectly for 10,000,000 miles before swerving in to a crowd of people, no one is going to buy one.


It takes only one black sheep to trash the reputation of a whole sector.


>I work at an IoT company and we take security far more seriously than some would say is necessary or even reasonable.

Then you are a member of a minuscule minority.

The backing argument is a ballooning threat surface that can never reliably be patched.


>a ballooning threat surface that can never reliably be patched

Some would convincingly describe Microsoft Windows the same way.


But I do hear that the self-driving car has no idea to look for a traffic light if it wasn't informed beforehand that one is there. That does not inspire confidence.

Also, I still don't know what problem is IoT supposed to solve.


Step onto a refinery and watch workers physically check gauges every day in a relatively dangerous environment.

Oil fields, someone drives out every day in a pickup, checks gauges.

Etc.


Internet of bear poo.


I wonder what cloud-connected pets the Trump and Trump-Kushners have.


Oh god, it's a kids toy. It's meant to be something fun and cute. What a bunch of jerks to go messing around with that.


How about 'what a bunch of jerks to connect it to the internet and not secure it properly'?


The company was tanking and they were looking to make a quick buck. What market motivation would they have to spend extra time and money securing it properly? This is a fine example of why we need IOT regulation.


> This is a fine example of why we need IOT regulation.

Cool, let's just inundate the industry with pointless government "security" checklists that don't actually accomplish anything. That way, instead of a small fraction of all the cool, affordable products we now have access to occasionally getting hacked, we can just not have any cool, affordable products except those made by companies big enough to hire enough corporate lawyers to CYA their way to government approval.

How about, if you care so much about teddy bears getting owned, you just don't buy them? It's easier and more polite than taking them away from everyone else as well.

You can't legislate security into existence. That's not how it works. Security isn't a solved problem, so the government can't force people to do it correctly. The only thing you can possibly accomplish is either making products more expensive (with no/negligible actual security benefit) or removing them from the market entirely.


I agree that regulating this will just cause security theater. However, it's also unreasonable to expect non-technical consumers to understand what's going on and what the implications of their choices might be for any given IoT device. Many would probably have a hard time even deciding which devices are IoT. Maybe this is a gap that could instead be filled by a consumer review service that focuses on IoT devices and their security. I expect the general public doesn't care enough, though, to make this viable. Maybe once more toys get compromised...

Edit: I also wonder if stronger punishments for people involved in extreme cases like this would help. If you make a reasonable effort to secure your service and get hacked anyway that's one thing. But not even attempting to secure your service at all is something you shouldn't get away with. Of course the problem is how "reasonable effort" would be interpreted legally.


I don't necessarily disagree but I will note that Bruce Schneier, who so many people on these pages are a big fan of, was basically advocating for government regulation at a recent conference.

https://www.youtube.com/watch?v=8tDU0zcptCY&index=2&list=PLb...


I'd be happy with regulations as basic as:

- use HTTPS

- no hardcoded security tokens in your requests

- any personal data stored is inaccessible to the outside world without AT LEAST a password

- minimum password length for users

just anything to slow the shitstorm combination of incompetent developers and corner cutting executives.

AND such regulations are reasonably easy to enforce: anyone can check network traffic to verify the protocol and API tokens, or set up a new account with a short password; port scanning would catch most public databases.
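The "port scanning would catch most public databases" check is trivial to automate with nothing but the stdlib (27017 is MongoDB's default port; the function name is made up):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. flag a host that exposes MongoDB to the whole internet:
# if is_port_open("db.example.com", 27017):
#     print("possible public MongoDB instance")
```

This is essentially what Shodan and the MongoDB ransom scripts already do at scale, just pointed at the entire IPv4 space.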


>This is a fine example of why we need IOT regulation.

This sort of thing almost (but not quite) always results in things that are no more secure, just more bureaucratic. Large companies keep building the same insecure crap they always have but can afford to hire an army of lobbyists and paper pushers to get "approval". Small innovators who could actually fix the issue are kept out by the high cost of useless paperwork. The industry stagnates at a dismal low point.

In short, prematurely regulating an industry is usually a fantastic way to strand an industry at a local maximum far below its potential.


and people wonder why medical and aerospace companies are highly regulated :-)


Security problems with medical devices have been in the news too.

The regulators have so far been focused on the health aspects of the devices.


Computer security is a new thing in those industries; regulations have yet to catch up.

The point here was, there are always companies that are looking to make a quick buck and which will always refuse to "spend extra time and money securing it properly". Such companies have the market forces on their side - the less they give a shit, the faster and cheaper they can sell their products/services. Regulations in established industries ensure that the minimum level of giving a shit is still safe enough for people.


Hopefully this event will be 'market motivation' enough for them and any companies who will follow them. If this stuff is insecure it will be found and brought to light. The only question is will the good guys find it or the bad guys.


That, too. Overwhelmingly that.


I'd be surprised if the script kiddies attacking world-writable MongoDB instances know or care who owns them. Destroying the data and then lying is a lot less effort than actually copying and examining it.



