Spyware Company Leaves ‘Terabytes’ of Selfies, Messages, Location Data Exposed (vice.com)
356 points by wglb 6 months ago | 141 comments



>Motherboard was able to verify that the researcher had access to Spyfone’s monitored devices’ data by creating a trial account, installing the spyware on a phone, and taking some pictures. Hours later, the researcher sent back one of those pictures.

I love that they told us exactly how they verified it. Too often in the news we just get "was verified" or "an independent source verified," which, I get that you have to protect sources, but it still doesn't give any insight into the methodology. I could ask, for example, "how do you know you can trust your independent source?" because I have no idea about their history together.

Tech news like this is a bit easier to actually show your work for, but even then, so many security articles or people on Twitter don't show their work like this. As much as everyone likes shitting on Bitfi, for example, it was quite a while before someone actually demonstrated a hack with reproducible methodology, rather than just blowing smoke about how insecure it is.


IMO, you don't trust the independent source, you trust the journalist. Naturally, you'll only trust journalists who have proven to be right most of the time.


I was very loosely following the Bitfi drama on Twitter, and it seemed like the same day the Bitfi devices actually reached the public, people were posting exploits (as I recall).


> “Spyfone appears to be a magical combination of shady, irresponsible, and incompetent,” Eva Galperin, the director of cybersecurity at digital rights group Electronic Frontier Foundation, told Motherboard.

Leave it to EFF to have the pithiest quote in the article. Is it possible that companies that are shady are more likely to be irresponsible in their security practices, because they're focused on immediate profit?


Is it more likely that drug dealers who are shady are more likely to be irresponsible in their safety practices because they're focused on immediate profit?


More likely than (drug dealers who are not shady)? Most likely yes.


That was the point. "Shady" and "irresponsible" pretty much go hand in hand.


Not necessarily. I'd expect the leader of an organized crime group to be very responsible but shady as hell.


>Steve McBroom, a Spyfone representative, told Motherboard on Monday that the company is investigating the leak, and expressed relief that the person who found it had good intentions.

Unlike spyfone whose intentions appeared to be profiting off of violating the privacy of others. I wonder if they have done the market research to determine if their core demo is domestic abusers.


SpyFone can only be used with consent of the device owner. From the Terms of Use (https://spyfone.com/terms-of-use/):

"You will only install the SpyFone software on devices for which you are the owner, or on devices for which you have received consent from the owner of the device."

Their service cannot be used by domestic abusers, obviously. (sarcasm)


Google Maps has a location sharing feature. It goes to great lengths to notify the device owner through both on-device and off-device methods that the phone is being monitored, and by whom. That's the only way to run a legitimate monitoring system.


I'm not sure - given their recent location-privacy related coverage - that Google should be held up as "gold standard" here...


Which is why it is a fantastic example. "Even Google" is extremely explicit that location is being shared on the target device.


At least their Google Maps location sharing is upfront about it.


It does as long as the person tracking you isn't Google.


Indeed.

Also, Spyfone has no clue whether others accessed the data previously. For all we know, there's some .onion site selling access to pedophiles.


Spyfone's response:

"Dear Valued Customers of SpyFone.com,

Within the last 10 days, one of our servers experienced a data breach and an unauthorized third party gained access to the data of approximately 2,200 customer accounts. The potentially exposed personal information of those affected could include pictures, call logs, and emails.

While our team is taking steps to enhance our site’s security and we have since taken action to ensure that all accounts are fully encrypted, we are notifying you that your account may have been one of those negatively impacted by the unauthorized access. In an evolving landscape of online threats, SpyFone.com is committed to the highest standards of accountability and transparency, and proactively works to ensure the safety and security of our users. We will continue to work to address this matter as we partner with leading data security firms to assist in our investigation, and coordinate with law enforcement authorities."


I'm not sure that this discovery will have any effect. While there are claims that this software is marketed to businesses, I'm skeptical that it's anything more than a drop in the bucket compared to use by shady individuals.

Non-business users (like nosy parents and controlling spouses) probably won't ever know about this security breach.

I have no sympathy for anyone who uses software like this, I strongly feel there is no justification for software like this that outweighs the invasion of privacy and other harms, and I don't think this functionality should even be possible at the phone OS level, nor should it be allowed for sale in application stores.

The security failure itself isn't surprising in the least, though. Only bad people write this kind of software. They've already limited their pool of potential hires to people with no ethical or moral standards.

Also, look at the image in the article containing the description of SpyFone. They can't even perform basic copy-editing. Poor grammar in commercial products is a solid barometer for overall product quality.


> I have no sympathy for anyone who uses software like this, I strongly feel there is no justification for software like this that outweighs the invasion of privacy and other harms, and I don't think this functionality should even be possible at the phone OS level, nor should it be allowed for sale in application stores.

Most of these features aren't available on iOS. From what I can tell, you have to know their iCloud credentials and be in a position to complete the 2FA.


Agreed. The app's intended design purpose was to leak private data. The data breach happened as soon as the software was installed; the exposed S3 bucket is irrelevant.


Doesn't mean the people targeted by the software fare any better with such data leaked on the internet.


BTW every reputable company has server-side auditable logs and relies on phone apps only to wipe mobile devices or bulk install/uninstall apps.


Wow, S3 is the gift that keeps on giving. My favorite quote from the article: “Spyfone appears to be a magical combination of shady, irresponsible, and incompetent,”

Amazing how people don't lock down S3 data.


Started playing around lately with AWS.

Gosh, getting the access rights to work properly between the various types of controls in place (the groups, the access policies, and whatnot) is kind of mind-numbing and makes you feel very stupid.

Not that it excuses anything but it is confusing and I can foresee an overworked engineer going "Ah fuck it, no time to read up on that, just go for the easiest stuff to get started on using S3".


That is exactly the problem I see as well. Basically, it is so hard to get it working "correctly" (and by that we mean in a way that isn't trivially exposed to the full frontal Internet) that engineers under pressure just punt after getting it to work at all. Sort of "OK, here it is working, we'll fix the security concerns later; in the meantime, try it out," and they get swept into another project because this part is "working," and nobody comes back to do the hard work of figuring out the right way to make it work.

As a result, 'war driving' through the S3 namespace continues to yield PII and CUI nuggets of gold to bad actors.


Just AWS's UI confuses me sometimes....


The S3 docs / error responses are so annoying. Very basic questions could be answered with simple examples, but it's almost like they go out of their way to make it hard to do the right thing.

If you gave me a couple dozen developers, none of whom had ever dealt with:

1. S3/AWS

2. Backblaze/B2

3. DigitalOcean/Spaces

and you made a graph of how long it took them to do different tasks, I would be able to tell which people were assigned to S3 from across the room.


> Steve McBroom, a Spyfone representative, told Motherboard on Monday that the company is investigating the leak, and expressed relief that the person who found it had good intentions.

>“Thank god it is a researcher, someone good trying to protect,” McBroom said in a phone call.

Mr. McBroom should be worried about the people with ill intentions who have accessed this data in the past and whom they have no idea about.

Given this is a routine S3 bucket access breach, I'd assume lots of bad actors have already found this via dictionary enumeration of bucket names. You can assume people are doing that against public buckets all the time now.
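To illustrate why that enumeration is so cheap: an unauthenticated GET against a guessed bucket URL reveals, from the HTTP status alone, whether the name exists and whether its listing is public. A rough sketch (the bucket names in the comment are made up; the status-code meanings are the interesting part):

```python
def classify_bucket_response(status_code: int) -> str:
    """Interpret the status of GET https://<name>.s3.amazonaws.com/ ."""
    if status_code == 200:
        return "public: bucket listing is world-readable"
    if status_code == 403:
        return "exists but access denied (still leaks that the name is taken)"
    if status_code == 404:
        return "no such bucket"
    return "other"

# A scanner just loops a wordlist of likely names through this check, e.g.:
# import urllib.request, urllib.error
# for name in ["acme-backups", "acme-prod", "acme-logs"]:  # guessed names
#     url = f"https://{name}.s3.amazonaws.com/"
#     try:
#         status = urllib.request.urlopen(url, timeout=5).status
#     except urllib.error.HTTPError as e:
#         status = e.code
#     print(name, classify_bucket_response(status))
```

No credentials, no exploit, just a wordlist and a for loop, which is why leaving a bucket public is effectively publishing it.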


"Thank god..." It is most reassuring when a divine entity is invoked, especially in a data breach context. Especially when the intervention is of the monotheistic kind.


"Wait, hiring a security guy is how much? Fuck that... How much is a senior engineer to make our API? ... Shit. That would definitely cost me my third corporate lease... How much to hire those kids that emailed us out of the blue?

Yeah, I'm sure they'll do a fine job, just hire them."


First, the requirements for installing it on Android phones are a lot lower...

Android Requirements:

You Need Physical Access To Device For Install

Supports Android Versions (4.1 - 8.1+)

iOS Requirements:

If 2 factor is on you need physical access to device

iCloud login access is required

Then look at the features....

https://spyfone.com/features/

It says a lot about Android that anyone with physical access to an unlocked phone can put this type of spyware on it.

Even if someone does leave their iPhone unlocked long enough for someone to install the app, they still couldn't do it without authorization, and even then you have to know the target's iCloud credentials and have physical access to the phone if it has 2FA.


The way I see it, it is easier to get your hands on someone's iCloud credentials than on an unlocked physical device. So an iPhone without 2FA is less secure than an Android. E.g., if I wanted to spy on my partner, I'd prefer she had an iPhone.


But you still have to have physical access to the person's phone to install the app.

If they can do all they claim they can do on a non-jailbroken iOS device (and they have a disclaimer that all features don't work on all versions of iOS), I would be amazed.


I find this level of incompetence hard to believe. Is it possible that the company knowingly and intentionally left the collected data and their customers list unprotected to secretly channel it somewhere else? It would at least give them the opportunity to state incompetence if they should ever be accused of selling their customers private data on the dark market.


It's completely easy to believe. People are releasing crap software under the guise of MVP.


I watched with fascination on Twitch the other day as someone programmed a point-of-sale system. The dev was an older guy who said he'd been programming since '85 and was coding in C# (similar to my background). The first thing that struck me was that the file he was editing had a connection string embedded in the code, with what looked like real plaintext credentials for the database. I then watched with interest as, for each query, he copy-pasted the connection string and other querying code, then tried to build new queries by appending strings together with variables. I tried to give him some constructive advice, but he said "I'm in a bit of a rush, I just want to get this working." I watched for another 30 minutes or so as he tried to get it working. What he wanted to do was trivial; he announced how he hated SQL and had spent years writing huge massive queries, yet his problem was easily solvable with SQL. Looking at the code, it seemed to have been worked on over quite some time and was really, really, really bad and really insecure. But the "just need to get it done any way possible" attitude means it will get deployed like that.
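For anyone who hasn't seen why appending strings into SQL is the problem here: the stream above was C#/ADO.NET, but the same mistake and the same fix look like this in a self-contained Python/sqlite3 sketch (the table and the hostile input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (item TEXT, price REAL)")
conn.execute("INSERT INTO sales VALUES ('coffee', 2.50), ('tea', 2.00)")

item = "coffee' OR '1'='1"  # hostile input from, say, a POS terminal

# Fragile: building SQL by appending strings, as on the stream.
# The quote in the input escapes the literal, and the query matches every row.
unsafe_sql = "SELECT price FROM sales WHERE item = '" + item + "'"
rows_unsafe = conn.execute(unsafe_sql).fetchall()

# Safe and simpler: let the driver bind the value as a parameter.
# The whole hostile string is treated as data, so nothing matches.
safe_sql = "SELECT price FROM sales WHERE item = ?"
rows_safe = conn.execute(safe_sql, (item,)).fetchall()

print(rows_unsafe)  # every row leaks
print(rows_safe)    # empty: no item has that literal name
```

The parameterized version is also less code than the concatenation, which is the depressing part of watching someone do it the hard, insecure way in a rush.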


In a nutshell, that's what the whole MVP thing is about: the minimum you have to do so the business can sell immediately.


MVP - Most vulnerable product? Yeah, it is easy to cut the corners that the public won't notice.


Yeah, it's definitely possible. It's a biz model in the crypto currency world.


Well, they may as well set up a website with subpages per spied-on device, and per device's collected content types, and just make it official that they are publishing third-party information to the whole f*ing world.

Wow. And their customers are likely not going to know enough or care enough to realize how much privacy they've lost.


> how much privacy they've lost

You mean that those who they've been spying on have lost?


It's not a "breach" when there were no defenses.


Incredibly difficult to even figure out who the company behind Spyfone is. Anyone know?


The most unfortunate part is that the data isn't exactly that of their customers.


"Every day our team takes great strides to enhance our site's security and we certainly anticipate that this recent data breach is the last,"

Yeah, granting public access to S3 buckets sounds like a great effort to enhance data security.


Here's the irony: For the last few weeks, my Facebook feed has been plastered with scare stories about certain apps being unsafe for kids. People posting image macros about things like Roblox, getting 50+ responses about how scary it is.

This situation, which imo is a real danger, will get 0 attention outside the tech community.

I really wish I had an answer. Maybe I should make an image macro of this or something in order to get people to share it.


Spyfone Supported devices:

- Android operating system must be Android 4.0 or higher.

- iOS operating system must be iOS 7 to iOS 8.4 or iOS 9.0 to 9.1 (Oct 2015)

Just make sure your target is not using iPhone :)

https://spyfone.com/compatibility-policy/


> “Every day our team takes great strides to enhance our site’s security and we certainly anticipate that this recent data breach is the last,” McBroom said.

Holy shit. They left an Amazon S3 bucket wide open, their admin site was wide open, and their API stream of contacts was wide open.

Their concept of security is non-existent, and they think this will be the last breach?

I doubt you'd hear any competent IT director ever say they won't experience data breaches in the future.

The incompetency is mind boggling.


AWS might be partly to blame, for not warning them, actively or passively, in my opinion. It might just be infrastructure as a service, but it seems that major cloud providers like AWS act as an IaaS provider when convenient, and a PaaS when convenient. They censor content all the time. I don't think it would be too much to ask to send an email when a big security change is made (the emails and audit log GitHub provides for its users are great). Plus their interfaces (UI and command line) are clunky. They should have a non-clunky dashboard of some sort where people can get a broad level overview of their security choices.

I'm not talking legal blame for the past mistake, just that at some point in the near future their IAM interface should be fixed. For one thing it should be easy to give permissions to view the permissions without giving keys to the kingdom. For another thing, there's a mental overhead to working with multiple AWS accounts [1]. I get the impression that Google Cloud Platform is ahead right now (while lots of other Google properties are not, heh).

Edit: the spyware company is absolutely to blame, but the complexity of AWS permissions including permission to view the permissions seems like a footgun to me. However, leaving open the admin site is something I wouldn't expect AWS to help with.

https://engineering.coinbase.com/you-need-more-than-one-aws-...


AWS defines a pretty clear delineation of responsibilities. They are responsible for security of the cloud, you are responsible for security in the cloud.

https://aws.amazon.com/compliance/shared-responsibility-mode...

I’ve not seen AWS censor content, although I’m sure some cases might warrant it. That has nothing to do with a company that was negligent.

I don’t find AWS’ UI or command line clunky at all. In fact I find them clearer than most Unix tools.

It is not hard to know if you have world readable buckets in S3.

> They should have a non-clunky dashboard of some sort where people can get a broad level overview of their security choices.

The AWS CLI allows you to easily pull the policy data attached to any resource and you can process it however you choose.
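For example, you can feed the policy document that `aws s3api get-bucket-policy` returns into a crude check for "readable by everyone." This is only a sketch of the bucket-policy case; it deliberately ignores ACLs, Deny statements, conditions, and the account-level Block Public Access settings, all of which also matter in practice:

```python
import json

def policy_allows_public_read(policy_json: str) -> bool:
    """Crude check: does any Allow statement grant object reads to everyone?"""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        is_everyone = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if is_everyone and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

# Feed it the output of something like:
#   aws s3api get-bucket-policy --bucket my-bucket --query Policy --output text
public_policy = json.dumps({
    "Statement": [{"Effect": "Allow", "Principal": "*",
                   "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-bucket/*"}]
})
print(policy_allows_public_read(public_policy))  # True
```

Run that across every bucket in the account and you have the "broad overview" dashboard the parent is asking for, in a few dozen lines.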


> I don’t find AWS’ UI or command line clunky at all. In fact I find them clearer than most Unix tools.

That's not setting a terribly high bar ;) those take a while to learn too, people just forget that they ever went through that stage.


You get a BIG warning from AWS when you change a bucket to public access (by default they are all private); how incompetent do you have to be to not read it?


> but the complexity of AWS permissions

I'm probably the last person who will defend many of Amazon's design choices, but this is really a matter of competence. Internet engineering is hardly unique in having difficult to understand/operate professional tools. While I will feel sympathetic towards someone who hurts themselves with power tools, you blame the tool when it malfunctions, not when the operator ignores basic safety.

Bottom line: this firm is incompetent to operate the tools they chose to use. Worse, they value the private data of their victims so little that they couldn't be arsed even to perform trivial sanity checks.

I wouldn't trust an outfit like this with the time of day.


AWS does actively send warnings of this nature. https://www.reddit.com/r/aws/comments/6ogmbp/interesting_ema...


AWS has all sorts of services to warn you about potentially dangerous settings in your account.


Yeah. I mean, at what point does this level of neglect become criminal?


I actually don't think it needs to be criminal. We just need civil laws that make these kind of leaks incredibly costly.

Maybe liability for private information loss could be $10k a user. So, Equifax would owe the public 1.4 trillion dollars. Of course, they wouldn't be able to pay that, so the company would be chopped up into bits and sold for scrap.

I think that would catch more attention than some mid-level manager fall guy going to jail for six months, as would likely be the case with criminal proceedings.


While I tend to agree with catastrophically steep penalties, there are perhaps unintended consequences.

It wouldn't be hard for an APT type shop to breach just about any average corporation using an arsenal of private exploits, fuck with their security configuration to make it look like gross incompetence, and exfiltrate the data to some seemingly-amateur front organization that actually leaks it.

End result is you could have foreign actors knocking out their country's competition abroad, using their competitor's laws to do so. Not ideal.

Maybe some determination would have to be made to avoid that, like a judgement rendered on the corporate culture. For example, is it obviously a cesspool of incompetence just in a general sense? Great, burn the company down.

Does the company custom-design their own ARM hardware to at least have a fighting chance vs APT-type threats? Maybe they did everything they reasonably could in that case. You could also argue smaller companies did everything they could even if they don't have the resources for that, provided there's not rampant incompetence.


Well, it could be. In a case that looked like sabotage, I'd expect the victim company to go to great lengths to prove it (i.e., gather evidence to the contrary, etc.). It probably wouldn't be that hard.

In this case, it was a spyware company. Seems almost fitting that they'd be unconcerned about securing data that was essentially tricked/stolen from their users.


If the APT is good enough, it's forensically not provable.


You missed step 0 in this scenario where steep penalties are the default: short the stock.


The market will overlook anything if the company is still profitable. That's where regulatory penalties shine.


More importantly it punishes the organization for something that is an organizational failing, whereas jailing an individual system administrator for a data breach punishes the individual for the incompetency of their superiors.


> Maybe liability for private information loss could be $10k a user.

Or maybe up to 4% of the company's revenue.


Up to 4% of the company's revenue doesn't solve much. Depending on the sector, 4% of revenue (are we talking EBITDA?) may potentially be less than a slap on the wrist, and internally middle-management will take the blame for the reduction in sales margin/operating profit.


The above comment is almost certainly a reference to GDPR, for which the maximum penalty for malicious non-compliance is "up to 4% of the total worldwide annual turnover." It is not net income or profit or EBITDA or anything else that subtracts operating costs; it is revenue.


Parent said revenue, not EBITDA.


Strange how anti-GDPR HN is... Until something like this happens.


It's almost like this is a disparate community of people with widely varying opinions on a rather important and controversial implementation...


How does GDPR help with this situation? It is unclear to me.


This is what GDPR is for AFAIK.

IMO all the cookie warnings we see are just misguided attempts to ignore it and continue more or less like before and should probably not save anyone in court, again if I've understood it correctly.


4%? That just means worker bees won't be getting a raise this year.


If they can keep the worker bees without giving them a raise, why do they?


Up to 4%? Wouldn't that simply be considered cost of business for some corporations? Pay less in security, etc. and just consider the 4% a smaller tax of sorts?


It is highly unlikely that the chairman or CEO of any corporation found guilty under GDPR and made to pay 4% would survive.

Few bank CEOs have survived the various "we will get some payback for 2008" fines over the years.

If you want to change corporate culture, you don't need to destroy the company, just hold a gun to the head of each CEO and see how fast they make sure everyone else dances.

This is one of the best things about Sarbanes-Oxley: the CEO actually signs off on the accounts and will go to jail if the accounts are misleading. So guess what has had top priority at banks across the globe?


Sorry I meant to add per case, as it is in GDPR.


I see, thank you for the clarification.


Does this company operate in Europe? If so, GDPR is exactly the kind of thing that would make this incredibly costly.


Unfortunately, one of those chopped up pieces being sold will be the user data they have. :(


In the US, I'm highly pessimistic that any law that gave people a private right of action against basically any company that touches their data would ever pass without large and noticeable changes to the business and legal landscapes.


But the person affected should get the money, not the government.


While that's fair, it should actually be the other way around.

Let the government keep the money. They'll be more inclined to actually enforce the law. We see how aggressively they police drugs when they stand to benefit from civil asset forfeiture.


This would incentivize people to dish out private information on insecure platforms for the sole purpose of baiting compensation.


If the Equifax breach didn't send anyone to jail, nothing will. A breach the size of Equifax's should have been followed by massive fines and possibly even killed the company, but nothing happened.


Nothing happened because it's very difficult to assess damages of personal information.

For example, if a health insurer the size of Equifax lost the equivalent amount of HIPAA related information due to negligence, you can sure bet there would be penalties. That's because HIPAA related info has legally defined protections.

As it is, calculating the damages of releasing your equifax info is a speculative guess at best, which is why at most you got to lock access or ID fraud protection.


> Nothing happened because it's very difficult to assess damages of personal information.

It's also very difficult to assess damages of copyright violations... and so the companies that had it in their interest to get this working pushed for statutory damages.

Maybe we need something like that for privacy.


Which is why damages for this negligence should have statutory minimums. $10,000 mentioned elsewhere in this thread is a nice round number. If given a sufficient period to fix their shit (a year?), no one can complain that they've been harmed by such a requirement. I've seen this in contracts, which said basically, "since it will take years for a court to assess damages if you do this awful thing you agree not to do, we stipulate here that the damages will be $X instead."


> Nothing happened because it's very difficult to assess damages of personal information.

It's interesting that you bring up health insurance, because that's an industry that definitely knows how to calculate the value of various pieces of personal information.


The fact that "I doubt you'd hear any competent IT director ever say they won't experience data breaches in the future" should tell you why we don't send people to jail for this type of thing. 100% prevention of breaches can never be guaranteed [due to the effectively infinite number of failure points in software and hardware, as we've seen with the recent CPU hardware bugs, etc.], so jailing IT people for breaches would only stop once every IT person was in jail for failing to notice one line among millions of lines of code.

There should be some level of competence of course, leaving things wide open doesn't seem safe, lol.


It's about taking a reasonable level of security practice, like you said "some level of competency required."

We require this of our bridges, and our roads, and our buildings. I'm not sure why we don't for our personal information assets. Arguably, the Equifax hack will cause far greater economic loss than, say, a hole opening up in the middle of Mission Street due to lack of review by a civil engineer, so I don't get it.

Is it because politicians are technically uneducated? We didn't have good fire law in America until a room full of seamstresses burned to death when the single exit was blocked off; do we need something similar for infosec? Equifax SHOULD have been that, but whoever breached it hasn't released the data yet (as far as I know), so maybe nobody is feeling the pain yet.


It would certainly be good to have some professionalism imposed at critical junctures.

I don't think criminal penalties are appropriate but civil penalties for data breaches should simply have no limit, the possibility of shareholders and debtors forfeiting all value should be included.

At the same time, the problem is "professionalization".

The problem of when one needs "real software engineering" is incredibly hard to solve. It's an incredibly fuzzy line and any organization would have a strong incentive to be on the cheaper, non-professional end of the line.


Plenty of people feel the pain, but do you really think this Congress gives two shits about regular people?


Only when the outrage gets loud enough to affect their chances for reelection (which happens with a lot of issues, just not this one).

Unfortunately, Hacker News seems to be the only place where security is taken seriously. Probably because we understand the severe collective risks involved, to everything from banking to healthcare, whereas most people as individuals don't care if, say, their credit card number is stolen, since they aren't liable for fraud. It's hard to see the bigger picture if you aren't technical.


I don't think it's fair to say only this website takes it seriously, though perhaps this website is a good cross-section of people that do seem to take it seriously. You can also find those people on reddit, twitter, lesswrong, IRC, etc.

The question is, who's failing to make the whole WORLD care about it? Or at the very least, the politicians? I can count on shitty lobbyists at least ensuring that, like, the economy doesn't fucking burn us alive, because they lose money when that happens. Why aren't the fatcats also getting that about netsec? If the NYSE gets hacked, they stand to lose a lot of money. If someone opens the Hoover Dam gates through a hack, that's a lot of money lost. We can ignore the morality and privacy issues, and just speak their $$$language$$$ here, and it still doesn't make a lick of sense that politicians aren't eviscerating Equifax right now.

So, are we supposed to like, lobby sense into their heads? I mean, why? Because we're patriots? I guess?

Then again I've got motorcycle riding friends in Houston that don't wear their helmets, and I still have to force people in my backseat to buckle up sometimes, so I don't even know. Why don't people take any kind of safety seriously?


There is a bare minimum of precaution that doesn't seem to have been followed in some of these cases.


Exactly. Yes, these issues can be somewhat hard to understand for non-security folks, but everyone can get the basics:

1. Do you have sensitive information?

2. Is a password required to access that information?

3. Is that password set to "password" or something else that would be trivially easy to guess?

It's like saying it's not your fault if a hacker takes extraordinary measures to tunnel into your house from below ground. But it is your responsibility to at least shut your front door.


In the US? No time soon.

The party in power is cutting regulations, not adding them. The customer has to watch their own back.

It seems simple. The free market will kill incompetent companies, right? Customers see data breaches and stop doing business with them.

Sounds good in theory but realistically, customers don't have time to do the research required for a completely free market to self regulate.


It has less to do with research, even.

Equifax, like many Fortune-N companies, has a heavily funded sales and PR team working actively against your individual research.

Should you, as an individual, apply to a company or attempt to buy a product that has been “sold” the Equifax product suite, you’re still beholden to Equifax services(or leave without the job or house).

You’re effectively stuck, unless you have the resources(time/money) to look for employers or products that stay away from Equifax.


> Should you, as an individual, apply to a company or attempt to buy a product that has been “sold” the Equifax product suite, you’re still beholden to Equifax services(or leave without the job or house).

Worse, there's pretty much no way to tell to which companies and products this applies.

This isn't some fast food restaurant poisoning its customers. None of Equifax's "customers" got screwed by their data leak, only the targets of their "product" caught the ramifications.


The best part about the Equifax breach is that the people whose data was released weren't even the customers. Pretty hard to stop doing business if there wasn't any in the first place.


It's not clear how I, an individual consumer, even choose to avoid doing business with Equifax.


"Don't do business with them" is a bogus directive, but you could lock your credit with all the agencies and only unlock the agency you care to do business with when requested. But that often makes things difficult, and not all users of that data will have an account with an alternative provider. (Nor is the locked access the only product for which your data is offered...)


You could make sure you put on all loan applications that they will not check Equifax. Probably they won't even know how to handle that though and will either reject you, or ignore it.


And in practice, we also don't have completely free markets, and even if we did, it's highly unlikely that they would function the way current economic theory believes they would.


The party in power is notorious for hypocrisy. Are no-bid contracts "free market"?


> parties in power

We haven't had a lot of politicians who have been pro-human/worker/consumer rights in a while. We got a consumer regulatory agency. But when did you see it push back against actual troublemakers?


> parties in power

Sounds like the usual "both parties are the same" nonsense.

I saw them push back all the time. They recovered billions for consumers.

The success of the agency is written about in many publications. Read up on it before putting it down.

One party enacted consumer protections and another party is working to rip it apart. There is a clear difference between the two.

Here's an example article highlighting the success of the agency and the Republican desire to end it http://fortune.com/2017/01/27/donald-trump-cfpb-consumer-pro...


The consumer protection agency in the US was making great strides in pushing back against many bad players, including predatory lenders. Then the republicans gutted and defunded it, ending many investigations and siding with the scum feeding off the poor.


Classic strategy, by the way. Defund, then point to it and say "see, it doesn't work!"


And then jump on board? I always knew Timmy was just like Robin Hood... only different.

https://www.cbsnews.com/news/private-equity-firms-are-the-ne...

Note to self: one flavor of Kool Aid is good, the other flavor is bad.


We don't need regulations, necessarily. If people have been harmed, they can sue for damages.


If we take the Equifax breach as an example here, and every person affected sued for even a very modest amount (say $10), and everyone won their case, then we come to something like $1.5bn in damages, plus legal costs. Sure, that might work: $10 is a gross underestimate, Equifax's market cap is ~$15bn, and their legal costs defending that many cases would be significant.

However, consider that it is very likely a small minority of those ~150m affected people are actually in a position to spend the time, money, and effort in actually suing and you end up in exactly the position you are now: Equifax doing fine and suffering no penalty for their actions. Class action suits aren't really a better suggestion either because they are typically settled for pennies-on-the-dollar, with the lion's share going to the lawyers anyway.

Suing might make sense where there's a small number of affected people, or where the damages per person are much higher, but when we're talking less than $1,000 damages per person it's really just not worth each individual's time or money to do so. This is _exactly_ the kind of thing regulation is good at protecting against.
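The back-of-the-envelope math above can be sketched in a few lines of Python. All figures are the rough assumptions from the comment ($10 per person, ~150M affected, ~$15bn market cap), and the 0.1% participation rate is a purely hypothetical number for illustration:

```python
# Illustrative figures from the comment above, not verified numbers.
affected_people = 150_000_000   # ~150M people affected by the breach
damages_per_person = 10         # a deliberately modest claim, in USD
equifax_market_cap = 15e9       # ~$15bn market cap

total_damages = affected_people * damages_per_person
print(f"Total damages: ${total_damages / 1e9:.1f}bn")                    # $1.5bn
print(f"Share of market cap: {total_damages / equifax_market_cap:.0%}")  # 10%

# But if only a small (hypothetical) fraction of victims actually sue,
# the penalty nearly vanishes:
fraction_who_sue = 0.001  # assume 0.1% of victims sue
realistic_damages = total_damages * fraction_who_sue
print(f"Realistic exposure: ${realistic_damages / 1e6:.1f}M")            # $1.5M
```

Even in the best case the total is only a tenth of the market cap; once realistic participation is factored in, the penalty becomes a rounding error.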


Things are actually even worse than you describe. There's been constant action to restrict the use of class-action lawsuits and move towards arbitration instead.

https://www.reuters.com/article/us-usa-consumers-arbitration...

https://www.currentaffairs.org/2018/08/this-burrito-includes...


The fact that so many businesses want to kill class-action suits is, to me, a testament to how effective they actually are.


Class-action lawsuits are the best tool we have -- even if the money each individual gets as part of a settlement is a pittance, in aggregate they do give companies at least some disincentive for unethical/illegal conduct.


While I don't disagree that they're effective, I'd say "best" is not true. They're not any more effective than (enforced) regulation. The only difference is where the money ends up -- private sector lawyer pockets or the public coffers. All else being equal, I'd rather the latter.


Yeah that $500 from Equifax will definitely fix broad systemic issues affecting the whole of society. Good call!


Good luck suing Equifax because you couldn't buy a home with a mortgage, because Equifax had a record of your "low credit score" built from data they collected on you without your consent. The data Equifax collects on you is both damaging and non-consensual.


Assuming the data collected are not in error, you mostly did consent to it. Read the fine print of any lease, loan, or other credit agreement. They almost all say they will report payment history (particularly late or non-payment) to credit bureaus.


> They almost all say they will report payment history (particularly late or non-payment) to credit bureaus.

If that was the only data reported to credit bureaus, that would be great.

But "reputation data" is increasingly becoming important in this sphere. Are you Facebook friends with people with low credit scores? Do you drive through a dodgy neighborhood on the way to work? Do you watch the wrong kinds of movies? Buy liquor? Stream the wrong shows?

It's all up for grabs, and with the "credit score" formulae locked up as trade secrets, there's no way to determine if your mortgage denial was because you were one day late with a cell phone bill, or because you stop at a red light next to a pawn shop enough times that your phone thinks you're a regular customer.


No, just no, this is complete and total nonsense.

FICO publishes exactly what makes up your credit score, straight from the horse's mouth:

https://www.myfico.com/credit-education/whats-in-your-credit...

The FCRA gives you the right to know what is in your file

In addition, the FCRA gives you the following rights (not an exhaustive list):

- You must be told if information in your file has been used against you. Anyone who uses a credit report or another type of consumer report to deny your application for credit, insurance, or employment – or to take another adverse action against you – must tell you, and must give you the name, address, and phone number of the agency that provided the information.

- You have the right to dispute incomplete or inaccurate information.

- Consumer reporting agencies must correct or delete inaccurate, incomplete, or unverifiable information.

- Consumer reporting agencies may not report outdated negative information. In most cases, a consumer reporting agency may not report negative information that is more than seven years old, or bankruptcies that are more than 10 years old.

https://www.consumer.ftc.gov/articles/pdf-0096-fair-credit-r...

So if you get denied a mortgage, you'll know why, and it certainly won't be because you drive by a pawn shop.


Hacker News be like:

"Speech should be free and unlimited!!!.... Well, unless the topic of the speech is me, then I should be able to control 100% what other people are saying about me, of course."

How you gonna sue someone for saying things about you that are true? That's not a thing (for good reason). Would you also sue a friend if you borrowed money from them and didn't pay it back and they warned others not to lend to you? Imagine how much a judge would laugh if you showed up to court saying "I didn't get a mortgage because I have a history of not paying my bills, I deserve compensation."

Nonsensical.

That's ignoring the fact that you actually consent to data sharing as a condition of obtaining credit products. And that's a reasonable condition with a business justification.

In the US it's politically infeasible to have any sort of government agency to collect this information, so it falls on the shoulders of private companies.

The data Equifax has on me is the exact opposite of "damaging." Because of data sharing I'm eligible for a broad range of credit products that I have gotten tens of thousands of dollars in value from. On top of that the information they collect about me allows me to pay a very small premium for car and home owner's insurance.


> Hacker News be like: "Speech should be free and unlimited!!!.... Well, unless...

This isn't a real argument unless you can show that the one person you're actually responding to has held both these positions. This is just a forum where a bunch of people opine; it's not a political party with a documented set of beliefs.


Would the same apply to laws against assault/battery? If not, why shouldn't they?


No. Civil harm is something that can (for the most part) be remedied by payment of damages to the harmed person.

Assault/battery is a physical, violent crime that cannot be "undone" with payments.


I don't think the payout in wrongful death or personal injury lawsuits really "undoes" the harm any more than a payout in an assault case would.


So we need regulations that prevent physical harm from businesses?


AWS certification should be mandatory in some industries.


I think you overestimate the abilities of “certified AWS Architects”. You can get one without ever logging into AWS. At least for the “Architect Associate”. You could probably get away without any practical experience for the Developer Associate. By the time I got that one though, I had practical experience.

I did just that. I was a “software architect” for a company that was completely on prem that was moving to the cloud. Before I actually started working with AWS, I actually wanted to do it correctly and wanted an overview of the services offered.

For $reasons, I left that company about a year ago before ever touching the AWS console, and based partially on my “certificate”, I got another job and was given admin access to AWS. I spent the next year actually getting practical experience.


At about this level


Not soon enough.


The embodiment of an oxymoron :)

The concept of "incompetent and unaware of it" comes to mind.

Edit: the Dunning-Kruger effect, https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


> The concept of "incompetent and unaware of it" comes to mind.

Well, I see that you also invoked Hanlon's Razor [0]. I don't know if I want to give these guys that benefit of the doubt - I think they genuinely don't care. Mainly because if they had those kinds of standards to begin with, I would expect them to also have the kinds of standards that would make it unlikely for them to be in this line of business.

[0] https://en.wikipedia.org/wiki/Hanlon%27s_razor


Unknown unknowns?


It's hollow, phony corp-speak. Just like the “We at Company X take Y very seriously” PR statement that always gets made right after Company X is caught not taking Y seriously.


My gf was stalked by an ex using this type of crap. She had to wipe her phone and change her cloud credentials. I still worry my personal photos are now floating around on the web.


[dead]


Did you bother to read the article? The company admitted fault.


Well, at least this company is trying to solve the problem! So many times this ends up with the company attacking the researcher instead.


The copy on that "EULA" is in poor English.

Also, anyone who uses an app/service like this, where the service has the word "spy" in its company/product name, deserves to have their data and identity compromised.


Perhaps. But those that they spy on (spouses, children, employees..) don't.


Not even perhaps. Stop blaming the victims.



