I really wish we had fewer knee-jerk, cliché-recycling commenters here (eg the "don't be evil" comments) just looking to confirm their biases, because this is a difficult issue.
First, kids use sites and apps when they're below the ToS age (usually 18 or sometimes 13, at least in the US). How are you meant to know that as a service provider? Don't get me wrong: it doesn't excuse you from doing due diligence. There's such a thing as wanton disregard for the truth. But we all know under 18s use IG, Snapchat, Tinder, etc. I'm not sure fining a company billions of dollars and putting executives in jail is the outcome anyone really wants if a 17 year old uses Tinder.
Second, what is personal information, exactly? It's not as obvious as you might think to the point where lawyers who work for these companies can't agree among themselves. There are obvious things (eg IP addresses, dates of birth, SSNs, addresses, names). But what about media? Photos, videos and sound files may all contain PII, technically.
Third, some will say "well don't store anything". Well, if a photo contains PII (which it can) then you can't operate a photo-sharing service at all for risk of storing PII of children. Also, this ignores legal obligations. If the Feds show up with a warrant about a particular user's actions then the argument "we log/store nothing" won't get you as far as you think.
So I'm not saying Google didn't do wrong here, but knowing the complexities, I am sympathetic to the idea of resisting broad legislation: overreach can end up producing bad laws that ignore exactly these complexities.
And if we're going to start singling out tech giants for lobbying Congress, my question is why them in particular? We currently have a Senate that is essentially beholden to the personal interests of one unelected rich guy: Charles Koch. Koch lobbies against health care, taxes on the rich and infrastructure in the Senate. Around the country, he has been instrumental in defeating public transit and many other initiatives in many cities.
> I'm not sure fining a company billions of dollars and putting executives in jail is the outcome anyone really wants if a 17 year old uses Tinder.
A provider that offers a service is responsible for ensuring it functions within the law. The "how" is the responsibility of the provider. If they can't do it, they can explicitly say so and work with the government to find constructive solutions. If they absolutely cannot comply, they have to bear the consequences of the law. And if they can't accept those consequences, they can decide to shut up shop.
In tech, we’ve gone so far from manual work that we’ve all forgotten what’s possible. If you run a bar and let in a 14 year old, you’re responsible for that. If you decide to automate your work, you are not in any way absolved of that responsibility just because it might be impossible to automate with the same level of reliability as doing it by hand.
In other words, doing the very best that's possible digitally doesn't count for shit if it falls short of what you're legally required to do. Because it's always possible to comply, just not always possible to do it digitally. Tough shit. Then don't do it digitally.
> A provider that offers a service is responsible for ensuring it functions within the law.
This is what I like to call "engineer thinking" (and I say this as an engineer). By this I mean it's absolute.
The law doesn't work that way, and when it does, it tends to be terribly unjust and illogical (eg mandatory minimum sentences, three strikes laws).
Here's how it actually works: because of the nature of your service users need to be able to give consent for legal reasons. It may be possible to provide such a service to under-18s but the legal compliance may be a significant amount of effort. Or it may be impossible.
Already we've run into a problem, because what about emancipated minors? They can, by definition, give legal consent. You may choose to ignore this corner case because it's too hard. That may be fine or it might not. You may be accused of unfairly denying services to certain segments of the population. Depending on your service, you may even run into legal trouble.
Assuming you're a good actor, you make a "best effort" to comply with the age restriction. You may ask for certain personal information or even run a credit check. Certain things can only be done in person, perhaps just the first time, perhaps every time.
If it ever comes up that you provided services to minors and there were legal consequences (civil or criminal), a court would ultimately look at the balance of evidence to decide if your actions were sufficient given the consequences of noncompliance, the harm you could've done and so on (note: the standards are different criminally and civilly too).
So if you provide a hook up app and just ask "Are you over 18? Y/N" and have lots of profile pictures of suspiciously young people, lots of mentions of high school and so forth then it can be decided that you had a reckless disregard for your legal obligations and be fined or charged criminally. A court may also decide your actions were "reasonable" (this word comes up a lot). For example: you tried to verify them with a Facebook profile (with efforts to make sure it wasn't fake), you monitored photos and profiles for certain keywords and so on. That could satisfy your legal requirements even if it wasn't perfect.
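The "monitored photos and profiles for certain keywords" part can start out very simple. A toy sketch of the idea (the keyword list and threshold here are made up for illustration, not taken from any real service):

```python
# Toy sketch of keyword-based profile flagging for human review.
# The keyword list and threshold are invented for illustration.
MINOR_SIGNALS = ("high school", "sophomore", "junior prom", "my mom says")

def flag_for_review(profile_text, threshold=1):
    """Flag a profile that mentions enough likely-under-18 signals."""
    text = profile_text.lower()
    hits = sum(1 for kw in MINOR_SIGNALS if kw in text)
    return hits >= threshold

print(flag_for_review("Junior prom was amazing, can't wait for senior year!"))  # True
```

Real systems would feed flags like this to human moderators rather than auto-banning; the point is only that "reasonable effort" can be demonstrated with fairly mundane machinery.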
It's the difference between a bank that sometimes unknowingly facilitates criminal financial transactions and a bank that has flagrant disregard for the law. There are banks that will charge you 15-20% for a wire transfer, for example. No legitimate customer would ever pay that; it's clearly aimed at people who don't want any questions or scrutiny. The second will land you in jail. The first won't.
> This is what I like to call "engineer thinking" (and I say this as an engineer). By this I mean it's absolute.
Ironically, you called my thinking absolute and then went ahead and did the very thing you called "engineer thinking". Nothing in my comment above contradicts yours. Your effort to explain the HOW in detail is admirable. However, I wish you hadn't put a label on my way of thinking. It doesn't help a discussion.
The law is responsible for being just and reasonable, and not fining people for things they have no control over. There's no sure way to verify that a user is over 18[1], and I do believe that there's responsibility on parents to be monitoring their children - That if a 13 year old is creating accounts on over-18 websites, it is partially the parent's failing for allowing them too much unmonitored access.
Quite frankly: I want 17 year olds to use Tinder, figure out how it works, and understand the problems therein. I want kids to be able to dip their toes anonymously into adult activities. Any hard-line requirement creates a nice hard line - people who have no experience with X, and people who are expected to be experts with X - and that's how you create a feeding ground for predators. 17 year olds won't automatically recognize that a 40-year-old man who's hitting on them (despite the 17 year old having lied about their age) is not a quality catch - they need support networks and enough opportunity to test the waters to figure that out before they get caught in a predator's web. One of the most important tools towards that is the ability to create anonymous accounts and explore, with understanding but without consequences.
[1] I won't iterate over the ways that is true, and I won't iterate over the ways that your perfect age verification is either a) broken or b) totalitarian and broken.
> There's no sure way to verify that a user is over 18
This is like a bar complaining that they shouldn’t have to check ids because counterfeits exist. You can’t serve your customers At Scale if you have to comply with those pesky laws, so maybe don’t do it at scale.
> Quite frankly: I want 17 year olds to use Tinder...
How about 16 yr olds? 15? 13? 10? 7? Should we draw a line for "dipping their toe anonymously in adult activities" somewhere, or should that line simply not exist?
Do you believe that every 13 yr old is in fact being appropriately monitored by parents? If not, should there be some safeguards to mitigate the fact that in practice this is not reliable?
> How about 16 yr olds? 15? 13? 10? 7? Should we draw a line for "dipping their toe anonymously in adult activities" somewhere [...]?
Age of consent differs across countries. So yes, different countries and jurisdictions already draw a line for various adult activities. They also disagree what those lines should be and where they should be drawn.
Quite frankly: that line should be parents' to draw and enforce, within reason. The liability should not be on the dating sites to prevent it from happening, nor should they be forgiven for ignoring it when it does happen and is brought to their attention.
I'm in the unenviable situation of having had one of my big projects shut down for precisely this reason: it's too expensive to moderate sites that kids may use. The reason for that is actually very simple, proved by data, and verboten to say.
My car isn't supposed to be driven by someone without a license. It's kept locked when not in use.
Yet that might not stop a 17 year old without a license from stealing my keys and taking it for a joyride. Should we then outlaw cars, since evidently they don't function within the law?
While I love a good car analogy as much as the next person this really doesn't fly.
First of all, the TOS click through flow is more like leaving a car running, unlocked, and adding a piece of masking tape that says "please don't drive me" on the door handle. TOS operate entirely on the honor system.
Second, you have no incentive for random children to come drive your car. For the tech giants, more users are great! So their incentives here are all in favor of letting kids sign up and doing the bare minimum to try and prevent this required to appease regulators.
> I'm not sure fining a company billions of dollars and putting executives in jail is the outcome anyone really wants if a 17 year old uses Tinder.
Jail would be disproportionate, but those giants are perfectly aware that kids under a certain age have absolutely no idea of the value of money: how much it costs to bring it home, and that it has to be spent carefully so some is left over for less important stuff. So you show them ads for the latest cellphone or the pair of famous-rapper-sponsored shoes that costs half of what their parents bring home in a month, and they'll want that crap no matter the cost. This is not that different from targeting very old people with phone calls, some of whom can be lured into buying unnecessary stuff, signing fraudulent contracts, or worse, just because a well-dressed drone rang their door and spent the evening talking with them to win their trust.
Quite frankly I'm disgusted by those manipulative people, and tech giants dwelling in the dark side to maximize their profits make no difference.
>So you show them ads for the latest cellphone or the pair of famous-rapper-sponsored shoes that costs half of what their parents bring home in a month, and they'll want that crap no matter the cost. This is not that different from targeting very old people with phone calls, some of whom can be lured into buying unnecessary stuff, signing fraudulent contracts, or worse, just because a well-dressed drone rang their door and spent the evening talking with them to win their trust.
I don't see how companies not properly filtering minors out of their platforms equates to fraudsters who target the elderly. One is an example of (potentially) willful ignorance of the age of users on a platform; the other is willful malice targeted at a vulnerable group.
It's also basically no different from the standard for broadcast media advertising. The US has cycled back and forth from decade to decade on whether it should be illegal, but the jury is still out on that point of philosophy as a whole.
> I'm not sure fining a company billions of dollars and putting executives in jail is the outcome anyone really wants if a 17 year old uses Tinder.
Honestly, why not? If society deems that illegal, and the enablers are responsible, then so be it.
If said company can’t figure out how to profit while preventing this, then why do I care? Lots of businesses aren’t viable without illegal practices.
I’m doubtful this will be the actual fallout, but it wouldn’t bother me at all if it was.
If your businesses success relies on scale, but that scale in turn presents insurmountable issues, then congratulations, you found an unsustainable business model. No big deal.
Is there absolutely no signal determinable, by the global expert in GD/ML/AI-based behavioural signal extraction, within the social and behavioural signatures of a profile which might indicate that it is principally used by an individual below the age of 16?
If systems cannot be designed to comply with regulatory and legal requirements, then perhaps they should not be built.
Alternatively, reasonable alternatives and methods should be defined in the law.
And you should disclose your employer affiliation in comments such as this.
> A week before the meeting, the Federal Trade Commission announced that it was considering making changes to its interpretation of the Children’s Online Privacy Protection Act, which prohibits companies from collecting information about kids under the age of 13 without parental permission.
How would that work in practice? A lot of us here operate websites. Your website gets a request. How are you supposed to tell the age of the person that request came from? Maybe Google / Facebook have enough data to identify the person and their age, but ironically, they would have to collect information about them to determine that it's a person they shouldn't collect information about.
Certainly no child in the history of the Internet has ever lied to a website when asked their age.
It depends on what they consider "collecting information". Does your server log IP addresses? Do you know the age of the person behind each IP address? Does that count as "collecting information" about them? You know an IP, the time and date they accessed the site, what pages they viewed, you can infer a general location. If the law is not clear enough, it can be interpreted to make basically any website infringing.
If you want to have a newsletter, or offer a service where people log in and want to recover their password, you'll need at least an email address. Can you verify that every email address is exclusively used by people 13+?
I agree with the grandparent poster that not collecting information should be the default.
Re: collecting IP addresses: I run my web sites behind Nginx, which does have rolling logs, but I ignore IP addresses.
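"Ignoring" the addresses can also mean scrubbing them before the logs are kept. A minimal sketch of the idea in Python (pure illustration, assuming IPv4-only combined-format log lines):

```python
import re

# Matches IPv4 addresses; real logs may also contain IPv6,
# which would need its own pattern.
IPV4 = re.compile(r"\b(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.\d{1,3}\b")

def anonymize(line):
    """Zero out the last octet of every IPv4 address in a log line."""
    return IPV4.sub(r"\1.\2.\3.0", line)

line = '203.0.113.42 - - [12/Oct/2021:10:00:00 +0000] "GET / HTTP/1.1" 200 512'
print(anonymize(line))
```

Truncating the last octet keeps enough signal for rough geolocation and traffic statistics while no longer identifying an individual address.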
For your e-mail use case, I think a reasonable compromise is asking whether the user is older than 13 - and what alternative is there but to trust them? Personally, I think collecting e-mail addresses for newsletter sign-ups is a great example of when it is perfectly fine to collect private data.
I think duration and consent have to be part of the equation. Collecting an IP for somewhere in the range of milliseconds to 24 hours in order to gather statistics about site traffic is one thing; putting it in perpetual storage for cross-correlation is entirely different. Asking for an email address to create an account is one thing; using 3rd party cookies to figure out someone’s email address to cross-correlate with their browsing history is different. Blurring the line between these innocent practices and mass surveillance by international corporations is quite disingenuous.
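The short-retention statistics described above can be approximated with a per-day random salt: hashes are comparable within a day for counting uniques, but yesterday's salt is thrown away, so nothing can be linked or reversed later. A rough sketch of the idea:

```python
import hashlib
import secrets
from datetime import date

# One fresh random salt per day. Discarding yesterday's salt means
# hashed IPs can't be linked across days or reversed later.
_daily_salt = {}

def daily_token(ip, day=None):
    """Stable-within-a-day, unlinkable-across-days token for an IP."""
    day = day or date.today()
    salt = _daily_salt.setdefault(day, secrets.token_bytes(16))
    return hashlib.sha256(salt + ip.encode()).hexdigest()

# Count today's unique visitors without storing raw addresses.
seen = {daily_token(ip) for ip in ["198.51.100.7", "198.51.100.7", "203.0.113.9"]}
print(len(seen))  # 2
```

This is one way some analytics tools square "gather statistics" with "don't build a perpetual dossier"; the specifics here are a sketch, not any particular product's implementation.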
I am not trying to blur the line or say they’re equivalent. I’m saying that if the law, as written, doesn’t distinguish between these things then it becomes a cudgel that can be selectively wielded against any website that happens to draw the ire of anyone with power to launch investigations.
We all see how incompetent the government is when it comes to understanding technology and how it works (see the Missouri governor clamoring to prosecute someone for “hacking” because they used view source), so remember when you’re asking for greater regulation who it is that’s going to be doing the regulating.
This is basically what I do with my hobby sites. I have HAProxy sprinkled around the internet here and there. I set the custom header with the client IP address to something that my web server doesn't recognize. This means everyone on the internet shows up as my VPN address. This also forces me to ignore IP addresses and design protection ACLs to be generalized. It has been a fun learning exercise.
That said, I am not suggesting that businesses should or could do this. Some businesses are required through regulations to capture IP addresses of their customers at a minimum.
Requiring payment for access would also work. Maybe $10 for a lifetime membership; otherwise it's read-only.
Children can only participate with these companies because the product is 'free'. Same with p*rn and free2play games: eliminate the 'free' component to effectively block them from children (at least without parental consent).
Not too hard to get around. I have friends who used to buy American Express gift cards to get around having to pay online with a credit card so kids could easily get around any payment restrictions. I think all a paywall would do is make life more inconvenient for the average user.
>How would that work in practice? A lot of us here operate websites. Your website gets a request. How are you supposed to tell the age of the person that request came from?
Idea: browsers could implement parental controls. When you set up a child account on your kid's device, the OS stores the age of the user and the browser reads it. Then, in the request (maybe depending on region or whatever protocol), the browser can tell the site whether the user's age is <13 or in some other interval.
This means the browsers need to collaborate with the OS, and the parent has the responsibility to set up the correct user account in the OS. On older OSes, the parent would have to configure this setting directly in the browser and put the browser into a "child" mode.
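Server-side, the idea sketched above might look something like this. To be clear, the `X-Age-Bracket` header and its values are invented here for illustration; no such standard exists today:

```python
# Hypothetical server-side check. The "X-Age-Bracket" header and its
# values ("under13", "13-17", "18+") are invented for illustration;
# no such standard header exists today.
AGE_GATED_FEATURES = {"comments", "playlists", "personalized_ads"}

def allowed_features(headers):
    """Return the age-gated features a request may use."""
    bracket = headers.get("X-Age-Bracket", "unknown")
    if bracket in ("under13", "unknown"):
        # No declared-adult signal: assume a child and disable
        # everything that collects personal data.
        return set()
    if bracket == "13-17":
        return {"comments", "playlists"}
    return set(AGE_GATED_FEATURES)

print(allowed_features({"X-Age-Bracket": "under13"}))  # set()
```

A signal like this is only as trustworthy as the device setup behind it, but it at least moves the "how would the site know?" problem to the party that actually can know: the parent configuring the device.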
I think that websites generally do this based on whether the content is primarily intended for children (e.g. youtube doesn't collect this type of data when users are viewing videos marked as being for children).
And this ends up breaking the site for adults as well.
I have a YouTube playlist of videos I'd like to show my kids. It turns out that YouTube will not allow videos marked as "intended for children" to be added to a playlist, even by an account that belongs to an adult. So the only videos I can put in that playlist for my kids are ones YouTube thinks are NOT for kids.
It's not a bug, it's a direct and intentional consequence of the FTC's efforts to strengthen COPPA which this article complains that Google didn't just go along with, and required by the settlement mentioned in the article. COPPA effectively bans companies from collecting data on under-13s (there's a parental consent provision which is basically infeasible to comply with), which includes stuff like letting them add videos to playlists. Google didn't allow under-13s to have accounts, but parents let kids watch videos on their accounts, and the FTC's position is that this was a violation of COPPA when the videos weren't aimed at kids. So now Google cannot, by law, allow people to comment on videos that seem like they're aimed at kids, add them to playlists, or do other things that collect data from the person viewing them.
"We have been successful in slowing down and delaying the [ePrivacy Regulation] process and have been working behind the scenes hand in hand with the other companies," the complaint quoted Google executives as saying in a memo ahead of the meeting. The August meeting could "find areas of alignment and narrow gaps in our positions and priorities on child privacy and safety."
This sounds like a starting point for further investigation.
Perhaps the litigation will reveal more about what is going on inside Big Tech.
Why were they successful? Why can't politicians just do the opposite of what these lobbyists want them to do? Whatever it is they're pushing is likely good for them and bad for society. Just do the opposite.
They want fewer privacy protections for the humans they're exploiting? Add more. Add so many protections they probably won't be able to make any money anymore.
It seems that they did not do anything illegal, so I'm not sure what you are proposing to litigate.
Lobbying lawmakers is a basic practice that happens all the time. And a law is not right in its own right; there are always supporters and detractors.
It's just that we agree as a society to abide by the laws we currently have.
If anything we should invest in more public awareness to make sure strong privacy requirements are signed into the law.
On the other hand, strong privacy laws are a sizable bureaucratic burden on society. So we should be cautious not to over-regulate.
Laws are difficult to change or revoke even if they are found ineffective or too stringent later on.
A high-trust environment, where most of the players do the right thing without the bureaucracy and enforcement, is way more efficient.
NB. This is ongoing antitrust litigation, not privacy litigation. Journalists and the public are not only interested in illegal acts (the law has not kept up with the internet), they are also interested in unethical and harmful behaviour (which, who knows, might someday be made illegal). The judge ordered documents filed in this antitrust case to be unsealed. Thus, irrespective of the success of the states' antitrust claims, the public benefits by learning about Big Tech's conduct.
Twitter threads on this are rather eye opening, if folks haven't seen them, check them out. If this shit is proven in court (and if the judge doesn't hold a large position in defendants, as the case may be), Google / FB could face some _serious_, life changing repercussions.
I knew FB / Google Ads was a hive of scum and villainy (but would, nonetheless, accept the paycheck generated by all that), but did not think it was quite this bad. Much like when the varnish started cracking on both Google and Apple when Eric Schmidt fired the recruiter due to Steve Jobs' displeasure.
You can already see how Google/FB PR shills are trying to narrow down the fallout in the press (and on this site) by pretending this is only (or mostly) about kids' privacy, whereas the real story is rather severely anticompetitive behavior, collusion, and price fixing in ad exchanges, and privacy is merely a footnote in a War and Peace thick tome of alleged wrongdoing. The actual story is 100x bigger, but you wouldn't even know if you hadn't read that twitter thread. This is called "shaping".
Title suffix of "... states allege" removed due to HN length limits. Full title: Google sought fellow tech giants' help in stalling kids' privacy protections, states allege
This seems like a bad day for Google (and 'fellow tech giants'.) I wonder if there will, for once, be any actual consequences that will lead to change.
If not, then I don't know if these mega-corporations are perhaps to be considered untouchable, which would cast serious doubt on our government; or rather, erase most doubt remaining about the degree of corruption thereof (at least from my perspective.)
Google hasn't been shy about their own cash-for-kids(' data) schemes. It's also pretty wild how similar their market-creation tactics are (and how beneficial they would be) to those of groomers and other child exploiters.
What are advertising and privacy protection laws regarding children like in other countries? Could Google have happened somewhere with stronger consumer and child protection laws?
Feels like something that should be handled at the client, rather than having even more personal info sent to servers. Perhaps something like a face-id that automatically switches the entire device to a kids' profile.