
From that PR it seems partial compliance is what got them burned. They obviously knew which ads were political, because they put up some information about them, but not all of it.

If they had simply not complied at all and tried to blame their customers for not classifying the ads correctly, it might have gone a bit better for them.

Related, it's still insane that companies can basically argue "this ad that thousands upon thousands will see cannot be reviewed by hand even once because that would be too hard".




Funnily enough, I was involved in a project that did dynamically generated real-time adverts (specifically, updating betting odds in real time). It crashed and burned because Facebook insisted every advert had to be reviewed and approved beforehand.


Yeah I think that would only really work as an in-house feature.


> If they had simply not complied at all and tried to blame their customers for not classifying the ads correctly, it might have gone a bit better in their favor

This would have been blatant contempt of court. What they did was slightly less blatant.


How so?


> How so?

Facebook and Washington "entered into a stipulated judgment that covered Meta’s violations through November 30, 2018" in 2018 [1]. (It required Facebook "no longer 'accept ads that relate to Washington’s state or local elected officials, candidates, elections or ballot initiatives.'")

Not complying at all with requests from the AG, i.e. arguing they had no knowledge of the situation (rather than that they misunderstood the rules), could thus be interpreted as willful violation of the judgment.

[1] https://agportal-s3bucket.s3.amazonaws.com/174_StatesMSJ.PDF page 3


> contempt of court

what


They got done for not publishing data that, from my quick skim, would not come from human review, e.g. targeting data.

I can only assume Meta feels the risk/cost of fines is less than the benefit of not disclosing who pays them big dollars for targeting certain demographics, i.e. not providing ammunition for lawmakers to come after them in more costly areas.


This is more a "they don't want the public to know" than law makers. If law makers want that data, they can subpoena it. Companies don't have any sort of privacy rights to prevent governments from pulling their data.

What would happen if Meta defied a subpoena? IDK TBH. Perhaps a total shutdown until they comply? IDK if a company has ever defied a congressional subpoena.

This case was about civil liability so the rules are all a little different. Not giving data during discovery generally means adverse inferences are made by the court (i.e., this data is so bad that we have no choice but to believe what the plaintiff is saying about it).

Defying subpoenas, though, generally results in contempt and fines. For an individual, that'd mean jail time.


Hard = cut into our profits.

If it can’t be automated, Facebook isn’t interested in complying.


Because it is too hard while still giving the SMB market access to the platform.

Requiring a human to review each ad increases the fixed cost of serving a campaign.


Surely a cutoff in impression #s for hand review would be feasible? Even if the cutoff started absurdly high as logistics were ironed out. And then no cutoff for certain classes of ads, like political.


They need to argue that, because otherwise it would eat into their bottom line to the point of unprofitability. You can't run a service for billions of people with fewer than 100k employees that also has an editor review everything.


If it's unable to comply with the law and stay profitable, then maybe they should go out of business.


Yeah, I completely agree. I wasn't trying to imply that I condone their behavior.


Or change their business model.


Fully agree! See also "loud pipes save lives".


"We can't follow legal requirements at this scale" isn't an excuse we should be accepting from companies that want to continue operating.


That's a terrible argument: the thing to be reviewed (advertisements, not "everything") is directly connected to a revenue stream, and the time spent on reviews scales with the income they bring in.


How do they charge per advert? If I were paying someone hundreds - or thousands - of dollars, I'd certainly expect some degree of personal service.


The "personal service" is that they have created an entire social media platform just to serve your ads.


Billions of people are not buying Facebook ads.


I am curious how other, similar platforms (like YouTube) handle this kind of problem. If someone could provide some insight, it would be great.


They are all the same way, maybe even worse. Google is notoriously allergic to having real people in any step of a workflow, even for paying customers. Twitter is currently going through its own battle with the fact that a huge chunk of the userbase isn't even real. Amazon product quality is going down due to lack of manual quality checking of their sellers. And so on. The culprit is always the same: real people don't scale to the expectations of shareholders, so everything gets automated, and then some very real people show up and exploit the system.


If they can afford to buy personal information about their users, they can afford basic compliance.


You: "If they complied with the law, then they wouldn't be profitable."

It seems your conclusion is, "Therefore they should be above the law."

No, that isn't how it works. They *must* comply with the law, and if they cannot be profitable without being a criminal entity, then they should go out of business.



