
It feels like debate is a generation behind practice here. The greatest danger isn't from data collection, but from algorithmic recommendation.

The data collection horse is out of the barn, has legitimate uses, and is an incredibly thorny issue to legislate appropriately, one that will be adjusted for the next 20+ years.

What needs to be done now is to say that (1) at a certain user count ("too big to democratically ignore") then (2) production recommendation algorithms must be auditable.

Allow the details of audits to be kept secret from public record, but the DoJ should be able to go to Google, Facebook, Netflix, or TikTok and say "Show me how this works, now."

It's burying your head in the sand to suggest this doesn't have a clear impact on democracy. Right now it's solely in the control of private companies and completely opaque, and it's the opaqueness that's the biggest danger.

Fundamentally, modern media/social is different from everything that came before because of the economic feasibility of microtargeting. Newspapers couldn't afford to track their customers individually & print a unique paper for each of them, which resulted in an auditable public record. All of the companies above can do exactly that, without leaving any public record.




> Show me how this works, now.

Let's say we had this. We had some ML/AI team go up to Congress/DoJ and show them some tensors, some "backpropagating LSTM" horseshit. How does that help the people who don't understand how the ad model works?

Hell, I bet there is _no one_ using AI or tensors or any of that shit who can tell you _why_ their model output Foo instead of Bar.


Just because the 84-year-old Sen. Hatch (who retired in 2019) asked a dumb question doesn't mean there aren't smart people in government. (Edit: Although I might have been misreading your ad model sentence as the above)

If explainability is a problem (e.g. "We don't know why our algorithm recommends higher-interest rate loan products to African Americans"), then that's a rabbit hole that needs to have funding put towards digging up as well.
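
If explainability work sounds abstract, here's roughly what a first pass at such an audit could look like. This is a minimal sketch with a purely hypothetical model and made-up data, using permutation importance to ask which inputs actually drive a recommendation:

    # Minimal sketch, hypothetical model and data: permutation importance
    # as a first-pass answer to "which features drive this recommendation?"
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))                 # stand-in applicant features
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # stand-in "high-rate offer" label

    model = GradientBoostingClassifier().fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: mean accuracy drop {imp:.3f}")

This only tells you which inputs matter, not why the model uses them the way it does, which is exactly the gap that needs research funding.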


> doesn't mean there aren't smart people in government.

Agreed, and I was flippant to make a point. I did not mean to say there aren't smart people in government. But I would venture to say that people who understand ML/AI are _not_ working in the government. Or, put more correctly, the _odds_ of a person working in government being up to date on how an LSTM works today are so low that it would be fair to say it is extremely unlikely that anyone on any board would be smarter than Sen. Hatch _in this domain_.

> then that's a rabbit hole that needs to have funding put towards digging up as well.

Agreed, and that is the larger point I was trying to make: we currently do not, no matter how smart you are, have _any_ way of knowing what your "algorithm" is doing. Or if you do, it's a rule-based system and isn't AI/ML. We should, though! And I think pursuing that will give us a much higher rate of return than the black-box strategy everyone seems to be using.


The government can retain its own experts for opinions, just like how in court they can get expert testimony rather than relying on the judge or jury to know and decide all.


Many government agencies use AI/ML heavily. It's possible that the US Dept of Defense spends more on AI than any other institution in the world.


Instead of "the government", how about an independent NGO composed of industry or ex-industry ML/AI folks as well as academic or government people? There's a fairly robust emerging group of "ethics in tech" people who have worked on the industry algorithms and could be resources if there were policy requirements for audits.


Completely agree. Just like we have financial audits for a reason, we also need some form of algorithm audit process. Present-day algorithms optimised for engagement and ad-revenue growth are a clear and present danger to democracy as we have known it.


Auditable for what, though? What would you find there that you would have the government ban?


Preferential treatment of anti-democratic messages and rhetoric or anti-US sentiment.

This should be measurable.
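
To make "measurable" concrete, here's a minimal sketch with hypothetical audit counts (and note that labeling the content categories is itself the hard, contested part): compare the rate at which two categories of items get surfaced, then test whether the gap is statistically real:

    # Minimal sketch, hypothetical counts: is category A surfaced at a
    # higher rate than category B? Labeling the categories is the hard
    # part; this only tests an already-labeled gap.
    from statsmodels.stats.proportion import proportions_ztest

    recommended = [4200, 3100]  # items surfaced, per category
    eligible = [10000, 10000]   # items eligible to be surfaced, per category

    stat, pvalue = proportions_ztest(recommended, eligible)
    print(f"z = {stat:.2f}, p = {pvalue:.4g}")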


That is a massive rabbit hole. "Anti-democratic messages" and "anti-US sentiment" are inevitably going to be used to further undemocratic processes and imperialism. An article titled "The US blew up a hospital" gets flagged as anti-US sentiment, and buried. A journalist covering political contributions gets buried for "anti-democratic messages". This is basically inevitable.


I totally get you and absolutely agree, but we're entering a vastly new world, and your concern isn't the only one anymore.

If a foreign nation's service uses algorithms to promote its interests and quell that of its adversaries or attempts to alter election outcomes to be favorable, then we're in for trouble.

Imagine social media creating worker strikes, riots, getting certain politicians elected, influencing the next generation of kids to prefer limits on free speech, influencing kids to not pursue science and engineering, etc.

This is a complicated issue.

The algorithm can be used for warfare.


But can the government legally restrict that kind of activity? It would be pretty cut-and-dried viewpoint discrimination triggering the most severe First Amendment scrutiny.

Beyond that, I'm not sure how easy it would really be to detect. Seems like the method for that kind of thing would more likely be to have networks of users/bots/whatever that take advantage of facially neutral recommendation algorithms. So would you catch that by auditing the algorithm?


Yea, these convos really make me think "all hope is lost": you get some decent idea fleshed out re: adtech algorithm auditing, and the first thought is "let's root out anti-American sentiment in the algo". Facepalm. Strong Fox News vibes around here these days.


> let's root out anti-American sentiment in the algo. Facepalm. Strong Fox News vibes around here these days

I said measure. We need to understand if we're being manipulated, and if so, figure out what an appropriate response should be. Maybe it's as simple as releasing a report and letting the media talk about it. Knowing gives us the ability to develop a response.

Filter bubbles are relatively new, and it's not yet clear the extent to which they can be used to shape a democratic society.
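
Even the filter-bubble effect can have a crude number put on it. A minimal sketch on toy data: per-user Shannon entropy over the sources a feed recommends, where lower entropy means a narrower bubble:

    # Minimal sketch, toy data: entropy of a user's recommended sources as
    # a crude filter-bubble measure. Lower entropy = narrower feed.
    import math
    from collections import Counter

    def source_entropy(sources):
        counts = Counter(sources)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(source_entropy(["a", "a", "a", "b"]))  # narrow feed  -> ~0.81 bits
    print(source_entropy(["a", "b", "c", "d"]))  # diverse feed -> 2.0 bits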


> We need to understand if we're being manipulated

We don’t need to find that out because we are being manipulated. These are algorithms designed to maintain attention and sell ads. It is impossible for that to not be manipulative. The goal shouldn’t be to understand the algorithms, but to stop them from ruining our lives.


Isn't this an issue that US First Amendment law has dealt with since the founding of the country?

My understanding (vastly simplifying) is that the current standard is: doesn't infringe on the rights of protected classes, doesn't incite imminent violence, and is subject to national security concerns.

All of which are fuzzy lines, subject to court interpretation on a case by case basis, as they should be. (Even national security, which courts have traditionally granted wider latitude to)

All three are plausible rationales under which to bring a hypothetical "antidemocratic" case (to use a catch-all term that encompasses what people seem to be implying).

At core, and from first principles, what "we" probably want as a nation is a sliding scale that moves from primacy of shareholder/owner to primacy of public, according to user count & market percentage.

Aka if you're a 10,000 user app, and you think (non-inciteful) racist content will make you the most money, that's your (terribly immoral) business.

But if you're a 100,000,000+ user multi-app company, then public and democratic interest takes primacy over your shareholders. Don't like that? Spin off some products.


That sounds like a recipe for authoritarianism. Sorry, X thing that I don't like is "anti-US sentiment".


That's why in matters of redress, I always skew towards visibility over modification. The latter is too powerful, in the hands of too few.

I've had this same argument with military friends lamenting the prevalence of ex-military citizens among right wing nationalist groups. IMHO, "make people attend classes to fight indoctrination" is too easily repurposed by a future administration to do the exact opposite.


Agreed. Well, critical thinking classes might be a good idea! Just spoken word Carl Sagan's Baloney Detection Kit on repeat for 8 hours a day.


I hadn't ever listened to that. Queued! Potential downside: no one could unhear "billions"


You're absolutely right, but I don't think you go far enough. If a company is too large to ignore - if they insinuate themselves into "infrastructure" roles - then we need to treat them like government. They need to be subject to FOI requests. They need to justify that their work is in the public interest. They must not be permitted to "lobby" (i.e. bribe officials).

I can't see exactly how we do all this, but the root cause of many a social ill is that corporations are taking over social roles from government. This must be arrested, somehow, or we'll sleepwalk into the dystopian future of every damn '70s sci-fi film.


> The data collection horse is out of the barn

That metaphor means that there is no stopping it, you can't get the horse back in the barn. How is data collection like that? Why can't we make a law stopping it? The EU limits data collection. Apple restricted it. Nobody told them we couldn't put the horse back in the barn.

A good thing about 'software eating the world' is that it's not like saying, 'train rail gauge should be 1 meter wider', which would take incredible capital and labor. The software can be quickly and easily changed.


No, we should not give up on regulating data collection. The GDPR is a straightforward legal framework that gives individuals the ability to opt out of corporate surveillance, including post-hoc auditing and deletion of surveillance records. The US desperately needs an equivalent if we're to have any digital trust that lets us exist between the extremes of "complete abstinence" and "totalitarian free-for-all".



