
Yikes. While I understand there is some harm done by the AI leading people down "rabbit holes," I'd still rather have the AI stay independent than have humans manually biasing the results. The results of the AI reveal the worst instincts of humans, but a manually tuned AI can be turned into a weapon for propaganda. That would be a worse result in every way. A naturally conspiracy-theorist AI can be escaped and controlled on a per-person level by thinking logically and curating your own feed and likes; a propaganda machine that serves a political purpose cannot. That really sucks.



You don't think an untuned AI could be used as the same kind of weapon? Figure out how to fake the inputs it wants: spin up thousands of virtual machines watching your propaganda, pay people to "watch" it, apply whatever blackhat/whitehat SEO techniques you feel like putting time and money into, and get it stuffed into everyone's recommendations.


Someone coded the AI; the AI is biased by default. There is no such thing as an "independent" AI.


The danger is when the state, a political party, or some other organization with the state's levers of power controls the algorithm. It won't be flat earth videos that get deranked; it will be anything that contradicts selling us trash and invading other countries.

"That warship wasn't actually bombed by the enemy," "they probably didn't use those chemical weapons," "why is our State Department funding the same terrorists we were fighting yesterday," and "actually, those moderate rebels aren't so moderate" will become the new flat earth.


That's not necessarily the case; the currently trendy deep learning models are mostly black boxes that are not manually tuned.


... Pardon me, but are you under the impression that most neural networks are trained purely by unsupervised learning, with no opportunity for operators to bias their results?

If so, this is a misapprehension. Even unsupervised learning systems are subject to data-based biasing. The selection of input, the decisions on how to partition the dataset, the decisions on how to judge and measure overfitting, and potentially hundreds of other small hyperparameter decisions made by operators can substantially change the output of the system. In the case of the top post, it's very clear that "does this increase time-on-site?" was an operator-chosen scoring function for the results of the system in question.
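To make that concrete, here's a minimal sketch (hypothetical data and field names, nothing to do with YouTube's actual pipeline) of how a single operator decision, picking watch time as the label, quietly becomes the thing the whole system optimizes:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Operator decision: which logged interactions make it into the dataset at all.
    logs = [
        {"features": [0.9, 0.1, 3.0], "watch_seconds": 540, "user_rating": 2},
        {"features": [0.2, 0.8, 1.0], "watch_seconds": 60,  "user_rating": 5},
        {"features": [0.5, 0.5, 2.0], "watch_seconds": 300, "user_rating": 3},
    ]
    X = np.array([row["features"] for row in logs])

    # Operator decision: the label. Training against watch_seconds instead of
    # user_rating means "keeps people on the site" is what gets maximized,
    # even though nobody hand-tunes any individual recommendation.
    y = np.array([row["watch_seconds"] for row in logs])

    model = LinearRegression().fit(X, y)

Nobody hand-ranks a single video in that loop, yet the choice of label determines what "good" means.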

Frighteningly, these decisions are often unexamined and unrecorded. This has led to a series of impressive-sounding systems that can be neither reproduced nor truly audited. [0]

And that's just the approximated function. The tensors it outputs are then further processed in some way, as they're often of no value in isolation. The decisions on how to use those outputs also have a profound impact on how we view the results of learning systems.
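The same point in miniature: even with the model's raw scores held fixed, the post-processing choices (made-up values and a made-up policy below) determine what a user actually sees:

    # Raw model outputs: one relevance score per candidate video (made-up numbers).
    scores = {"video_a": 0.91, "video_b": 0.88, "video_c": 0.40}

    # Post-processing decision: rank purely by score and keep the top k.
    # A different policy here (demoting near-duplicate channels, mixing in
    # exploration, filtering by topic) would surface very different videos
    # from exactly the same model output.
    top_k = 2
    recommendations = sorted(scores, key=scores.get, reverse=True)[:top_k]
    print(recommendations)  # ['video_a', 'video_b']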

If anything, we're not cautious enough as a society about using this technology. We see all sorts of craziness in the news, wrapped up in the over-hype. We see law enforcement failing to use even basic tools correctly [1] because they're so excited.

[0]: http://science.sciencemag.org/content/359/6377/725

[1]: https://gizmodo.com/defense-of-amazons-face-recognition-tool...


You're right, of course, though that's still quite a distance from the deliberate political interference described in the OP. I find business goals like increasing engagement morally neutral compared to political goals like shaping people's views.


The OP didn't say they were deliberately political. He said they were biased by default.


Here, however, it explicitly is the case.

From the linked article:

> “We designed YT’s AI to increase the time people spend online, because it leads to more ads.”


That isn't a matter of manual tuning, though. It's a matter of putting engagement metrics inside the cost function, such that viewership is part of the training loop.
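Roughly the distinction being drawn, as a toy sketch (a made-up single-feature model, not anyone's real system): no one tunes individual results by hand, but logged watch time sits directly in the loss, so viewership feeds straight back into the weights.

    # Toy feedback loop: the "cost" is squared error against logged watch time,
    # so what viewers actually watched directly shapes the model's weights.
    # (Made-up single-feature linear model; not YouTube's actual system.)
    sessions = [(1.0, 9.0), (2.0, 21.0), (3.0, 30.0)]  # (feature, watch_minutes)
    weight, learning_rate = 0.0, 0.01

    for _ in range(1000):
        for feature, watch_minutes in sessions:
            predicted = weight * feature
            error = predicted - watch_minutes          # engagement metric in the loss
            weight -= learning_rate * error * feature  # gradient step on squared error

    print(round(weight, 2))  # ~10: "recommend more of whatever got watched longest"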



