(The deliberate construction of an AI that pays people to produce lies to damage other people's health and welfare sounds like a farfetched science fiction plot. Accidentally building one, and then trying to deny responsibility for it, is a far more plausible thing for humans to do to ourselves. Just as Schneier talked about the "Exxon Valdez of privacy", what we have now is the tetraethyl lead of video entertainment.)
I think as a bunch of humans who understand this we have to really start working on saying "no" to the grey area of the predictions.
 - https://nulldata.substack.com/p/can-artificial-intelligence-...
It'd also be great if it didn't recommend for every video I watch, Jordan Peterson on his social justice sprees complaining about the newest thing that he's offended by this week. That got old last year.
No, it did not. By some takes, society is in its most dire crisis ever and he's telling people they can do something other than panic. That's popular, get over it.
The crisis is beneath your nose.
> No, I just want to watch my nature docs, I don't need people trying to recruit me to fight in their own personal culture wars.
That is a healthy attitude, good for you! Jordan Peterson's not reaching out to people who are well adjusted and who have something going on in their lives. He's presenting an alternative to resentment, the particular kind of resentment that lends all too well to scapegoating.
If you are not put upon by his ideological opponents, that's great; but many people are, and I like personal responsibility as a message a heck of a lot better than blaming a merchant class, a race, or an unfalsifiable conspiracy: the latter of which seems to be the popular alternative to a narrative of self-ownership.
The fact that you, a particularly unusual person, are not interested in his work is no indication of whether or not it is popular enough to warrant promotion by YouTube's algorithm. Because people YouTube considers similar to the natural audience of that work will tend to watch it at length when given the choice, YouTube figures it ought to promote it. That seems like the way a recommendation engine ought to work. If not that way, then how?
Added, to sum up: I think it is at least morally acceptable that YouTube has a recommendation system based largely on how much time you are likely to spend watching the content. If that content is monetized, it is not pleasant that they would prefer monetized content, but it is at least defensible. The fact that you do not like the particular content mentioned by the parent comment is neither here nor there.
Who do you think the book is for? Successful, productive people who know why they wake up in the morning?
If you had nothing to learn from his mass-market self-help book, good for you! Congratulations on being well-adjusted.
I wonder to what degree YouTube is responsible for the current political climate.
The same with Twitter, really. When you only follow and see posts from people you choose, how are you supposed to give important issues a fair shake and debate viewpoints that differ from yours?
It's all just rumor and innuendo these days. The news sources people trust and find credible cite "anonymous sources" saying, for example, that Trump colluded with Russia or that the Clintons are doing various unspeakable things, and then you're arguing with these supposedly credible anonymous sources that each person reads and trusts. Since the sources are anonymous and nobody actually goes on the record, it's just a bunch of back and forth with no basis in facts except which news sources people trust and find authoritative.
If not... what's your point caller? All I can see in your statement is someone trying to avoid challenging an idea they don't like on its merits.
Actually, I'm surprised at how many participants in public discourse today, especially amongst the so-called "educated" classes, are actually willing to make such arguments with a straight face and even give them credence. All it is is ad hominem... a logical fallacy. Shameful state of affairs. (...and maybe it was never different and I'm just noticing it more now...)
I'm saying that he might not be right all the time, not that he is definitely wrong all of the time. It is possible that his "democracy is fundamentally broken" beliefs are correct, but we shouldn't just say "ooh, Aristotle was smart and believed this, so it's a reasonable claim" and move on.
I think he is wrong about democracy and I bring up other cases where he is wrong so people will think twice about his beliefs about democracy.
When appeal to authority is invoked, it is perfectly reasonable to question the validity of said authority.
edit: 6 years isn't long enough to figure it out. Let's say a decade.
Everybody is allowed to vote, as long as they are white: felt like a real democracy, but it's a fake.
Everybody is allowed to vote, except women: felt like "the best democracy" once upon a time, but is also fake. Not even a half democracy.
Many of the things that are normal today could (and will) be deeply embarrassing in the future, and are about as similar to a real democracy as a glitter-coated plastic imitation of a turd.
A real democracy is difficult to find, but really easy to spot. It must fulfill these three rules: 1) one adult citizen, one vote; 2) all citizens are equal, and therefore all of their (legally issued) votes score the same. No vote can count double or x100; the only vote scoring less than the others is the x0 blank vote. AND 3) anyone can stand as a candidate if they so desire, and participate on an equal footing.
If one of these three rules doesn't apply, you have spotted a fake. Everybody can vote freely and is encouraged to vote, but only one party is allowed to stand? That's rule 3: you have found a dictatorship.
If there is a group of people legally allowed to vote, who cast valid votes, but whose votes are then zeroed out and removed in post-production, then it's rule 2.
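The three rules above can be sketched as a small check. This is just an illustration of the comment's criteria; every field name here is invented, not taken from any real system:

```python
def classify(election):
    """Return the first rule the election breaks, or "democracy" if none.

    `election` is a hypothetical dict describing an election:
      excluded_adult_citizens: adult citizens barred from voting
      vote_weights: the weight applied to each legally issued vote
      open_candidacy: whether anyone may stand as a candidate
    """
    # Rule 1: one adult citizen, one vote -- nobody eligible is excluded.
    if election["excluded_adult_citizens"]:
        return "fake: rule 1"
    # Rule 2: every legally issued vote scores the same; only blank votes score 0.
    if set(election["vote_weights"]) - {0, 1}:
        return "fake: rule 2"
    # Rule 3: anyone can stand as a candidate on an equal footing.
    if not election["open_candidacy"]:
        return "fake: rule 3 (dictatorship)"
    return "democracy"

print(classify({"excluded_adult_citizens": [],
                "vote_weights": [1, 1, 0, 1],
                "open_candidacy": True}))   # -> democracy
print(classify({"excluded_adult_citizens": [],
                "vote_weights": [1, 2, 1],  # some votes count double
                "open_candidacy": True}))   # -> fake: rule 2
```

The "votes zeroed in post-production" case would show up as weights outside {0, 1} (or as eligible voters silently moved into the excluded set), which is exactly what rule 2 catches.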
1: Why adults? Why citizen? How do you define each?
2: Why make them equal when you just excluded minors and non-citizens entirely — e.g. you could make it so 13-18 year olds get counted as (years less than 18)/5 of a vote, why is that bad?
3: why is that important, why not require passing a basic civics qualification?
Meta: when non-white people were disenfranchised, were they also treated as “not people”? If so, what other categories currently considered “not people” should in future be treated as people? (If any)
There is a worldwide consensus that people from other nations are not allowed to participate in elections for the government of a country. We would need to change the legal definition of a sovereign country to allow it, and the benefits are unclear (for a start, the president of every small and medium-sized country would be Chinese).
There is also a worldwide consensus that laws have to treat minors differently from adults: more indulgently. They have different rights and duties. Minors are expected to make mistakes.
Allowing a toddler to vote for a party would imply first explaining what terms like "left", "my left, not your left", "right", "communism", "liberalism", "populism" or "fascism" mean, when they should be smashing windows with baseballs instead. This would come close to indoctrinating boys and girls and forcing them to choose a side (and deal with the unavoidable consequences) in an already confusing and complex phase of their lives.
> 3: why not require passing a basic civics qualification?
Let people decide with their vote whether a candidate is a politician or a clown. That is the purpose of voting. If the semi-automatic lonely boy is democratically elected by the majority, the people have spoken, and I'm fine with that.
The US fails due to the electoral vote issue, and the EU on the fact that smaller countries have more representatives per citizen in the EU parliament than larger countries do.
It would be interesting to know if there actually is a democracy that fulfills those rules...
AI is an extinction-level threat to humanity, but when you say that, people think you're talking about Terminator robots laying waste to the landscape. The scenario this article talks about is much more realistic: hundreds of little stupid AIs tearing apart the inner workings of our humanity by giving us exactly what we want (but not what we need).
I keep on hearing AI claimed as an extinction level threat yet never has an actual mechanism been given - just a pile of tropes taken as dogma.
Let alone the fact that if humanity does something stupid enough with basically anything, it could be an extinction-level threat. A comet and a sufficiently large number of people reenacting Heaven's Gate would be an extinction-level threat with no inherent technology, not even communications.
Mass adoption of plutonium codpieces/IUDs and deliberate refusal to recognize the effects of radiation poisoning for fear of effect on commerce or pride could be an extinction level event.
AI won't save us from being a pile of complete idiots unless it is vastly superhuman but it cannot be blamed for death by stupidity.
I believe that is the point I'm making: by digitizing the human experience and optimizing certain easily-optimized chains of thought, you will never see the mechanism; nor will I.
There is no mechanism in the sense you seem to be asking for.
This is a preponderance of the evidence argument, not a geometric one, so even if we completely understand one another, it's perfectly fine for you to feel the point hasn't been made and I to feel like it has.
We all know various situations where people are given what they ask for and it ends up destroying them. People who suddenly win the lottery don't generally have a bright future ahead of them. People with paranoia issues who spend a lot of time off medications alone researching things usually don't end up in a good place. Rebellious youths experimenting with opiates are in a dangerous place. Isolated social groups with tight moral strictures have problems in a larger secular society.
I don't know how many of these situations you'd like listed, but there are easily dozens, and that's speaking in a generic sense. Once you start individually customizing the scenarios, say an isolated youth with some tendencies towards paranoia living in a tightly-controlled social group, the scenarios expand without limitations.
And that's what current AI promises, customized experiences in various situations based on all sorts of variables you and I may never have considered. You do this with every person, in more and more situations, and the impact is undecidable. Yes, you don't get it. The reasoning doesn't hold up. That's because if I could make a specific case about one particular scenario, it wouldn't be applicable to the argument I'm making.
I wish I could say we're performing a wide scale social experiment that we've never seen before. But the word "experiment" implies a lot of agency that isn't there. We're just mucking around with millions of variables simultaneously across a population of billions and telling people that because there's nothing obviously bad to be seen, nothing bad must be there. Then we end up reading these vague studies about how teens who use their cell phones more are more unhappy than those who don't -- and we're unable to process that information in any reasonable context. We're expecting to be able to reason about AI, but if we could do that, we wouldn't need the AI in the first place.
There will always be a certain segment of the population that is into drugs, into religion, into politics, into the mindless entertainment provided by YouTube, ad nauseam.
But it will never be all of humanity, or even most of humanity. The trash still needs to be collected, the electricity still needs to be generated, the food still needs to be made and distributed. And the country still needs to be run.
The ones doing these things are going to understand the reality enough to value doing them.
The danger of AI isn't in a long slow destruction of humanity, it's in a flash event that wrests control from us such that we can never regain it.
Now, whether or not that can, or will, happen is up for debate.
But this argument about how AI is slowly going to destroy us because we're all going to slowly start valuing what it tells us over "real life" is just the same old morality argument surrounding religion, reskinned. It just means you understand their perspective, or their need to enforce their world vision on others.
Religions claim to know how things work; you just can't reason with them. I'm arguing from ignorance: we cannot know. My only additional point is that not only can we not know, we cannot know in a billion different scenarios. Odds are many of these scenarios will work out poorly. That's the only "point of faith" my argument calls for. It seems to me to be a reasonable thing to believe.
You seem to feel that this will be a disastrous thing. It's interesting to me how people who don't see problems with AI keep insisting that there must be some huge, horrible result. If there were, as you point out, people wouldn't do it.
You also seem to assume that I'm making some sort of moral value judgment. That's interesting to me as a drug-legalization, open-borders libertarian. I wonder what sorts of morals I am supposed to be having?
No morals or religion are required to understand my argument. We humans work as best we can in various-sized social groups based on each of our understandings of cause-and-effect, as flawed as it all is. If we change that in a massive way, the obvious conclusion is that we cannot continue to reason about the results, not that they would be morally good or bad. Then it logically follows that, for whatever definition of good or bad you have (moral, utilitarian, whatnot), a lot of bad things are going to happen for which our society has no prior experience. That doesn't seem workable to me.
We gotta stop expecting these arguments to play out in some grand fashion. Boundless optimism vs. religious fear might be a great plot for a movie, but it's highly doubtful the future is going to play out like that at all.
1. You're tilting at windmills here; I gave no opinion on what I think the result of AI will be.
2. You didn't understand the comparison to religion.
You could literally take your arguments and reskin them as religious points.
One could even imagine this exact discussion happening when humanity first discovered drugs. Because they feel good, all of humanity will eventually be hooked on them, yada yada yada. Only that presupposes that there's no value in procuring the drugs themselves, because the second you have a certain segment of the population procuring those drugs, you have people who: 1) have a lot of power, and 2) have a reason for existing outside of simply taking drugs. In other words, the argument contradicts itself.
Now, if an external force had been able to get all of humanity hooked on drugs in a very short amount of time (and takes care of the procurement), then the predictions would be possible because procuring the drugs is no longer valuable for humanity.
The dangers of AI are not that we're slowly going to lose ourselves as we all become mindless zombies watching entertainment. The danger is that, like the drug example, those who procure AI are going to have a lot of power, and if AI itself ever becomes independent of humanity then we could lose all control over our own destiny.
And to loop this back to the religion comparison, there are always people in this world wanting to impress their worldview on others. Which is why your arguments can be reskinned so easily as religious arguments. They use the same techniques you're using here.
Out of curiosity I decided to install every IRC client I could find, connect to a few servers, open a good number of channels, and learn to use them one by one while looking at memory and CPU usage.
Like many I have deep thoughts. Mine are as hard to find for their potential audience as the many are for me. We do however pay a lot of attention to people who make a lot of noise.
I had this hypothesis that people copy other people's political ideology, which is itself copied from others, in long chains that, rather than starting with a person's deep objective thoughts, are just connected in loops long enough for us not to notice, with a number of thinking nodes insignificant to the result. [Let's call them dictator nodes for laughs.]
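Stated graph-theoretically, the hypothesis is that the "who copies whom" graph is mostly closed loops with few originating thinkers. A minimal sketch of checking for such loops (the names and the tiny example graph are invented for illustration):

```python
def find_cycle(influences):
    """Depth-first search for a cycle in a directed 'who copies whom' graph.

    `influences` maps each person to the people they copy their ideology from.
    Returns one cycle as a list of nodes, or None if the chains all terminate
    in an originating thinker.
    """
    visited, on_stack = set(), []

    def dfs(node):
        if node in on_stack:                 # chain closed on itself: a loop
            return on_stack[on_stack.index(node):]
        if node in visited:                  # already explored, no loop via here
            return None
        visited.add(node)
        on_stack.append(node)
        for nxt in influences.get(node, []):
            cycle = dfs(nxt)
            if cycle:
                return cycle
        on_stack.pop()
        return None

    for start in list(influences):
        cycle = dfs(start)
        if cycle:
            return cycle
    return None

# A chain that closes on itself, with no originating "deep thinker" anywhere:
copies_from = {"alice": ["bob"], "bob": ["carol"], "carol": ["alice"]}
print(find_cycle(copies_from))  # -> ['alice', 'bob', 'carol']
```

If every chain instead ended at someone with no incoming influences, `find_cycle` would return `None`: that is the "starts with a person's deep objective thoughts" case the hypothesis denies.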
In order to test this rather absurd hypothesis I took the entire list of US presidential candidates and looked at their social media.
This quickly confirmed the loops exist. Fuck, my hypothesis was optimistic compared to reality: I found that close to 100% had Facebook pages and YouTube channels that didn't enjoy enough traffic to account for even the friends and close relatives of the person.
Eventually I worked my way up to the green party, they had 250 views on their youtube channel. I wondered about the meaning of it.... what does it mean?
I think it means even journalists didn't bother to look at it. The huge apparatus of international journalism did not bother to look beyond the top 5 candidates most screamed about, while I took a look at a really large number of them.
For democracy to work we ALL need to look at the menu, then make up our own minds. Instead, what we got is NO ONE looking at the ideas. If 99% of the population had exactly the same opinion about everything, and one of us wrote it into a political program, no one would vote for it.
The point I'm trying to get to is this: we've already built the machine that contains us. Small groups of people who cared about something gathered and implemented their ideas. These things are now bolted down so firmly that unmaking their actions takes such an absurdly unrealistic amount of effort that we can at best imagine doing it. We have millions of implementations like that, and they are here to stay. It doesn't even need to be stupid; the idea could have been brilliant 200 years ago.
The stupidity in artificial stupidity will not be in the AI; the system will continue to "liberate" humans from having to think deeply, which will move us further from a position of influence. If it does a bad job, it will actually be beneficial to the end result.
I'd suggest it's less a challenge to democracy and more an affirmation of the criticality of a fact-based, unbiased 4th estate, and the scary effects of anyone with a comb-over and green screen being able to make their own "news" show.
You’re right as well of course about anonymous sources being an issue, but your “politicians in power” comment is an unnecessary straw man to the conversation.
I've even blocked people on Twitter who I've never interacted with simply because people I follow keep retweeting their US politics stuff and I don't want to read it.
I do follow a few financial and international relations analysis people who I trust for reliable news. A manually, not automatically, curated feed. Otherwise I'm mostly reverse-engineering the news from the jokes made about it. And I get zero of it from Youtube or Facebook.
Maybe some people do fine with that, even if from some other people's perspective they'd call it an illusion.
I don't really care if my happiness is perceived as an "illusion" by some other people, as long as I myself consider it happiness.
As long as these people are still allowed to vote, it's fine by me. They may vote for "stupid" decisions, but democracy should still allow for that.
Not the failure of democracy.
As long as people have the right to vote, it is a democracy.
It's not the concern of democracy if the people vote for the "wrong" politician.
If they are not allowed to vote then they didn't exercise power.
(Is it just me, or does our disagreement sound like two people arguing if a tree falling alone in a forest makes a noise, because one thinks “noise” is the vibration and the other thinks it is the qualia?)
The decision or the intention behind the vote has nothing to do with democracy itself.
Someone who casts a random vote is as valid as someone who puts a lot of thought into their ballot.
It doesn't matter if, let's say, the majority of people vote by throwing dice; it's still democracy.
I wonder to what degree the mass media are responsible for the current political climate.
"AI that fails" today will improve in the future, and its failures are already being mitigated in use.
Then there's the toothless "dumbed down" section:
"Some artificial intelligence machines and programs are deliberately ‘dumbed-down’. This marks an entirely different take on the term artificial stupidity. By putting spelling errors in typed messages, not adhering to strict grammar and so on, AI seems less intelligent. These (fully intentional) errors are coded into the system with the goal of creating AI that appears human."
Fact remains, the current AI paradigm provides tremendous economic value and is merely an extension of the economic striving for automation. Harping on AI has become a bit of a trend and the author calls for cynicism but ends up with a pretty toothless analysis.
On top of all that forcing a certain context across the board can have countless unintended consequences.
I have no doubt that one day someone may win big at poker in a casino. Someone may; or they may not - of this I have no doubt.
I looked around for a Duolingo-style "report bad translation", "rate" or "give feedback" action but couldn't find one anywhere on that screen.
Artificial stupidity seems like a natural step on the way to artificial intelligence, but we would likely benefit from building in some training wheels.
If you're using the application, it's the hamburger button, then "Help & feedback"; "Feedback" will be located at the top right.
Source: Carlo M. Cipolla, Professor of Economics at UC Berkeley:
Despite widespread attempts by humans to eradicate it, it exists everywhere, regardless of race/class/education/gender/orientation/profession or any other sub-division you can think of. You'll find roughly the same amount of stupidity in a gaggle of university professors as you will in a humdrum of railway attendants (I made up those collective terms, in case it wasn't obvious).
No more please (covers eyes and ears)!
Another issue is how we humans emotionally relate education with self-worth in society. Ignorant people don't like to be called ignorant; educated people telling them they need to be smarter is insulting. So there is a tendency to distrust educated people, because the ignorant person's perception is that educated people think they are bad or broken and need to be fixed. If they don't feel bad or broken, then there must be something more to this, why are we being manipulated... (conspiracy can then take over)
So perhaps we should rethink how to approach the problem and realize that ignorant, impressionable people are a given. Instead of trying to fight propaganda with education, we should disguise education as easy-to-digest propaganda. Don't fight fiction with facts, because you're always going to lose. Just start your own propaganda machine and hope you can outsmart the stupid. Sure, it's manipulation, but how much longer can educated people stay righteous while the world is being burned down around them by the ignorant?
How is this different from what we are already doing?
I do not just mean that as mindless snark. There's a selection effect, where children can only handle even more radically simplified versions of reality than even our feeble adult minds can handle, so by necessity, it's already been processed down to that level because nothing else would stick even a little. A common example is "The civil war was caused by slavery", which gets mocked, but it's not like there's any explanation you can fit into a middle-school textbook that isn't just as simplified and loaded with the biases of the simplifier. (Which themselves are the result of the bias-loaded simplifications that they were themselves fed....)