
"I keep on hearing AI claimed as an extinction level threat yet never has an actual mechanism been given - just a pile of tropes taken as dogma."

I believe that is the point I'm making: by digitizing the human experience and optimizing certain easily-optimized chains of thought, you will never see the mechanism; nor will I.

There is no mechanism in the sense you seem to be asking for.

This is a preponderance-of-the-evidence argument, not a geometric proof, so even if we completely understand one another, it's perfectly fine for you to feel the point hasn't been made and for me to feel that it has.

We all know of situations where people are given what they ask for and it ends up destroying them. People who suddenly win the lottery generally don't have a bright future ahead of them. People with paranoia issues who spend a lot of time alone, off their medications, researching things usually don't end up in a good place. Rebellious youths experimenting with opiates are in a dangerous place. Isolated social groups with tight moral strictures have problems in a larger secular society.

I don't know how many of these situations you'd like listed, but there are easily dozens, and that's speaking generically. Once you start individually customizing the scenarios, say, an isolated youth with tendencies toward paranoia living in a tightly-controlled social group, the scenarios expand without limit.

And that's what current AI promises: customized experiences in various situations, based on all sorts of variables you and I may never have considered. Do this with every person, in more and more situations, and the impact becomes undecidable. Yes, you don't see it; the reasoning doesn't seem to hold up. That's because if I could make a specific case about one particular scenario, it wouldn't apply to the argument I'm making.

I wish I could say we're performing a wide-scale social experiment we've never seen before. But the word "experiment" implies a lot of agency that isn't there. We're just mucking around with millions of variables simultaneously across a population of billions and telling people that because there's nothing obviously bad to be seen, nothing bad must be there. Then we end up reading vague studies about how teens who use their cell phones more are unhappier than those who don't, and we're unable to process that information in any reasonable context. We're expecting to be able to reason about AI, but if we could do that, we wouldn't need the AI in the first place.


What you're spouting is just religion all over again.

There will always be a certain segment of the population that is into drugs, into religion, into politics, into the mindless entertainment provided by YouTube, ad nauseam.

But it will never be all of humanity, or even most of humanity. The trash still needs to be collected, the electricity still needs to be generated, the food still needs to be made and distributed. And the country still needs to be run.

The ones doing these things are going to understand reality well enough to value doing them.

The danger of AI isn't in a long slow destruction of humanity, it's in a flash event that wrests control from us such that we can never regain it.

Now, whether or not that can, or will, happen is up for debate.

But this argument, that AI is slowly going to destroy us because we're all going to slowly start valuing what it tells us over "real life", is just the same old morality argument surrounding religion, reskinned. It just means you understand their perspective, or share their need to enforce their worldview on others.


No, I'm not. Religion is a formalized system of causality about things we do not understand: the sky god wants us to eat grapes; we do not eat grapes; there are floods; therefore we must eat more grapes. It's not wrong or right; it's non-rational.

Religions know how things work; you just can't reason with them. I'm arguing from ignorance: we cannot know. My only additional point is that not only can we not know, we cannot know in a billion different scenarios. Odds are that many of those scenarios will work out poorly. That's the only "point of faith" my argument calls for, and it seems to me a reasonable thing to believe.

You seem to feel that this will be a disastrous thing. It's interesting to me how people who don't see problems with AI keep insisting that there must be some huge, horrible result. If there were, as you point out, people wouldn't do it.

You also seem to assume that I'm making some sort of moral value judgment. That's interesting to me as a drug-legalization, open-borders libertarian. I wonder what sorts of morals I am supposed to be having?

No morals or religion is required to understand my argument. We humans work as best we can in various-sized social groups based on each of our understandings of cause and effect, flawed as it all is. If we change that in a massive way, the obvious conclusion is that we cannot continue to reason about the results, not that they would be morally good or bad. It then logically follows that, for whatever definition of good or bad you have (moral, utilitarian, whatnot), a lot of bad things are going to happen for which our society has no prior experience. That doesn't seem workable to me.

We gotta stop expecting these arguments to play out in some grand fashion. Boundless optimism vs. religious fear might be a great plot for a movie, but it's highly doubtful the future is going to play out like that at all.


Do you have a blog? If so you should consider writing a post that synthesizes your last few comments in this thread - AI, democracy, and religion. I say that, selfishly, because I would quite like to read it.


Two things.

1. You're tilting at windmills here; I gave no opinion on what I think the result of AI will be.

2. You didn't understand the comparison to religion.

You could literally take your arguments and reskin them as religious points.

One could even imagine this exact discussion happening when humanity first discovered drugs: because they feel good, all of humanity will eventually be hooked on them, yada yada yada. Only that presupposes there's no value in procuring the drugs themselves, because the second you need a certain segment of the population procuring those drugs, you have people who 1) have a lot of power, and 2) have a reason for existing beyond simply taking drugs. In other words, the argument contradicts itself.

Now, if an external force had been able to get all of humanity hooked on drugs in a very short amount of time (and took care of the procurement), then the prediction would hold, because procuring the drugs would no longer be valuable to humanity.

The danger of AI isn't that we're slowly going to lose ourselves as we all become mindless zombies watching entertainment. The danger is that, as in the drug example, those who procure AI are going to have a lot of power, and if AI itself ever becomes independent of humanity, we could lose all control over our own destiny.

And to loop this back to the religion comparison: there are always people in this world wanting to impose their worldview on others. Which is why your arguments can be reskinned so easily as religious arguments. They use the same techniques you're using here.
