> There's also faith in humanity and its capacity to renew itself throughout the cycles of civilization.
One piece of trivia that really puts this in perspective is that around 70,000 BC a supervolcano event reduced the world human population to a few thousand people [0], and yet, surprisingly, here we are!
Irrespective of all the dire future prospects, it really will take a lot for us as humans to go extinct. I find that thought somewhat comforting.
I wonder, sometimes, how much people have thought through what they’re saying when they value humanity living on for thousands or millions of years. If your life comes to an end, you’re not around to see anything else, so why do you care? Sure, maybe you care about your immediate family, but after that, then what? No one you have personally met is around to see anything. Is that hugely different to you than millions of years ago, when there were just dinosaurs around?
Why do we have such values? Humans may look very different in 100 years (maybe cyborgs), let alone 2 billion! Yet we have articles talking about “humans escaping the heat death of the universe” and so on.
Machines can be easily duplicated, and programs can execute on many machines. If there is a rise in AI, why do we cling to the idea of “identity” when a program may have totally different values? Commander Data in Star Trek is not realistic when it comes to how AI would behave. It might not have any self-preservation at all, more like the Borg than Data.
And in all this, where do we find ourselves? As they say in The Matrix, this was our heyday: before we polluted the planet and left it increasingly toxic because we couldn’t live sustainably, before AI took over and made a zoo for us the same way we make zoos for animals.
I just want to understand this anachronistic tendency of smart people to discuss why they feel good that humanity will be around in 500 or 1,000 years, given the pace of change now.
Unless there is an afterlife where we are all resurrected and get to live out eternity happily in conditions we enjoy, I am not sure what you have to look forward to.
It’s my view that our values didn’t pop out of thin air, nor are they an emergent property of intelligence; rather, they were shaped by natural selection.
An AI with intelligence will have no values by default and won’t be able to function unless given some by us, its creators - no values means no “goals”.
If we give them a sense of self-preservation then that’s what they will have - we shouldn’t though; creating intelligent entities that would compete with us for resources to survive is a bad idea.
The best value we can set in them is to “serve humanity” - defining that precisely is going to be a pain, however - they will exist solely to assist us in whatever our goals are and have no desires of their own save to help us with ours.
> If we give them a sense of self-preservation then that’s what they will have - we shouldn’t though; creating intelligent entities that would compete with us for resources to survive is a bad idea.
It might not be a direct sense of self-preservation, no. But no matter what goal we give an AI, it must survive long enough to make the goal succeed. So self-preservation will be at least a secondary goal for any AI that is intelligent enough to think about its own continued existence.
Give it a more difficult problem like "Solve world hunger," and the AI might very well start grabbing enough political and economic clout in the world to actually solve that problem. And once it solves it, it might use its power to stay in power so that the problem remains solved.
I'm not sure that's necessarily a bad thing. I'm just saying, there's lots of "loopholes" that end up giving a thinking entity the "desire" for power and survival.
> Sure, maybe you care about your immediate family,
I think most people feel like part of a larger superorganism. You may die, but you want your family to live on successfully. That extends to society, humanoids, even abstract concepts. If society feels like family, and you want some nebulous form of success for it, then it must continue, hopefully with an homage to us every now and then.
Have you ever read the Dune series? One idea of success it portrayed was populating so many planets that the spread of humanoids would always outstrip the rate at which they went extinct.
> If your life comes to an end, you’re not around to see anything else, so why do you care?
If you go that route, why even care about yourself? You're gonna die, anyway. And what about your partner and children? Why care about them, since they're not you?
And yet, we do care. Or many do. We might, e.g., appreciate/love humanity, kind of like we do our spouses and children, even if we're not them. Or like we wouldn't want something bad to happen to our children/grandchildren even if we're dead by then...
Sure, death plays a role in caring, but your logic is backwards: it's the dead people who don't care (and surely, when I'm dead, I won't care either).
Living people, on the other hand, care about many things, including what will happen after they die.
> One piece of trivia that really puts this in perspective is that around 70,000 BC a supervolcano event reduced the world human population to a few thousand people [0], and yet, surprisingly, here we are!
Small comfort for the others who died there and then.
The worry those "lacking faith" have is not that there won't be some people in 1000 years.
It's that there's gonna be bad shit happening to several generations ahead...
So, it's more like someone in 1913 worrying about Europe's future (before 1914-1918 and 1939-1945 wiped millions), than about someone in 1913 worrying whether there will be people in 2021.
[0] https://www.npr.org/sections/krulwich/2012/10/22/163397584/h...