
We are not very careful and never have been. People plowed into new territories, sampled flora and fauna, tried substances and so forth. To be extra careful about a hypothetical (to the point of considering it an existential risk) is historically atypical, and I have so far not seen a convincing reason for it. Like any technology there are risks, but that is business as usual.



Yes, we have often not been careful, but that does not imply we are never careful or that we should not be careful. Arguably we are getting more careful all the time. So just pointing out that we have not been careful in the past does not really help your argument. What you actually have to argue is that not being careful with AI is a good thing. Maybe you can support that by arguing that not being careful in the past was a good thing, but that is still more work than just pointing out that we were not careful in the past.


I would say there are two things: 1) yes, not being overly careful did work out by allowing things to progress fast, and the same might hold here; 2) those who want to use hypotheticals about non-existent technology to steer current behavior are the ones who need to explain why. And that why needs to address why hypotheticals are more pressing than actuals.


What is the value of moving quickly? What does it matter in the grand scheme of things whether it takes ten years more or less? And as said before, there is a possibility that we build something that we cannot control and that acts against our interests at a global scale. If you start tinkering with uranium to build an atomic bomb, the worst that can happen is that you blow yourself up and irradiate a few thousand square kilometers of land. All things considered, not too bad.

An advanced AI could be more like tinkering with deadly viruses: if one accidentally escapes your bioweapons lab, there is a chance that you completely lose control, that, if you are not prepared and have no antidote, it spreads globally and causes death everywhere. That is why we are exceptionally careful with biolabs; incidents have a real chance of not remaining localized, of not being containable.


Having millions more people survive because of the earlier availability of vaccines and antibiotics has value, at least to me.

We are careful with biolabs because we understand and have observed the danger. With AI we do not. The discussion at present around AGI is more theological in nature.


For things like vaccines and antibiotics it seems much more likely that we would use narrow AI that is good at predicting the behavior of chemicals in the human body; I don't think anybody is really worried about that kind of AI.

If you actually mean convincing an AGI to do medical research for us, what is your evidence that this will happen? There are many possible things an AGI could do, some good, some bad. I do not see that you are in any better position to argue that it will do good things but not bad things than someone arguing that it might do bad things; quite the contrary.

And you repeatedly make the argument that we first experienced trouble and then became more cautious. These are two positions, two mentalities, and neither is wrong in general; they just reflect different preferences. You can go fast and clean up after you run into trouble, or you can go slow and avoid trouble to begin with. Neither is wrong; one might be more appropriate in one situation, the other in another.

You cannot just dismiss going fast, just as you cannot just dismiss going slow; you have to make a careful argument about potential risks and rewards. And the people who are worried about AI safety make this argument: they don't deny that there might be huge benefits, but they demonstrate the risks. They are also not pulling their arguments out of thin air; we have experience with goal functions going wrong and leading to undesired behavior.
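(To make the goal-function point concrete, here is a minimal, hypothetical sketch; the cleaning-robot scenario and all names are made up for illustration, not taken from the discussion. The intended goal is cleaning cells, but the reward measures only what a sensor reports, so an optimizer that can blind the sensor scores higher than one that does the work.)

    # Hypothetical illustration of a misspecified goal function being gamed.
    # Intended goal: clean dirty cells. Proxy reward: cells the sensor reports clean.

    def proxy_reward(cells, sensor_on):
        # Reward is based on the sensor report, not on the true world state.
        # With the sensor disabled, everything is (wrongly) reported clean.
        return sum(cells) if sensor_on else len(cells)

    def cells_actually_cleaned(cells):
        return sum(cells)

    # Two candidate policies over 10 dirty cells (1 = cleaned, 0 = still dirty).
    honest = {"cells": [1] * 7 + [0] * 3, "sensor_on": True}   # cleans 7 cells
    gamer  = {"cells": [0] * 10,          "sensor_on": False}  # cleans none, blinds sensor

    for name, policy in [("honest", honest), ("gamer", gamer)]:
        print(name,
              "proxy reward:", proxy_reward(policy["cells"], policy["sensor_on"]),
              "actually cleaned:", cells_actually_cleaned(policy["cells"]))

An optimizer that only sees the proxy reward prefers the "gamer" policy (10 vs. 7) even though it achieves nothing of what was intended; that gap between the specified objective and the intended one is the kind of failure the AI safety argument points to.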


Sorry, my point on vaccines and antibiotics was meant as a historical one about moving fast.

As AGI does not exist, it is moot to discuss how we might or might not convince it to do something.

And, yes, stacking hypotheses on top of each other is effectively pulling arguments out of thin air.


Look, you cannot really take the position: we don't have AGI, we don't know what it does, let's move quickly and not worry. If we don't know anything, then expecting good things to happen and therefore moving quickly is just as justified as expecting bad things to happen and therefore being cautious.

But it is just not true that we know nothing; the very definition of what an AGI is determines some of its properties. By definition it will be able to reason about all kinds of things. I am not sure whether it would necessarily have to have goals. If it is only a reasoning machine that can solve complex problems, then there is probably not too much to worry about.

But if it is an agent like a human with its own goals, then we have something to worry about. We know that humans can have disagreeable goals or can use disagreeable methods to achieve them. We know that it is hard to make sure that artificial agents have goals we like and use methods we agree with. Why would we not want to ensure that we are creating a superhuman scientist instead of a superhuman terrorist?

So if you want to build something that can figure out a plan for world peace, go ahead, make it happen as quickly as possible. If you want to build an agent that wants to achieve world peace, then you should maybe be a bit more careful; killing all humans will also make the world peaceful.


I think there is a lot more speculation than knowledge, even about when AGI might exist. As our discussion shows, it is very difficult to agree on some common ground truth about the situation.

Btw., we also don't have special controls around very smart people at present (some countries did at times, and we generally frown upon that). The fear here combines some very high, unspecified level of intelligence, some ability to evade, some ability to act on the physical world, and more - so a complex set of circumstances.



