
We’re headed for dystopia.

Either we desperately go transhuman with Musk’s Neuralink tech, or we die as slaves.

Once AI approaches AGI, it will enable a highly dangerous form of fascism. Society can be controlled from the top down by a dictatorship using AI to oppress whomever it doesn’t like. That means the newly impoverished masses will be at the mercy of whichever fascist or totalitarian government is in charge.

Either you adopt transhumanism to make yourself strong enough to resist machine fascism, or you somehow slow technological progress so that mass technological unemployment can’t happen fast enough to trigger the civil-war-meets-economic-riots scenario I alluded to in my last post.

Note that I’m not saying the riots inherently lead to fascism, but that fascism may emerge as a reaction by current economic elites against the uprisings, or through an extremist takeover triggered by the crisis. The latter would resemble the Bolshevik takeover in Russia or the Nazi takeover in Weimar Germany.

If you slow progress down enough, there also can’t be a point where machines advance past human level without humans upgrading themselves at the same speed.

The crux of my argument is that progress in AI is dangerous because of its direct economic consequences and the second- and third-order risks from political chaos.

If I had my way, GPT-3 wouldn’t have happened until we already had brain-computer interfaces at an economically useful level. Unfortunately, neuroscience looks far too hard for that to happen.

I would not at all be surprised if Neuralink isn’t even economically useful by the time AGI happens.

We don’t have the equivalent of the scaling hypothesis for transhumanism yet. Scaling neural networks is simply the best way to make them better. It’s dead simple. You just throw money at the problem.

We need something like that for whatever tech can preserve human agency, i.e. Neuralink.
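
To make the "just throw money at it" point concrete: the empirical scaling laws say loss falls as a smooth power law in model size. A minimal sketch, with the constant and exponent taken roughly from Kaplan et al. (2020) and used purely for illustration:

    # Toy illustration of the scaling hypothesis: language-model loss falls
    # off as a smooth power law in parameter count N. The constant and
    # exponent below are approximate published values, shown only to
    # illustrate the shape of the curve.

    def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
        """L(N) ~ (N_c / N)^alpha: more parameters, lower loss, no new ideas needed."""
        return (n_c / n_params) ** alpha

    for n in (1e8, 1e9, 1e10, 1e11, 1e12):  # 100M .. 1T parameters
        print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.2f}")

There is no comparably simple knob for brain-computer interfaces, which is the whole problem.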




Unfortunately, we are moving too fast to put adequate safeguards in place. Demis Hassabis, co-founder of DeepMind, recently mentioned something interesting in a Lex Fridman interview: they are going to double down on safety, since the algorithmic side has moved much faster than expected.

Moreover, Max Tegmark, MIT scientist and author of the famous book Life 3.0, said this on a podcast last month: "Frankly, this is to me the worst-case scenario we’re on right now — the one I had hoped wouldn’t happen. I had hoped that it was going to be harder to get here, so it would take longer. So we would have more time to do some AI safety. I also hoped that the way we would ultimately get here would be a way where we had more insight into how the system actually worked, so that we could trust it more because we understood it. Instead, what we’re faced with is these humongous black boxes with 200 billion knobs on them and it magically does this stuff."


Do you have a link to this interview? Would be interesting to hear more about what DeepMind is currently up to.



