
Absolutely. It's going to shred the trademark and copyright systems, if they even apply (or are extended to apply), which is a murky area right now. And even then, the sheer volume of material created by a geometric improvement in output, and the subsequent cost collapse of virtually every intellectual and artistic endeavor or product, means that even if you hold the copyright or trademark, good luck paying for enforcement against the vast ocean of violations intrinsic to the shift.

What people also fail to understand is that AI is largely seen by the military-industrial complex as a weapon to control culture and influence. The most obvious risk of AI, the risk of manipulating human behavior toward favored ends, has been shown to be quite effective right out of the gate. So the back-channel conversation has to be about putting it under regulation because of its weaponization potential, especially considering the difficulty of identifying anyone (which of course is exactly what Elon is doing with X 2.0: it's a KYC ID platform built to deal with this exact issue, with a 220M-user, $40B head start).

I mean, the dead internet theory is turning out to be true: half the traffic on the Web is already bot-driven. Imagine when it's 99%, which the proliferation of this technology will inevitably produce simply through its economics.

Starting with open source is the only way to get enough people looking at these products to create any meaningful oversight, but I fear the weaponization fears will mean that everything gets locked away in licensed clouds governed by politically influential regulatory boards, purely on proliferation arguments. Think of all the AI technologists who won't be versed in this technology unless they work at a "licensed company": this is going to make the smaller population of the West much less influential in the AI arms race, which is already underway.

To me, it's clear that nobody in Silicon Valley or on the Hill has learned a damn thing from the prosecution of hackers and the subsequent bloodbath in cybersecurity that resulted from exactly this kind of behavior back in the early-to-mid 2000s. We ended up driving our best and brightest into the grey and black areas of infosec and security, instead of out in the open running companies where they belong. This move would do almost exactly the same thing to AI, though I think you have to be a bit of an Asimov or Bradbury fan to see it right now.

I don't know, that's just how I see it, but I'm still forming my opinions. LOVE LOVE LOVE your comment though. Spot on.

Relevant articles:

https://www.independent.co.uk/tech/internet-bots-web-traffic...

https://theconversation.com/ai-can-now-learn-to-manipulate-h....




> What people also fail to understand is that AI is largely seen by the military industrial complex as a weapon to control culture and influence.

Could you share the minutes from the Military Industrial Complex strategy meetings at which this was discussed? Thanks.


"Hello, is this Lockheed? Yeah? I'm an intern for happytiger on Hacker News. Some guy named Simon H. wants the minutes for the meeting where we discussed the weaponization potential of AI."

[pause]

"No? Ok, I'll tell him."


The other weaponisation plans. The one about undermining Western democracy and society. Yes, that's it, the one where we target our own population. No, not intelligence gathering; yes, that's it, democratic discourse itself. Narrative shaping on Twitter, the Facebook group bots, that stuff. The material happytiger was talking about as fact, because obviously they wouldn't make that up. Thanks.



