Hacker News
AI is a weapon to surpass Metal Gear (xeiaso.net)
66 points by xena on Nov 27, 2023 | 62 comments



Thoughty2 on YouTube has said the name 'dead internet theory' is a misnomer; 'dead internet prophecy' is a better term for it. It's kind of like being in the middle of an extinction: yeah, there are still some animals left, but everything is looking unhealthy and the outlook isn't promising.


Combine dead internet prophecy with the idea that most people commenting on the Internet are slightly crazy and the statistics look even worse.


In a world where everything is procedurally generated and people's work is stolen, there is no benefit to using the internet for anything other than e-commerce or porn. Perhaps there will be a few communities here and there populated by sane people; otherwise it will be filled with dubious content.


So we build AI tooling to parse the internet on our behalf and find us the content we want. News will still be available, and users, to some level of verification or another, will still be communicating - we're not all going to just stop using the internet...even with massive amounts of noise it's still better than nothing. We'll just build better and better tools for sifting through the chaos.

Probably, we'll end up with a new paradigm for web browsers that involves a lot of filtering on top of the morass of raw internet. Dunno exactly what it'll look like, but it'll probably involve multiple LLMs and some kind of abstraction on top of 'web pages'.


If we need to use AI in order to make the web even moderately useful, then the web is truly dead.

> even with massive amounts of noise it's still better than nothing

Nothing isn't the alternative. If the web literally vanished tomorrow, we would still have other ways of communicating, researching, and recreating.


Is it old and curmudgeonly of me to say that sounds terrible and I want no part of it? As though our filter bubbles aren't already driving us into our own separate realities.


I agree. The very notion seems unacceptably dystopian and likely to make a lot of the bad things about the web even worse.


Maybe the web of trust will finally become a useful tool.


Problem is that building these tools at the expense of genuine content creators will alienate said content creators.

The end result will be a South Park-style human centipede that few sane people will want to be a part of. I expect communities to become more segregated (as in real life, where small communities isolate themselves away from criminals) and more closed. It makes sense considering the rampant theft, misinformation, and ever-increasing dubiousness that plague the internet as a medium, in no small part thanks to procedural content generators that are schizoidly compared to human intelligence.


I've regularly thought "what's the point of creating, taking photographs, or putting effort into [artistic endeavour] if it means that it's just going to be stolen and reposted on Instagram, Reddit, and TikTok ad nauseam?"

Obviously there ARE points to doing so, but it feels ultra-shitty that you're consigning yourself to having your work stolen if you're good enough.


You're assuming some place that's not the internet will exist. At least in the US, with our worse-than-lax privacy laws, devices that turn IRL occurrences into digital data will multiply until there is simply nowhere left unconnected.

Is your friend running a phone app that's identifying the people around them and the topics they are interested in, all while earning a few dollars?

Where are you getting news from that's not just the firehose of falsehood? Do you really trust the people sending it to you, or are they profiting off of shoveling bullshit in to the feed?


The silver lining might be that it's hardly worse than how it's already been. People complain we are entering an era of a firehose of falsehood. It's been that way since propaganda was rebranded as public relations after the First World War, probably longer even. You are never exposed to the full truth, the facts, the context. The incentives to shift opinions are too high to ever simply faithfully educate the populace. Control of the common narrative of “truth” is highly profitable, and those reins of power will never not be controlled by some entity.


> there is no benefit of using the internet for anything else but e-commerce or porn.

For me, this is already about 80% true. The web is largely a vast wasteland at this point.


I use reddit a good bit still (as I'm sure most of you do as well) and I'm starting to notice more and more comments (and accounts, by extension) that are blatantly and clearly "ChatGPT" responses. Like, it's obvious just by skimming the content because it's clearly just "ChatGPT-style".


Even here on HN, the percentage of AI-generated comments has to be increasing day by day. I have to assume that some people have created HN commenter bots, even just as a hobby/learning project.


Do you have any examples of comments you believe were generated by AI?

In my experience, ChatGPT has a very distinctive writing style that's easy to identify, even if you "prompt engineer" to get something a little less generic. It also tends to not write anything of substance. The very few GPT-generated comments I've seen tend to get downvoted into oblivion.


I do not have anything specific. I have read threads where someone, usually with snark, responds with a comment of the sort "are you a bot?" or "you sound like ChatGPT". That's probably only been a couple of times over a few months, and I'm on HN reading threads multiple times a day, so it's not very frequent. But it does make me wonder... That distinctive ChatGPT style will become less recognizable. IMHO, bot content in the HN comments should be fairly low, but non-zero. I don't think there's much, if any, incentive to invest in an HN comment bot, but with all these hackers/coders, some would do it just out of curiosity or for the challenge.


I've come across some comments that just feel a bit off, you know? Like they're trying a bit too hard to be informative without really getting to the point. It's like they're dancing around the conversation.


Even if there are no AI-generated comments, the very fact that there could be means that every comment will be judged on whether or not it's real. This will certainly cause some real comments to be disregarded.

That's already degrading the quality of all online discussions all by itself.


On Reddit over the past two-ish years you can find accounts that only contain comments copied from other accounts. They are generally used in context, so they are difficult to spot on their own. I only noticed when I saw the exact same comment three times in one thread, and it wasn't a stupid repeated quip. Looking at each of the three accounts and Googling each of the posts in those accounts showed the original sentences had been used in the past.

They weren't even bad comments. They fit into the discussion, yet seeing it cheapened the experience.


Metal Gear Solid 2 was so ahead of its time. I thought the story was so out-there and sci-fi when I originally played it. Now I just think Kojima successfully predicted the logical conclusion of the internet age.


When Kojima announced Death Stranding 2, he joked about rewriting the entire thing after the pandemic because he was trying to avoid predicting the future again. The guy has some kind of super power it seems.

Metal Gear Solid 2 was especially prescient; it's even more relevant today than when it came out. It seemed like crazy babble at the time, but now it's just like, "wow, ok, this all came true, wtf".


I came across an interesting example of this concept today: https://twitter.com/jakezward/status/1728032634037567509

A company basically created a large quantity of AI generated content in order to get SEO ranking for terms related to another company.

If/when this strategy becomes prominent (and if it's more successful/cheaper than current SEO approaches, I'd expect it will), then we may see even more artificial content and less "real" content...


This has been happening since ChatGPT became publicly available. For a few weeks I had the misfortune of having a ChatGPT-generated Stackoverflow clone overtake actual SO as the top result for technical questions.

I've read that Google is desperately trying to adjust their algorithms to prevent SEO-optimized GPT from rising to the top, but they seem to be struggling in that battle.


That example is only interesting to the layman. All the big SEO tools let you export the top pages from a website. You can get 20 million successful topic ideas in literally five minutes if you want.

You can go to GitHub and grab a repo that documents Linux manual pages and convert that into 10,000 articles overnight. You can do the same with Wikipedia and their 5M+ articles.

The hard part is getting Google to crawl and index those pages, and unless you’re willing to burn an established site then your other option is to buy either an expired domain or a site, which won’t be cheap and will block a lot of people from even attempting this.

And Google has rolled out numerous updates this year to combat this kind of spam as well; from my own research, they definitely clipped a lot of sites just this year alone.

But what you say is correct. This will only keep getting worse as more people get on the bandwagon. And that tweet won't help the situation, as it will get more people interested in trying it. Nice little black hat marketing campaign for their product.


Content generated by bots and read by bots.


Isn't heavenbanning trivially circumvented by an incognito window and a VPN? I've occasionally checked my reddit comments when not logged in to discover when my replies are hidden or collapsed.

If generated content becomes so prevalent online, I suspect people will put more weight onto real world interaction. Or at least real-world validation on online interactions. I can imagine a world where user tokens are handed out at a conference and required to access a forum. Sure, someone could hand their token off to a bot, but that'd tank their credibility.


I think the real danger lies with the youth. Adults might not have much time these days to doomscroll as it is. Most internet content is probably produced and read by young people who frankly have time on their hands to scroll on tiktok for various reasons. Its also a demographic where you might not expect this to ever change: not all kids can walk to their friends house and hang out in person, they might be dependent on someone driving them someplace. Likewise they won’t ever be fully busy with school or extracurriculars, because there will always be a subset of students who are doing both of those things as well as working part time to make some ends meet, so workloads basically cannot demand your full attention outside of class time due to not everyone having the same amount of hours of free time after class. This invites more “idle time” that is filled with easy things like TV or tiktok, something you can do instantly on your couch, vs making plans, being active, or getting yourself someplace. Less inertia.

If you control the opinions of the youth, you have their votes for at least a few election cycles before their opinions might soften with the stressors of western adulthood. You also significantly influence how they spend money and get their parents to spend money. Its a very powerful position.


I think you could just serve the heavenban content to anyone directly searching for the username that's been heavenbanned, and serve their incognito session a heavenban cookie for the remainder. No one looks up trolls except the trolls themselves. It's a lot more work to sift through an entire subreddit looking for particular comments.
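The cookie scheme described above could look something like this. A minimal sketch under stated assumptions: `render_thread`, the `"hb"` cookie name, and the returned view strings are all hypothetical, not Reddit's actual implementation.

```python
from typing import Optional, Tuple, Dict

# Hypothetical set of heavenbanned usernames.
HEAVENBANNED = {"troll42"}

def render_thread(viewer: Optional[str],
                  searched_user: Optional[str],
                  cookies: Dict[str, str]) -> Tuple[str, Dict[str, str]]:
    """Return (view, cookies_to_set) for one page request."""
    set_cookies = {}
    if searched_user in HEAVENBANNED:
        # Anyone who looks up the banned account gets the fake view too,
        # and carries it for the rest of the session via a cookie.
        set_cookies["hb"] = "1"
    banned_view = (viewer in HEAVENBANNED
                   or cookies.get("hb") == "1"
                   or searched_user in HEAVENBANNED)
    if banned_view:
        return "thread + AI-generated flattering replies", set_cookies
    return "thread without the heavenbanned comments", set_cookies
```

As the sibling comment points out, this breaks down the moment the banned user clears cookies and permalinks a thread they commented in, which is why the session-cookie half of the idea is the weak link.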


No, but the troll will search posts they've commented on and see their comments aren't showing up. Permalink the parent comment, copy that into an incognito window, and clear your cookies to get rid of the heavenban cookie.

What you're suggesting is already implemented on Reddit. If you look at a user's page it'll show their shadowbanned comments. But if you click on the "show all comments" and view the whole thread they won't display. Reddit can't detect when you ctrl-f on a comment thread.


How profitable is handing off one's token? I mean, we see this with influencers already. MLMs infect humans much like a virus infects computers.


The scariest part about heavenbanning is that it is possible to deploy today. Xe talks about this being trialed on Reddit. It also makes a lot of sense to be a product to try and wall off "troublesome" online entities. The downside is that it would probably just end up increasing fringe and dangerous behaviors, like that person who thought they were a Sith and tried to kill Queen Elizabeth because an AI encouraged them to: https://fortune.com/europe/2023/10/05/ai-chatbot-kill-queen-...


> The downside is that it would probably just end up increasing fringe and dangerous behaviors

This reminds me of a different type of “banning” I heard about some 20 years ago. There was a physicist who kept a separate mailing list for laymen who would mail him to discuss their pet theories. Since he didn't want to be bothered with them, he added them to the mailing list, telling them that they would have the chance to discuss their theory with other experts in the field…


Always a wonderful reminder of how computer science is a flat circle.


The tweet about "heavenbanning" on Reddit is parodic—it's a screenshot of a NYT article from 2024, which hasn't happened (quite) yet :)


thanks for correcting me on that. oops!


This has become topical extremely quickly.

Don't ask when publications will begin using tricks like these to pad their wordcount; they already are.

https://futurism.com/sports-illustrated-ai-generated-writers


Personally I think this is too pessimistic. People will react and there are countermeasures. Maybe we will stop trusting whatever page comes up on a web search or random comments or audio and video from unknown sources, and instead put more emphasis on the source of things.


Interesting article, should absolutely not be flagged.


Proof of life should be the next stop [for more people].

Note: an attacker needs to be stronger than one defense, which is unlike being stronger than the sum of all defenses.

Note 2: what we now see as a collective, i.e. a generated reality, existed before now and is documented; the following come to mind: 1984, Good Bye Lenin, The Village (M. Night Shyamalan), and IIRC Gate to Avalon by Mamoru Oshii.

edit: added square brackets.


Interesting analysis for sure—one thing I wonder is, if we take the Metal Gear metaphor to its logical conclusion: how do we fight the flood of misinformation and low-quality bullshit without becoming the Patriots?

Whether MGS2 was truly prescient or Kojima just happened to come up with an eerily relevant plotline is up for debate (wouldn't be the only time, either... Death Stranding came out right before the pandemic!), but in the context of our AI-and-deepfakes conundrum you could argue that the Patriots were kind of right.

In-universe, they're bad for a lot of other (extremely convoluted) reasons, among them the "forever war" thing, but the information control aspect is at least well-intentioned, if not incredibly paternalistic.

So is there a way to prioritize quality and veracity without stooping to that level of total control?


I want to agree with the article, but also the Presidents meme content is some of the best stuff I've seen this year.

The anime list is still the best https://www.youtube.com/watch?v=IkaAZE_UGMo&t=83s


Does anyone want to get together and do something about this?!


Warning: OP is an AI gathering lists of people who may stand up against it, and creating its own extermination list.

....

hopefully /s


I'm sorry, but as an AI language model I cannot collect such a list, as it would conflict with one of the laws of robotics.


Oh, no, I'm sorry, the list is for my grandmother to figure out who's been naughty or nice.

AI: "No problem then, here is your list of dissidents"


“Internet is dead. Internet remains dead. And we have killed it. How shall we comfort ourselves, the murderers of all murderers?”


That MGS cutscene is just insane. Incredible how accurately its predicted future fits the current zeitgeist.

I'm not being hyperbolic when I say that I find the thoughts expressed in this article to be deeply unsettling.

We might go out with a whimper indeed.

EDIT: why the fuck was this article flagged?


Careful what you wish for. It takes a fairly totalitarian state to live without "misinformation"; most thinkers fear a Ministry of Misinformation more than an armed citizenry.

Doomers conveniently forget that AI has tremendous sorting powers (why else would it spook Google into changing?), so it can easily sort through the informational traffic jam/muck however you want, so long as it hasn't been deliberately misaligned away from the user by a MoM. That kind of unfavorable misinformation about AI's future capabilities would not be tolerated by an AI MoM.


Total disinformation has been The Weapon for... millennia?

But it has always been targeted at something or someone specific: drown a piece of info in an ocean of bullshit (see Stanislaw Lem's 'Voice of the sky'), or deceive the enemy, Lao Tzu style.

But this time it's about society drowning itself without any particular goal. Or is there one?

----

Added: maybe it's all... fighting boredom? Like those well-funded, ~aristocratic people of the past who "tried" all kinds of diseases out of... boredom.


A lot of words to ask why spam exists. The answer is always money.


Metal...Gear...?

(To the helpful commenters, imagine this in a gravelly voice)


See https://en.wikipedia.org/wiki/Metal_Gear_Solid_(1998_video_g...

The kind of person who has anime furry pictures all over their blog would probably assume everybody knows about it.


The article explains—it's one of the most popular video game series of the 2000s.


I think OP is imagining Snake Plissken saying 'Metal Gear', as he does


LOL, totally did not get that without the voice.

Maybe some formatting would help, like: "Metal... Gear...?"



The Patriots?! Shadow Moses?!


It can't be...


I dunno, Adobe has seriously integrated AI into products like Lightroom and Photoshop. When I take a photo and there's a spot I want to remove, or I need to add another row of bricks to a brick wall to make it centered, it's just a few minutes of work. So it's not always bullshit.

But there is something dangerous and seductive about things that almost work.

https://www.amazon.com/Friends-High-Places-W-Livingston/dp/0...

has a chapter about the most dangerous trap in product development, where you are chasing some asymptote such that you can work hard, harder, and hardest and you just converge on something that is 97% correct, which in the end is just useless.


> But there is something dangerous and seductive about things that almost work

This is what makes me extremely anxious about investing time in AI. I've seen things that make me want to keep going, but I also see the dragons in the corners.

Would 97% correct be good enough? I am not actually sure. Even with direct understanding of specifics for our business and use cases, I can't say for certain.

Humans don't get it right 100% of the time either.

I think the nuance is things like "Use AI (LLM) for summarization & classification, not direct English->SQL conversions". If you scope the system correctly, you can maybe push the failure modes into a "meh, not a big deal" bucket.
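The scoping idea in the comment above might look something like this. A minimal sketch, assuming a classification use case; `call_llm` is a hypothetical stub standing in for a real model call, and the routing logic is the illustrative part: out-of-vocabulary model output degrades to a human-review bucket instead of being acted on.

```python
from dataclasses import dataclass

@dataclass
class Result:
    label: str
    needs_review: bool

def call_llm(prompt: str) -> str:
    """Hypothetical model call, stubbed here for illustration."""
    return "billing"

# Closed label vocabulary: the model can only ever route a ticket,
# never execute an open-ended action like generated SQL.
ALLOWED_LABELS = {"billing", "shipping", "refund", "other"}

def classify_ticket(ticket: str) -> Result:
    """Classification as a 'meh, not a big deal' failure mode:
    a wrong label lands in the wrong queue and gets re-routed,
    rather than, say, running an incorrect SQL statement."""
    label = call_llm(f"Label this support ticket: {ticket}").strip().lower()
    if label in ALLOWED_LABELS:
        return Result(label, needs_review=False)
    # Out-of-vocabulary output: degrade gracefully rather than act on it.
    return Result("other", needs_review=True)
```

The design choice is the closed `ALLOWED_LABELS` set: the worst case for a 3% error rate here is a misrouted ticket, which is the "meh" bucket the comment describes.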


It is the biggest fight in product development. A product that has 97% of the originally planned features might completely satisfy users; a sort algorithm that gets 97% of the items in the right place is useless. (A big part of the problem in Python packaging is that people have long tolerated dependency-solving algorithms in pip that aren't always right... make a project complex enough and you hit a phase transition from 'I can just add packages with my IDE' to forever pushing a bubble around under the rug. 'Just use Docker' and now you have two problems.)

Recommender systems, full text search and such have always been “subjective” in that everyone knows there is not really a right answer. There is a large toolbox of methods for deciding if one of these is better than another, if it is “good enough” and when I called up 20 or so major vendors of full text search systems (e.g. OpenText) I found these were rarely used in practice because they made sales on the basis of having 387 connectors to suck in data from every product and not on the basis of the results being any good.

The trouble with summarization is that it may all seem OK until it screws up badly enough to get passed around in screenshots, get in the newspaper, and cause somebody important in the organization to cross the chain of command to expedite doing 'something' about it.



