
What kind of silliness is this?

AI-generated crap is one thing, but human-generated crap is out there too - just because a human wrote something doesn't make it good.

Had a friend who thought that if something is written in a book, it must be true. Well, no!

There was exactly the same sentiment about stuff on the internet, and there is still the same sentiment about Wikipedia: "it's just some kids writing bs, get a paper book or a real encyclopedia to look stuff up".

Not defending gen AI - but you still have to come up with useful proxy measures for what to read and what not. It was always an effort, and nothing is going to substitute for critical thinking and putting in the work to separate the wheat from the chaff.






> nothing is going to substitute for critical thinking and putting in the work to separate the wheat from the chaff.

The problem is that the wheat:chaff ratio used to be 1:100, and soon it's going to be 1:100 million. I think you're severely underestimating the amount of effort it's going to take to find real information in the sea of AI-generated content.


Just like people were able to find real information on topics like "tobacco is not that bad, it is just nice like coffee", "fat is bad for you, here, have some sugar", and "alcohol is fun, having a beer is nice"?

Fundamentally, AI changes nothing for the masses and for individuals alike, and there was never anything you could "just trust" because it was written on a website or in a book. That is why I call it silly.

It also doesn't make it easier or harder for people in power with vast resources to plant whatever they want - they have money and power and will do so anyway.


No one claimed humans are perfect. But gen AI is a force multiplier for every problem we already had to deal with. It's just a completely different scale. Your brain is about to be DDoSed by junk content.

Of course, gen AI is just a tool that can be used for good or bad, but spam, targeted misinformation campaigns, and garbage content in general are the areas that will be amplified the most, because producing them has become so low-effort and the producers don't care about doing any review, double-checking, etc. They can completely automate their process toward whatever goal they have in mind. So where sensible humans enjoy 10x productivity, these spam farms will be enjoying 10000x scale.

So I don't think downplaying it and acting like nothing has changed is the brightest idea. I hope you see now how that's a completely different game, one that's already here but that we aren't prepared for yet, certainly not with the traditional tools we have.


> Your brain is about to be DDoSed by junk content.

It's not the best analogy because there's already more junk out there than can fit through the limited bandwidth available to my brain, and yet I'm still (vaguely) functional.

So how do I avoid the junk now? Rough and ready trust metrics, I guess. Which of those will still work when the spam's 10x more human?

I think the recommendations of friends will still work, and we'll increasingly retreat to walled gardens where obvious spammers (of both the digital and human variety) can be booted out. I'm still on Facebook, but I'm only interested in a few well-moderated groups. The main timeline is dead to me. Those moderators are my curators for Facebook content.


That is something I agree with.

One cannot be DDoSed with junk when not actively trying to stuff as much junk into one's head.


> One cannot be DDoSed with junk when not actively trying to stuff as much junk into one's head.

The junk gets thrown at you in mass volume, at low cost, without your permission. What are you going to do? Keep dodging it? Waste your time evaluating every piece of information you come across?

If one of the results on the first page of a search deviates from the others, it's easy to notice. But if all of them agree, they become the truth. Of course, your first thought is to say search engines are shit or some other off-hand remark, but this example is just to illustrate how volume alone can change things. The medium doesn't matter; these things can come in many forms: book reviews, posts on social media, ads, false product descriptions on Amazon, etc.

Of course, these things exist today, but the scale is different and the customization is different. It's like the difference between firearms and drones. If you think it's the same old game and you can defend against the new threat using your old arsenal, I admire your confidence, but you're in for a surprise.


So you're basically sheltering yourself and seeking out human-curated content? Good for you; I follow a similar strategy. How do you propose we apply this solution for the masses in today's digital age? Or are you just saying it's everyone for themselves?

Sadly, you seem not to be looking further than your own nose. We are not talking about just you and me here. Less tech-literate people are the ones at a disadvantage and the ones who need protection the most.


> How do you propose we apply this solution for the masses in today's digital age?

The social media algorithms are the content curators for the technically illiterate.

OK, they suck and they're actively user-hostile, but they sucked before AI. Maybe (maybe!) AI is the straw that breaks the camel's back, and people leave those algorithm-curated spaces in droves. I hope that, one way or another, they'll drift back towards human-curated spaces. Maybe without even realizing it.


What's with the FUD?

That is pretty much my reaction summed up :)

Feels like people fall into spreading fear, uncertainty, and doubt too quickly.


> you have to come up with useful proxy measures for what to read and what not

Yes, obviously. But AI slop makes those proxy measures significantly more complicated. Critical thinking is not magic - it is still a guess, and people are obviously worse at distinguishing AI bullshit from human bullshit.



