
Idea against HN-induced smartphone addiction - wrongonthenet
Inspired by yesterday's article on smartphone addiction, I would like to
propose one idea, and to ask for other ideas, about making HN in particular,
and the thinkpiece/newspaper WWW in general, more useful to the people reading
them.

I feel like I have spent a large amount of time reading texts on the internet
in a way that didn't serve me, or by extension any community I want to be part
of. I can imagine many people here feel the same. I think I understand the
reason: the interest of a person writing a text online is not to contribute to
a view of the world that is accurate and useful to the reader, but to attract
attention and/or money to the text. This is often accomplished by making
readers outraged about things that don't concern them and that they won't
change anything about.

Of course, making this attempt transparent to the reader would be
counterproductive, so an excuse with some level of subtlety is used (for
example, "a 20-page slatestarcodex article about studies into to what extent
the gender pay gap is or is not caused by women's choices"). This is adapted
to the target group. But in general, if a professional journalist (or
political advocate) tries to do this against someone with a different career
(like me or you), they can probably win. And unsurprisingly, such a discussion
isn't found in, e.g., New York Times articles about smartphone addiction.

I feel HN (and the WWW in general) is useful to me, but that usefulness is
diminished by articles of this form. I want to ask for ideas to help change
this. One idea would be the following: a reader on HN can not only upvote
articles for quality but, if they suspect the author makes (significant) use
of such a trick, also upvote them for outrageousness. We could then choose to
hide articles above a certain outrageousness level. What do you think would
happen if such a proposal were implemented?
======
wrongonthenet
In one sentence: Let users vote on outrageousness separately from quality, and
give the option to hide articles with a too-high outrageousness voting.
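
A minimal sketch of what that could look like (all names, numbers, and the
threshold here are hypothetical, not an actual HN feature):

```python
# Hypothetical sketch: tally quality and outrageousness votes separately per
# article, and hide anything whose outrageousness ratio crosses a
# reader-chosen threshold.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    quality_votes: int = 0
    outrage_votes: int = 0

    def outrageousness(self) -> float:
        # Fraction of all votes that flagged the article as outrage-driven.
        total = self.quality_votes + self.outrage_votes
        return self.outrage_votes / total if total else 0.0

def visible(articles, max_outrageousness=0.5):
    # Keep only articles at or below the reader's chosen threshold.
    return [a for a in articles if a.outrageousness() <= max_outrageousness]

feed = [
    Article("Quiet technical writeup", quality_votes=40, outrage_votes=5),
    Article("You won't BELIEVE this", quality_votes=10, outrage_votes=30),
]
print([a.title for a in visible(feed)])  # -> ['Quiet technical writeup']
```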

~~~
fosco
interesting thought. another option would be a shared 'community-driven'
extension/plugin where people can tag items as clickbait etc. the only issue
is dealing with bots or bad actors tagging things
inappropriately/incorrectly/trolling.

having had a long debate about this, I am confident a good group of minds can
come up with a more solid solution...

to finish, the plugin would auto-hide specific parts of web pages that were
marked as clickbait or otherwise

~~~
wrongonthenet
Update: Sorry, I somehow wanted to edit my text and couldn't.

Hi, I think a plug-in that gives an opportunity to give feedback about
emotional manipulation is a good idea as well. As you said, such a plugin
could hide content on non-cooperative websites directly, like an adblocker.
Some thoughts on what I think can increase the probability that something like
this will be successful:

1. I think it is useful to have a "voting system for manipulation", instead
of just trying to flag "clickbait" vs. "non-clickbait". With apologies:

"If only it were all so simple! If only there were evil people somewhere
insidiously writing clickbait, and it were necessary only to separate their
articles from the rest of us and destroy them. But the line dividing emotional
manipulation and value cuts through the heart of every internet discussion.
And who is willing to destroy a piece of his own heart?"

Every person who has ever said or written something, online or offline, has
used emotional manipulation (perhaps unconsciously) to be paid attention to
(of course, you can find this in any of the messages written here). This
doesn't mean the texts don't provide value to the readers or the world, just
that the distribution of incentives is never the same for writer and reader.
So I think the problem is not so much "articles that are obviously useless to
everyone" (if that were clear in the first place, they wouldn't appear on HN
and no one would read them), but "articles that are useful to some people, but
read by more people than that because they successfully manipulate their
readers into acting against their own best interests".

The realistic goal is not "identify and eliminate clickbait/people writing
evil things once and for all", but "shift the balance of power towards readers
in our community" in the short term, and "improve alignment of the incentives
of writers and readers" in the longer term. At the end of the day, all social
progress (hunter-gatherer tribes, kingdoms, free speech, free press,
separation of powers, democracy...) comes from (social or material)
technologies that improve alignment of incentives between people. Internet
communities have had upvotes/karma for some time, but I think these can now be
improved upon.

2. Such a system should initially be focused on some adversarial (i.e.
existing in order to make money/get attention for something else) website that
is relatively small (more research/deliberation needed to actually determine a
good place to start) and expand from there. This is for the same reason that
Facebook was initially restricted to specific universities: If someone writes
such a plugin and releases it with the immediate stated intent that the whole
world uses it, no one will actually bother to sign up, and no network
effect can be achieved. If one first advertises it to a very specific
community, people become more convinced that a network effect can be achieved,
and are driven to sign up themselves. Once one has achieved success for some
community, one could take

- feedback on improving the system,
- social status of the idea,

and go on to fry bigger fish. And conversely, if one fails to get a network
effect, one has "burnt" the basic idea and makes it less likely that such a
project will be started/succeed in the near future.

3. I think it could be useful if the system allows users to organize into
groups as well, and see only votes by people in their group. As in 1, articles
are always useful or useless relative to the person potentially reading them.
And all online communities degrade towards the lowest common denominator among
their members, so it is desirable for the readers to keep that denominator
high by encouraging separation into reasonable subgroups.

4. I am actually not all that concerned about people really behaving
"obviously badly" - basically the only people who might be motivated enough
to pay a sufficient amount of attention would be those working for the website
directly, and for these, the risk to their reputation if they get caught is
quite high. So giving users profiles (with the opportunity to convince other
users of their humanity via a unique profile text etc.) and allowing users to
"downvote" others on suspicion of voting behaviour not in the interests of
the group might be enough. Furthermore, I guess if this problem occurs, it
will actually be a sign that the project was somewhat successful, so it is
better not to be too worried about it right now.

So to conclude: Doing what you propose is not in contradiction to doing
something specific to HN. In fact, if we come up with something new, and
convince pg to implement it, and it turns out to be a success, this will yield
information about whether/what/how such ideas work, gain status for them, and
actually increase the chances of success of a larger project.

~~~
fosco
exactly.

I think this is an idea that should be explored.

on a side note, I was thinking about a way to prevent bots from ruining it by
marking everything as bad or good etc.

have user IDs on the plugin; as a user I can approve people I know and only
accept the votes they make, rather than those of the entire community. this
reflects the level of trust that I already have in those users (ideally I know
them) and would allow me to ignore potential bots or people who attempt to
sabotage the goal.
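
a rough sketch of that trust-list filtering step (all user IDs and tags here
are made up, just to illustrate the idea):

```python
# Hypothetical sketch: each reader keeps a personal set of approved user IDs,
# and only votes cast by those users are counted; everyone else (including
# bots) is simply ignored.
def trusted_vote_count(votes, trusted_ids):
    """votes: list of (user_id, tag) pairs; count tags from trusted users only."""
    counts = {}
    for user_id, tag in votes:
        if user_id in trusted_ids:
            counts[tag] = counts.get(tag, 0) + 1
    return counts

votes = [("alice", "clickbait"), ("bot123", "clickbait"),
         ("bob", "ok"), ("bot123", "clickbait")]
# bot123's spam votes are ignored because it is not on the trust list
print(trusted_vote_count(votes, trusted_ids={"alice", "bob"}))
# -> {'clickbait': 1, 'ok': 1}
```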

throwing ideas around... but I am liking the sound of this... getting a good
group of people together to play devil's advocate and improve upon this might
be worth something, and might even change the way user interaction
traditionally happens, on the internet and in general.

~~~
wrongonthenet
Hi, [https://pastebin.com/aVyRfRNdg](https://pastebin.com/aVyRfRNdg) contains
my e-mail address.

~~~
fosco
hop on IRC freenode at ##hackernews - I'll try to be around.

both on here and pastebin you were deleted :-)

------
skrebbel
I like this a lot. Hell, I'd even return to Twitter if it had that feature.

~~~
wrongonthenet
Unfortunately, I don't think this can happen on Twitter right now, because it
is not in the interests of the Twitter execs to calm down (i.e. reduce) the
discussion there. But for this website (whose stated goal is exactly what I
hope to work towards), I have hope that the tradeoffs can be different and we
can improve on the status quo.

