Neal Stephenson Explains His Vision of the Digital Afterlife (pcmag.com)
152 points by MilnerRoute 6 months ago | 53 comments



>Q: How would you describe the current state of the internet? Just in a general sense of its role in our daily lives, and where that concept of the Miasma came from for you.

> Neal Stephenson: I ended up having a pretty dark view of it, as you can kind of tell from the book. I saw someone recently describe social media in its current state as a doomsday machine, and I think that's not far off. We've turned over our perception of what's real to algorithmically driven systems that are designed not to have humans in the loop, because if humans are in the loop they're not scalable and if they're not scalable they can't make tons and tons of money.

How is this any different from turning our perception of what's real over to mechanical turk systems like newspapers and TV news?

This line about perception of reality somehow being different today than it was, say, in the 1980s keeps coming up. The main shift is that the number of (dis)information sources has risen sharply and they've become much more efficient. Secondarily, entire media channels (newspapers and TV) have lost relevance to all but the older generation. Increasingly, those news outlets just track social media, which may be the oddest development of all.

In this context, the effective discontinuation of the White House press conference was inevitable. For better or worse, the president talks directly, unfiltered, to everyone. That's the real doomsday machine if put into the wrong hands.


>How is this any different from turning our perception of what's real over to mechanical turk systems like newspapers and TV news?

Accountability. Someone is ultimately responsible for every newspaper article and every TV program. If a national newspaper decided to publish a piece of paid foreign propaganda, someone would have a lot of explaining to do, possibly in a court of law.

There's no such accountability for algorithmically-driven systems. It's not our fault that our platform keeps promoting propaganda, it's just a quirk of the algorithm; we don't really know why our algorithm is doing this and we're not going to let you review the source code.

I don't know what the possible solutions might be, but there's clearly a problem. Social media companies have largely abdicated responsibility for what goes on on their platforms, and their platforms can't function without that abdication of responsibility.


> How is this any different from turning our perception of what's real over to mechanical turk systems like newspapers and TV news?

Because size matters, and in this case social media is a lot more pervasive (I'd say at least an order of magnitude more) compared to the TV news of the 1980s or to newspapers. I knew lots and lots of people who didn't read newspapers 30 years ago; also, the majority of the people around me didn't give much thought to what was presented as "news" on TV, because we knew it was mostly lies anyway (I grew up in an Eastern-European country located on the wrong side of the Wall). But nowadays even the remotest shepherd has a cheap Android phone with an Internet connection, which he mostly uses for Facebook and WhatsApp.

In other words, social media is a quantitative change (billions of people are using it) which has transformed into a qualitative one (because billions of people are using social media, things are different this time).


Newspapers and TV couldn't (a) collect as much, if any, individual data; (b) tailor their content on an individual basis; (c) draw on as many sources of data tracking your likes, location, buying patterns, eating patterns, entertainment patterns, writing patterns, voting patterns, reading patterns, etc.; or (d) collate those sources of data together.

I think the fundamental issue is that too much personal data is being collected and put together, creating profiles on people that can be used and abused. Social media is the first major area where that data is being deployed, and often deployed irresponsibly (although I question the idea that so much personal information could ever be deployed responsibly).


Human beings have always been mythological beings. It's just that the myths that they now believe are amplifications of their own beliefs, or what they want to believe, rather than the collective beliefs of their societies, elites, thought leaders, and media masters. So yeah that's doomsday for society, in that there are no longer any shared myths holding everyone together. Those myths were never really true... they were just a tool of control, maybe even self-control of society. Without them the chaos amplifies.

Heck, even without social media it seems like there was a progression in politics where things were becoming unhinged. Things were becoming more and more polarized, and what is/was considered progress was drifting further and further from many people's ideas of progress. So there is a series of issues coalescing in our time, against a background of the century of the self, identity politics, etc.


It's actually terrifying how accurate this is. Most people live extremely mundane lives, so they latch onto some lore, whether it's a book or their Twitter feed. Take the recent Game of Thrones petition/outrage shitstorm: the show gave people something to war over on Twitter, with their coworkers, with their family.

It's definitely why politics now is incredibly toxic, and I don't really understand why anyone contributes to or even cares about political Twitter dunking. It's definitely not healthy.

A community in chaos, full of micro-tribes following their own lore, used to be cause for war, and now we have it, in extremely quick replies, at all times, anywhere.


>How is this any different from turning our perception of what's real over to mechanical turk systems like newspapers and TV news?

In the sense that having some hot oil spill on your hands while frying something is not the same as being submerged in burning oil as a medieval punishment.

Network and scale effects affect the impact of technological innovations. Even quantitative increase, over time, leads to different qualitative impact. But here we also have qualitative differences (e.g. TV networks didn't know shit about you, social networks know everything about your preferences, searches, friends, and so on).


> In this context, the effective discontinuation of the White House press conference was inevitable. For better or worse, the president talks directly, unfiltered, to everyone. That's the real doomsday machine if put into the wrong hands.

Interestingly, that shift seems to have happened everywhere, not just in politics. Many companies have also realised that they don't need to go through the media/journalists any more, and started advertising directly to their customers/users/fans too.

Wonder if there's a term for that sort of social shift?


Decentralization. The term does not yet apply to the avenues through which this outreach occurs, but give it ten to twenty years and that too will change.


> How is this any different from turning our perception of what's real over to mechanical turk systems like newspapers and TV news?

This is like asking why we should be scared of atom bombs when we already had swords.


He later qualifies that with:

> So that means that access to that kind of higher-quality view of the world becomes a class-based situation where people who've got the money to pay for or partially pay for human editors and curators are getting higher-quality info, which I think is just a slight kind of magnification or intensification of the way things are now anyway.

I repeat:

> …intensification of the way things are now anyway.

Now, here I disagree with Mr. Stephenson: these things were much more intense in prior times.

What he observes seems to be a development specific to the U.S., in terms of the tone of political discussion.

If you look at Asian countries as a counterexample, there is no such "intensification" happening because of social networks at all. To the contrary, even the house of suckage, FB, is contributing positively, because a lot of people would not go online without FB or other easy ways in. The operative keyword here is easy. For many people who have only just become literate, or who are the first in their family to be so, many social-network outlets are a somewhat more comprehensible access point to information than "Google-fu".


> If you look at Asian countries as a counterexample, there is no such "intensification" happening because of social networks at all. To the contrary, even the house of suckage, FB, is contributing positively, because a lot of people would not go online without FB or other easy ways in.

That's hardly an argument in favour of social media. The US is a democracy where people used to have civic debates around a shared reality.

That social media isn't going to ruin the debate climate in corporate-governmental societies where civilians don't have any input anyway doesn't really calm my nerves. In fact it frightens me. If the future of the internet is a cluster of entertainment bubbles with business and government bending reality to their will, and China is an example of the future, I'm not really keen on duplicating it. It's actually a very prescient observation: There seems to be a strong correlation between societies with high technological development but no democratic tradition, and the demise of traditional channels of media in favour of social media.


Which society "with high technological development but no democratic tradition" witnesses a "demise of traditional channels of media in favour of social media?"


I don't think I can actually name a mainland Chinese newspaper that isn't government owned and controlled. So calling it a demise might actually have undersold it; I don't think independent journalism plays a role at all.


This is a highly questionable idea.

If anything, social media is worse in Asian countries, being used for massive scams, for genocides, for vigilante justice (which often targets completely innocent people), massive amounts of fake news, etc.


He is referring to increasing economic inequality. This information inequality helps the rich get richer.


I'm not convinced that the impact of SM is much different from the impact of 3-network TV programming back in the 50s-60s.

'We've turned over our perception of what's real to algorithmically driven systems ....' Like Nielsen ratings?

We certainly weren't allowed to play a greater part then. Are we now, really? So, if once again we've allowed corporations to spoon-feed us, in return for watching advertising ... how's that any more of a doomsday than the first time around?

Stephenson's a remarkable storyteller, but this comment is not helpful. If things look darker, that doesn't mean they are. TV and newspaper headlines were essentially algorithmically predictable; 'darker' stuff wound up in magazines.

Today, anyone with the time can see it all ... unfiltered by news 'experts'. If it looks manipulated, it always was.

"The business of the journalists is to destroy the truth, to lie outright, to pervert, to vilify, to fawn at the feet of mammon, and to sell his country and his race for his daily bread." - John Swinton, 1880



Also, title should be fixed to say "Neal" instead of "Neil".


Part of the problem is the participation trophy and helicopter parent mentality.

Who are you to tell me my beliefs are wrong? I can be anything I want, I am qualified to be an expert on something because I want to be. The Earth is flat, who are you to tell me otherwise?

The lack of humility is the real doomsday machine.


Funny you mention flat earthers. I think it's because of social media that the number of times they're mentioned dwarfs the actual number of flat earthers alive today by something like a few hundred thousand to one. Just one example of something getting blown way out of proportion.


I worry that the online coverage of things like flat earthers might feed the phenomenon. I used to think flat earthers were just some internet nonsense and that you'd be hard-pressed to find someone offline who believed it, but then I found out a long-time coworker does. One of my neighbors is also a climate change denier, and it took a long time for us to hit that topic, so I wouldn't be surprised to find more people I know are like that.


I believe it is the desire for hidden truths, so that one may begin to understand oneself. Mysticism used to play that role for many: "Here is a practice you can do daily to find hidden truths about yourself." But in the modern world, conspiracy theories fill that need.


https://www.nbcsports.com/washington/washington-wizards/kyri...

Even if it's a minuscule number, coupled with anti-vaxxers, climate change deniers, etc., it leads to pain points in society; even having to prove them wrong takes a toll.


Why do you feel compelled to prove flat earthers wrong? Most of them don't even earnestly believe it, but bait people like you into bad faith arguments for sport.


I would assume the same holds true for the anti-vax movement. While I constantly hear about it online, I don't know a single parent with that mindset.


We know there are plenty of anti-vaxxers because they sign certificates of immunization exemption, and we keep track of immunization rates. In WA, about 4% of kindergarten kids have non-medical immunization exemptions [0]. In some communities, like Vashon Island, the exemption rate runs upwards of 25% [1], and the non-vaccination rate reached 44% of kindergartners in 2012 [2].

[0] https://www.doh.wa.gov/Portals/1/Documents/Pubs/348-682-SY20...

[1] https://www.thenewstribune.com/news/local/article26252716.ht...

[2] https://www.nytimes.com/2019/05/20/health/measles-outbreak-w...


To a lesser extent there’s also an anti-multilevel-marketing movement.

Groups whose sole purpose is to trash-talk something are loathsome, even if the target deserves it.


All of the hot-button issues are massively blown out of proportion to the actual numbers. That's the salient thing you start to notice when you start drilling into the facts and disregard the rhetoric. We're wound to 11 at each other's throats fussing about the 0.1% issues.


I'm vaguely interested in the article, which this wasn't, but wow Slashdot's gotten really scummy since I last visited.


I left shortly after Malda. The days of conversations with Bruce Perens and Wil Wheaton are long gone; it’s mainly Infowars fans now.


Too bad Malda sold it before social media valuations like Facebook's.

Slashdot had some commercial awareness (e.g., there were a lot of predictable "take my money!" memes about products), but it wasn't thinking along the most profitable lines, and maybe never could have. (I've been wondering how much/often dotcom founders of successful startups reshape the world in their image, and I doubt that Malda saw the world with a Zuckerberg eye, for example.)

Another big property that was bought by the same people was SourceForge. Which didn't fetch anywhere near the price of GitHub.


I would agree. I used to really enjoy that community, but it's gone downhill. I think the moderation rules were brilliant, and effective. The problem is, there needs to be a core set of users setting good norms, and they've all left. What's there now is kind of a mess.


What's interesting is that your observation directly relates to the article. "We've turned over our perception of what's real to algorithmically driven systems that are designed not to have [paid] humans in the loop, because if humans are in the loop they're not scalable and if they're not scalable they can't make tons and tons of money."


Except that Slashdot isn't using engagement-driving AI algos in the sense of FB, YT, or Twitter. It's a simple capped-upvotes system.
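
Roughly, a minimal sketch of that mechanism in Python; the clamp range matches Slashdot's familiar -1 to +5 comment scores, but the function itself is my illustration, not their code:

    # Slashdot-style capped scoring: each moderation nudges a comment's
    # score, but the result is clamped to the [-1, +5] range.
    def moderate(score: int, delta: int) -> int:
        return max(-1, min(5, score + delta))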

And it's still a dumpster fire.

If you want intelligent discussion, you go elsewhere. HN is pretty good (though with numerous glaring exceptions). Any part of Reddit that's not ruthlessly moderated (and/or still highly obscure) fares poorly. I've got small groups on Mastodon and Diaspora which work fairly well.

But clue flees idiocy and mindless conflict hard. And if your system breeds either, intelligence won't loiter long.


Indeed! I'm actually writing about that in another community right now.

I am reluctant to put it here, but maybe just a taste?

[flame suit on :D ]

I disagree. Honestly, one can make tons of money, and incorporate the humans too. That can all scale, though the margins are not as good. So what? There is still tons of money to be made, just not the absolute maximum tons of money to be made.

The difference between those two is profound when it comes to the potential impact of social media on society today.

A while back, some friends and I did an experiment. What we did was set the expectation that people need to treat other people well, assume good intent, and in general, don't be a dick.

Anything else is a risk.

Being a dick comes with risk. It may be small, it may be significant. It's hard to know, by design.

See, when we define how shitty people can be to one another, they end up being exactly that shitty, plus a little.

Give people targets and they will meet them. Give them rules, and they will game them.

So then, make the only clear target real, human discussion, with mutual respect and consideration, and as time passes the community will see more and more of that.

What does this have to do with the discussion we just started? Hang with me for a minute. I have to lay some foundation first.

Here is the money part:

Agency. When someone says something shitty to us, we are in control of how we respond. The number one response in most communities is righteous indignation. Number two is to return it with equally shitty discussion.

And from there, it's a meta mess. They said, you said, he said...

Everyone wants "the platform" to handle this for them, to protect them, to enforce "the rules", etc... Ask people this, and they will go on at length about it all.

You can also give them scenarios, and they will also tell you indirectly, "the rules are great, but I need an exception, because other people are shitty to me."

This behavior dynamic is chronic. Rule based systems won't fix it. Machine learning won't either.

Now, what happens when people understand this is on them?

Someone calls us an asshole. They are strangers; we don't know them, and they have established little to no credibility. Or they are trolls and are seen as such. Maybe they are clowns.

With agency in play, we have many options!

Righteous indignation. Laughter. Ignoring them. Scoring their insult ("C+, weak sauce, try harder").

On Quora, I will often apply this. Someone goes off, and I will say, "You should edit that away before someone reports it. Then we can try again."

Doing that is sometimes amazing. I've had some awesome exchanges with others that started out on a very bad basis.

We are in control of our end of the conversation. We manage whether it goes bad or not.

So much of the angst we see related to all this social media boils down to the fact that many people do not recognize their agency in dialog, and they expect the system to take care of them, relieve them of the burden of actually interacting with other people.

Now, where it comes to the real / unreal, there is a dynamic in play I find extremely disturbing, and this is the part I am reluctant to put here, so again, just a taste:

Our body politic is ill. Our mainstream media is not delivering a robust, real, inclusive dialog on policy. Tons of Americans have come to realize this, and it's a growing problem for the big money, establishment media.

A couple examples:

Where is the anti-war movement in the US, and what coverage does it get on mainstream media?

What percentage of news and opinion produced by mainstream media in the US do so from the labor, poverty point of view?

When is the last time any of us saw a reasonable debate on mainstream media about Net Neutrality, Copyright, and the basic problems and dynamics we are seeing where society, the law, and technology intersect?

There are many more. I just picked a few to make a point here.

All of those topics suffer from conflict of interest. The owners of mainstream, big media see those topics as threats to income, and coverage, editorial influence, all reflect that reality.

Now, ordinary people actually have an interest in those topics, and do discuss them. Where?

Social media.

In addition, there are tons of great discussions going on here, on mailing lists, forums, Reddit, you name it. They are happening, and they bubble up to platforms like Twitter and FB, which actually host a ton of "secret groups" focused on specific topics, causes, and ways to action and activism.

(one does wonder how secret those are, but that is a discussion for another time)

Right along with that we have less informed dialog, conspiracy theories, crazy people, angry people, etc... And we've got some level of nefarious activity too. Many think "the Russians" are producing content that is actually impacting elections, for example. I am not going to pronounce that true or false, just recognize that the sentiment is out there.

Every single day our mainstream media delivers "Fake News." The easiest to see is war-related propaganda going out as news. It's constant.

Every single day, our mainstream media talks about all "those other people" on social media producing "Fake News" and what are we going to do about it?

See the trouble?

We can't resolve the "Fake News" on social media problem without also recognizing the mainstream media problem we have today. Without that real conversation, the solutions will be incomplete, untrustworthy, and draconian, and the people overall will not see the outcome as just or the process as legitimate.

That's rough, but real.

In addition, we've got many Americans looking for strong left economic policy reform. Often that gets lumped in with "Fake News", which makes a mess of things, in that draconian solutions are seen as chilling to otherwise reasonable, permissible dialog.

That's it, I will stop there, save for one last observation:

In a given community, one set up with the expectations I detailed above, or even some reasonable set of expectations that are clear and do not get in the way of most reasonable, real conversations, even pointed ones:

All it takes is a small percentage of regular contributors modeling the norms and the options to spread them throughout the membership, basically inoculating it against most basic meta and trolling bullshit.

The resulting conversation is healthy enough to filter truth from bullshit very reasonably.

Why?

Because people learn how to be wrong, how to handle attacks without exacerbating things, and how to be secure in their conversation and seek enlightenment, not just validation and ego stroking. Spread that, and bullshit is displaced or quickly marginalized.

(And often, bullshitters get sucked in, will own it, and begin to contribute too. It happens more than you think. And it happens because the door for it to happen is held open and appeals are made: come on, join us, we are all better for it...)

No machine learning needed. And scaling happens by natural community distribution. The very big platforms, Twitter for example, will be a mess because of their size. But what nobody realizes is that Twitter feeds tons of other communities. If those discussions are robust, the laughingstocks on Twitter will be known as such, and that bubbles right back to Twitter!

But there is a catch! It will do that for established big-money interests the same as it will for nefarious politicians and bad actors trying to manipulate things.

So, what is worth what?

Do we really want robust, real, honest dialog, or is this really just about a cleaner, managed one like the status quo interests would prefer?

No joke.

How do the people make the big money? That costs money to convey. Also no joke. Hard won information.

But, I have put enough here for others to go and do the work. If they do it, they can get the positive outcomes.


I'm not sure I agree with everything you said, but I thank you for making a well thought out post that isn't yet another overly cynical "delete FB" post.


That's how to play it :D

Honestly, we don't need to delete FB. The core of FB is a good thing. I'll leave my comment as it is for now, food for thought.


I think your comment is gold. I've also been thinking about the construction, organization, and mechanisms of online communities in general; most of my comments were posted to another community, but I posted some related ones on Hacker News as well. I'll start dumping comments here, and I hope you find them interesting. Feedback is appreciated.

1. First, I've been observing various online trolling cultures and fringe reactionary political movements for a few years, mostly on 4chan, though I think it applies to the web in general. While I don't agree with their intentions or motivations, I find it an extremely interesting phenomenon, and pretty post-modernist in itself.

* The cyclist behind an anti-cyclist Facebook group

https://news.ycombinator.com/item?id=17731220

* Recommended comments (read the link below before you continue):

https://news.ycombinator.com/item?id=17731914

Here's a story: a biking hobbyist used his spare time to launch an influential anti-bike movement on Facebook. He posted numerous sensationalized commentaries, posters, and memes to demonize bikers, including various "data" and "charts" for the "bikes are a great threat to traffic order" propaganda, and occasionally advocated violence towards bikers in the form of half-jokes. He also proposed political reforms to introduce restrictive and/or discriminatory traffic laws for bikers. Soon his campaign went viral and attracted a large number of supporters who drive cars. Not all bikers follow the traffic rules, and some problems surely exist, but they are only as bad as the average day-to-day traffic problems and usually not seen as something people need to act upon; this campaign exaggerated the issue, radicalized and polarized it, and created a lot of hate. What was his intention, then? His only intention was laughing at the fact that he was able to manipulate car drivers easily and even create an enraged populist political movement with a mere keyboard.

We hardly ever know who's behind populist political movements online, but this is a well-documented case, and I believe it can serve as a representative sample that largely reflects the origin, emotions, and motivation of the 4chan-like "for-the-lulz" trolling culture. I can also immediately point out the similarities to populist reactionary political movements: nationalist propaganda used many of the tactics the biking troll used. Also, Russia is accused by the media of running an army of bots on social media, pro-LGBT, anti-LGBT, pro-life, pro-choice, religious fundamentalist, atheist, nationalistic, liberal, etc., with the intention of creating polarized arguments and social conflict in the United States.

On the other hand, the "for-the-lulz" trolling culture itself, even if it can be used as propaganda, is not propaganda in itself. It is a form of complicated web culture. Its basic definition is "doing pointless activities to make others suffer, and laughing at it", but it stands for a lot of things. It has a reactionary, or anti-establishment, element, as previously mentioned, but it also includes: (1) culture jamming, similar to the 70s counter-cultural movement (see those hoaxes in Fight Club); (2) an understanding of the art of exploiting a vulnerability in the system for one's own advantage, similar to parts of the hacker culture; (3) a somewhat nihilist, post-modernist component, sometimes artistic (The Game was popular on early 4chan: once you remember playing it, you lose it); (4) a reflection of the worry, helplessness, and alienation of modern life, and an entertainment against them using humor and hate; and finally (5) a populist movement. Its core logic is: everyone, look how evil/degenerate this thing/person is, justice needs to be done! Sometimes it can be a legitimate popular movement and serve the right purpose, e.g. the Occupy movement, but other times it's just a bunch of angry mobs lynching people, for example via a list of people on Encyclopedia Dramatica.

In particular, once I grasped these points, I realized things got pretty stupid when, for example, the mainstream media started reporting Pepe the Frog as a hate symbol in 2016. Yes, it has connections to reactionary political movements, but in itself it is only a symbol of the trolling culture. When the media reports on it, it actually gives it a platform of exposure and encourages trolls to use it as a genuinely political symbol, and personally I think that makes the issue worse.

What I'm trying to say is: to solve problems like this, we must study and understand the entire machinery of it, including the underlying socio-economic basis of the conflict, using sociology and psychology, and provide social solutions to the problem. However, the only proposals the mainstream media (or we, as members of online communities) have right now are "Twitter and Facebook should ban hate speech" and "we should not feed the trolls", which hardly touch the deeper problems.

If you want to understand more about the trolling culture and its mechanism, I recommend "This Is Why We Can't Have Nice Things: Mapping the Relationship between Online Trolling and Mainstream Culture", by Whitney Phillips, published by The MIT Press.

Here are related comments on this issue that I have posted before:

* Twitter has barred hundreds of accounts for posing as liberal activists

My comment: https://news.ycombinator.com/item?id=18239216

* RIP Culture War Thread

My comment: https://news.ycombinator.com/item?id=19231994

This one is mostly a repetition, but it includes one more book recommendation. I also recommend reading the original web article; it covers a lot of the related discussion.

2. I have also been trying to characterize online communities in terms of their medium, feedback mechanisms, and filtering mechanisms. An online platform has a lot of content, but you only see a subset of it; the filtering mechanism determines which subset. We could say the first generation of online community was the mailing list, Usenet newsgroups, and web forums. It only had a simple filtering mechanism: grouping the content by topic, plus a kill file (a people/keyword blacklist, sometimes shared). It worked relatively well and had the advantage of producing diverse topics and free-flowing discussions, but there was nothing to stop low-quality discussion, so a good signal-to-noise ratio had to be carefully maintained. Another problem was the lack of feedback: when nobody replies to a post, it can mean any of five different things, see Warnock's dilemma (https://en.wikipedia.org/wiki/Warnock%27s_dilemma). Another important issue: the entire forum basically stops scaling when the number of users exceeds 10,000 or so. The information is too much for any individual to handle.
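
As an aside, the kill file was trivially simple; a minimal Python sketch (the patterns here are made up):

    # Minimal kill-file filter: hide any post whose header matches a
    # blacklisted author or subject pattern (the classic Usenet mechanism).
    import re

    KILL_PATTERNS = [
        re.compile(r"^From:.*troll@example\.com", re.IGNORECASE),
        re.compile(r"^Subject:.*MAKE MONEY FAST", re.IGNORECASE),
    ]

    def visible(header_lines):
        """Return False if any header line matches a kill pattern."""
        return not any(p.search(line)
                       for line in header_lines
                       for p in KILL_PATTERNS)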

Platforms like Reddit or Hacker News can be seen as the second generation: they introduced voting as a feedback mechanism, which simultaneously (partially) solves the problems of content recommendation, restriction of low-quality content, and scaling. The "karma" system of Slashdot also counts, but it has an innovative "community moderation" scheme, so I guess it's generation 2.5. Overall, the usefulness of voting as feedback is great, and it is now an integral part of many communities, but not without its own problems. Voting creates a strong incentive to collect upvotes; people may stop voicing insightful contrarian views that would otherwise have generated an intellectually interesting discussion in a newsgroup. Also, voting seems more suitable for link/media recommendation than for discussion: many subreddits have thousands of votes on links but single-digit comment counts.
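
For concreteness, the time-decayed vote ranking these sites use is commonly approximated like this (the gravity exponent is a community estimate for HN, not an official figure):

    # Score/time ranking as commonly attributed to HN: more points rank
    # an item higher, while age steadily decays its position.
    def rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
        return (points - 1) / (age_hours + 2) ** gravity

    # A 10-point story 1 hour old outranks a 50-point story 12 hours old:
    print(rank(10, 1), rank(50, 12))  # ~1.25 vs ~0.42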

Finally, around 2007, came social media. Social media is unique because it leveraged the underlying human social network as the filtering mechanism, with a somewhat "decentralized" model of publishing. Unlike a forum, where everyone posts to a single directory, on Twitter you post to your own account, others simply follow a number of people, and interesting posts circulate by themselves. The system is highly effective; I love social media. Unfortunately, there are problems. First, the content has to be short, so no long discussion is possible. This is usually not a problem in itself, but social media slowly killed most forums in the process and became the center of the web, forcing people to use it for discussion even though it is an awful platform for that purpose. Its structure is conversational: posts don't form a single thread but a dozen microthreads. A replies to B, but this reply is usually ignored by C, who may be missing crucial context. Furthermore, its volatile nature causes many problems: A mentioned X three hours ago, and now X is missed by most readers. Or A posted X and Y, but often only one of X/Y is seen by other readers, while the other carries important context. Also, social media has the strongest filter-bubble effect of any platform, which may create echo chambers and increase political polarization, among other effects.

But think about it. Voting as a filtering mechanism dates to the early 2000s (Slashdot's moderation, then Digg and Reddit); social media is a thing from 2007. Can we design a new generation of communities with real innovation, a better alternative "filtering mechanism"?

3. I've noticed a lot of online communities seem to go downhill. What is going on? I have a comment on that:

https://news.ycombinator.com/item?id=19958700


Just letting you know I saw this.

Back later when I can consider it properly. You dropped a lot; great, but I have to plow through it, and I will.

We have a common interest apparently.


There were always trolls and lots of people that weren't quite trolling but weren't contributing much.

(I have a lower 5 digit account over there)


The commitment to "no comments deleted ever" was great 'til racist trash became the most common theme in comments. Nowadays their moderation system just doesn't scale, and IMO the page would be significantly improved by a filter on the n-word.


> PCMag: The first glimpse we get of Bitworld is this newly sentient virtual mind conceiving its surroundings, its being, and its ability to think and create and learn and adjust in the midst of endless chaos. It was fascinating stream-of-consciousness writing to encapsulate cognition. That must've been a really tricky part to write.

I'm looking forward to reading this, for sure.

But I gotta say that Greg Egan did a good job of this at the beginning of "Diaspora", which he published in 1997.

Also interesting is Kevin MacArdry's The Last Trumpet Project, which goes further. Read-only access to the past permits bringing everyone who ever lived into digital heavens. It came out in ~2010.


"My big picture view of this is that broad access to the facts is empowering to everyone who can get it. The broadening of that power base to include more people comes at the expense of the oligarchs of the world, who are always going to be able to reap power, wealth, and benefits from keeping everybody else in the dark."

This is a very interesting quote to me, because it's the type of quote that "the oligarchy" uses to label people like Alex Jones as wing-nut conspiracy theorists.


Why is social media the poster child for algorithmically driven content? If anything, social media's response to scale hasn't been to use algorithms to drive content from the top down, but to crowdsource that information. Yes, Facebook might rearrange your newsfeed, but the information it shows is still driven by the likes and shares of your network. On places like Reddit it is driven by upvotes, or by hashtags and retweets on Twitter. The algorithm is secondary to the user group.

Thus if we really are going to blame social media, it's because we aren't more selective about the social networks we embed ourselves in. But this, of course, is a problem that has existed since the dawn of humanity.


The problem is that the algorithms still cause problems. Social media is also the poster child because in many cases we could actually observe the transition: early Twitter and Facebook, as far as I know, had entirely chronological timelines, and then changed that to "increase engagement", with all the side effects that has. Just some examples:

- boosting things that are already popular increases their measured popularity even more (see the sketch after this list), which both

a) over-emphasizes things that get reacted to quickly, which means, e.g., quick outrageous headlines get even more of an advantage over in-depth coverage than they already have by human nature, and

b) over-emphasizes popular sources over niche ones.

- Facebook actually guesses at audiences, which means liking something can actually decrease its reach with the intended audience (the "Facebook mom problem": if you post, e.g., about your latest scientific paper and your mom likes it immediately, because that's what moms do, Facebook assumes it's family content and is less likely to show it to the professional contacts who are the actual audience)

- for commercial content, Facebook intentionally limits reach among people who explicitly choose to follow you unless you pay for visibility
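
A toy simulation of the first effect, a Polya-urn-style rich-get-richer process (the weighting is illustrative, not any platform's actual algorithm):

    # The feed surfaces items in proportion to their current popularity,
    # so early random leads compound over time.
    import random

    counts = [1, 1, 1]  # three posts of equal intrinsic quality
    for _ in range(10_000):
        # an engagement-ranked feed effectively shows (and earns a like
        # for) an item with probability proportional to its count
        item = random.choices(range(len(counts)), weights=counts)[0]
        counts[item] += 1

    print(counts)  # typically very unequal, despite equal quality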


> The algorithm is secondary to the user group.

Neither is secondary; both are essential to the socio-algorithmic evolutionary process.

The algorithms are intended to optimize toward some user group behaviour. As the user group's choices influence the behaviour of the algorithms, so too do the algorithms' choices influence the behaviour of the user group.


Technologically, it seems to be pretty speculative and hand-wavy. Consider:

> One thing we do know about quantum computers is that once we can get them to work, which is no small task, they will be unbelievably fast and increase available computing power by orders of magnitude compared to traditional computers

> The near-future world of Fall is full of familiar buzzwords and concepts. Augmented reality headsets, next-gen wireless networks, self-driving vehicles, facial recognition, quantum computing, blockchain and distributed cryptography all feature prominently.


relatedly: https://www.nealstephenson.com/why-i-am-a-bad-correspondent....

That is not such a terrible outcome, but neither is it an especially good outcome. The quality of my e-mails and public speaking is, in my view, nowhere near that of my novels. So for me it comes down to the following choice: I can distribute material of bad-to-mediocre quality to a small number of people, or I can distribute material of higher quality to more people. But I can’t do both; the first one obliterates the second.


But his books are quite mediocre too (apart from the Wikipedia-article digressions he adds to them), so what does that make his speeches?


I love his books...


Is this an ad for a book on HN?


Url changed from https://news.slashdot.org/story/19/05/25/0117238/neal-stephe..., which points to this.



