There are a lot of interesting societal effects caused by the internet and social media, many of them very problematic. But when was this time in the past when people agreed about facts? Was it during the period when we lionized science after WWII and before the 80s but during which smoking was still healthy? Or was it when the Church decided what was true? Or was it when Hearst could print what he wanted and make it so? We have been in the Miasma since the dawn of civilization.
Neal Stephenson had a hand in sparking my interest in cuneiform and I've read many tablets myself (mostly Ugaritic and some Akkadian). The same effects are evident all the way back to the beginning of recorded history, and likely predate it. The Miasma is the result of any indirect epistemology combined with human proclivities. There is a certain scale and swiftness that is novel, but that doesn't make it a brand-new thing: much as flash crashes on the stock market are a new side effect of high-frequency trading, the crash of 1929 happened just fine without it.
The Miasma is us, not the tech we use to connect us.
>But when was this time in the past when people agreed about facts? Was it during the period when we lionized science after WWII and before the 80s but during which smoking was still healthy? Or was it when the Church decided what was true? Or was it when Hearst could print what he wanted and make it so? We have been in the Miasma since the dawn of civilization.
I see this fallacy quite often. Just because something existed in the past doesn't mean we don't have, say, 10x more of it now.
It's like arguing that nuclear weapons are nothing special because we've had spears and arrows forever...
Scale matters, degree matters, automation matters, and technology is a multiplier. At some point amassed quantity becomes a new quality.
A government could have someone followed in the past, for example.
That was true even in ancient Rome. But 24/7 tracking of everybody, everywhere, made possible with mobile phones, GPS, facial recognition, plate recognition, and so on, is a totally new ballgame, not available even to the Stasi or KGB. A dictatorship with those tools at hand can do much more damage than one that can only tap a few phone calls with manual labor, or have someone followed. We can't dismiss it as "governments could always track someone if they wanted to".
Similarly, we can't dismiss the effects of the internet just because we had the yellow press, and smoking, before...
Fortunately we have a real-world example to demonstrate the effect of technology on surveillance: China. Let's see if they ever manage to become a democracy.
On this topic I've noticed something quaint on HN: people making excuses for China's "special" situation or even arguing against democracy itself.
> On this topic I've noticed something quaint on HN: people making excuses for China's "special" situation or even arguing against democracy itself.
I don't think it's quaint or HN-specific, especially if you talk with people outside of tech. People make excuses for China because the official narrative ("we're Good Guys who do Good Things, they're Evil People who Hate our Freedom") is bullshit, and are also impressed with their ability to execute large-scale infrastructure projects, something the West no longer can. On the other side, they look at how democratic governments are primarily a tax-funded, continuously running standup comedy.
But the conclusion from that isn't "China good, democracy bad"; you won't find many people who would actually prefer for their government to be replaced by the CCP. People are just asking, "could we stop with the bullshit good/evil narrative, try to understand them and their motivations instead, and perhaps figure out how to do infrastructure again and make our politics be less of a shit-show"? I'd say it's a fair sentiment.
I doubt that anyone's concerned about the average Chinese citizen and whether they're evil. The discussions are always about the government.
The fact that "we" (whoever that might be) are not the good guys doesn't actually influence whether the Chinese government are the bad guys. Their behavior makes them the bad guys.
>On this topic I've noticed something quaint on HN: people making excuses for China's "special" situation or even arguing against democracy itself.
I'm of several different minds about this.
a) I'm against many things the Chinese government is doing (religious persecution, surveillance, the social credit system, censorship, etc.).
b) I prefer populations to find their own way, not to be subjected to foreign intervention. If they want western-style democracy they should bring it themselves.
c) Foreign interventions are always or almost always hypocritical and self-serving. Especially when the same foreign powers play friends with dictators, or support a worse regime at the same time they condemn another. There's also a track record of leaving places worse than they were (Iraq, Libya, Syria, etc.).
d) The foreign population is easily manipulated as to the truth on the ground, especially if a country is the "enemy du jour". It's easy to find dissidents and make it look like the whole nation agrees with them on this or that matter, when it doesn't. Especially if the people you find are culturally more like you (e.g. westernized, in this case), which could make them even less representative of their compatriots, but more likable to your population. Bad as the CCP can be, it can still be the case that many Chinese like it -- after all, the country does well, they are rising out of poverty, they might feel safer, be more conservative, etc. Heck, some Americans can't fathom how half their country can possibly like Trump, and they'd claim to know whether the average Chinese is OK with the regime?
e) China especially might be worse off if the CCP doesn't hold, especially with a sudden transition. It already had very bloody civil wars for the better part of the 20th century. Imagine the forces and power struggles in a 1.4-billion-strong country if there's a power vacuum...
f) Democracy is a cultural issue too. E.g. there's the saying "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety". But that's a tradeoff people might still knowingly make (plus "temporary safety" is a weasel phrase - what's to say they don't get real long-term safety from the trade-off?). In any case, some cultures might not prefer democracy (either because it's all relative and down to preference, or, if we want to take an "our side knows best" stance, because they're not mature enough to want it). Decorative or not, for example, I can't understand why the British don't behead their royalty and be done with it. But there are people there who genuinely love it (even coming up with BS arguments, like tourism, for all the resources the royalty steals from the nation).
g) Many of the complaints against China (not the ones about democracy and domestic government abuses), like "IP theft" and unfair "government businesses/subsidies", have been leveled against Japan, Taiwan, and Korea too. Japan especially drew all the same concerns, in the same tone ("stealing our IP", "their government funds their businesses", "they can only make copies of our stuff"), with discussions in Congress, heated debates in the press, etc. There was also the whole "they're buying us" panic in the late 80s/early 90s, when the Japanese bought US companies, Hollywood studios, record companies, etc. Most of those concerns are of the "those are good when we do them" variety (e.g. the trillion-dollar Detroit bailout, huge agriculture and telecom subsidies, and so on) or the "we did them in the past and they benefited us greatly, now that we don't need them anymore they are bad" variety (e.g. https://foreignpolicy.com/2012/12/06/we-were-pirates-too/ ).
Your points are mostly related to Chinese internal politics. Even if we look away from their human rights abuses, say because the US also has a bad track record, China is using its economic power to bully other countries and companies into self-censorship and into ignoring those abuses. They're essentially trying to legitimize their unsavory actions, and they're at least partly succeeding.
So they are harming our democracies, never mind their citizens.
Well, "our democracies", self-censor all the time at many scales (from government to news to entertainment) in matters that suit us, eg. Saudi Arabia, so there's that.
Hollywood, for example, doesn't cosy up to China because it's bullied, but to sell more movie seats there. I'd say the same is true in Apple's case.
> Decorative or not, for example, I can't understand why the British don't behead their royalty and be done with it.
As long as we have a Monarch, our PM can never become President. Even though the power of the Monarch is theoretical, this still makes for an important distinction because appearances shape the way people think about things.
Just recently we had the PM successfully taken to court for "misleading advice to the Queen".
>As long as we have a Monarch, our PM can never become President. Even though the power of the Monarch is theoretical, this still makes for an important distinction because appearances shape the way people think about things.
I don't know, we have a PM here and not a President, and it's not really that great either.
Are humans capable of detecting and understanding what that amplification means? Even though our pattern-matching is a wonderful skill, it doesn't always serve us well. We exaggerate local effects. We are known to be terrible at understanding risk. And I've even heard that nine out of ten dentists are bad at statistics.
I'm not saying we shouldn't ask the question. I'm saying we need a way to answer it that factors out human perceptual error.
Feedback loops are shorter, news reports are increasingly useless[0], and our current age is somewhat unique: reporting in the past didn't have a strong economic system attached whose sole purpose was making it less truthful, less accurate, and more disagreeable. I'm, of course, talking about funding media through advertising impressions at article granularity.
--
[0] - Gwern makes a really good argument that with the amount of people we have on the planet and how information moves near-instantly around the world, you can plausibly assume that all news reports are flukes, one-in-a-million events, "rare datapoints driven by unusual processes such as the mentally ill or hoaxers are increasingly unreliable as evidence of anything at all and must be ignored." https://www.gwern.net/Littlewood
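For a sense of the scale Gwern is pointing at, here's the arithmetic as a quick sketch (the inputs are round numbers of my own choosing, not Gwern's):

    # Littlewood-style arithmetic: how many "one in a million" events
    # happen somewhere on Earth each day? All inputs are rough guesses.
    population = 7.7e9            # people, circa 2019
    events_per_person_day = 100   # noteworthy-ish observations per person per day (guess)
    p_fluke = 1e-6                # "one in a million"

    daily_flukes = population * events_per_person_day * p_fluke
    print(f"~{daily_flukes:,.0f} one-in-a-million events per day")  # ~770,000

A global news pipeline that can surface any of those will never run out of freak events to report.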
> before the 80s but during which smoking was still healthy
That's perhaps US-specific, but it's certainly a strange view. Europe was issuing anti-tobacco advice and government health warnings in the fifties. The rise in lung cancers was noted pretty much immediately after the war. The US was late to that party, but even so was clear on the link between smoking and health in the sixties. We had anti-smoking lessons on the effects of tobacco in UK schools in the 1960s, perhaps earlier. I sat through them in the 1970s - they weren't especially effective, but they were there.
That's despite all the free tobacco sent under the Marshall Plan to ensure Europe was good and addicted to Virginia leaf.
> Or was it when the Church decided what was true?
So what, the middle ages? Or the Victorian resurgence?
One question that recently came to me regarding technology is: "What happened to all the hope and optimism?".
Growing up as a kid in the 80s, I remember the development of personal computing as exciting, and especially with the explosion of the Internet in the 90s, the possibilities being envisaged by everyone were utterly intoxicating. Finally we would have a world that connected disparate cultures, a bridge of communication that would help us understand each other and appreciate different points of view. A true global community. For those of us coming of age during this period, "surfing the web" was exciting because every week you'd learn about or discover something new, and there was a highly experimental, creative, and fun tone to the whole thing.
Instead now we have social media that is largely just a venue for people to yell without listening, political entities to peddle misinformation that serves only their own interests, an entire industry sector dedicated to exploiting personal information to target ads, a culture of consumption rather than creation, and the centralisation of online life into the hands of a small corporate oligarchy. This isn't what most people were hoping for in the 90s, though a few were warning of potential risks.
Like the author, I recently read Edward Snowden's book, and in one of the early chapters he talks about the experience of growing up with the Internet in the 90s in a way that resonated closely with me. Fast forward to the end of the book, which is absolutely chilling, and it feels like you're reading a dystopian science fiction novel, except it's real.
The Internet of today is almost unrecognisable from the early days. Stepping back I just throw my hands up and wonder what the fuck happened? Is there any hope that it can become better, or is it just going to be a further descent into cynicism? And what, if anything, can we do about it?
- "Eternal September". If you treat it as a cultural movement, like say "punk" or "hippies", then it was always at its best slightly before you joined and ruined by the larger number of people who joined after you and didn't get it.
- End of "end of history". The period after the fall of the Soviet Union was hugely optimistic. "Progress" had won, and was going to continue to win, forever. This was brought abruptly to an end on the 11th of September 2001, and much of the evils of the modern world stem from the subsequent war. The internet was a propaganda battleground for the first time.
- "Selling out". The great achievement of the internet was building a telecoms system without a billing system. There was outcry when the first ad spam happened on USENET. Ultimately it kind of destroyed the service. Since then so many of the other problems of the internet are caused by people doing things that are inconvenient or harmful to others not out of internal malice or carelessness but because it's profitable to do so.
Early adopters are always a more fun crowd, because "being a fun person" and "being an early adopter of stuff" are related personality traits. So don't give up! Become an early adopter of something new, or if there's nothing worthy, invent it :-)
> One question that recently came to me regarding technology is: "What happened to all the hope and optimism?". Growing up as a kid in the 80s, I remember the development of personal computing as exciting...
Back then almost nobody (that is, O(0) people) cared about those things, so the tech could expand in peace.
Nowadays, computing is ubiquitous and on the front page of the paper, so the people who feel threatened by it attack even the smallest green shoots of progress.
Megacorporations took control. Just as happened with the telephone, once a big company has enough control, it starts to dictate the rules of the game, startups have to play by those rules, and everything becomes boring.
Now whenever someone announces an exciting product, you can be sure it connects to the cloud, requires a login, mines your information, and will be obsolete in less than a year... alright, maybe we've just become more cynical with age.
What we need is to quantify the negative impact that negative storytelling has on our wellbeing and direction. That will help us understand the importance of positive heuristics for the future, as in the '60s-'90s.
We should avoid a negative vision of the future. But we also need a positive vision of the future to attract us and guide us.
Stewart Brand had one. Many techno utopians did. But now so few intelligent people believe it is morally acceptable to be optimistic. After all, climate, race, capitalism...
> But now so few intelligent people believe it is morally acceptable to be optimistic. After all, climate, race, capitalism...
We have to consider the moral implications of our social and economic structures, as well as our impact on the only place in the universe we have to live, if we are going to have a positive vision of the future.
The greatest challenge here is not being silenced by those who have a vested interest - logical or otherwise - in preserving the status quo with regards to these things.
Well, my argument is that we need a positive vision without considering how exactly we will fix our social, economic and environmental issues. That's my point. We can't wait till we figure that out to have a guiding optimistic vision of the future. We simply have to assume we will figure it out.
Second, and with respect, I would suggest that your rhetoric, which is quite common (and, by the way, represents the vested interests of the status quo intelligentsia), is doing a lot of silencing. It's quite pervasive.
I'd argue that contemporary liberal group norms are much more silencing than, say, the oil industry.
> Second, and with respect, I would suggest that your rhetoric, which is quite common (and, by the way, represents the vested interests of the status quo intelligentsia), is doing a lot of silencing. It's quite pervasive.
> I'd argue that contemporary liberal group norms are much more silencing than, say, the oil industry.
Do you not see the hypocrisy in this statement? You're attempting to discredit my entire viewpoint by painting it as "liberal intelligentsia rhetoric" - whatever that means.
And as to your second point, the oil industry has a long record of covering up evidence of climate change, silencing victims of pollution/fracking, manipulating political process in order to continue destroying the planet, and so on.
I'm not saying Harvey Weinstein shouldn't have lost his job, but the oil industry is certainly responsible for more silencing than "liberal norms", by orders of magnitude.
>When I described this to Amy, she responded with a magnificent rant that was something like “this is a romanticized utopian ideal about a thing that was inhabited by socially inhibited, white male nerds who consider themselves too smart to be misogynistic but, well, often are.”
This kind of thinking is itself a major cause and component of the Miasma.
The miasma works by provoking the human minds it contacts to produce more miasma. It's self-replicating, and its most potent constituents operate on the human need to chant their own team's fight chant whenever an opportunity arises, and especially when someone else's chant is heard. The guy who writes Slate Star Codex would call that an example of the blue tribe, and you can bet that every other tribe that hears it will spread a little of their own color of miasma back to get even. In the end, you get the brown miasma, that is toxic to everybody, and provokes its own production.
I'd argue it's not naturally self-replicating. It just seems that way because of how platforms promote content with algorithms that have been optimized for user engagement. Go read the book Zucked, if you haven't already.
Memes on the internet are already self-replicating in a way that mirrors evolution fairly well. Who's to say there aren't other mind parasites that can jump from host to host and evolve to be even more infectious? In Germany there is a current problem of copy-paste texts being shared on WhatsApp that behave almost like a virus. One host can infect dozens of others.
Those copy-paste texts are memes in the original sense. Oddly enough, I first read the term in Neal Stephenson's "Snow Crash" before I was aware of the origin.
The concept of a meme is a unit of information or idea, akin to a gene in living organisms. Successful memes are good at spreading and being replicated. Much like with genes, they alter over time and can become better at being picked up and spread further.
It's more a way of looking at the spread of ideas than an actual "thing" but it encompasses everything from lolcats to recipes to economic theories to religions. A successful idea is one that inherently makes people want to share it.
>In Germany there is a current problem of copy-paste texts being shared on WhatsApp that behave almost like a virus. One host can infect dozens of others.
People have been forwarding those funny emails for decades.
... as is its doppelgänger, the "red pill" or "manosphere" or "alt-right" or whatever it's called this week crowd. Other side, same coin.
The other day I came up with a dorky acronym for social media and what it does: the CRASER. CRASER stands for Craziness Amplification through Stimulation of Engagement with Reactivity.
The idea is that primitive emotions like fear, hate, anger, outrage, etc. are the things that drive engagement. In other words: emotional reactivity. We've programmed computers to moderate our discourse with the goal function of driving engagement, so they selectively amplify signals that push those buttons. The result is a craziness laser that is causing everyone to adopt increasingly extreme, irrational, and provocative belief systems and styles of expression. Like in a laser, the various forms of madness feed on and intensify each other.
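To make the mechanism concrete, here's a toy simulation of that feedback loop (the "reactivity" scores and the update rule are entirely invented for illustration):

    import random

    # Toy model of an engagement-optimized ranker amplifying reactive content.
    # Posts have a fixed "reactivity" in [0, 1]; more reactive posts are more
    # likely to be engaged with, and the ranker boosts whatever gets engagement.
    random.seed(0)
    posts = [{"reactivity": random.random(), "score": 1.0} for _ in range(1000)]

    for _ in range(50):  # feedback rounds
        feed = sorted(posts, key=lambda p: p["score"], reverse=True)[:100]
        for post in feed:
            if random.random() < post["reactivity"]:  # engagement event
                post["score"] *= 1.1                  # ranker rewards engagement

    top = sorted(posts, key=lambda p: p["score"], reverse=True)[:10]
    print("mean reactivity, top 10: ", sum(p["reactivity"] for p in top) / 10)
    print("mean reactivity, overall:", sum(p["reactivity"] for p in posts) / 1000)

The top of the simulated feed converges to the most provocative content, even though the ranker never looks at content at all - only at engagement.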
In the end there will only be Nazis advocating a new holocaust, Neo-Leninists advocating new gulags, and people who just hate everyone equally and want to destroy the world for the lulz.
For what it's worth, I remember the early Internet too, and in my memory it all really started to go to hell when gamified social media appeared: algorithmic timelines, up/down votes, likes, etc. That's when the CRASER fired up.
I don't read her as saying that the nerds were invalid.
I think what she's saying is that if you were part of this group, the early Internet seemed welcoming and inclusive, but if you weren't, it was the opposite. From within a community, it's easy to believe it's inclusive!
For whatever it's worth, even though I am a white male nerd myself, I never really felt welcome in "hacker spaces" and such. It must be a lot worse for people who are not white or male. And I'll believe her when she says that that was her experience with the early Internet. (Or, well, today's Internet.)
> I think what she's saying is that if you were part of this group, the early Internet seemed welcoming and inclusive, but if you weren't, it was the opposite. From within a community, it's easy to believe it's inclusive!
Arguably, the people who weren't "in the group" at that time were people who weren't interested or didn't have access. The Internet was neither global nor cheap nor popular back then.
As for the misogyny angle, I feel she either is fixated on a problem and projecting, or this remark was included to signal allegiance with the social justice crowd. As it is now, the comment reeks of anti-intellectualism.
> It must be a lot worse for people who are not white or male.
I don't think it was back then, because back then people didn't care much about it, and a lot of communities of post-university Internet were text-only pseudonymous communication anyway. As for what's today, increasingly it's being white or male that makes you out of place in a hacker space.
> a lot of communities of post-university Internet were text-only pseudonymous communication anyway
So is this. Which is enough for you to spread the old, tired, harmful, and incorrect trope that "people who are not in tech are just not interested". A text-only discussion of how women or minorities don't belong because they don't want to belong is unwelcoming in itself.
> As for what's today, increasingly it's being white or male that makes you out of place in a hacker space.
You are twisting my words. For one thing, I don't feel "out of place" among non-white non-male people. People are people, I'm not afraid of them because of the color of their skin or what I guess their genitalia are like. For the other, all the hacker spaces I've seen where I didn't feel welcome had large white male majorities.
Yes. And I don't know what your race or gender is, nor do I care, nor would it make a difference if I knew.
> Which is enough for you to spread the old, tired, harmful, and incorrect trope that "people who are not in tech are just not interested".
At this point in time, not only is it not a tired trope, it's pretty much an obvious truth. Programming today has zero structural barriers to entry, minimal capital requirements, and an unprecedented amount of affirmative action targeted at all kinds of minority groups. If in 2019 you aren't programming, you're either not interested in doing so (a fine choice!), or can't (due to economic or health constraints).
Still, I was talking about the times when the Internet was a peculiar thing only particular types of nerds, discriminated against in the physical world, ever found interesting. Getting on-line, or into programming in general, required more effort and money - you had to convince yourself or your parents that a PC and a modem and future phone bills were useful expenses - and didn't yet offer obvious paths to riches. If you were in there, it meant you were interested and had wealth to spare. That means, obviously, that a lot of people were excluded.
But if the point you're making is that any group with barriers to entry will exclude someone, then I don't see the point of making that point.
(You'll also notice that all the talk of the tech being unwelcoming started only after commercial Internet exploded and some of those high-school oppression targets made a shit ton of money and influence. Once tech became seen as the easiest path into money and fame, people started asking "how come the population of tech workers isn't uniformly distributed across all the characteristics you could think of", and people who were underrepresented followed that with "how can we fix it so we too get a piece of the pie?".)
I see some people on-line have a peculiar definition of what it means for a field to be welcoming - not only does it have to remove all the barriers to entry, it also has to bend over backwards to make the demographics uniform. It's true that the tech industry, until recently, didn't do the latter.
With a massive change like this, there must be some sort of societal shift behind it. You don't have to buy the article's thesis regarding the concrete reason. But there was something going on in society that changed young women's minds.
But "society pressured me into losing interest" is not the same as "I wasn't, our could not have been, interested in the first place". So yes, many women "aren't interested" in the trivial sense of "we had enough discussion threads in HN and elsewhere signaling that they shouldn't be interested, and finally this perception stuck".
> If you were in there, it meant you were interested and had wealth to spare.
If we agree that in many cases that wealth came from parents, there is no reason to assume that young women had less of it than young men. Unless there were factors like parents saying things like "computers are for boys". From the article above: "In the 1990s, researcher Jane Margolis interviewed hundreds of computer science students at Carnegie Mellon University, which had one of the top programs in the country. She found that families were much more likely to buy computers for boys than for girls — even when their girls were really interested in computers."
If you were in there, it meant you were interested (check), your parents had wealth to spare (check), and you were very likely a boy.
Anyway, all of this has been rehashed many times before, and we're unlikely to change each other's minds.
> For one thing, I don't feel "out of place" among non-white non-male people
This seems like your own biases are your worst enemy then. So a predictor of your comfort is the non-white, non-maleness of the group? I wonder what this predictor would be called if it was black men that made you feel uncomfortable?
You really feel equally "in place" among a group of Hmong women as you do among a group of Honduran women as you do among a group of Nigerian women, but you can't get settled in a group of Russian women?
Yes, I'm familiar with denying the antecedent. I assumed you weren't just making the "I'm comfortable around non-white, non-male people" statement as a logical declaration in a vacuum, and not part of any point.
> For whatever it's worth, even though I am a white male nerd myself, I never really felt welcome in "hacker spaces" and such.
I never really felt welcome in my university sci-fi club, and only went a couple of times. Looking back, I realise it's because I was cripplingly shy rather than through any fault of the other people in the club. (In fact some of my good friends now were in the club at the time!)
I think this kind of thing is more common than usually acknowledged, especially in this kind of discussion. It's one thing to feel out of place or unwelcome, it's quite another to lay sole responsibility for your feelings on the people around you.
I agree that shyness is a huge factor. I don't think that I wrote or suggested that I'm laying responsibility for my shyness on others.
Shyness is a thing, and it can be hard to overcome. And harder still if you are shy and you are not a white male in a white male dominated community. If the community wants to be inclusive, it has some responsibility in communicating who is welcome and what is offered to help newcomers overcome their shyness.
Seems like "Amy" missed out on that unique moment in history. I joined in the early Slashdot days and the occasional mailing list, and was fascinated how sophisticated the discussions were. Knowledgeable people went great lenghts to prove their points and didn't hesitate to tell others they were wrong. Up-and downvoting comments worked. Usernames were pretty much anonymous and and arguments had to stand on their own merit. I felt like finally there was a medium where reason and facts could prevail. Finally!
But Amy's world has won, the social web is a popularity contest, discussions are group-think and emotion-based, "white male nerds" can be ridiculed. Tragic.
Hmm. When was the golden age of Slashdot? Because I was active in the late 90s up to the early 2000s, and it was already showing the faultlines of the present day. Posting of shock images (goatse was invented there). "GNAA". Trolls got downvoted, but they maintained their own culture down there in the negative comments.
It was always a popularity contest. The groupthink was so bad that adequacy.org could very effectively parody the obsession with Linux and AMD, as well as other tropes.
It looked like a golden age because "we" had a community that was "ours", but the only reason it was "ours" was that a lot of people invisibly bounced off it.
If there was a golden age, it ended somewhere between the Columbine shooting (remember /.'s response to that?) and 9/11. The response to that by Adequacy parodies so many comments before and since : http://www.adequacy.org/stories/2001.9.12.102423.271.html
> Of course the World Trade Center bombings are a uniquely tragic event, and it is vital that we never lose sight of the human tragedy involved. However, we must also consider if this is not also a lesson to us all; a lesson that my political views are correct. Although what is done can never be undone, the fact remains that if the world were organised according to my political views, this tragedy would never have happened.
Now that you mention it, I too remember the problems. To an extent I do consider them growing pains for a new medium, people trying to break the system in a childlike or hacker way. I also enjoyed the rawness of the new medium and accepted the "noise", as so many +5-rated comments were infinitely better than what you were able to read in a newspaper or magazine.
Trolling was in its infancy and at least labeled as such. Today it's organizations successfully influencing elections through social media. And I do think the discussion of events was more nuanced and objective than today; the parody you've linked is actually a pretty good outline for a comment, better than most 140-character tweets.
>When I described this to Amy, she responded with a magnificent rant that was something like “this is a romanticized utopian ideal about a thing that was inhabited by socially inhibited, white male nerds who consider themselves too smart to be misogynistic but, well, often are.”
Hey, there were some black male nerds on the internet then too.
The internet was better in the past and that had nothing to do with the types of people using it. It could have been used by alien circus clowns and it would have been better. Why? Because it hadn't been weaponized yet. Nobody important realized in the 90s that there was going to be an upcoming war for attention span, for control of the narrative, for monitoring of the population, or for industrial/state-sponsored espionage. And it all was going to happen on the internet.
So if you ran across a discussion on X, you could poke around, find some scholarly articles on X written by, well, scholars. Not by scholars who were paid off or were looking to become the next internet superstar. So no, people didn't magically agree on facts, but there wasn't this endless chasm of belief-reinforcing bullshit that there is today, either. You could just go look stuff up.
There's another, larger discussion about whether it's good or bad, how to make a silk purse from a sow's ear, and the rest of it. But a difference in quantity can result in a difference in quality. Simply having more and more people come online and do things changed the nature of the net.
Well said. And weaponisation aside, usage and consumption of the whole new medium was entirely better, because advertising and marketing companies still hadn't gotten hold of it.
The first half of that book was so much more enjoyable (to me) than the second half.
I understand the limits and personal opinions that go along with this sort of semi-satirical/semi-predictive fiction. Still, the bits about identity verification, acceptance of "fake news" and slander, personal feed editors, and the fragmentation of society based on choice of info feeds straddled the line between believable and crazy/entertaining.
I could have done with a whole novel set in the world prior to the simulation bits.
> I’ve deliberately disengaged from some of it through deleting my Facebook account, limiting my Twitter usage to broadcast only, and trying to use LinkedIn in a productive way even though the UX seems to be set up to purposely inhibit you from using it in a way that doesn’t suck you into the LI vortex.
Unless you have deleted your Twitter, you have not disengaged from the Miasma. It is far more intellectually toxic than Facebook ever will be. I honestly think it might be the worst thing that's ever happened to western thought.
Cynicism, I believe, is the enemy of thought. Trump, for those who loathe him, is a result of the cynicism of the left in not voting for Hillary. It's not because the right was too strong! As they say, "The perfect is the enemy of the good."
In the Diamond Age, Stephenson alludes to this, in his criticism of those focused on hypocrisy. After all, you can only be a hypocrite if you try to pursue moral actions.
The Internet works fine for getting good information on fast-changing things - like programming. Otherwise, I'd stick to libraries (the book ones).
Unfortunately, while we're staring at the "Miasma", the real miasma has been wreaking havoc in the real world - climate change. And the Internet is literally contributing to this miasma now with online video traffic for entertainment burning up data centres and networks around the world.
Let's do a rough estimate. A home router uses about 10W. It's unlikely an ISP router will use more per stream, so if there are 6 en route, we're talking about 60W. For the server, even if the CDN used a dedicated NAS for each user, we're talking about ~30W. Doubling all that to account for overhead, it's about 200W.
For comparison, a Tesla Model S uses about 200 Wh/km, so a mile driven uses about the same energy as streaming 1.5 hrs of video. A round-trip to a movie theater 15 miles away = 45 hours of streaming video.
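For what it's worth, here's that arithmetic as a runnable sketch (the wattages are the same rough guesses as above, not measurements):

    # Back-of-the-envelope: energy of one video stream vs. driving.
    # All inputs are rough guesses mirroring the estimate above.
    ROUTER_W = 10     # per-stream share of one router (guess)
    HOPS = 6          # routers en route (guess)
    SERVER_W = 30     # per-user share of CDN/server power (generous guess)
    OVERHEAD = 2.0    # double everything for cooling, switching, etc.

    stream_w = (ROUTER_W * HOPS + SERVER_W) * OVERHEAD  # ~180 W, rounded to 200 W above

    WH_PER_KM = 200         # rough EV consumption (~320 Wh/mile)
    round_trip_miles = 30   # movie theater 15 miles away
    trip_wh = round_trip_miles * 1.609 * WH_PER_KM

    print(f"stream: ~{stream_w:.0f} W")
    print(f"round trip: {trip_wh / 1000:.1f} kWh = {trip_wh / stream_w:.0f} h of streaming")
    # -> ~9.7 kWh, i.e. roughly 45-55 hours of streaming depending on rounding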
You're forgetting about networks' and data centres' energy requirements. A recent report estimates that the carbon footprint of digital technologies is currently about 4% of global emissions, and in a worst case scenario by 2025, digital technologies may amount to 7% of global carbon emissions. For reference, global emissions as of 2018 grew to 37 billion tonnes CO2 equivalent.
>>Digital technologies now emit 4% of greenhouse gas emissions (GHG), and its energy consumption is increasing by 9% a year. [...] only one form of digital use, online video, generates 60% of world data flows and thus over 300 million tons of CO2 per year. This use is far from being “dematerialized”. On the contrary, it represents 20% of the greenhouse gas emissions of all digital devices (use and production included), and 1% of global emissions, i.e. as much as Spain.
> You're forgetting about networks' and data centres' energy requirements.
No, I'm literally counting those requirements. Hence routers + server (+100% for overhead, like cooling).
> A recent report estimates that in a worst case scenario by 2025, digital technologies may amount to 7% of global carbon emissions.
Why do you trust that report? The numbers don't even make sense; just as an example, dividing the consumed VoD data by the watched VoD time, you get 24Mb/s, yet they say on the same row, "Average Bitrate : 3 Mbps". That's an 8x difference!
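The cross-check is just a division. The report's actual volume/time figures aren't quoted in this thread, so the inputs below are invented to reproduce the ~24 Mb/s result:

    def implied_bitrate_mbps(data_exabytes: float, watch_hours: float) -> float:
        """Average bitrate implied by a (data volume, watch time) pair."""
        bits = data_exabytes * 8e18   # exabytes -> bits
        seconds = watch_hours * 3600
        return bits / seconds / 1e6   # -> megabits per second

    # Invented inputs chosen to yield ~24 Mb/s; against a stated
    # "Average Bitrate: 3 Mbps" on the same row, that's the 8x mismatch.
    print(implied_bitrate_mbps(data_exabytes=1.0, watch_hours=92_600_000))  # ~24.0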
>>No, I'm literally counting those requirements. Hence routers + server (+100% for overhead, like cooling).
I would not say you're "counting", you're rather doing a back-of-the-envelope calculation with eye-balled numbers.
These estimations are difficult by nature. I don't "trust" the report per se, but they do provide sources, from which I can inform myself. Which is more than I can say about your calculations, which come without sources.
Digital tech is one of the fastest growing carbon polluters, and data centres alone account for almost 50% of the emissions. Now, the specific attribution to online video may be less clear-cut, but the trends have been replicated in multiple studies. You may also be interested in this recent report: https://www.sciencedirect.com/science/article/pii/S095965261...
Can you point me to this inconsistency?
>>Why do you trust that report? The numbers don't even make sense; just as an example, dividing the consumed VoD data by the watched VoD time, you get 24Mb/s, yet they say on the same row, "Average Bitrate : 3 Mbps". That's an 8x difference!
> I would not say you're "counting", you're rather doing a back-of-the-envelope calculation with eye-balled numbers.
Yes, of course. My point is that I wasn't forgetting them.
> I don't "trust" the report per se, but they do provide sources, from which I can inform myself.
And did you?
> Now, the specific attribution to online video may be less clear-cut
That's the thing, though. We know that streaming a video is a very low-power activity, since we can do so with a very low power device. And I added 100% of overhead, whereas the PUE (power usage effectiveness) of a real datacenter is actually about 1.09 nowadays.
So any report that tries to tell me Google or Amazon, who spend millions on DC energy improvements, are wasting orders of magnitude more energy, is not a serious report.
A very big issue is that they assume a linear correlation between transferred data and power use. For the network part that may be reasonable, but for the DC it absolutely is not. Mining cryptocurrency, for example, takes a huge amount of energy to produce a message of a few KBs.
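To make that concrete, here's a toy contrast between the two attribution models (all coefficients invented for illustration):

    # Two ways of attributing data-centre energy to a service.
    def linear_model(gb: float, kwh_per_gb: float = 0.1) -> float:
        """What such reports assume: energy strictly proportional to bytes."""
        return gb * kwh_per_gb

    def activity_model(gb: float, fixed_kwh: float = 0.05,
                       marginal_kwh_per_gb: float = 0.001) -> float:
        """Closer to DC reality: a fixed always-on share plus a small
        marginal cost per byte actually served."""
        return fixed_kwh + gb * marginal_kwh_per_gb

    for gb in (1, 10, 100):
        print(f"{gb:>3} GB  linear: {linear_model(gb):6.2f} kWh"
              f"  activity: {activity_model(gb):6.3f} kWh")

For byte-heavy uses like video, the linear model charges all the fixed costs to the bytes and inflates the estimate; for byte-light, compute-heavy uses like crypto mining, it does the opposite.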
>>A very big issue is that they assume a linear correlation between transferred data and power use.
Right, so they do have to make lots of assumptions (same as all reports I've seen that try to estimate carbon footprint) - e.g., about the type of network, the device, the energy mix supplying DCs, etc.
For instance, only 2% of network connections in a country like Germany go over fibre; the rest is copper. In Italy lots of people outside major cities use LTE for regular Internet access - absolutely insane in terms of energy use.
>>any report that tries to tell me Google or Amazon, who spend millions on DC energy improvements, are wasting orders of magnitude more energy, is not a serious report.
You're talking past the point here, seems to me. The report is _not_ claiming that any specific technology is "wasting energy" in terms of efficiency - it poses the question of whether this energy is spent "wisely", seeing that supposedly the world is trying to reduce global emissions, and the consumption of say DCs or online video is growing constantly. Surely, not an unreasonable question to be raised, don't you think?
Amazon may be efficient but their energy comes mostly from non-renewable sources. On a scale of 1-10 how important do you think this use of dirty energy is?