Meh. Just because a big part of the internet really is like the "torture chamber" of Clockwork Orange doesn't mean the internet of the early 2000s is gone. I used to hang out in some forums as a young teenager, and these forums are still there, and active. All of the IRC channels I was active in in 2002 are still there. I can still use ICQ to send messages to people I've now known for over 20 years. You can still spin up a personal homepage with nearly the same amount of work as in 2001 (at least in Germany, you had to be very careful regarding legal issues even back then). It may be buried under a ton of shit (read: a ton of tweets), but the internet of 2000 is still very much there.
It's like growing up. Even if you're immortal you're only growing up once.
Is it? My daily rounds almost never encounter anything like that.
On a side note: this "good" internet may be hard to find, but don't forget that for the majority of the population, even in highly developed countries, it was still uncommon to have internet access at home 20 years ago, so the good internet was "hard" to find back then, too. This is somewhat thought-provoking, as it seems to indicate that the "good" parts of the internet were, and are, elitist circles.
I remember people from different generations walking in the streets, talking and laughing together without knowing each other. Something very rare for Paris :).
> let’s take a pseudoscientific journey
It's just words to most people, I think.
Quite a low bar to reach, we can do it!
I interpret this as meaning the internet needs a good reason to be sad, but can be happy for no particularly big reason. This seems like a good thing.
Also I'm not sure how/if they filter out bot traffic. Twitter, as I understand it, is mostly bots and fake follower accounts, with human traffic being a tiny fraction of the data. Kinda like the old days of Usenet, where 99% of the traffic was computers sending binaries to each other and only a tiny fraction was human-to-human text discussion. I would imagine the happiness sentiment of a mere repetitive shell script could distort the data such that the average is tightly compressed around the neutral 5. If we assume a nuclear plant meltdown should have scored a 1 on any realistic and reasonable human scale, then observing only a tenth of that drop might indicate Twitter is, to one sig fig, maybe ninety percent inhuman repetitive bot traffic.
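To make the back-of-envelope estimate concrete, here's a minimal sketch of that dilution model. The numbers and the function name are my own illustration, not anything from the hedonometer itself: it just assumes bots post at a constant mood, so any real human swing gets scaled down by the human fraction of traffic.

```python
# Hypothetical dilution model: bots post at a constant neutral score,
# so they shrink any real human mood swing. If humans alone would move
# the average by `human_drop` but we only observe `observed_drop`, the
# implied bot-like (mood-invariant) fraction of traffic follows directly:
#   observed_drop = (1 - bot_fraction) * human_drop

def implied_bot_fraction(observed_drop: float, human_drop: float) -> float:
    """Fraction of traffic that must be mood-invariant for a real
    human_drop to show up as only observed_drop."""
    return 1.0 - observed_drop / human_drop

# The comment's numbers: a meltdown "should" move a human-only scale by
# ~4 points (from ~5 down to ~1), but the observed dip was ~a tenth of that.
print(round(implied_bot_fraction(0.4, 4.0), 2))  # 0.9, i.e. ~ninety percent
```

Of course this assumes bots are perfectly neutral and humans react at full strength, which is exactly the assumption the next paragraph pokes a hole in.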
To some extent twitter is workforce automation for echo chambers and for signalling group affinity and membership; I'm not sure the mood of an environment like that means anything when related to the real world, which would also explain why horrible events produce only small signals. Surely tens of thousands of deaths and nuclear contamination have nothing to do with trying to fit in with the cool kids (or at least with the tiny subset of the population who think they're cool kids), which is a counterexample to my mathematical model in the paragraph above.
1) Obviously, Twitter is not representative of the whole internet and its users. Twitter has a particularly activist user base, so it's no wonder "happiness" went down after Trump got elected.
2) It misrepresents the performance of the sentiment analysis method used:
This is based on sentiment detection research done in 2011 explained here: https://journals.plos.org/plosone/article?id=10.1371/journal...
They used a fairly simple lexicon-based sentiment analysis approach which, even in 2011, was not close to the state of the art. They first create a sentiment/subjectivity dictionary by letting MTurkers rate words in different categories such as "happiness", "laughter", "greed", "hate", etc. For the final "positivity" score they add up the scores of the matching words in Twitter posts.
This is a simple and efficient approach but produces fairly unreliable results w.r.t. negation, valence shifting, etc.
Even in 2011, machine learning approaches were usually much better.
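The negation problem is easy to demonstrate. Here's a toy sketch of that lexicon approach with a tiny made-up score table (the real labMT lexicon has roughly 10,000 crowd-rated words on a 1–9 happiness scale; these four entries and their scores are invented for illustration):

```python
# Hypothetical per-word happiness ratings, 1 (sad) .. 9 (happy).
HAPPINESS = {
    "happy": 8.3, "laughter": 8.5, "disaster": 1.8, "meltdown": 2.0,
}

def text_happiness(text: str) -> float:
    """Average the scores of words found in the lexicon; ignore the rest."""
    scores = [HAPPINESS[w] for w in text.lower().split() if w in HAPPINESS]
    return sum(scores) / len(scores) if scores else 5.0  # 5.0 = neutral default

# The bag-of-words weakness: negation is invisible, so both score 8.3.
print(text_happiness("i am happy"))      # 8.3
print(text_happiness("i am not happy"))  # 8.3 as well
```

Since "not" isn't in the lexicon it contributes nothing, which is exactly the unreliability around negation and valence shifting mentioned above.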
And indeed, Trump was elected by winning the majority of electoral votes, yet the data represents it as a sad day, while I'm pretty sure most of his voters were happy.
And let's toss this question back at you: why does California routinely ignore all of California outside LA and San Francisco?
Similarly, why did all the polls and candidates (including Republicans!) other than Trump entirely ignore the rust belt? That's why his win came out of nowhere, after all.
Obviously those areas are less populated, but why should a minority not have representation? Larger picture, how do you manage a union of states when not even states but a handful of cities hold all the political influence?
If we switch to a popular vote, a handful of coastal cities will have a lock on the federal government.
In the long run, it's called the United States because states are sovereign political entities, and that union works because small states have political influence. Instead of eliminating the electoral college, Democrats would do far better if they got back to their roots as the party of the working class.
A lot of your other points are pretty random and presumptuous. You didn't even agree that California should obviously count when considering who gets more votes nation-wide.
If you want to have this discussion, I'd urge you to engage in a more digestible and exacting approach, rather than trying to ramble your way through the topic.
That is all
I doubt there's ever been a candidate who has won votes totaling more than 50% of the voting-age population, so there's no reason to single out the results of the 2016 election.
Wikipedia confirms that, https://en.m.wikipedia.org/wiki/2016_United_States_president..., 46% Republican, 48% Democrat.
The Supreme Court legalizes gay marriage in the US,
Five different terrorist attacks in France, Tunisia, Somalia, Kuwait, and Syria occurred on what was dubbed Bloody Friday by international media. Upwards of 750 people were either killed or injured in these uncoordinated attacks.
Then, after going through the effort of showing graphs and numbers to make the rankings seem legit, the author describes how the scoring is done, and it seems incredibly skewed by the addition of arbitrary events that each add a point to the score. So basically, if the author wants a day to get a higher score, they can just look into it more to find more events so that its score goes up. If an event happened that day that contributed to happiness, the hedonometer score already reflects that; it makes no sense to double dip by giving the score extra points, since that day's happiness may have had multiple small contributions rather than just one big one.
I don't even really agree with how the hedonometer score was integrated. I think to really see how "happy" an individual day is, it would be better to subtract that month's baseline/average happiness from the day's happiness rather than using the raw score. This gives small happy days in a happy period an advantage over big happy days in a less happy period. I guess if you're just trying to say that a day is happy without trying to tie any event to it, the raw data is fine, but I think that's much less interesting. There is probably some fancier math that could be done to weight the scores on a curve that would be fairer than just subtracting the baseline, if you really wanted to get into it.
In the end they come to the conclusion that same-sex marriage being legalized is the happiest day on the internet, but if you look at the first graph, it barely looks like an out-of-the-ordinary day; you wouldn't notice it if it weren't being pointed out. This really makes it feel like the author came into this article wanting that to be number one and made everything else fit around it.