A fair argument could be that programming is now much more organized and can be done more efficiently toward a certain goal, but I don't think greater efficiency is necessarily a concern in this context: most concepts that can be programmed can also be re-programmed just as quickly. If Facebook is willing to expose a button to de-program a user, they should be capable of doing that.
This might make us inconsistent as individual human beings, in the sense that what I believed yesterday differs from what I believe today, or the next hour, because of Facebook. But later on some other programming might ease this confusion and convince you that nature is just as inconsistent as you are.
It's not that someone else is programming you to vote for candidate X instead of candidate Y, or that you now want to buy brand Z; it's that we are rewiring our brains to follow short-term feedback loops in how we act within society. We no longer go places because we want to experience something new, but to share the simple fact of going somewhere with our networks and get validated by them. I think that's the bigger issue here.
This isn't a new phenomenon. We see it in companies' branding strategies: the more you see a brand on the people you respect, the more likely you are to buy it.
A couple of examples.
1. Michael Jordan and Nike. Air Jordans became the shoe to own in the 1980s.
2. Apple gear. Apple designs its gear to be instantly recognizable. If you buy a MacBook Pro, say, you are extending their brand, unless you somehow hide the Apple logo.
You also see this with brands like Abercrombie and Fitch, American Eagle, Hollister, and RVCA, where they sell clothing with the brand front and center on it.
So in 2018 the game is being played on Instagram and Facebook instead of the college campuses and high schools of the 1980s.
I once bought a MacBook Pro while providing tech support for an organisation that used them exclusively. One of the first things I discovered when setting it up was that it takes three layers of silver gaffer tape over the Apple logo to completely hide its glow.
How can I put this into better words? Maybe an analogy from the inventor of Twitter and Medium:
Social media/the internet is like a simple AI that gives you more of what you like. Now, imagine you see a car crash and you look, along with everyone else--you all stop to look. It's natural, it's inescapable. The internet notices this behavior and decides that humans want more car crashes--so it tries to supply more car crashes.
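That car-crash analogy can be sketched as a toy feedback loop: a feed that learns only from what people stop to look at, not from what they would endorse on reflection. Everything below (the content types, the click probabilities, the reinforcement rule) is hypothetical, chosen just to make the dynamic visible:

```python
import random

# Toy model of a feed that reinforces whatever draws attention.
content_types = ["car_crash", "essay", "cat_photo"]

# Hypothetical probability that a user stops to look at each type.
# Everyone looks at a car crash, so the signal is strong even if
# nobody actually "wants" more of them.
click_prob = {"car_crash": 0.9, "essay": 0.3, "cat_photo": 0.6}

# The feed starts out showing everything equally often.
weights = {c: 1.0 for c in content_types}

random.seed(0)
for _ in range(10_000):
    # Pick what to show, proportional to current weights.
    r = random.random() * sum(weights.values())
    shown = content_types[-1]
    for c in content_types:
        r -= weights[c]
        if r <= 0:
            shown = c
            break
    # Reinforce whatever got a look.
    if random.random() < click_prob[shown]:
        weights[shown] += 1.0

total = sum(weights.values())
share = {c: weights[c] / total for c in content_types}
print(share)  # car crashes end up dominating the feed
```

The rich-get-richer dynamic means the most-looked-at content takes over the feed regardless of whether anyone would choose that mix deliberately, which is exactly the point of the analogy.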
That's the kind of programming we're talking about. Humans are vulnerable and have psychological weaknesses. Social media is tailored to attack certain weaknesses because it's lucrative in an attention economy where advertisements = $$.
The dopamine hit after you see an upvoted post is kind of like the dopamine hit when you are popular in real life, except less powerful.
That said, I think you are touching on all of the problems but missing most of them:
-Yes, it's about short-term loops vs long-term loops. This is not about control; it's about what is a good way for people to spend their time. It's been shown time and time again that engaging in long-term activities has a great positive impact on your sense of fulfillment, and even if the whole internet is dangerous in this sense, FB is definitely among the worst content suppliers out there, combining addictive attention-grabbing with mostly shallow and unchallenging content, I think.
-Yes, it's about centralization: democracy is great, and democracy works with as much plurality as possible. Sure, mass media have existed for a while now (radio was crucially exploited by all sorts of dictatorships in the 20th century, by the way), but it has never happened before that such a large fraction of the attention of a whole population was controlled by a single private organization. This is obviously dangerous.
-Yes, it's about efficiency. As other commenters pointed out, it's also about the sneaky way in which content arrives as if it came directly from your friend rather than from a recognised news or content outlet. This catches us hugely more off guard, to an extent I think we are not prepared to handle.
-Yes, it's also probably about the human relations between people. Here's where I'm not actually sure it's making things worse, but it's definitely making things different, just like the telephone or cellphones did. This is why we should at least be careful, especially with children.
Yeah, of course humans will still be humans in the end, I mean come on. It's just about deciding what is best for us.
See the puppet images we made in this media literacy guide: https://github.com/nemild/hack-an-engineer
Ideally, we are individually able to realize this — and cut some of these strings.
The trouble is that like any other virus, people build up an immunity to the tricks on established platforms. You're not going to see another Kennedy or Roosevelt using the techniques they used, and it'll even be difficult to pull what Cambridge Analytica did now that that cat's out of the bag.
If you want to remain as effective as them you need to keep pushing and capitalize on the next mass media platform before people realize what you're doing.
Similarly, Americans will ask for a Kleenex even when they don't actually mean a Kleenex-brand tissue. I didn't even think twice about this until I moved to Europe and it was pointed out how weird that was.
It's true the big tech companies make it easier to target segments of people by collecting data, but pervasive mass manipulation is hardly new or unique to big tech companies and generally it doesn't need nearly the amount of data that they collect. People are generally similar enough to each other psychologically that you can paint with broad strokes like demographics and still hit most of your target (and interestingly, some of the most viral memes have no particularly targeted message).
"Politics in a democratically elected aristocracy ...
Let's pretend the voting system were a binding function from the collection of ballots to the people in charge (which is not even the case):
The number of bits encoded per ballot, per period between elections, is the bit-rate of that voter's feedback/input. The bit-rate of the output is the number of bits (entropy) in (changes in) laws. The democratic grade (the dimensionless ratio of a voter's bit-rate to the bit-rate of law changes) is nearly 0.
Hacking the electorate is only profitable if we don't get to vote on ideas (at a high bit-rate) but on aristocrats.
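A rough back-of-the-envelope version of that bit-rate argument, with every number here hypothetical (number of candidates, how often laws change, and how much entropy a law change carries are all guesses for illustration):

```python
import math

# A ballot choosing one of k candidates carries at most log2(k) bits.
candidates = 4
bits_per_ballot = math.log2(candidates)  # 2.0 bits

# One election every 4 years, expressed in seconds (approximate).
period_s = 4 * 365 * 24 * 3600
voter_bitrate = bits_per_ballot / period_s  # bits/second of voter input

# Hypothetical output: ~500 law changes per year, each worth ~1000 bits
# of entropy (a deliberately generous guess).
law_bitrate = 500 * 1000 / (365 * 24 * 3600)

# The "democratic grade": a voter's input bit-rate over the output bit-rate.
democratic_grade = voter_bitrate / law_bitrate
print(f"{democratic_grade:.2e}")  # on the order of 1e-6, i.e. nearly 0
```

Even with generous assumptions for the voter, the ratio comes out around one in a million, which is the sense in which the grade is "nearly 0".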
<insert the aristocrats joke here>
Personally, I would prefer that a fix to "hacking the electorate" stem from requiring additional information and transparency, rather than from setting up mechanisms to regulate what is allowed to be seen, i.e. censorship. Adding information seems much less prone to abuse than prohibiting information in an open democratic society.
What is the purpose of campaigning, if not that?
Meanwhile, the world burns...
“It’s exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology,” he said. “The inventors, creators [...] understood this consciously. And we did it anyway.”
But Palihapitiya says,
“Because your behaviors, you don’t realize it, but you are being programmed. It was unintentional, but now you gotta decide how much you’re willing to give up”.
Does Palihapitiya not realize that the programming he decries was built into the system deliberately by its designers? Or is Parker wrong?
Limiting paid propaganda would help, but that's sacrilege in our ad/media driven economy, so the ultimate lesson is probably just "don't be poor".
What does he mean when he says "check-boxing" and how does that relate to the "implications of social media"?
Does he mean "approval-seeking" and the idea that this is what Facebook uses (e.g. "Like" buttons) to "engage" visitors (while gathering data about their lives, personalities, etc.)?
Anyone have a different read?
Granted, I'm doing some pretty heavy interpretational lifting there, but that optimisation loop is something I see as common to productivity, video games, social media, and many other obsessions of the nerdosphere. It feels good to give your optimiser a constant stream of feedback, even if trivial.
Of course, now our optimisers are being manipulated by much larger optimisers that run on engagement metrics rather than dopamine. Where did the intuition to build these myopic hyperoptimisers come from? Surely not the same people who grew up minmaxing character attributes or building spreadsheets for optimal daily task efficiency? When you gaze into the optimiser...
Weird that we managed to turn society into a GAN.
Not to give too much away, but the main way that the antagonist in the book preys on people is not by hacking their devices, but rather by hacking them through complex, technology-assisted social engineering.