> I know that you didn’t ask for this job. You didn’t ask for this role in society. None of you, not one of you, wants to think about the many people that can be affected by one fucking perfectly normal bug or mistake in the technology that you built. And this is one of the reasons we keep our heads down. No one became a geek because they wanted to be the center of political attention.
> That just happened.
> You don’t get to choose. You don’t get to choose what era of history you live in and what that era wants to do with you. And this is a moment when it’s all up for grabs. That’s what it means to say we’re on a burning planet. And what it means to say that we don’t have neutral ground is that you’re at the center of that fire. You set it. You’re one of the people that set it. You’re one of the people that tend it. And everything you do, the changes you make over the next months and years are going to chime down decades and centuries and shape the lives of people you will never know [...]
This is the wrong class of people to be guilt-tripping. It's as dumb as trying to guilt-trip someone on Wall Street. Not that guilt-tripping works on any class of people; it just produces short-term fixes under pressure, with costs down the line.
Techies need to be guided by social scientists, psychologists, anthropologists, and politicians local to different cultures. These are the people on the front lines dealing with the consequences. That is the only route to a better place.
I’m not arguing against having more thought put into what we do, but taking it too far could easily put us into a collective rut that is nearly impossible to get out of.
You literally answered yourself. The key is that "important" != "earns money or has power".
It's not "politics doesn't exist". It's that the advantages of declaring a political cease-fire in many contexts outweighs the cost of not being able to advance your personal agenda. This is especially true in many engineering and technology contexts, where creating a product that generates business value for your company is orders of magnitude more important than your political concerns. Even if you think your politics is extraordinarily important, the answer is likely to donate a portion of your salary to political groups rather than politicizing the workplace.
Sometimes the political is really personal.
("Apolitical" is like "leaderless" or "self-driving": there really is a leader or a driver or a politics, it's just that everyone agrees to pretend that they don't exist or their intermittent intervention isn't crucial)
Every possible opinion is represented out there somewhere, and there will be someone who honestly believes, eg, that shutting down the electricity grid for 24 hours to make a point about Trump is justified. Or a point about immigration. Or a point about socialism, or what have you.
What is usually meant by 'politics' is that the people in charge of workspaces should be at the exact other end of the spectrum from these hypothetical activists. If anything even tangentially threatens the safe operation of the system, addressing it should take priority over political issues.
Republicans are real people who really make up approximately half the voters in this country. The "threat to democracy" here is the suggestion that it's a moral imperative to rig core communications infrastructure against them.
Facebook hasn't "fundamentally changed" anything. Right wingers just happened to be savvier at manipulating the tools it provides, this time around. If anything, that's a surprise. One would expect social media to be good at rallying younger, tech-savvier, more left-leaning voters. One would expect the DNC to run the savviest social-media campaign, based on its track record in '08 and '12.
Real integrity comes from ethical production. Place your time and skills towards companies and projects that at the very least do not make the world worse. And if you can't do that, I do not think you get to get on a soap box and talk about how great you are because you shop for local produce, eat vegan, and drive an electric car.
Everyone is as evil as their most sociopathic executive; both contribute to the same bottom line, but the employee sells out for less.
This sounds much like someone preferring to shut down such debate.
By contrast, prolonging and deliberately promoting uncertainty, as in the cases of leaded paint and petrol, asbestos, tobacco, CFCs, and now CO2 emissions, shows the other face.
Determining good faith seems to be the key.
So extending your example: by doing work that generates "business value" in a corporation as an at-will employee, you're implicitly supporting the political structure of hierarchical capital ownership of productive labor. This would be in opposition to, say, working only in a labor-owned cooperative. (Note that this is not to favor one or the other, simply to make the point that there is a choice.)
People generally do not make such a distinction because it doesn't seem like there are options to choose from, or, more likely, because they are simply ignorant of other ways of cooperation and organization.
If one has an agreement (formal or tacit) to not talk about politics with your colleagues while the team manufactures Nazi flags... that may make for a pleasant and productive workplace, but broader society is still going to have a political reaction to that work.
The key is to understand the difference between contexts. Defining the context of your workplace is fine, but it's a mistake to assume that all contexts should treat you and your work the same.
This mistake is where you get comments like "but Facebook is such an inclusive company and everyone who works there are good people." That can be 100% true, but still not invalidate the criticisms of how Facebook's products are affecting society as a whole. And if someone at Facebook is going to listen to and consider that criticism, that will by necessity introduce the external context into the internal context.
Nobody designs an infrastructure to change your desire such that you desire a red Popsicle and not a green Popsicle. They design a generic infrastructure that can influence your desires and the specific desires are fungible parameters. It's equally capable of making you want red over green or green over red, or want to conserve water, or stay home from the polls on election day, or drink more Ovaltine. The engineering portion is the same in each case. It's a general-purpose tool.
The politics is in what you use it for. But that's the same as anything. A wall or fence with the same engineering specs has very different consequences when it's used to corral livestock than when it's used to imprison people.
The issue is that some companies (e.g. Facebook) are not just doing the engineering, they're also making the political decisions. They're not just creating a general-purpose technology, they're also deploying it in a specific way. And making the decision for everyone because they don't have enough competition.
That's exactly the view they are arguing against. There is no hard border between a tool and the decision to use it; the infrastructure will always shift the context of what's possible and alter society.
Once nuclear weapons become possible, a cold war and arms race become inevitable. Once a massive, uncensorable communication network becomes pervasive, the old social institutions that guide public communication become irrelevant, and we either build new ones or face massive misinformation and propaganda campaigns. It's not that we choose to use social media to propagate fake news; it's the inevitable political consequence of their technical structure.
This is not an argument against building things, but an argument for taking responsibility for what you build and acknowledging that building technology is a political move: it strongly affects the inner life of the polis.
Has the smith made a political move by making the hammer available to everyone? After all, all the smith wanted to do was share his knowledge with everyone, so everyone could forge their own hammers.
Ultimately the smith is not morally responsible for what people do with the hammer, IMO. The entire intent behind all of the smith's actions was to do good; can we call him evil for that?
Similarly, I don't think simply building tools is a political or moral move. Rather, it's what you encourage people to do with a tool, and what people actually do with it, that is political and moral. If I encourage my tool to be used for the betterment of society, am I evil if a small number of people abuse it?
Don't get me wrong, I still think SV is a den of abuse, but that is because social networks like Twitter and Facebook aren't the hammer. They're not neutral tools. Both profit from moral abuse of the platform; such abuse is enabled and encouraged. Google as a search engine makes its tool with the sole intent of manipulating and abusing users for ads.
There are definitely tools that exist in a moral vacuum (just think of self-hosted radio-show software: it can host a show about pro-LGBT causes and one about 9/11 conspiracies; should the author be responsible for either?). But social networks and the tools you mention are largely not among them. Those tools have been made with the purpose of doing evil.
It depends. Is the "hammer" a set of instructions to build an airborne pathogen starting from the HIV genome?
Technology is power and power is always political.
Technology is not power. It enables power. If you engineer the technology to enable only a specific kind of power, then you the engineer and the technology you made can be called evil.
Otherwise I would like you to point me to the particle of malice in the hammer and pathogen. The atom of injustice and evil that it's made of.
That's a clear contradiction. On one hand you acknowledge the choice of the technologist to be a power for evil; on the other you surmise that, as long as they are willfully ignorant of how the tech is used, they are merely a conduit of political power. It sounds like denial of an uncomfortable truth.
But a technologist whose goal is to do good, or to stay neutral, will be able to use the same technology to achieve that goal.
Technology itself is ignorant of how it is used; it's not a human. The tools, the technology, remain ignorant of how they're used even when used for evil, because ultimately it's the human wielding the tool that does evil.
You may call technology only made for evil purposes evil if you want (I did say "can be called evil", not "must be called evil").
Or otherwise, did the technology choose to be evil? Was it asked, and did it consent to being evil? Where is its atom of injustice in the tool? Or for software, where is the bit that is evil?
If I make a thousand hammers and use one to murder, are all hammers evil?
I find it dangerous to use chains of causation to bind evil: consider a doctor who saves a life, and then the patient murders someone. Is the doctor evil? Are the tools of the doctor evil? Should doctors consider whether a patient might do evil?
The same process should be applied to tools as well. If a tool is used to save a life and then a life is taken, is the tool evil? What if it doesn't save a life beforehand: is it now evil? How many lives must it save before doing evil to be neutral?
That is why I consider technology neutral (even if you may call it evil, it remains neutral in nature, if not in use). Once you allow technology to become moral and political, you open the can of worms of deciding whether whatever enables evil, in any possible form or shape, is itself evil.
Technology always changes the landscape, but that isn't the thing anybody ever complains about. Nuclear weapons are only terrible when they're actually used against a city, not when they cause a cold war instead of a kinetic one. Which is the political decision.
When you build a system to affect mass desires and use it to get people to conserve water during a drought, nobody calls you a monster who is destroying society. You'll find serious people arguing that it's irresponsible not to do that.
People don't object to the tools, they object to the uses. Especially when somebody else uses them in a way that disadvantages the allegedly aggrieved party's political goals.
> Once a massive, uncensorable communication network becomes pervasive, the old social institutions that guide public communication become irrelevant and we build new ones or face massive misinformation and propaganda campaigns. It's not that we choose to use social media to propagate fake news, it's the inevitable political consequence of their technical structure.
Fake news isn't a controversy because it's a new thing. The "old social institutions that guide public communication" are the things that brought us three centuries of racist propaganda, gave birth to "yellow journalism", celebrated every desecration of the justice system in the Drug War, swallowed the party line on the War in Iraq and flamed outrage for the sake of ratings so hard that the current President of the United States is Donald Trump.
The reason fake news is now a big controversy is that now there are a hundred cell phone videos of the event in question, ordinary people have easier access to primary sources and the barrier is much lower for someone who objects to the prevailing narrative to find an audience. So when the latest line of bull makes the rounds, it's more often followed by an angry mob decrying its falsity to anyone who will listen and backing their objections with evidence. And then it seems like there are a lot more lies, not because there are actually more lies but because there are all the same lies and many more people pointing them out.
The people to watch out for aren't the people who create this kind of technology. It's the people who think only they should be in control of how people can use it. Which includes both Facebook and the people who think Facebook should be regulated instead of smashed into a million tiny pieces.
> Once a massive, uncensorable communication network becomes pervasive
> the old social institutions that guide public communication become irrelevant and we build new ones or face massive misinformation and propaganda campaigns
You're basically saying state-run propaganda is the one you want... which is "fine", but it's still propaganda and misinformation, just of a different nature (apparently that's the kind you agree with).
By claiming that a technical innovation led to the political problem, you're ignoring the fact that it's a political problem, caused by a political problem (that everyone is biased by politics).
I think it's naive to believe technical innovations cause the political issues. Even in the Cold War example, it's politics that caused it; the technology just raised the stakes. Politics created the issue, not the engineering.
When you disrupt the economic basis of a free press, you create a fundamental vulnerability, at least in the short run until those institutions adapt, like an organism exposed to a new mutation of a pathogen. And it might well take blood again to recalibrate. As a technologist, you don't get to ignore the political effects of what you create. You can't say "well, people have been killing each other on this planet for millions of years; I've merely created a device to automate killing a whole country, a neutral technology." It's not neutral. It has immediate political effects: it gives power to those who have it, to the detriment of the rest.
There is no debate that technology can change the world for the better; you just can't presume that any technology, introduced at any moment in time, to any audience, will be by definition positive. And once you start to nuance that position of absolute amorality for technology, you are engaging in politics. Technology is always power, and power is always political.
The engineer in that example finds it worthwhile (or at least acceptable) to design a system that can manipulate people's desires.
I'd argue, that alone is already political.
I think it's also incorrect in a few ways. One, the beginnings of the Internet were in government and academia. The former is definitionally political, and any academic will tell you how political the latter is. The notion that technology is somehow beyond politics is a very political idea, as is the centering of individual freedom as primary. Your refusal to acknowledge those politics is also a political position.
I personally agree with a lot of the politics embedded in the design of the Internet. But we shouldn't pretend like the politics doesn't exist.
It's self-evident and you aren't providing evidence, you're arguing definitions. The internet is politically neutral. It is designed to carry data, without respect for its contents, point of origin, or destination. All participants in the network are treated equally, all forms of communication are treated equally.
Now, you can say "neutrality is a political statement". And if you want to define it that way, then sure, it's political. But I think that's a fairly silly, meaningless definition.
The internet is not politically neutral. Carrying data without respect for contents, origin, or destination is a very political ideal. It's also an ideal that the internet has never met. Origin and destination have to have money to pay to connect, for example. And ISPs from the earliest days had acceptable use policies, so it was never "anything goes"; determining what was acceptable was very much a political process.
I note that the Bitcoin community, who are even stronger on the "without respect for contents, origin, or destination" philosophy, are even more obviously political. In pretty much any Bitcoin discussion here you can find people who espouse extreme anarcho-capitalist values. Many are quite clear that any state interference in their business is anathema. Again, highly political.
What you're espousing is the sort of faux neutrality that the article is about.
What you are arguing is that neutrality is a political choice, and what I am arguing is that that is a bad definition of politics.
There are many possible notions of what "neutral" means. That you pick a specific, socially determined definition and treat it as the only possible one is a political action.
My point is that your definition is bad, because it is a non-definition. If neutrality is political, then 'political' is a word without a meaning, because it applies to any and all possible things.
1. The internet is not completely decentralized due to the politics of its engineers.
2. The internet being designed to be open and free was, as much as any other architecture, reflective of the politics of its framers.
I suppose they are essentially the same question, though. You're right, the architecture of the internet does reflect the politics of its engineers, to a degree. But I think it mostly represents an attempt at the most flexible, simple architecture to provide infrastructure for other applications without being opinionated about those applications. Which, on the one hand, is sort of a political choice, though I feel it's only political in the sense that it avoids any and all political considerations. Its politics are neutrality and agnosticism.
But this is different than the existing politics of taxes, gun rights, abortion rights, property rights, race relations, immigration, transportation, homosexuality, tariffs, or federalization.
You might claim it reflected free speech, i.e. the inability of government to restrict the content of written or spoken messages. But I think that would be a weak argument at best, especially for the pre-https web.
What if Amazon politicizes in the direction of libertarianism, and decides its only social responsibility is to increase the world's GDP?
What if Facebook politicizes in the direction of social justice, and determines that segregating people into identity-based safe spaces is the way to go?
What if Google politicizes in the direction of some political party, and decides it's duty-bound to tweak its search algorithm to hurt opposing candidates?
Any time you're building technology with (or with the intention of) mass reach, "if the person I consider to be the most evil in the world were to commandeer this, what's the absolute worst they could manage to do with it" is a useful question to ask. If some companies had asked this question far earlier in their lifecycles, we might see a very different world around us today.
This thread on technological determinism vs social construction of technology is also very relevant.
Anyway this is what Turner is talking about when he's talking about algorithmic bias and actor-network theory. It has nothing to do with random articles about Trump's immigration policies or affirmative action or whatever, which is most of what gets flagged on HN.
This isn't something HN is likely to entertain frequently, but I can think of a few topics for which I'd like to see the attempt made, perhaps every few months, on a scheduled basis.
Maybe if political/social science is included in engineering schools as Fred Turner suggests, the quality of discussion and outcomes produced could be more constructive over the long term.
But with the current generation or two or three, forget about it.
You and everyone else who believes talking is a route to solutions are delusional, especially in the current environment of self-reinforcing echo chambers and us-vs-them narratives. Not a single major issue has been solved in the last twenty years of social media and 24/7 news "educating and informing" the public.
Will technocrats who are political have any success? That experiment is currently running in China, and the default American narrative is that it will fail. So either way you look at it, an SV entry into politics will fail.
What they can do is change the environment: remove the perverse mechanisms that increase "engagement" at the cost of dividing people. This is much more in the realm of their domain expertise than thinking they need to spend their time talking, or propping up more talkers.
The ability to search for highly-voted, significantly discussed, and yet still flagged posts would be handy.
And no, the problem is not specific to HN. But it exists, despite both public and private denials by the mods, whom I generally respect.
There are a few other examples I've submitted to the mods via email.