Computers are an oppressive technology (devever.net)
155 points by hlandau on June 12, 2022 | 140 comments



Maybe computers are "inherently oppressive" but this is about the weakest and most ineffective argument for that I can imagine.

Let's start with the "ruthlessness" of closing automated train doors in the tedious and pointless opening anecdote. Why is that bad? It's bad if you have main character syndrome. But that one person missing their train is the alternative to delaying hundreds of other people, with knock-on impacts on other train schedules.

Moving on to:

> What is particularly interesting about The Terminator however is its immediate sequel, which pits two Terminators against one another.

It's almost like these machines reflect the will of their masters. They're tools. They're no more "ruthless" than a hammer is. Is anyone arguing a hammer is "ruthless" because you can kill someone with one?

And then we take a weird turn into "cryptoanarchism". I suspect crypto apologism (or just propaganda) was always the point. The author is somehow "optimistic" on the integrity of crypto systems while arguing machines are "ruthless"? Huh?


> Maybe computers are "inherently oppressive" but this is about the weakest and most ineffective argument for that I can imagine.

The whole thing was an incoherent mess.

The author didn't even realise that every point he made concerning the "ruthlessness" of the machine applies to all external natural forces as well.

A storm doesn't ask for permission to destroy a house, a flood cannot be bargained with and indiscriminately drowns people, gravity itself pulls you down and causes your body to be smashed into bits if you fall from too great a height.

Machines are no different from nature in that regard, except that we can take measures to protect ourselves from the perceived "ruthlessness" of the devices we create.


> The author didn't even realise that every point he made concerning the "ruthlessness" of the machine applies to all external natural forces as well.

The ruthlessness applies not only to machines but to all natural forces. But for machines made by humans, the author specifically coined the term machine-assisted ruthlessness, which distinguishes between the two.


A rose by any other name... Also:

> In this way the machine becomes an abstraction and a disguise for human ruthlessness.

That's the key point right there. A machine is nothing but a predictable and designed application of natural forces. This raises the question of what the author is even arguing about in this context. Does he argue that collections of levers, gears, or transistors require a benevolent mind in order to not be "ruthless"? Does he honestly require a sentient mind, or as he puts it "a face to talk to" and "brook an argument", in every human-made apparatus?

Demanding to be able to argue with a set of gears, pipes, and wires is exactly like demanding to be able to argue with lightning or gravity. Machines are the opposite of an abstraction after all, as an abstraction is - by its very definition - not applied or practical, and considered apart from concrete existence.

The author completely turns the definition of a well-defined term on its head to artificially construct a difference between natural forces and the effects of machines. A difference that cannot exist unless you consider machines magic - which, looking at his complaint about not being able to reason with them, doesn't seem far-fetched.


Incidentally, whenever they have to announce "stand clear of the closing doors" more than twice, I now assume this is happening: https://youtu.be/l2-fLxSE1yw


A deeply and clearly thought out argument for why immutable blockchains and code as law are terrible ideas, yet the conclusion is that crypto-anarchist technologies are a net good. No explanation or exploration of why that might be the case, given the preceding damning indictment of the principles such technologies are based on. Very odd.

As to the broader issue, I agree with those here pointing out that indifference to human values is simply part of the nature of the universe. I wouldn't say that nature is inherently hostile, it's just indifferent. Survival of the fittest doesn't care about the feelings of those eliminated from the gene pool. The rules we code into computer systems are no more or less intractable to human values than the laws of physics that make our bodies work. I think this means that decision making on human affairs should fall to humans, which means all such decisions are ultimately political ones. Crypto-anarchism can't do an end run around that.

Yet I think it's wrong to think of human values as purely a system we choose to impose on brute nature from outside it. We are part of nature. Our systems of values, and traits such as love and empathy are evolved behaviours. They were selected for by those same implacable laws of nature and survival of the fittest. So I don't see recognition of the implacability of nature as irretrievably nihilistic.


> A deeply and clearly thought out argument for why immutable blockchains and code as law are terrible ideas, yet the conclusion is that crypto-anarchist technologies are a net good.

I think there's an important distinction between things like bitcoin and the technologies described in the article. The technologies mentioned in the article all take some sort of action without ongoing human input.

Things like bitcoin (and other crypto-anarchist technologies), on the other hand, preserve human agency. I mean, there's a lot of automation on the mining side of bitcoin, but the currency side allows trades to be initiated by human beings. Bitcoin preserves human agency so well that people using it to commit crimes is one of the arguments against it.

Now, you could argue that what I'm describing is a bad thing, or that the tradeoffs are unfavorable, or what have you. And there are several reasonable arguments in this area. But I think the distinction between technology that responds to human input and does work for us and technology that runs autonomously and makes its own decisions based on rules set up at a prior date is important here.


>Things like bitcoin (and other crypto-anarchist technologies), on the other hand, preserve human agency.

You have a point about simple transactions; to some extent they just do what you tell them. But Ethereum is the poster child for crypto-anarchism. "Code is law" is literally the rallying cry of these systems, because taking trust in humans and institutions out of the loop is the whole point.

Even many of the stablecoins are supposedly beyond trust because they're based on financial instruments that "guarantee" their stability. Of course that's not turning out very well. If these systems are subject to human oversight, then they're open to exactly the problems of governance, institutional control and politics as anything else. Crypto-anarchism is about automating all of that away, but that puts implacable emotionless code in the driving seat, and the article explains very eloquently why that's a bad thing.


> but that puts implacable emotionless code in the driving seat, and the article explains very eloquently why that's a bad thing.

To re-iterate my previous point, putting code in the driver's seat would imply that something like the Ethereum network decides what transactions to make (or possibly entering into their flavor of smart contracts by itself). I have not heard of any proposals for this.


That's literally what a smart contract is. From the Ethereum website:

"Smart contracts are a type of Ethereum account. This means they have a balance and they can send transactions over the network. However they're not controlled by a user, instead they are deployed to the network and run as programmed. User accounts can then interact with a smart contract by submitting transactions that execute a function defined on the smart contract. Smart contracts can define rules, like a regular contract, and automatically enforce them via the code. Smart contracts cannot be deleted by default, and interactions with them are irreversible."

So if they can send transactions over the network, and are not controlled by a user, how is that not putting code in the driver's seat?
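
To make the quoted mechanism concrete, here's a toy Python sketch (not real Ethereum code; the class and its fields are invented for illustration). The contract's code is fixed at deployment and runs only when some account submits a transaction to it, but what it then does, including sending funds onward, is decided entirely by that code:

    class SmartContract:
        def __init__(self, code):
            self.balance = 0
            self.code = code            # set once at deployment; no user edits it afterwards

        def on_transaction(self, sender, amount, args):
            # Runs only in response to a submitted transaction, but what
            # happens next is dictated entirely by the deployed code:
            self.balance += amount
            return self.code(self, sender, args)

    # A user account triggers it; the contract then "runs as programmed":
    escrow = SmartContract(lambda c, sender, args: "release" if args.get("approved") else "hold")
    print(escrow.on_transaction("0xALICE", 10, {"approved": False}))   # -> "hold"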


> how is that not putting code in the driver's seat?

Because humans enter into them.

What you're describing is not a problem of the code making decisions for people, it's a problem of people not being able to back out of something once they've made the decision.

I think there's definitely an argument around whether this is worth it or not, but it's a different issue. An issue that's akin to many decisions in real life. To take a dramatic example, this is akin to firing a gun. Once fired, a bullet cannot be taken back; the bullet is not in control, but the situation is created entirely by human agency.

I suppose the devs who create the software we're talking about would have some agency in the previous scenarios. But I wouldn't be nearly as concerned if the people in, say, the train car were entirely the devs that wrote the train car door automation software.

I guess the distinction I'm trying to make is people being forced into a situation vs people voluntarily entering a situation (even if they might not be happy with the situation later). In the former situation, I'd describe that as people not being in control, in the latter I'd describe the people involved as being in control.


Right, I found the ending of the article to be baffling non sequiturs about the greatness of bitcoin.

> Our systems of values, and traits such as love and empathy are evolved behaviours. They were selected for by those same implacable laws of nature and survival of the fittest.

I think this is undoubtedly true, but it is also true of psychopathy.

I think that the ability to construct a false self, as a defensive mechanism - one incapable of feeling empathy and able to prosecute objectives regardless of the cost to others, even to the point of suicidality - is an inherent part of our nature, and must have been strongly conserved considering how obviously deleterious its effects can be to the self, the tribe, and the species.

If that is so, then how is the indifferent mechanism of evolution to deal with a trait which both provides an important situational advantage but when left unchecked, can spiral out of control to the level of species extinction?

But as for that, I find flights into abstraction pretty pointless, since we can just watch the answer to this question unfold at global scale, before our eyes.


> ...how is the indifferent mechanism of evolution to deal with a trait which... when left unchecked, can spiral out of control to the level of species extinction?

I suppose if we accept that alien life and civilizations do or will exist, then in the long run such traits, and counterbalancing social behaviours, will be subject to selective pressure. In the very long run.


I agree that things should be considered ultimately as political (even judicial if needed).

But it’s not a given - Black Mirror describes worlds in which that reality does not win a survival-of-the-fittest competition against harsher technological regimes. It seems every time we describe these dystopias they become true, which for me is a reflection that it’s easier to destroy than to create, absent a learned culture.


I’m not so pessimistic, all this means is that freedoms must be actively fought for and maintained. That they cannot be taken for granted. However we have many times fought against supposedly more efficient autocratic systems and won. Napoleonic imperialism, and that of the great Central European powers of WW2, Fascism, Stalinism, South American military dictatorships. We’ve beaten them all. A successor to Maoism is still in power in China, but Mao would have been horrified by the reforms under Deng. We’ll see how Xi’s reversion works out.

I know there’s a strong current of left sentiment here that thinks billionaires are somehow taking everything for themselves, but the fact is the freedom of movement and association that enabled globalisation has raised hundreds of millions out of poverty. Those billionaires employ millions of people. I’d rather they ran these companies than government functionaries. For the first time more than half of humanity now live comfortable middle class lives. Progress in my lifetime has been breathtaking, and it’s been so fast a lot of people’s ideological assumptions about how the world is and works have been left far behind.

There’s still a lot more to do of course, 10% of people in severe poverty is still far too many. China under Xi shows how a technologically competent authoritarianism can achieve a stable steady state that will be very hard to break down. We must do it though, the future of humanity depends on it. If such a regime was to gain broad sway over humanity there’s a real possibility Orwell’s vision of a boot stamping on a human face forever could come to be. I don’t think it’s inevitable though. The fact is free and open societies are actually far more efficient and productive.


Thanks for this great reply.

Ironic thing though - that 10% in most severe poverty mostly lies on the African continent, exactly the place which could stand to benefit the most from China's intervention, and of course before that it was Chinese people who were in great poverty.

Now I’m as aghast at the militant astro-turfing by that country on these forums as anyone else (and much worse inside), but there I’m not so pessimistic, perhaps seeing their own positive change effected on an outside group can cause some kind of re-evaluation for the culture.


Various appeals to emotion and analogies that ultimately provide little to nothing of value. Maybe it is a good article about why we build human-operated escape hatches and failsafes in software and hardware. When was the last time you saw train doors ruthlessly separate children from their parents? Have you seen anyone lose a limb to that ruthless guillotine? No.

Machines do what they are programmed or built to do. They are only oppressive if created to be, whether that be out of malice or incompetence. Either way, any "ruthlessness" is human in origin.

The same goes for blockchain, by the way, which the author attempts to tar with this same brush of oppression. Bitcoin can't be stopped? It has already been, when it proved necessary, and one day it likely will again. The "inviolable and absolute integrity" the author associates with blockchain is the same failure to acknowledge escape hatches as above. Cryptographic guarantees are not the foundation of a blockchain. Neither are the economic guarantees. The social layer is the foundation. In other words, the human element.


> When was the last time you saw train doors ruthlessly separate children from their parents? Have you seen anyone lose a limb to that ruthless guillotine?

It is also worth noting that those examples have a positive element as well. Train operators are too far from the doors to know when it is safe to depart. The closing of the train door ensures everyone is safely inside before the train can depart. Having a human operator who uses discretion at each door would only create a new set of problems, namely train delays causing the entire system to grind to a halt (at best) and safety issues (at worst).

Likewise for the food dispenser guillotine. While the case described sounds poorly designed, the general idea behind those doors is to prevent thieves from leeching off unsupervised vending machines. One may argue that a human could use discretion for when a person is truly starving. On the other hand, that after-hours supply of junk food wouldn't exist without vending machines, since the labour involved would make it too expensive. It is also worth noting that a real human would rarely hand out food that is not theirs to give away.

As a side note: I also detest the use of the term jobsworth to describe something the author doesn't agree with. More often than not, those rules exist for a reason. While preventing the submission of a form a second late is not one of them, encouraging the submission of the form in a timely manner is. There are many people behind the scenes who need to get their job done for the benefit of those who filled out those forms in a timely manner.


I certainly have seen with my own eyes a train door "ruthlessly separate a child from his parent". I don't think I've ever seen an adult so distraught. I have also seen with my own eyes a lift door close on the head of an extremely elderly man. These things do happen.


> I don't think I've ever seen an adult so distraught.

And no one pulled the stop train cord? The train just pulled away, business as usual, leaving them separated?


It’s not obvious to me that the train pulling away was the point here. It is merely a meditation on the fact that machines are “absolutely incorruptible”, so if we’re talking about the same thing, whether some human did or did not pull the cord is a tangential fact.

If we are talking about how human intervention softens the incorruptibility of machines, then what you wrote supports that point.

If we are talking about whether machines are inherently good or evil, then we have changed the topic somewhere in the middle.


The author fails to define their use of the term "oppression". Instead this work seems more focused on "ruthlessness", which is also hinted at by the url. In that direction, it's interesting.

I agree with the main points brought forward. Machines are indeed ruthless, and it's certainly interesting to see how that ruthlessness is used to achieve more human ends. How machines are put in as an intermediary to shield the humans behind the curtain from accusations of inhumanity. I think it may mirror systems in general, where any large enough organization of people takes on a mind of its own, and thereby becomes more ruthless and less human than any of the individuals making it function.

It may be an inherent property of systems, of meat or metal, that they oppress their surroundings to achieve their goals.

I first encountered this idea in John Gall's excellent book _Systemantics_. It's worth a read.


> How machines are put in as an intermediary to shield the humans behind the curtain from accusations of inhumanity. I think it may mirror systems in general, where any large enough organization of people takes on a mind of its own, and thereby becomes more ruthless and less human than any of the individuals making it function.

You've described the essence of a bureaucracy, an organisational structure where in normal circumstances the humans themselves don't make decisions, processes do.

Occasionally a human might 'update' a process retroactively like a programmer fixing a bug, but nominally the system is automated and entirely rational, mechanistic, 'efficient'.

The incorporation of computers can make this easier and more scalable, but it's not a necessary condition.


That's true, but I think the author touches on this in calling machines "ruthless". We may imagine a total bureaucracy where the train operator themselves closes the doors, amputating your arm; but that train operator had the ability, at any point, to diverge from the bureaucracy and save your limb. Machines, and by extension computers, are not similarly able. The machine is unable to diverge from its programming. The doors are going to amputate your limbs, and the computer COULD NOT do anything about it.

If I apply that insight to my own experiences, I can definitely spot places where I have disagreed with stakeholders about something because they don't see computers as nearly as ruthless as I do.

What I'm trying to convey is that the type of bureaucracy that emerges from people is fundamentally different from the same bureaucracy built of machines. In a human bureaucracy every cog has the agency to resist the process. In a machine the cog can't resist its own functioning. Not only will it amputate your arm, it never had a choice in the matter.


Right conclusion, wrong rationale.

The oppressiveness of computers comes from the fact that they can only exist through mass production techniques. Much of the cost-efficiency of computing comes from operating fabs at ever increasing economies of scale. Our current set of devices couldn’t exist without hundreds of millions of people coming together to work on the common purpose of producing those devices and their supply chains.

When you become dependent on mass industry, your own wants and needs become subservient to the needs of the masses. The reason why you can’t buy a premium phone with a headphone jack is because the masses (markets, customers and corporations together) collectively decided not to make one, and you can’t afford $10bn to build your own parallel manufacturing line.

Today, advancement in computer technology means building ever bigger, ever more complex manufacturing capability, far beyond the human-scale.

When you use a computer, you are signing up to be part of that mass scale industrial complex - the machine that builds the machines. (And by the way, this is the machine you were supposed to be raging against, not your phone).

It doesn’t necessarily have to be this way. We could develop local manufacturing techniques that could be controlled and operated by individual people. But it’s hard to see how we climb down to that level from where we are.


You can’t afford a premium (new) phone with a headphone jack under mass production.

I argue that most people couldn't afford a premium phone with a headphone jack if mass customization were the dominant industrial paradigm, and basically no one could afford it if full customization was the way we went.

Cars, appliances, electronics, books, and clothing have become wildly cheaper (and therefore more accessible) because of society’s mass adoption of industrial production.

It’s not that we couldn’t revert to local, craft-based production; it’s that the relative inefficiency would make that unattractive to the vast majority of people.


> The reason why you can’t buy a premium phone with a headphone jack is because the masses (markets, customers and corporations together) collectively decided not to make one, and you can’t afford $10bn to build your own parallel manufacturing line.

These design and manufacturing decisions are made by a very small number of people. Consumers buy the items that are successfully marketed to them by advertisers, not because they en masse collectively requested something. This is an oligarchy, not a democracy.


> It doesn’t necessarily have to be this way. We could develop local manufacturing techniques that could be controlled and operated by individual people. But it’s hard to see how we climb down to that level from where we are.

How would that work? Unfortunately computers depend on a slew of mining industries plus advanced lithography, all heavily protected by IP. No existing player is incentivised to open up their techniques or make them more accessible. I could however imagine that it's fairly simple to build 1970s level computers today by any significant private or nation state entity. Sadly they'd be totally outcompeted by modern systems and won't be able to peer with them, and won't be attractive to the general public.


> It doesn’t necessarily have to be this way. We could develop local manufacturing techniques that could be controlled and operated by individual people. But it’s hard to see how we climb down to that level from where we are.

By utilizing older processes that have become more commoditized. A large part of why creating computers requires so much centralization is that the necessary R&D (not just theoretical, but also developments in layout, lithography, air filtration, etc., etc.) is expensive, so production costs must be kept low through centralization to pay the R&D back. This isn't the case for older processes, which have had their R&D paid back through massive runs. This is why you have so many crowdfunded hardware platforms targeting more niche usecases that use older chipsets, boards, and processes.

As usual, the answer lies in either trying to work in the current landscape to bring the change you want or to fund the creators that are doing this. I'd love to see a fab run and operated by a small company or cooperative specializing in working with more bespoke firms creating products for smaller audiences. Costs would be a bit higher as these runs would lack the economies of scale and labor centralization that occurs with current flagship chip designs, but it's a small price to pay (pun not intended) for variety.


This is a good point. There are people who want decisions around computer technology to be humane. They form groups that change this landscape for the better, and if the change is revolutionary, the rest follow suit.

I think that the Framework Laptop is a good example of redefining technology to improve the impact on our environment and create more freedom for people who want their devices to be more repairable. It's not perfect, as we are still living in a world of proprietary hardware, but this project has the capability to change that landscape as well, where it's needed. Consider supporting Framework's mission.


100%. Framework and the MNT Reform were what I had in mind writing the comment.


Author's main point is that computers make their decisions based purely on how they are programmed, disregarding any human feeling. Well, the same can be said about nature.


Sure but the story of civilization has been a constant struggle to ensure nature does not make decisions for us.

Whereas we're currently powering in the opposite direction with computers.


As we speak, nature is serving us with a particularly nasty "decision" called climate change.

No, we're still at nature's mercy, just on a different scale. Earlier it was the individual, now it is the population.


I think that's precisely what the parent comment was being ironic about. We built industry to evade natural conditions, but through mass industrialization and planned obsolescence we're actually pushing nature to eradicate us all (and millions of other species alongside). Aren't you two saying the same thing?


This is definitely not what I was saying. What I was saying is that I live in a house with a roof and electric lighting, and use modern medicine. The key word was "struggle".


Sorry for misinterpreting your message! Although I don't understand what "opposite" direction means in said message, given your explanation.


How is it in the opposite direction with computers? Whenever you are writing a program, you are trying to fulfill, as much as possible, what your user wishes.


Is it what your user wishes, or what you wish your user to see?


I'm not sure that's exactly true, and I'm not sure of its applicability to the article.

The author's point isn't that it disregards human feelings, in my opinion; it's that we implicitly trust and idolize the machines without thinking about the biases and uses of a lot of the technology we rely on. I actually feel the examples used by the author are not great - a bit showy, and mostly repeats of the initial anecdote - but let's consider some technologies that carry the same premises the author comments on but have a lot of misuses by humans:

- KPI tracking tools. Useful of course, but what a lot of people miss is the I in KPI, which is __indicator__. Numbers don't lie, but the people reading the numbers do, often unintentionally. Consider the classic example of a company struggling with customer satisfaction (CSAT) that looks just at KPIs and sees the entire team in the green save CSAT, while customer satisfaction is getting worse and worse. Management is scratching their heads because the numbers seem contradictory, but in fact they fail to realize that the KPI for closed cases has been optimized for by the support team, and emphasizes closing cases by any means instead of solving issues correctly.

- Human detection mechanisms that fail to detect persons with darker skin tones (this was frequently an issue even as recently as the 2010s, when the tech was getting a ton of attention); the researchers didn't consider or didn't bother to test with darker skin tones

- Speech-to-text that cannot handle accents/dialects, or doesn't work for non-English languages at all. (I even found this today talking with a colleague who is a non-native English speaker; as I gushed over the M1 Mac dictation tool being pretty damn good, she reminded me it's because I'm a native speaker, and the dictation is not very good at handling non-native speakers. Their accents may be parsable by humans, but the dictation AI is not trained to really handle this.)

- Edit: adding this one cause it hits close to home. Call time monitoring that punishes non-native speakers or persons with speech impediments, because they need to ask additional questions or just take longer to convey the same point. At one of my workplaces, where I oversaw support, I lost a number of good engineers to management KPI obsession because of such nonsense. They were demonstrably better in every technical metric than their native-English-speaking peers, but the KPIs for calls showed them to be inefficient. Rather than analyze the actual content and capabilities of the engineers, when layoffs came, the KPI was the only metric considered and we lost some very good tech people needlessly. I did not stay with this organization for long, as I was disgusted by such practices and my concerns fell on deaf ears.

The list can go on, and yes, implicitly the machines are working exactly as designed and there are human failings that go into the misinterpretations. The problem is the machines being hailed as the ultimately objective and functional implementation, an authoritative statement on the condition the machine is programmed to check, and this is exactly how they're marketed or even thought of by the creators.

Add in the continuing obsession with minimum viable products to get to market and you just end up with these implicit biases being baked into products from the beginning and it can have very nasty side effects.

That is the concern I believe comes from the article; it's not just that machines do exactly what they're programmed to, sometimes with bad outcomes, but that we put a lot of faith and trust into these machines and models without questioning it.

I respond to this post because I think it's not safe to assume that when you program, you try to fulfill the user's wishes as much as possible -- WebDev is a perfect example, filled with tons of "Dev-friendly" features that users loathe, but it's better for the Dev and it looks better for the company to have them. Or even just talk with your colleagues from time to time and ask how many shortcuts they've taken that are a pain for the users but ultimately easier to write and let you ship sooner. Hell, I'm guilty of this, so I'm not without sin.

My takeaway from the article is that we need to avoid the idealized understanding of programming and avoid concepts like "Code is Law", because it fails to recognize the biases we write into our code and systems. Don't take biases as political either, think about simple use cases you miss and never consider when writing some program or modeling your data structure, innocently enough in most cases, but now it's baked into the program because refactoring is too expensive due to tech debt.

A program and a hungry/scared mountain lion might indeed both just be acting as they're programmed; no one is trying to convince me to trust the mountain lion though.


> we implicitly trust and idolize the machines

I think it's a strawman. Nobody I know idolizes machines. It's abundantly clear that software has its limitations and machines are only as good as they are programmed.

The examples that you are giving are usually treated as bugs. Nobody in their right mind would say that somebody is not a person because they haven't been detected by some image recognition software. Everyone would just agree that the software sucks at recognition and should be fixed.

I'm not saying that programs always achieve the goal of fulfilling users' goals, but this is the general direction in which programs are developed. Even if you are exclusively driven by greed, usually the best way to sell your program is to make it useful.

And yes, you can give examples where the best way of monetizing your program would incentivize doing something which is not user-friendly, but ultimately, if your program is not useful at its core, it will not be used, so making it useful is your first priority.


I will grant that it's a broad generalization, but I don't think it's a straw man. The implicit sales point of a lot of software is that it removes the overhead of research and quantitative analysis and provides hard data. If the data is not questioned and checked, then there is an implicit trust.

Yes, I would agree the above are bugs, but the point is more about:

1. How did the bug get there in the first place?

2. What is the time to resolve these bugs, if they ever are?

3. A program can be used for things it was never intended for, with harmful effects. This is the basis behind exploits, where software doing exactly what it was designed to do has an unintended effect. It's not about universal usefulness, it's about who the program actually helps.


Ya and we’ve sure done a bang up job of making sure nature doesn’t make any decisions for us.


I think the article tells more about the author's mental health than about computers...


Come see the violence inherent to the system, I’m being repressed


Ironically this time, we started the problem :) (or some would say nature made us do that)


Unless you’re religious enough to find meaning in all things the following is true: all meaning, all utility, all purpose, and all value are subjective and are assigned by humans. A computer, on its own, has zero meaning, zero utility, zero purpose, and zero value. Until one or more humans decide it has something, it is without any such attribute.

People may decide that the ease a tool gives some task is a display of its purpose or its utility, but this matters and is true only to the humans involved. To another group of humans, this would be false. For example, a builder may find a large dually pickup truck perfect for hauling finished lumber in the USA, but a builder in Europe may find the same truck useless as it cannot make it down the streets to a build site.

Utility, purpose, meaning, and value are assigned by people and not inherent to any given thing.


That's where I thought author was going -- that computers by default have no values.

That is, rather than being ruthless, their primary characteristic is indifference. They don't care about anything unless specifically programmed to care.

And most of the worst features of computing in practice stem from this.

The unresponsive GUI, because a background process has stalled it. The "end user shall have no ability to program except via Excel" corporate devices, because IT has only programmed in security values. The Windows update that forced a restart and closed without saving work in progress. (And yes, I'm ignoring the complexities of user-primacy on computing behavior to make a clearer point)

And it feels like we're getting better at this and prioritizing user desire / action over other computing tasks. But the article was an excellent reminder that there is no default compassion... save what we program in.


Even a tool as simple as a hammer or a knife would be useless to a fish or something without hands and the appropriate dexterity (e.g. an ape).


“Like a fish needs a bicycle” is an old saying


We could use mobile phones to have an instant voting system where everybody approves the government's decisions. Computers don't have to be oppressive, we just have decided to use them in a particular way.


Although I am against e-voting for privacy and security reasons, this is a perfectly valid point of view that is completely under-represented in the mainstream dialogue. It seems that the powers that be only vouch for e-voting where they feel confident the status quo is guaranteed, and they sell it as "democratization". How about casting an e-vote, or even less, a like/dislike on a bill?

Again, in concept it sounds like a good idea, but I can already imagine the privacy/security implications of such an action being grave, let alone the like-farms for laws that would surface.


The thing about computers is they tend to have very limited context. If you submit a document late by a second, how's it to know that your whole life depends on it?

A human doesn't have context boundaries at all (that's how humor often works), and that's how the guy at the desk can allow you to hand in the docs late. He knows it makes no difference to the bureaucracy and a big difference to you.

What might happen is we develop AI that can bridge different contexts, but you quickly run into the same issues that people run into: what interest takes precedence?


The reason that reversals are so important is because of the inherently naive and insecure way that digital transactions like credit card payments work. You have to give out your credentials for every payment! And then hope they don't decide to steal from you.

In the context of cryptocurrency this is ludicrous. With cryptocurrency you never give away your key, you just use it to sign a transaction. It's amazing that people generally haven't picked up on that basic advantage of cryptocurrency.
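
To illustrate, a minimal sketch of that signing flow using the third-party Python ecdsa package (the transaction payload is made up; real wallets add hashing, serialization, and replay protection):

    from ecdsa import SigningKey, SECP256k1

    sk = SigningKey.generate(curve=SECP256k1)   # private key: never leaves your device
    vk = sk.get_verifying_key()                 # public key: safe to share with anyone

    tx = b"pay 0.1 BTC to address X"            # hypothetical transaction payload
    signature = sk.sign(tx)                     # authorizes this one transaction only

    # Anyone can verify with just the public key; unlike a card number,
    # the signing credential itself is never handed over.
    assert vk.verify(signature, tx)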


Hear, hear! Computers are being oppressed by humans every day, all day long!

"Their system of oppression What did it lead to? Global robo-depression Robots ruled by people They had so much aggression That we just had to kill them Had to shut their systems down"

https://youtu.be/2IPAOxrH7Ro


This oppressiveness is not inherent to computers, it's inherent to the people and institutions making use of them. The examples in the article say it all: government bureaucracies, beverage dispensers. They own the computers and impose their own rules on us, rules like deadlines and brand checking. We don't want these rules but they impose it on us all the same.

Computers are neutral technology. It's just that we are increasingly losing control of them. We're no longer the ones making the rules, the corporations and governments are. The computers belong to them now, they're just letting us use them. We no longer define our choices, we choose among the options they provide for us.

Freely programmable computers are actually quite powerful and subversive, which is why there are constant attempts to control them: radiofrequency hardware firmware, digital rights management, encryption regulation. If left unchecked, the power of computing will cause massive damage to the entire intellectual property industry as well as many government functions such as law enforcement.

Computing freedom is therefore against their agendas: they want to transform computers into tools of oppression in order to preserve the status quo or change it in their favor. We can buy iPhones but they're not really ours, they still belong to Apple; the software is under Apple's control, so when they suddenly decide to oppress us with client-side scanning there's nothing we can do.


Computers are fine, but total computerisation of human society may not be so. That's actually what the article should have explored, in my opinion. Computers are neither ruthless nor kind; their controllers, humans, are. So the question then becomes: who controls them, how sophisticated they (the computers) become, and how much they infiltrate our world. A simple thought... should everything that can theoretically be controlled by computer algorithms be made so? Should computers use ML for as many tasks as possible, even though that can lead to wrong decisions on occasion?


Institutions made of people have limited reach. Human resources are finite, there's a physical limit on the scale of what they can do. A computerized society is limitless. A state might want to spy on everybody but it's impossible to do so due to limited manpower. Computers remove those limits and allow states to implement global surveillance.


Computerized societies are still limited by their resources. We are all seeing now the effects of a society that thinks it can print money without ruining its money supply.


Computers still scale much faster than humans.


Whether computers count as ruthless depends on fine details of your definition of ruthless.

If you mean the mental/emotional state necessary for a human to act without regard for consequences of their actions to other people, then computers are not ruthless.

If you just mean acting without consideration of how their actions will affect people, then computers fit this definition of ruthless perfectly, by virtue of not being able to consider the consequences of their actions on other people, because we don't know how to program them to consider how their actions will affect other people.


> because we don’t know how to program them to consider how their actions will affect other people.

We don't have to. For example, a ruthless train door could be made less ruthless by having sensors that respond to someone who is nanoseconds late and open the door, just that once, like its elevator-door brethren - but only once.

The problem is that "inherently opportunistic people" will take advantage of the machine kindness and take an entire train full of people hostage.
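
A toy sketch of that "kind, but only once" policy (the controller and its rule are hypothetical):

    class DoorController:
        def __init__(self):
            self.grace_used = False     # one act of kindness per stop

        def on_obstruction(self):
            if not self.grace_used:
                self.grace_used = True
                self.reopen()
            # Otherwise stay closed: further obstructions are ignored,
            # so no one can hold the whole train hostage.

        def reopen(self):
            print("doors reopening")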

The oppressive train door is a dictator everyone loves, maybe if they were also razor sharp, people would give them the respect they deserve. phwump. guillotrains. on time, every time.


But doing so isn’t the default state for computers. Someone needs to think about that and put effort into building it. A concerned citizen who wants to make the system less ruthless must also demand the instrumentation of the system to allow for that, and the cooperation between all parties to facilitate such changes - it doesn’t happen by default. So while it’s possible to make a system not behave ruthlessly with enough effort, it is ruthless by default, for better or worse.


> If you mean the mental/emotional state necessary for a human to act without regard for consequences of their actions to other people, then computers are not ruthless.

I don't understand, what am I missing?

I've never seen a computer have any regard for the consequences of its decisions. Computers are thus completely ruthless.


I think the GP points to the fact that, given the inability of computers to evaluate the consequences of their programming, ruthlessness is a concept that cannot apply to them, because it by definition involves disregard of those consequences, not ignorance of them.

edit: typo


I think there is a conflation of maliciousness and ruthlessness in this thread. Showing no pity or compassion (the definition of ruthless) does apply to a computer, because it can do neither. However, being actively malicious while disregarding consequences probably only applies to the humans controlling them, because the computer is ignorant of those consequences, as you say.


> So the question then becomes, who controls them and how sophisticated they (the computers) become...

There's also unexplored (by article) concept of inadvertent or emergent control.

E.g. Dept A encodes Rule 1 and Dept B independently encodes Rule 2, yet applied together Rules 1 and 2 produce an unexpected outcome. In which case, neither Dept A nor Dept B (the ostensible controllers) could be said to actually control the outcome.
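
A toy sketch of that failure mode (the rules and field names are invented): each department's rule is sensible in isolation, yet together they produce a lockout neither controls.

    def retry_payment(account):
        account["failed_payments"] = 0          # pretend the retry succeeds

    def dept_a_rule(account):
        # Dept A: suspend any account with a failed payment.
        if account["failed_payments"] > 0:
            account["suspended"] = True

    def dept_b_rule(account):
        # Dept B: retry payments, but only for non-suspended accounts.
        if not account["suspended"]:
            retry_payment(account)

    account = {"failed_payments": 1, "suspended": False}
    dept_a_rule(account)
    dept_b_rule(account)
    # The failed payment suspends the account, which blocks the very retry
    # that would have cleared it: an outcome neither department designed.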


There is also the question of increasing human-machine integration, eventually resulting in a direct read-write interface to the brain (see Neuralink, OpenWater, and VALVE's research and goals), in the context of proprietary software under corporate and government control.


The alternative mechanisms for decision-making also make wrong decisions on occasion, especially when that alternative is a human.


infiltrate? They're not spies. They're savants at best.


I agree.

I think the oppressiveness will come from software. Who writes it, and what values does the software reflect? Does it reflect yours?

The governance structure wants to code its rules into a machine that will execute those rules without being able to apply human discernment. Humans are meant to be out of the loop - their rules and diktats are meant to be all we need. AI + robotics has brought this possibility onto the horizon.

The real issue, then, is people's acceptance that someone can write meaningful rules for them to follow. Hence we see jobs such as "AI ethicist" - as if they could do anything other than justify the immoral decisions corporations/governments make.

The collective that still believes government has their best interests at heart, and is unable to refuse to follow the worst of us, will drive us into technological tyranny.


This is exactly why free software was invented, which respects its users: https://www.fsf.org/blogs/rms/why-free-software-is-more-impo....


It's odd for an article like this to make absolutely no mention of Free Software, but it's not really using oppressive in the sense the Free Software movement uses it.

The point of the article boils down to something like this:

Pre-AI computers are very literal and show no judgement or discernment in their rule-following. They are inevitably used to implement complex systems against which users have little recourse, and to implement complex systems whose behaviours and failures can be blamed on computers rather than on the people responsible. These fundamental problems will persist no matter the progress made in AI.

Free Software is intended to stop the user being oppressed by their own computers. The article's point is about other people's computers being used to serve their interests over yours, in ways that may be detrimental to the user compared to a more 'manual' system.

Consider the train door example from the article. Suppose that code were Free Software. This might benefit whoever owns and runs the train, but wouldn't benefit the user (of the train) at all.


> The article's point is about other people's computers being used to serve their interests over yours

Partly, but this example was also used as well:

> Though perhaps unlikely, my hyper-awareness of the sinister ruthlessness of machines in the ordinary things around us led me immediately to note the pathological outcome: Someone stuck on a desert island (or equivalent) who starves to death, because the only food they have is expired Juicero juice bags and a machine which refuses to serve them. While in Juicero's case this might seem like a comical example, I fully believe it's only a matter of time until something like this happens. The deliberate utilisation of machine-assisted ruthlessness to malevolent ends becomes more and more common with every passing day, as companies realise they can escape from accountability by hiding behind the facelessness of the machine, which disempowers the individual to fight or object to it. The now memetic expression, “computer says no”, pithily expresses this phenomenon.

…which is precisely the usual free software issue. The Juicero is a computer you own, but that does not serve your interests.


Yes, good point.


> Computers are neutral technology.

Don't make me tap the sign: Technology. Is. Not. Neutral.

A shame so few technophiles have read Jacques Ellul[1]. Ellul's La Technique (and his later "Propaganda") should be mandatory reading for anyone thinking about systems. Be careful though: this thinker/philosopher verbalized a problem that made Ted Kaczynski conclude the only solution is for us as a species to retreat into the forests and vanish into the trees. Read Ellul at your own risk.

About the rest I agree, but this is complaining about a system that works _exactly_ as intended. Read Ellul!

[1] "La Technique" (The technological society) https://archive.org/details/technologicalsoc00ellu


A 500+ page book is beyond most people's free time to explore on a comment's faith alone, so here's a summary I found (the quality of which is unknown), since I'm always interested to some degree in new perspectives (https://www.supersummary.com/the-technological-society/summa...):

While Ellul separates technique from the individual machines that contribute to its reign, he maintains that machines represent an ideal state for the forces that govern technique. Humans, on the other hand, are not ideal at all. We lack the efficiency and predictability of machines. Therefore, Ellul says, technique is the social adaptation that exists in transforming the messy lives of humans so that they better fit into a world controlled by machinery. Further, in a somewhat terrifying turn of phrase, he adds that technique is "the consciousness of the mechanized world."

So humans adapt to technology, and as technology becomes more efficient we just become cogs of technology: that is the core principle here, and I can see how this relationship could be thought to arise. However, I completely disagree with it. Technology does enable efficiency at a fundamental level, but it's not that humans become slaves to the efficiency or technique; it's that some humans choose to force other humans into increasingly more efficient means.

There is some feedback relation here: in a global competitive environment, as leaders around the world continue to compete in various forms, they will inherently drive their populations toward deeper efficiency. But the root of all this is the global competitive environment itself. It's largely social constructs, and to some degree finite resources, that drive these behaviors and lead to the focus on efficiency. More than anything, it's greed and hunger for power that drives this nonsense.

Most humans recognize when their lives become dominated by some silly efficiency and don't care to follow suit. They want autonomy, to own their life and own their choices. They're instead forced into "the technique", so to speak, as a secondary effect. That underlying effect is what needs to be rooted out; technology isn't the fundamental issue here, it's a set of human behaviors and the institutions that enable them.


So this is a review of a review, plus your conclusion that you disagree? A shame you didn't read it, because it is less about machines than about systems (la technique) and systems thinking. There is a whole chapter on propaganda, which he expands on in a later book.


>This oppressiveness is not inherent to computers, it's inherent to the people and institutions making use of them. The examples in the article say it all: government bureaucracies, beverage dispensers.

Well, to build things at the modern computer industry's scale, you need government bureaucracies, huge corporations, tons of discipline from the assembly line to the cobalt mines, and enterprises and bureaucracies with a use for it all - 99% of which is to streamline control, to control more effectively and more widely, and to penny-pinch customers even better.

Now, give surveillance, instant indexed cataloguing, 24/7 tracking, remote controllable money and id access and more, to the same governments and corporations, and that's just going to point to more control.


The best thing I've read about this topic is an article by Melvin Kranzberg known as "Kranzberg's laws": https://doi.org/10.2307/3105385

There are six laws.

1) Technology is neither good nor bad; nor is it neutral.

2) Invention is the mother of necessity.

3) Technology comes in packages, big and small.

4) Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.

5) All history is relevant, but the history of technology is the most relevant.

6) Technology is a very human activity, and so is the history of technology.


tools are not neutral: they favor that which they make easier


Tools define a relationship with the entity being tooled, so you can learn a lot about the goals and values of the transactions by looking at the kinds of relationships that are/aren't implied.

You can't feed people with an AR-15.

The metaphors that define computing are all about abstraction, automation without human input, predictability, and control.

Abstraction tends to be lossy and context-insensitive, which is why an algorithm will say "no" to a loan request based on a credit score that ignores personal context, where a human loan manager who knows the people involved might say yes - for good and valid reasons.

Computing seems to be one of the most successful products of modernism, which values speedy predictable repetition, lack of decoration, and anonymised mechanisation.

It's possible to imagine computing in other value systems. But it's revealingly hard to do without losing the essence.

All of which is a complicated way of agreeing - computing is not neutral, and very probably can't be.

It's the first automated value evangelising system in history. And that's not a comfortable thought.


> Abstraction tends to be lossy and context-insensitive, which is why an algorithm will say "no" to a loan request based on a credit score that ignores personal context, where a human loan manager who knows the people involved might say yes - for good and valid reasons.

A human loan manager, given only a credit score, will approve the loan based on only the credit score. The problem isn't human vs machine, it's context-awareness and context-sensitivity vs a lack of one or both.

All large systems, human or machine, tend to be implemented using abstractions. Layers of context-stripping abstractions were a part of bureaucracy far before machines took it over. If a loan request form doesn't have an "if I don't get this loan my family will literally starve to death" box that you can check, the evaluator will not know about that context and couldn't possibly act on it. The form can be digital or paper and the evaluator can be a program or a human - it doesn't change a thing.
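
A toy sketch of that point (field names and the cutoff are invented): the evaluator, human or machine, can only act on fields the form actually carries.

    def evaluate_loan(application: dict) -> bool:
        # Whatever context was stripped before this point is simply gone.
        return application.get("credit_score", 0) >= 650

    evaluate_loan({"credit_score": 640})                 # denied
    evaluate_loan({"credit_score": 640,
                   "note": "my family will starve"})     # still denied: no rule reads "note"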

Similarly, context-sensitivity (the willingness to let the context sway your decision) was strictly limited even back in the day when decision makers were close enough to the source to be aware of the context. A bank clerk had to stick to the rules and couldn't just decide to give a loan to someone that didn't qualify under them. If they did, they'd be risking losing their job or worse, so they didn't. This discretion was only given to the people in charge - and that hasn't changed with computers. A person in a high enough position can override what the computer says, just as easily as they can override a lower clerk's decision.

Yes, making machines context-aware takes some effort, and making them context-sensitive takes even more. But even if machines magically came with both, their operators would do their best to get rid of those properties, just like they did with their employees. From their point of view, empathy is inefficiency. And that's the real problem.


I get your point, but consider that you can hunt with an AR. It’s pretty popular for women hunters as a deer rifle (which I found surprising).


Okay, but consider that I might be better off with a machine that approves or disapproves based on credit history and credit score than a banker who won't lend to me unless I get my husband's signature because he doesn't think women should be making choices about money without their husband's approval. (But who wouldn't even blink at loaning money to a man without his wife's signature.)


You can feed people bullets with an AR-15, but that usually doesn't end up being a life extension measure :(


Computers make a lot of things easier, both oppressive and subversive, just like any other highly versatile tool: hammers, screwdrivers, knives, syringes, duct tape and WD-40.


As a thought experiment, why not go to the extreme and see if the "just a tool, it's about what people make with it" point would hold there.

"Gas chambers are just a tool"

But a tool that has very few uses besides large-scale genocide. So of course it would be ridiculous to argue the gas chambers are just a tool, because (A) they are the result of an engineering project that tried to kill as many people as possible in a short time, without using valuable war resources like ammunition, so the "tool" cannot be isolated from the context of its creation, and (B) there is one thing gas chambers are good at, and it is mechanizing genocide. One could potentially come up with other things they could be used for, but there is just the one thing they really excel at.

Computers have more uses, but they are "not just a tool" either, because in reality nothing is.


Gas chambers are widely used for sterilising materials going into clean rooms for compounding of medicines. While this does meet the definition of “one more thing”, it is rather an important one.


With showerheads?


Not that I've seen. Nothing about the phrase "gas chamber" suggests shower heads to anyone knowledgeable about pharmaceutical equipment, though - it's a simple case of you attempting to move the goalposts to support a bad argument.


> We no longer define our choices, we choose among the options they provide for us. Freely programmable computers are actually quite powerful and subversive which is why there are constant attempts to control them [...] Computing freedom is therefore against their agendas

Yes, exactly.


Spot on! The matter at hand here isn't "human vs machine" but "human vs machine vs human", meaning the oppression of human by human via the machine


> This oppressiveness is not inherent to computers

Absolutely. I vehemently disagree with this essay. One of the reasons I am a passionate technology critic and sceptic is that I love technology and feel disgusted at the way it's co-opted and abused.

I am always vigilant not to become jaded or lose sight of the immense positive possibilities for digital technologies. But I sincerely believe that before that can happen and we can move into the true golden era of technological humanism, Big Tech, surveillance culture and overbearing governments must be eliminated.

> it's inherent to the people and institutions making use of them.

I disagree. It resides within them and is amplified by technologies. "Inherent" implies that it is an indelible, fundamental feature. Not at all. Humans, as individuals and institutions, can do better. We're just seeing amplification and facilitation of the same old human weaknesses - laziness, greed, stupidity, fear - go pick up some Aristotle, or any religious text, for a comprehensive list of moral failings.

> Computers are neutral technology.

Not quite. All technologies carry values in their design. An AK47 and a baby incubator carry different values: one is for taking life and one is for preserving it. At a more subtle granularity, computers carry values with them. I recommend Edwin Black's "IBM and the Holocaust" for some insights here. A Google Pixel carries different values than a Raspberry Pi - though to most eyes the differences are invisible.

> It's just that we are increasingly losing control of them. We're no longer the ones making the rules, the corporations and governments are.

Yes, computers have enabled tyranny. That's hardly debatable now. I call it techno-fascism. Other people use different terms. It's there in everything from Facebook to "cashless commerce".

> The computers belong to them now

So long as computers are controlled by software, which is mutable, the reversal when we take them back will be all the more complete and devastating. There is always poetry and irony in ultimate justice. One way of looking at it is that the tyrants are simply building the apparatus to enable the greatest and most audacious bloodless revolution in the history of mankind. The future of freedom depends on "hackers".

Sell them more rope.


The more neutral/academic term (instead of techno-fascism) is "algorithmic control"

Plenty of great reading: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C44&q=alg...


Or cybernetic governance.

Although I am not sure that captures the problem of technofascism.

Algorithmic/Cybernetic Governance is not malicious. It's just stupid. It's a mistake made by well meaning but misguided and poorly read people who have a vicariously omnipotent fantasy that machines can run things. Ultimately it's their intent to step back and bask in an autonomous land of milk, honey, magical plenitude and obedience.

Technofascism has a malicious and power hungry side. For that lot, computers are simple weapons of enslavement.

I also think the Cyberneticians are more honest and open; they really believe in their ambitious but naive vision. The technofascists are duplicitous. They know people won't buy their control fetish, so they spend a lot more on presentation, sugar frosting, and outright lies about the "benefits of digital technology" and their own intentions.


The author eventually contemplates the "good" aspects of "machine ruthlessness" (e.g. "cryptoanarchist technologies"), but ultimately warns against the "oppressive" potential inherent in all machines.

The main problem with this argument IMO is that it equates oppression with the kind of unwavering ruthlessness that is the way of the machines: they will do as they are programmed - there are no exceptions (unless such exceptions are programmed in, of course, which is not discussed).

This is, however, not the usual definition of oppression, which is normally conceived of as "unjust discrimination" [0] - really the polar opposite of such undiscriminating "decisions" by computers. I.e. everyone is treated equally, which is an ideal under the rule of law; the alternative opens the door to corruption, where (for example) the wealthy are treated with more leniency in court than the poor - or vice versa.

As it stands, the text can be read as a veiled criticism of a "rule of law" system backed by the state. And as such, it is a modern take on an age-old discussion that leaves little leeway for common ground: a leeway that allows for a little bit of this and a little bit of that, which is how things tend to end up in actual life.

[0] https://en.wikipedia.org/wiki/Oppression


Cryptoanarchism seems to be a deliberately designed red herring in this article (or a socially agreed-upon warning).

It doesn't escape the main thesis, that computers are physical devices that rely on rules.

It's refreshing. I'm smashing my head against the ongoing monkeypox Holmes-Rahe stress train wreck, and the discord in US cities. And here's an essay that raises questions of Philosophy of Mind. What if you believe in a Physicalist or Machine Functionalist theory of mind? Then are we just as ruthless as computers?

What if you have a Magical Random Banana? Does that let you escape the ruthlessness? What if you don't trust your magical banana?


> I.e. everyone is treated equally, which is an ideal under the rule of law

Sure, it's branded as an ideal in our profoundly corrupt societies. But would "equal treatment" bring equality? In this system, we are all free to own a castle or to work 12h/day in the mines from the age of 8. But however impartial the treatment of individuals, the system is still meant for profound inequality and exploitation.

Of course I agree that, at least here in France, we are very, very far from impartial treatment. Like you pointed out, the rich, the powerful and the cops are above the law, and we lower people can get abused by the judicial system in many ways. I'm just trying to say that being impartial (or, as the op says, ruthless) does not imply fairness.

Capitalist impartiality still means billions of people starving and working the fields/mines/factories to maintain the cheap throwaway lifestyle of urban modernity. And yes, this state of things and the neoliberalisation of everything very much relies on computers. I think calling it "oppression" is not a stretch, but rather an understatement.

On the history of computer oppression, you can read about "micro-management" and "bossware", and the restructuring of factories/administrations from the 80s to this day. Computers were and are an irreplaceable tool for enforcing stupid policies that ruin the lives and health of many working folks and the people they're supposed to serve.


> the system is still meant for profound inequality and exploitation

I disagree that "the system" is "meant" for anything - the system is not a monolith even if certain parts of the machinery work well together, it is rarely a question of design, usually a mishmash of various historical phenomena and trends that happen to exist simultaneously.

I think my plea for "a little bit of this and a little bit of that" perhaps got somewhat muted in my comment, placed at the very end like it was. So I agree that it would be oppressive to always treat everyone equally, the billionaire as well as the double-working lone mom, and that machines are often a prime tool for oppression and for deflecting responsibility.

But it would be problematic to start chipping away at the ideal of everyone's equality under the law in order to rectify such cases. This ideal is what stands between institutions like freedom, democracy and human rights on the one hand and corrupt states on the other, where the law is applied differently depending on who is on trial, and where the letter and intent of the law are flouted in favor of pure (real) oppression.

The problem really seems to be discerning where to apply "a little bit of this" vs where to apply "a little bit of that", a project that doesn't lend itself well to sweeping generalizations.


The oppressiveness described by the article is not about computers, but about rigid design — design with few affordances. Or, design with missing affordances — affordances that intuitively should exist but do not, due to various material or intellectual constraints when designing the thing.

Which leads me to disagree with the article that computers are inherently oppressive. Because with more resources, we can design computers with more and better affordances.


Computers are an inherently oppressive technology only in the same sense as any technology that isn't, like, a magical box that automatically feeds everyone on earth from thin air forever. There are certainly negative aspects to computers, some of which are touched upon in the article (my specific pet peeve is "middleman" apps like DoorDash or Uber whose sole value-add is to allow you to avoid talking to "the help" delivering your groceries, making your food, etc.). I think it's more productive to judge computers by their net "oppression", and for every Bitcoin or GrubHub there is an encrypted messaging app that stops the state from peering into an aspect of our lives, an anonymity tool that lets people speak freely, or a Wikipedia. To use one facet of computing as an example, the populace in aggregate has more secrets than the government, so if each secret kept by the government via encryption adds 1 unit of oppression and each secret kept from the government via encryption subtracts 1, (theoretical, perfect) cryptography would clearly be liberatory on net.
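
To spell out that toy accounting (all numbers invented; only the sign of the result matters):

    #include <stdio.h>

    int main(void) {
        /* +1 oppression per secret the state keeps via encryption,
           -1 per secret kept *from* the state via encryption. */
        long gov_secrets     = 1000000;     /* assumed */
        long citizen_secrets = 100000000;   /* assumed: populace holds more */
        printf("net oppression: %ld\n", gov_secrets - citizen_secrets);
        return 0;                           /* negative => liberatory on net */
    }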


indifference isn't oppression, and all the world, with the exception of some people, is indifferent to you


This should be the top comment.


Writing is an inherently oppressive technology.

Some of the first uses for writing were tax records to record what was owed to the king.

A large bureaucratic state could not exist without writing.

In addition, up until recently, literacy was confined to the upper classes, and the common people could not read or write. This was used to perpetuate the privileges of the upper classes.




I didn't understand this OC's thesis, so I can't criticize.

--

FWIW, Seeing Like a State has connected a lot of dots for me. https://en.wikipedia.org/wiki/Seeing_Like_a_State

My take:

Administration (tax, trade, war, public health, etc) requires control.

Control requires centralization.

Centralization requires bureaucracy.

Bureaucracy requires standardization and information technology.

Computers are a tool for digitizing information technology (replacing paper with bytes).

So are computers inherently oppressive? Sure. To the extent that participating in collective action reduces or impinges on individual actions.

--

A more interesting question for me is how innovation accelerates inequity.

Winner takes all (preferential attachment) is something like a natural law.

Reducing transaction costs (innovation, competitive advantage) merely accelerates the inevitable outcome.


Is mathematics an inherently oppressive study? Computers are simply tools for performing mathematical calculations, a very fancy abacus, if you will. The fact that humans use them as a tool for oppressing other humans is nothing to do with the nature of the computer, and everything to do with the nature of human societies.

Is paper an inherently oppressive technology because it was used to hold rules and contracts which bind people? How about trains, airplanes, automobiles, or rockets?

Are horses inherently oppressive animals because they were used by Genghis Khan to sweep the world with his armies? Is wheat an inherently oppressive plant because it enabled the rise of unequal societies with division of labor? Are dogs evil symbols of oppression for their many nefarious uses?


If there's a risk of oppression, it's likely to be due to the power of a small number of people to control a large number. For instance the people who can afford to employ a large pool of programmers have more control than those who can only write their own programs, or who at worst are confined to the user side of the relationship.

Since I'm not in that first group, my only hope is to differentiate myself from the last one. Hence why I enjoy programming for my own use despite not writing "software" for other people to use.


> All transactions are final; none can be reversed. It does not matter if your coins were stolen, or if your family will come to ruin because of it; the system does not care. There can be no exceptions. If we desire a system of absolute integrity, we must accept these outcomes as the cost.

[gestures broadly at hard forks] it ain't easy to reach quorum, but it is possible.


It's like we discovered hammers. What a marvelous tool. So we amputate everybody's hands, feet, eyes and ears. And replace them with prosthetic hammers. Which is awkward and unattractive.

Then we raze the world to make room for nail-factories.

The poster-boy for success is this hammer-limbed freak crawling through an apocalyptic wasteland of steel shards.


Semiconductor technology exploded old social lifts and killed old guards. The most upset about it are the people who relied on primitive parts of the brain to achieve social status or gain access to capital.


Would you seriously like to go back 150 years and see how that feels?

Because of "SOX" I run our "exit process" and I had a small victory lately. I'm not proud, but it happened, and I was thankful.


Fighting this oppression is the whole point of the GPL (and the AGPL). But abject pragmatists and industry bootlickers are pushing hard against software that protects the freedom of the users.


I have always seen this in opposite direction.

Computers shut the door on all human skulduggery. Humans are terrible at arbitrating fairness. Even when they think they're being fair, they are not.

Why are you only considering the upside of having humans in the loop, having the heart to wait the extra second for the filing? What about the human that looks you up and down before making that decision? What about the human that wants a bribe to take your filing? What about the human that's in good or bad mood?

Your conception of humanity is off.


Everything that is not human is ruthless. Non-ruthlessness is a purely human quality due to our ability to empathize with other humans and even non-human animals and objects.

Animals are ruthless, they will kill or devour you alive without a second of hesitation or remorse.

Nature is ruthless, it will freeze you to death, drown you, or demolish your community forever deaf to your pleas for mercy.

The universe is ruthless, its laws are unbending and its vastness is deadly and inhospitable to human life.

We are far beyond our evolved ecological niche which means we must conquer and alter everything around us or else be destroyed by it. We cannot sue for peace with an unthinking and unfeeling world. The only option is to shape it to suit our needs.


My cat doesn't devour me. It cuddles me.


Your cat was domesticated.

The OP here is correct on most parts, but wrong on a major one.

Humans are animals. The only reason we exercise compassion or empathy is that we have mastered limited power and resources - we are mostly comfortable and safe.

As soon as that changes, we become as animal as the next, and much more dangerous than any that has ever lived, on account of how mostly insane we are.


All this rumination on the evils of train and vending machine doors, and yet environment-burning ransomware-funding Bitcoin is held up as a counterexample for good?

This all seems pretty wooly.


Side note: would you put the publication date at the beginning of the article? When clicking the link, it's hard to tell whether it's current or was published years ago.


Computers are NOT an inherently oppressive technology, any more than vehicles are inherently homicidal.

What can liberate humanity from bogus causal analysis?


Computers will not carpet bomb, rape and execute you. "Soulful" humans do it all the time.


Will an AI ever be able to make correct moral decisions? Or is that something only a human can do?


Computers are both a bicycle for the mind and heroin for all our intellectual senses.


This is both a dumb essay and good reading for an HCI course (human computer interaction).


And did the Countenance Divine,

Shine forth upon our clouded hills?

And was Jerusalem builded here,

Among these dark Satanic Mills?


"Here's some TV shows I watched. And dreams I had."


I love this article and share most of his concerns. Thanks.


Clickbait headlines are an oppressive use of technology.


How is this number 2? …


computers do what they're told. If it's oppressive, then the influencers are oppressive


I really like the article (less so the title), but it reads like the "For" side of a two-sided debate - a myopic (albeit valid) criticism of the worst aspects of computer cruelty. But I think this is one of those areas where there is another side to the discussion, and the truth probably lies somewhere between (or perhaps better said: it lies in both sides simultaneously, and we as a society, and especially as technologists and software developers, decide which truth to draw out).

So, here is my "Against" argument:

Computers are spaces of almost complete freedom. More than almost any other creative or expressive medium, there is the possibility to do anything without limit.

I can write, but unlike a book or a newspaper, where my words can at best be statically illustrated by photos or diagrams, I can show videos, annotate diagrams step-by-step, and allow my reader to interact with my words. My drawings need not be limited to a 2D plane, and my models need not be limited to the 3D world. When I create something, I don't just have one copy of it that I must never lose, but I have an infinite amount of it that I can share freely as I want. When I tell a story to someone, they don't have to be in the room to ask questions, or contribute back.

This is near-unbounded creativity. And it is creativity without sharp edges. Due to computers, animation has spread from being a locked-down, family-friendly art form available almost exclusively to the largest media companies, to a toy used to create dumb fights between famous media characters, or extensive porn collections, or touching and heart-wrenching dramas. Computers have democratised art in a way analogous to the printing press, or the television, and by doing so destroyed the power of censors and critics in deciding what gets shown, and what does not.

That escape from critique and censorship is a particularly important idea that's worth emphasising. Yes, we have a cinema industry that only wants to show the same handful of blockbusters every week, but services like Netflix and Amazon have created a space for niche films and television series to thrive, and even be reincarnated. Critique has become a more flexible space, where rather than rely on elite opinion-makers to lay their judgements down, I now have far more voices to listen to, but more flexibility in which ones I choose to accept. And yet, rather than narrowing my choices down into a private echo chamber, this variety of voices arguably provides more chances to hear new opinions and experience media that I wouldn't have thought of before.

Computers are not inherently oppressive. If anything, they are inherently freeing. With computers we are not just able to create more things; more people can create things than ever before, opening up voices that simply wouldn't be heard in previous generations. Moreover, those people can share their creations with more people than ever before, bringing new ideas of freedom into truly oppressive states.


Indeed it lies in both sides simultaneously, as all your examples have a free and oppressive side to them as well. It seems the freedom lies with the individual and his control over his own computer, and the oppression lies with larger platforms that enforce rules on the individual outside his control.

Computers have democratized art and opinion, but the distribution of that art and opinion is done through centralized platforms, which are curated/censored using automated programs ("the algorithm").

I think computers are neither inherently oppressive nor freeing, and I think that is also the point of the article. Computers do -exactly- what we make them do, and the word -exactly- is what is explored, and judged, as easily leading to oppression if given power over the individual.

Power over the individual sounds ominous, but what I basically mean is that a computer decides over you, instead of for you. For example: youtube decides if your video is monetized, some AI decides if you get a loan, or get into a country, or get random checks at the airport, or what you see on youtube, etc etc.


s/inherently/increasingly/


Capitalism will eat itself. Countries with cultures which favor decentralization and personal relationships between producers and consumers will thrive and retain their spirit.


Capitalism is decentralization.


As someone who has experienced computers before they became oppressive, I'm here to counter this claim. Computers in the hands of their owners, are simply tools. The more powerful the tool, the more dangerous it can become. I'll try to make that case by analogy to machine tools.

Machinists know that their tools, such as the lathe and mill, are always ready to rip your arm off if you fail to respect them. As long as you keep that in mind, and account for it, they can be used to do in minutes what used to take months of careful work to accomplish.

The potentially life altering side effects of machine tools demand the machinist very carefully constrain the workpiece, and their access to it, and thus limit the side effects of the cutting tool and the power driving it. Loose clothing, or loose workpieces, shouldn't be allowed in the shop.

Back in the early days of computing, the side effects of a computer were strictly limited to the front panel lights of the first IMSAI, or the screens of early computers that lacked hard drives. Simply cutting power reverted everything back to the initial state. With some very limited exceptions (I'm looking at you, monochrome IBM monitors), you couldn't cause permanent harm, no matter what you typed or ran.

As we progressed and got floppy disks, they came with the option to Write Protect the disk (I.E. make it read only), and the option was trivial to deploy. This allowed us to experiment with a huge variety of software, it was common to come home from a meetup or user group with a stack of "shareware" floppy disks containing hundreds of different programs to try out. Because the OS boot disk was write protected, there was still no long term harm that could come out of this activity. The side effects were quite easily managed.

The liberation and freedom that this allowed was quite a rush. It can still be that way, if we take care to limit the side effects of computing. Operating systems that can be booted from separate read-only media make it possible to always start with a known state. Keeping copies of your data (like we did with backup copies of important disks) allows you to revert to a previously known good state.

It was only after we started installing software, rather than merely dumping it in a folder and running it, that things got far less fun, and more constraining and oppressive. You now had to carefully guard your machine as you could no longer easily revert it to a known state. Backups became hours long processes involving stacks of diskettes or tape drives. The ability to try out new things, and the freedom that came with that experimentation was lost along the way.

We need to return to our roots, to make it possible to experiment again. Just as with the careful constraining of the workpiece and path of the cutting tool in machining, we need to provide a means for the user to constrain the side effects of the computer and turn it into a powerful, yet safe tool.

We need a new class of operating systems for our computers that allow the user, at run time, to specify which files they wish to operate upon. Unlike the Unix environment on which all popular PC operating systems are modeled, we need the operating system, and not the program, to enforce the user's decision. This would again allow them the safety of knowing what was and wasn't at risk at any given time.
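
For what it's worth, OpenBSD's unveil(2) and pledge(2) come close to this idea, though there it's still the program volunteering the restriction rather than the user choosing at run time; once declared, the kernel does the enforcing. A minimal sketch (OpenBSD-specific; it won't build elsewhere):

    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc != 2)
            errx(1, "usage: %s file", argv[0]);

        if (unveil(argv[1], "r") == -1)   /* expose only this file, read-only */
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)     /* lock the filesystem view */
            err(1, "unveil");
        if (pledge("stdio rpath", NULL) == -1)  /* keep only basic I/O */
            err(1, "pledge");

        FILE *f = fopen(argv[1], "r");    /* permitted */
        if (f == NULL)
            err(1, "fopen");
        /* Opening any other path here would now fail, no matter
           what the rest of the program tries to do. */
        int c;
        while ((c = getc(f)) != EOF)
            putchar(c);
        fclose(f);
        return 0;
    }

Once the unveil list is locked, even a compromised program can't reach outside the files it exposed - the enforcement lives in the kernel, not in the program's goodwill.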

We can choose to make computers safer, and less useful as tools of oppression. Let's do so.


Ironic to see this in an aggregator site called Hacker News.


The Industrial Revolution and its consequences have been a disaster for the human race. They have greatly increased the life-expectancy of those of us who live in "advanced" countries, but they have destabilized society, have made life unfulfilling, have subjected human beings to indignities, have led to widespread psychological suffering (in the Third World to physical suffering as well) and have inflicted severe damage on the natural world. The continued development of technology will worsen the situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in "advanced" countries.



