Seeing Like a Finite State Machine (crookedtimber.org)
89 points by raleighm on Nov 28, 2019 | 40 comments



The title is a reference to https://en.wikipedia.org/wiki/Seeing_Like_a_State which discusses "legibility", the process of mapping messy reality into convenient categories for systematic control.

The characteristic item of legibility is "papers": ID cards, passports, and so on; but also things like title registers. In a transparent rule-of-law society the control systems enabled by these are generally to the public benefit, which is why they get instituted in the first place. But at every stage there are people and situations that don't "fit", and a huge temptation to hammer the square peg into the round hole. The impositions range from "no, you cannot have the right gender on your passport" to wholesale ethnic cleansing.

And the stuff about the inability to correct mistakes in authoritarianism leading to scientific and technical error is spot-on. That's how the Soviet Union got Lysenkoism, and its space program only survived due to the personal political effectiveness of Korolev (who had already been in a gulag!).


Distributed thought policing can be just as effective in enforcing error as the centralized variety. In the west, there are many true things that one cannot say: who can say how much damage these prohibitions have done to our decision making?

As for the Soviets: their physics, chemistry, and mathematics education was very good. It was only in the softer academic fields like history, biology, and psychology that you saw widespread nonsense. These fields are the ones with direct political implications. Today, we struggle with these fields in much the same way for much the same reason: see the replication crisis in psychology.


> In the west, there are many true things that one cannot say

sigh such as? And what repercussions do you receive for saying them? I see a lot of people with books, TV interviews, and academic positions complaining about being "silenced" despite still being very audible.

> we struggle with these fields in much the same way for much the same reason

Evolutionary psychology may be the worst example of this. Or large areas of economics. Or the political implications of climate science.


> > In the west, there are many true things that one cannot say

> sigh such as?

Be aware that you are, in effect, asking for people to actually step in the bear trap in order to prove to you that it’s dangerous.

Even pg’s essay What You Can’t Say (well known around here) didn’t mention any specific things, for obvious reasons.


> Evolutionary psychology may be the worst example of this. Or large areas of economics. Or the political implications of climate science.

social psychology was specifically singled out by the replication crisis for having something like 2/3 of its published papers failing to replicate


> I see a lot of people with books, TV interviews, and academic positions complaining about being "silenced" despite still being very audible.

Such as? And how can you make claims about the opinions you can't easily find just by looking at the ones you can?


The replication crisis seems like the opposite problem: it's too easy to say things, so there's too much noise to find anything valuable.

It could just be that the softer academic fields are harder to study, regardless of which society studies them.


Might you be confusing state-enforced controlled speech with simply not being well received by the people you're speaking to?


My reading is that they consider the two to have very similar effects. Conflating them is the point, not a confusion.


I think there's actually a good point here. There does seem to be scaremongering that China is going to become some sort of ultra-efficient techno-totalitarian regime whose oppression is guided perfectly by technology. However, it's very difficult to get useful work done with machine learning, and very easy to get absolutely rubbish results. The reason there's a massive arms race among the top tech companies for talent is that you need really smart people to apply machine learning well.

So the likelihood is that a state like China is primarily going to get rubbish results. The data they feed into their systems is going to be patchy, biased, and often doctored. So when it starts locking people up and sending them to re-education camps, it won't actually be removing the disruptive forces in its society; it will just be reproducing the great purges of Soviet Russia: picking people more or less at random and attacking vast swathes of the population. And whilst they fail to solve their social problems they'll create economic problems as they destroy their own workforce. At which point they either have to abandon these techniques entirely (unlikely) or go further and further to the extreme, which is what we've seen time after time from states that engage in these behaviours.


There's a tradeoff between type I and type II errors. As long as China's leaders are willing to err on the side of their own safety, and as long as they don't mind locking up way more people than necessary -- and both conditions seem to hold -- I see little hope for the hypothetical destabilizing effects of lousy AI.

After all, dictators throughout history have made their countries horrible with no AI at all.
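
For concreteness, here's a toy sketch of that type I / type II tradeoff (my own illustration; the numbers and the "risk score" model are entirely made up): lowering the flagging threshold drives missed targets toward zero while the count of wrongly flagged people explodes.

    # Toy illustration of the type I / type II tradeoff under a "flag anyone
    # remotely suspicious" policy. All numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    is_target = rng.random(n) < 0.01                 # 1% are true positives
    scores = np.where(is_target,
                      rng.normal(0.7, 0.15, n),      # targets score higher on average
                      rng.normal(0.4, 0.15, n))      # everyone else

    for threshold in (0.8, 0.6, 0.4):
        flagged = scores > threshold
        false_pos = np.sum(flagged & ~is_target)     # type I: innocents flagged
        false_neg = np.sum(~flagged & is_target)     # type II: targets missed
        print(f"threshold {threshold}: flagged {flagged.sum():6d}, "
              f"innocents flagged {false_pos:6d}, targets missed {false_neg:4d}")

A regime that only cares about type II errors just keeps lowering the threshold; the entire cost shows up as type I errors, i.e. as people locked up for nothing.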


China has a lot of smart people and a lot of money. I think they could do a pretty good job at applying machine learning.


Yeah, over a billion people, all loyal to the cause.

Machine learning is sloppy and imperfect now, but if anyone is going to get out ahead of the pack first, it would seem to be China. Especially since they are not held to the same standards of accountability as other nations, given the West's relatively weak influence, and they are more comfortable with locking up the wrong person just to make sure they catch the right one too.


Is there any reason why machine learning techniques can't be used to control the application of machine learning techniques?


It's similar to "is there any reason why a computer can't program a computer?" Yes, you can do that, and you could even sort of say that a compiler is doing that. But if you think it keeps you from needing to have a person program the computer, you're expecting too much out of it.


Automated Machine Learning does seem to be a thing, though?

https://en.wikipedia.org/wiki/Automated_machine_learning

Edit: I was just curious, given the claimed widespread applicability of current ML techniques and the apparent demand for human intervention in the process, to what extent there had been any progress on automating the use of ML itself.
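
For what it's worth, here's a minimal sketch of the idea, using scikit-learn's GridSearchCV as a crude stand-in for the automated search step (real AutoML systems also search over model families and preprocessing pipelines; the dataset and parameter grid here are just illustrative):

    # Crude sketch of "automating the ML plumbing": exhaustively search a small
    # hyperparameter grid and keep the best model. Dataset and grid are illustrative.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    param_grid = {"n_estimators": [50, 200], "max_depth": [None, 10]}
    search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
    search.fit(X_train, y_train)        # the "automated" part: try every combination

    print("best params:", search.best_params_)
    print("held-out accuracy:", search.score(X_test, y_test))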


Machine learning doesn't have to be perfect (or even effective) to do its job in this context. It's like the Mechanical Hound in Fahrenheit 451 - as long as the public hears it's been let loose and then sees it catching someone, that's enough to reinforce its 'infallibility' and its usefulness as a tool of the state.


The technological side does not matter. The modern AI renaissance is driven purely by the wide availability of training data. The West is in the process of restricting this; China is putting it front and center.

The academic state of algorithm development (whether China is better or worse in this regard) is a minor rounding error.


> The great advantage of democracy is its openness to contrary opinions and divergent perspectives. This opens up democracy to a specific set of destabilizing attacks but it also means that there are countervailing tendencies to self-reinforcing biases

> These correction tendencies will be weaker in authoritarian societies; in extreme versions of authoritarianism, they may barely even exist.

What does authoritarianism vs. democracy have to do with "openness to contrary opinions and divergent perspectives"? Do the courts of public opinion perform better than government?


One of the more interesting results from dynamic systems theory is that all simultaneously complex and stable systems are stable because they are stabilized by internal opposing forces.

As authoritarianism implies trying to shape the world through power and authority, the society must then either become unstable, stop being complex (?!), or respond with equivalent power by, e.g., overthrowing the government.
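
(A toy illustration of that stability point, with made-up dynamics: a state variable pushed by a self-reinforcing force stays bounded only when an opposing force is present.)

    # Toy model, not from the comment: Euler-integrate x' = push + pull, where
    # push is self-reinforcing and pull is an opposing (restoring) force.
    def simulate(restoring, steps=200, dt=0.05):
        x = 0.1
        for _ in range(steps):
            push = 1.5 * x              # destabilizing, self-reinforcing force
            pull = -restoring * x ** 3  # opposing force; zero if restoring == 0
            x += dt * (push + pull)
        return x

    print("no opposing force:  ", simulate(restoring=0.0))  # grows without bound
    print("with opposing force:", simulate(restoring=1.0))  # settles near ~1.22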

One could say that the courts of public opinion perform worse, and are thus better, as they decrease the total amount of power/energy in the system. I.e., they are ultimately less lethal in most cases. Not to say that the public can't be terrifying, but on average it seems to be a much better bet.

Allowing large scale disinformation campaigns is a problem for democracies, and it must be combated vigorously, or one could get into a situation where the disinformants have too much power, with much the same systemic results as an authoritarian leadership.


Authoritarianism has its share of opposing forces too, I would wager. The vassals-clients are trying to switch places with the lords-patrons, and the patrons have to neutralize those attempts. A lot of work is wasted on policing: watching the suspects and watching the watchers.


"Allowing large scale disinformation campaigns is a problem for democracies"

It does not have to be disinformation. Given how easily the majority is brainwashed by propaganda, it is not that difficult to imagine mass postings that, while technically true, are presented in a way that helps form the desired state of mind, and then just progressing from there.

How do you fight that?


You don't. If the masses change their minds in a way that you don't like in response to true facts, either put out your own competing true facts or just accept that you may have been wrong.

A worldview that can exist only with the support of censorship doesn't deserve to exist at all.


I've always subscribed to the notion that while there is an objective reality, in many cases humans have no hope of finding it directly and instead only have access to it through cultural filters.

What this means is that cultural processes can change what facts are considered true over time.


That's postmodernist baloney. There is an objective truth, and we can asymptotically approach it. Humanity can access facts. It's nonsense to suggest that we can't distinguish between claims because everything is false.


That's rationalist baloney. There is an objective truth, and in many cases we can asymptotically approach it, but not nearly in all cases.


Which specific ideas are important enough to censor the public to enforce but not amenable to empirical verification? Sounds like religion to me.


Using the right pronouns for people in workplaces and schools. There is empirical evidence of improved mental health but ultimately it's based on a specific belief system — you have to believe that trans people are a group to care about, and that almost anyone who asks you to use a different pronoun for them is being sincere about it.


Who gets to decide what counts as "disinformation"? Combatting disinformation campaigns is just a euphemism for giving someone the power to decide what's true, and a policy of enforcing some official truth just leads to errors in decision-making.


Certain things are definitely false, but in attractive ways. If you don't have a process for getting them out of public decision making you end up making policy based on Protocols of the Elders of Zion or the 21st century garbage equivalent.


Again: who operates this process? Who gets to decide the official truth? You? Why?

We have a process for filtering untruths out of common knowledge: ideas competing on a fair basis in public, which has worked for hundreds of years. We should continue relying on the marketplace of ideas instead of just accepting pjc50's official list of true facts.


I don't have a solution, and I don't even have a list of facts, but I can find out where things have been thoroughly investigated and determined to be false. It's rather like climate change: there is no single solution, and it requires continuous effort by almost everybody. Giving up on truth is like giving up on the environment and throwing your trash in the street.


"I don't have a solution" is a cop-out. You clearly have a solution in mind, because you've rejected the option of just letting people form their own opinions. Every single time I've seen someone make an argument similar to yours about how we need to protect the public from "disinformation", I've found that once you probe deeply enough, the real argument is that what the person talking about "disinformation" really wants is to put himself and his friends in charge of constructing official truths for consumption by the public.

No, thanks.


Meanwhile, doing nothing allows "vaccines are dangerous" and "Jews are dangerous" messages to become popular to the point that people get killed. And your response to that is that any number of deaths is better than even the most limited attempt to remove known lies from public discourse?


It's interesting how you offer censorship as the only alternative to "doing nothing". People trying to gain extraordinary power over others always claim that they need special authority to deal with some extraordinary emergency. I refuse to accept the idea that edgy things said on the internet constitute some kind of Reichstag fire justifying giving tech companies or politicians or NGOs the right to decide the truth.

If an idea is bad, you can discredit it using argumentation and evidence. If you can't, your alternative probably isn't as correct as you think. Who are you to decide a priori which ideas are good and which ones are bad? Who gave you that authority?


So why are we losing against the anti-vaxxers?


Because experts betrayed the public's trust by spending years making false and self-serving claims while censoring those who tried to debunk these claims. Now the public doesn't trust experts even when they're right. The way to regain trust is to stop the censorship, not amp it up.


The central point of this garbage article is that authoritarian regimes that use machine learning will fail because they'll be misled by biased outputs. Titledropping Scott's famous book is just for flavor. As one of the comments says, the article equivocates between "bias" as injustice and "bias" as error. Authoritarian regimes can use machine learning just fine.

The rest of the comments are just bizarre, with claims like there's no difference between dictatorships and democracies because "totalitarianism" is just a "colonialist" category. And this blog wants to make the claim that it's China that's detached from reality?

Machine learning's real problem is in the west, which has a very difficult time accepting unpleasant facts even when fair ML systems produce them. It's the west, not a place like China, that's going to be misled by attempts to "fix" machine learning.


Downvoted? Really? Must have hit a nerve somewhere.

"Machine learning's real problem is in the west, which has a very difficult time accepting unpleasant facts even when fair ML systems produce them.'

For example, "proof" that bias exists in an institution due to a statistical difference between its distribution of identity groups and the general population. Will be interesting when these type of rules become part of our ML overlords.

True story: I took a science course in school a long time ago in which my lab partner used the calibration knob on the back of the voltmeter to get exactly 5 volts out of the power source. We found this out only after we kept getting very odd results.


>Authoritarianism then, can emerge as a more efficient competitor that can beat democracy at its home game (some fear this; some welcome it).

false assumption that democracies cannot be authoritarian



