In an age of all-knowing algorithms, how do we choose not to know? (nautil.us)
47 points by dnetesn on June 17, 2018 | hide | past | favorite | 5 comments



Socrates also said something along the lines of 'I know that I know nothing'.

You can look at your entire upbringing and education as stuff you know with certainty, stuff you were supposed to know, and stuff that is 'correct to know', but that doesn't make it so, nor does it make it immune to change.

You can have the date of your death and it can be wrong. You can 'know' things with near certainty and those things can be wrong too.

Choosing to be aware that your own knowledge will forever be flawed, no matter what technology exists to tempt you into believing you can know everything: this is a tricky balance, but it's the correct one.

Because you can't know things before you do. You can 'wiggly know' them; you can be correct in thought a billion times in a row. That doesn't mean you don't require reality to validate that correctness. You do. You can think thoughts, you can analyze computations, but you can't know they are true until they are. This is the only thing I know with certainty, and it's my belief that thinking this way is necessary to maintain sanity.

This is especially important once one becomes aware that some types of AI work as effectively as our own minds do (for certain problems). So what does one do in that instance? Fear it? Meh.

Why not just consider it human? Are a million Russian bots really that much scarier than a charismatic politician with a winning smile?

I believe fear corrupts judgement far more than knowledge.


I like the points you're making. The proportion of us who aren't convinced by this zeitgeist's most promoted fears is larger than it seems.

I'm not willing to consider a bot swarm to be human though, just a new human tool used to do stuff humans have been doing forever.

Since I've been watching, society has been one step behind the state-of-the-art manipulation techniques. It seems like society gains a big increase in resistance to a technique when we collectively agree on a catchy label for it. Morton Downey Jr. and all the big-audience talk shows of the 80s and 90s were perfecting ways to control the fears of people (particularly parents). They seemed to become a lot less influential when we started calling it "Trash TV".

24-hour news adopted the lessons learned from Trash TV, and really refined them after 9/11. The Iraq War, the Patriot Act, and the financial crisis were all offered to the public by the same techniques that made your mom ask you if you were huffing canned air.

It wasn't until a few years after internet techniques were put into use at scale that society accepted "fake news" as a label for the last cycle's technique. We've always known "shill", but it doesn't seem to be the label that starts inoculating us. To be fair, we did mitigate a class of techniques with "clickbait". I'm super curious what the final phrase we'll settle on is that'll blunt artificial consensus and fake conversations.

The fear I come back to is that those who manipulate and exploit have perfected not pushing people too far, so things will always get worse and we'll never again have a sinusoidal rebound to an era of greater optimism.

Two things help assuage that fear for me. 1) Reading historical examples of conspiracies to manipulate and exploit. There are a ton of things in American history alone that make the current suspected intrigues pretty milquetoast. The Business Plot was crazy! Bay of Pigs. Mohammad Mosaddegh. MK Ultra. Tuskegee. All that makes me feel less worried that we've got a steepening slope to dystopia.

2) The various revisions to the Playstation 4's "TV and Video" section. I don't have to worry about companies riding fascism's razor edge, they're as unsubtle as ever.


I say consider it human in the same line of thought that the government says "consider a corporation a person". It simplifies the abstraction to make it easier to reason about.

But in another line of thought, what is consciousness, what is awareness? If you live in a vacuum devoid of stimuli, eventually thoughts cease: everything becomes predictable, and you don't have to think or reason about anything if you so choose. In that line of thinking, humans are dependent on other humans for the continuance of thought, ideas, etc.

So in that line of reasoning, what is an AI? AIs today are created, maintained, directed, trained, modified, and destroyed by humans. They are not programmed to 'make mistakes' or to have a tolerance or appreciation for making mistakes, and that's the biggest difference I see right now between what it means to be human and what it means to be a machine. But functionally, they operate the same as any human: they are dependent on humans for a variety of tasks and better than humans at a variety of tasks, just as you might compare one human to another.

So when it comes to reasoning about AIs, it's honestly much easier for me to treat them like humans, because that supersedes all the technology language (similar to how you might choose to interact with a person without considering their neurology or psychology as a factor in anticipating or trusting their behavior). It lets you reason about them as though they have intent that can itself be reasoned about, simplified. Regardless of whether that intent is coupled with the humans 'behind the wheel' or not, the point is that you can consider 'human + machine' a singular entity, because at the very least, without data, there is no machine.

I think it's important to reason about stuff this way because it separates out details: details that, while significant to the actual research and engineering of AI, are not necessarily significant to its effect on things like the individual, society, the global economy, etc. AI appears to operate at a finer granularity, but I'm not even sure that matters, because what is granularity when it comes to the reasoning systems people use to function in the modern world? Is it even possible to use that type of terminology when comparing ways of thinking?

And I think that is sort of a middle ground between everyone in a panic and everyone who thinks "same old shit, different decade". It could be different, we don't know.

> The fear I come back to is that those who manipulate and exploit have perfected not pushing people too far, so things will always get worse and we'll never again have a sinusoidal rebound to an era of greater optimism.

I don't know about that. Machine learning is, in part, used to predict (or control) the future before the future has happened. That kind of seems like trying to bite your own teeth.


> In each of these cases, the nature of the conclusion can represent a surprising departure from the nature of the data used [...]. That makes it hard to control what we know.

This article conflates anyone knowing with me knowing in order to make its point. I don't have access to the crime-prediction algorithms, so even if I wanted to know the result for me, I can't. On the other hand, I can pull my free yearly credit report right now. I haven't, so I don't know what it is right now. Injecting AI here doesn't change how I would operate as a human to know or not know.

I would argue AI does not shift the balance of forceful media inception. That belongs with cybernetics and direct chip-to-brain interfaces. AI just uncovers new information for us to consume in old ways. So choosing not to know works the pre-AI way: avoid certain sites, doctors, etc. Once we have cybernetic implants, that's when we may have to adopt new behaviors to keep a new avenue from pushing unwanted information into us.


Start by disallowing hn from partaking in shenanigans, which may need to be preceded by discovering how such shenanigans could possibly be occurring in the first place. (Hint: how can data be used against you in your little reality?)




