
The Hidden Costs of Automated Thinking - laurex
https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking
======
robbiep
The article's leading example, modafinil, is erroneous.

Modafinil works by at least three mechanisms. Part of the complexity is that we
don't yet fully understand how sleep works and what causes it, or why it is so
necessary that without it you will surely die. So untangling which of its
mechanisms does what is difficult; however, possibly the greatest contribution
to our understanding of the mechanism governing the drive towards sleep has
come from modafinil's insights into it:

As you go about your day, adenosine (yes, the nucleoside built on the DNA base
adenine) accumulates in the extracellular space in the brain. These levels
reach a peak before you sleep; sleeping allows re-uptake/clearing of
extracellular adenosine. Theories have proposed that levels of extracellular
adenosine contribute to the drive to sleep/fatigue.

Caffeine binds as an antagonist to the adenosine receptor, which is where its
stimulant effect comes from.

Modafinil promotes re-uptake of adenosine, so there is less extracellular
adenosine binding adenosine receptors (and presumably causing the reverse of
stimulation), a function normally carried out at night.

There are at least two other known mechanisms from when I last did an enormous
deep dive into modafinil and read every published paper during med school.

~~~
bryanrasmussen
So how should modafinil and caffeine intake be leveraged to best work
together?

~~~
robbiep
That's a deep question. I don't think they mix well, and this isn't my area of
specialisation. I generally try to avoid receptor mashing, so I stick to one
thing unless work absolutely demands it. (I also think it's very important to
have a wash-out period where you are clean; we really don't know what the
long-term effects are, but modafinil has been used since the early 90s and
doesn't seem outright harmful.) Caffeine has a short half-life (depending on
your genetic profile), said online to be 4-5 hours, but I don't find this to
be true for me (I'm a fast metaboliser according to 23andMe); whereas
modafinil's is around 17 hours. Take from this what you will.
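To make that half-life comparison concrete, here is a minimal sketch of
first-order elimination, using the figures quoted above (5 hours for caffeine,
17 for modafinil) purely as illustrative inputs; actual pharmacokinetics vary
a lot from person to person:

```python
def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of a dose still present, assuming simple first-order elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# After 24 hours, using the half-lives quoted above:
caffeine_left = fraction_remaining(24, 5)    # roughly 4% of the dose remains
modafinil_left = fraction_remaining(24, 17)  # roughly 38% still remains
print(f"caffeine: {caffeine_left:.1%}, modafinil: {modafinil_left:.1%}")
```

The point of the comparison: a morning dose of modafinil is still substantially
on board the following morning, whereas caffeine is essentially gone.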

My honest opinion is that you should try to balance your life, sleep well and
exercise, because everything else is just a crutch, and crutches shouldn't be
used long term.

------
willey457
The most important phrase of the entire piece:

“A world of knowledge without understanding becomes a world without
discernible cause and effect...”

~~~
GauntletWizard
I've already started snarkily saying that machine-learning experts are people
who believe that their degree in statistics allows them to ignore the
correlation-causation fallacy.

To put that point another way: it's commonly known that shark attacks are
rare. But that's because we know where sharks are. We don't swim at beaches
that sharks can easily reach, and rarely far enough out for them to be
prowling. When swimming in deep water, the risk of a shark attack is
comparable to that of sports like cycling [1]. Sharks aren't going to kill
you, but if we didn't have institutional knowledge of where they attack (and
continued watersports at our current rate), you'd likely know someone who had
died from one.

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3941575/

~~~
jayd16
You're kind of missing the point though. Correlation is enough to get results.
Worrying about the edge cases where correlation falls apart is simply worrying
about insufficient training data.

~~~
rmellow
What kind of results do you expect from blindly using correlation? Building,
say, a credit-rating model with this kind of mentality will just result in a
racially biased model.

~~~
jayd16
Why the hypothetical? Valid uses are already implemented. Salient points on a
face do not cause a face, but you can still get very good facial recognition.

It's a parlor trick perhaps, but it's silly to ignore the results.

As for building a biased system, again it's just about the data you use, and
you could easily build an "intelligent" system that had a racial bias. We
should worry about automation that simply entrenches the status quo, but I
don't think that's a problem specific to statistics-based ML.

------
heisenbit
There is a lot of wisdom in that article:

> It’s possible to discover what works without knowing why it works, and then
> to put that insight to use immediately, assuming that the underlying
> mechanism will be figured out later.

> Our accounting could reflect the fact that not all intellectual debt is
> equally problematic.

We often talk about the technical debt we accumulate when creating something
new. But what can sometimes be worse is forging ahead without the proper
understanding, extrapolating from familiar but possibly inapplicable
precedent.

The complexity of modern tech makes it all but impossible for even most of us
here to fully understand how our computers work - a very different situation
from the time when Woz was able to hold all the moving parts of the Apple
design in his head.

We have to rely on stuff we don't understand, and that can be an emotional
challenge, with different individuals reacting very differently. Avoidance
leads to a crawling pace and attempts to reinvent the wheel when the simplest
sufficient solution calls for a four-wheel drive. Deliberately closing one's
eyes is also not uncommon - believing in the magic of the solution with no
regard for the laws of nature. Overly broad deep dives, studies, and
investigations are another anti-pattern. As is being overly conservative - my
current project's technical debt is way too much code, because we did not
fully understand the framework we build upon and feared making the required
small innovations.

Finding the right balance here as an individual or finding common ground as a
team is challenging.

~~~
playing_colours
We have relied on a lot of things without understanding how they work since
the dawn of time. They are reliable because they have passed the test of time,
passed through many generations. On the other hand, our understanding of their
inner workings changes - the will of the gods, aether, thermodynamics, quantum
physics.

Nassim Taleb covers this idea in his book Antifragile.

~~~
imperialdrive
Glad you mentioned that book, as it was a relevant and eye-opening read.

------
Hnrobert42
I am surprised the author didn’t use one of the most salient examples:
navigation. I used to consider myself quite good at navigation because I took
the time to explore my city. Now, I blindly follow instructions until I am
lost when my phone dies.

~~~
wwweston
I'm _super_ glad for navigation apps in Los Angeles. In fact, the main reason
I finally adopted a smartphone was because of how frequently a different route
can save you a half hour of waiting in traffic (or more).

But I'm even more glad that I learned to navigate LA via map and memory well
before I came to rely on GPS.

Lately I've been trying to keep that fresh by taking a moment to mentally walk
through the navigation instructions _before_ I start traveling somewhere.
Having the overview in your head -- even if you forget some of it -- makes a
world of difference between building a geographic model and just following
instructions.

Somewhere in here there's probably a lesson or two about the difference
between augmentation and automation, but it's not one I've teased out well
enough to articulate better than this.

~~~
wpietri
Yeah, one of the things I think about a lot is supportive versus controlling
technology. Google Maps, etc, seem uncomfortably in the middle to me. Yes, I
told it where I want to go. But after that, I'm basically a peripheral device
to Google for the duration of the trip, and it doesn't help my geographic
sense much.

I recently went to Amsterdam. Before I left, I got Google Maps directions for
various places I was going and then used Google Earth to "walk" the routes.
After 10 or 15 minutes I built up enough of a sense of landmarks and general
layout that I felt pretty well oriented when I arrived.

That was cumbersome to do, but I'd love to see GPS tools move in that
direction of supporting not just my current goal, but my long-term
independence from needing turn-by-turn directions for everything.

~~~
lonelappde
Couldn't you look at the road and landmarks while you drive, to learn the
geography?

~~~
wpietri
No, because the standard interface shows me a postage-stamp sized
understanding of the world. It's sufficient to follow the directions, but not
useful for learning the broader context in the same way that one has to do
with physical maps.

------
Nasrudith
Really, there is something the author missed - just because we have a theory
doesn't mean it is right. Amusingly, all of the flaws attributed to AI
technically apply to natural intelligence as well.

I think the issue isn't truly artificial intelligence itself, but the
long-standing dance between fundamental theory and practice across eras.

------
endominus
It's hard to know which science fiction story to quote here; The Machine Stops
is the obvious choice, but more people may be familiar with the empire that
Asimov's Foundation supplanted, which valued layers-removed "analysis" over
original research.

~~~
imperialdrive
The Machine Stops, thank you!

------
nineteen999
What's hidden about it? Every time you call a bank, government agency, etc.,
you have to step through a mountain of IVRs and frequently have to talk to
time-wasting "AI" just to get to a human, who still does the bulk of the
non-trivial work and who can actually help you.

It should be obvious that we have pushed, and are continuing to push, a lot of
half-baked AI into industry and government processes, and that it's
frustrating users no end.

~~~
tonyedgecombe
I'm not sure it's specific to AI; I've always had to negotiate past front-line
support to get to someone who knows what they are talking about. The world is
built for average, not for you.

~~~
nineteen999
I'm completely average, not above, nor below (at least in my own personal
assessment - others may differ).

The difference is that front-line personnel can sense if you're agitated and
they need to escalate your case quickly. I've heard that saying "operator"
repeatedly down the line may help in this particular case; a human operator
will normally react. AI/ML/expert systems etc. don't care.

Plenty of "average" people have complained to me about this use of their time,
and that it is not good value for money, especially given the extravagant fees
or taxes they may be paying these particular organizations.

~~~
lotsofpulp
The real benefit will come once everyone’s phone number is tagged with a
dollar value, so if you’re a low value person, you get sent to the back of the
line, perhaps even being dropped as a customer because the system can
calculate how much you cost to support versus how much revenue you bring, and
if it’s negative, then there’s a reason to not do business with you.

This has always happened before: people who knew people or were much higher
net worth obviously got better service. But when it can be automated and
applied to everyone in a granular fashion, it really brings home where you are
in the hierarchy and removes any veneer of being equal in society. This is
also already implemented at various types of businesses via their “rewards”
programs, where you're tiered into different support based on how much you
spend or can spend, and get routed to different support agents accordingly.

~~~
sokoloff
I personally don't think everyone should be treated equally by support.

I've been left waiting at the counter at an auto parts store while the store
personnel left me to address the needs of the local auto mechanic's shop (who
undoubtedly buys 1000x the parts I buy in a given year). The parts store
_should_ prioritize the mechanic over me, because they're a much more valuable
customer to the parts store. It doesn't do me any good to have the parts store
fold because they served everyone exactly in sequence.

The USPS undoubtedly has specific support lines and discounted rates for high
volume mailers while I as a retail customer pay retail prices and get crappy
retail support. Amazon and UPS _should_ get much better service from USPS than
I do.

Comcast Business customers probably (hopefully) get better service than
$99/mo “TriplePlay” residential customers.

People with startups here probably prioritize engaged, paying customers over
free users.

Airline Elite or Diamond or Private customers spending $25K or more every year
_should_ get better service than the buyer of a once-a-year $250
transcontinental ticket. When an airline experiences a disruption, they're
going to re-accommodate their First Class and Elite frequent fliers first.

That's just good business, every bit as much as when I walk into the local
diner and the server brings me a coffee prepared as I normally order it before
taking someone else's order.

~~~
lotsofpulp
I agree, but the problem for a buyer comes when the number of sellers is so
few (2 or 3 in many markets), that they effectively have no options. You can
easily be blacklisted nationwide (perhaps deservedly, perhaps not), but there
is no recourse, and no transparency.

It's also psychologically different when you know an organization is tracking
every single person and how much future potential revenue they can bring
versus how much they cost to support and price discriminating accordingly.
It's obviously the optimal thing to do as a business, but as a society, the
idea that we're equal and should be treated as such is also an important
feeling. I can see possible discord in society from having that socioeconomic
tiering be so blatant and in your face, but perhaps it's inevitable in a world
where the gap between the haves and have nots keeps getting bigger and bigger.

I don't go to theme parks much, but I did a couple of years ago as an adult
for the first time since I was a kid, and I have to say it felt weird to see
all of the different tiers of queues for the rides. You pay $x, you wait in
the long line; you pay $x+$20, you get to skip to halfway through the line;
you pay $x+$50, you get to skip to the front of the line. It really brings to
the forefront how un-valuable your time may be compared to someone who can
afford to spend more. I know the world has always worked like that, but I
wonder how it feels to the kid who can see it happening right in front of
them.

------
TazeTSchnitzel
I like this article, but am a bit disappointed they didn't mention another
reason not understanding the system is bad: possibly unknown biases baked into
the data set. For example, a machine-learning system designed to guess who is
a criminal, trained on existing police arrest data, would likely conclude that
being poor or black is an indicator of criminality somewhere within the model.
But we can only find that out by experimentation; we can't see how it works.
There's a huge risk in this type of bias laundering, when suddenly it's not
“the biased police think”, it's “the computer thinks”.

~~~
okusername
Is it "bias" if it's true? The issue is that we want to override statistics
and distort the analysis for ideological reasons, but that's not a fault of
the algorithm, it's a feature request.

~~~
jyounker
We know, based on statements from ex-agents within the DEA, that the DEA
didn't go after drug use in white suburban communities because it would have
been politically untenable.

The incarceration patterns therefore reflect this political bias.

~~~
sokoloff
Though I think your statement is likely overall true, in a sub-thread about
biased conclusions from data, it's important to observe that from the
statements of those ex-agents, all you can conclude is that _parts_ of the DEA
didn't go after drug use in white suburbia...

------
Verdex
While I think it is a meaningful question to ask, I had a really hard time
getting very deep into the article. At any one point in time, at least 30% of
the page is covered by an advertisement. Some of the time, nearly 100% of the
page is an advertisement. No doubt the page layout and the advertisements
themselves were decided upon by some form of automated thinking (in the case
of layout, CSS; in the case of advertisement, something closer to what we
think about when we say AI).

The author raises a concern about automated thinking. But the article is
displayed in a setting that relies on a plethora of automated thinking. Let me
guess: he's only using the good automated thinking, while the other automated
thinking that affects him is the bad kind?

Ultimately, humans use tools, and we push really hard to develop better and
better tools. Maybe the tools other people are using concern you (and maybe
that's a good thing to be concerned about... after all, shoe stores used to
x-ray feet), but progress continues and those that don't adapt are left
behind.

Maybe he's got a point, but I also don't see The New Yorker abandoning its
website and reverting to paper. How bad can the automated thinking really be?

Honestly, I'm not sure I have any major refutations of, or solutions to, the
article. It's just that I keep bumping into people making odd statements about
how they're really worried about the computers or the internet, and they're
making these statements on Facebook. Seeing someone concerned about AI on an
ad-supported website makes me wonder if they appreciate how much of their
existence already depends on AI.

------
funnybeam
I particularly liked the point about how much worse this gets when the output
from one (or more) ML system is used as the input to another.

Not something that had occurred to me before

~~~
OrangeMango
There were reports a few years ago that Google Translate had gotten very bad,
and the speculation was that the training data was culled from internet sites
that had themselves used Google Translate to do their translations.

~~~
funnybeam
"The blind leading the blind"

------
randcraw
The end result of AI systems that make decisions that cannot be explained
would be to instantiate these automated practices into de facto law.

At some point, if a decision process is widely adopted yet cannot explain
itself, law becomes arbitrary. Ends justify means - ipso facto. Power becomes
unaccountable.

That's the best reason I know to demand accountability (and explicability)
from any system or authority, AI or not.

------
adamnemecek
Not working in the field might put you in "not in field" debt. AI will let
people concentrate on other things.

~~~
arpa
I do think there is a risk that in the future we rely on AI so much that we no
longer need people asking the questions that matter, and go down the road so
vividly painted in "The Machine Stops" (1909), wherein a certain character
looks at the Aegean, the birthplace of the concept of the idea, and, in
disgust, laments: "no ideas here".

