
Ironies of Automation (1983) [pdf] - pul
https://www.ise.ncsu.edu/wp-content/uploads/2017/02/Bainbridge_1983_Automatica.pdf
======
wigiv
Speaking as someone who has been responsible for "turning the lights back on"
to fix problems with "fully-automated", "lights-out" factory lines, much of
this paper still rings true forty years on - if nothing else as a check
against our engineering hubris. It remains tremendously difficult to quash
entirely the long tail of things that can go wrong in a factory.

That said, many contentions raised here really have been resolved
substantially with increased computing efficiency and ubiquitous connectivity.
The touted expert human operator's ability to see and understand processes
from a high level, informed by years of observing (and hearing, and "feeling")
machine behavior has truly been eclipsed by an advanced machine's capacity to
collect increasingly granular snapshots of its complete operating state - the
temperatures, vibrations, positions, and other sensations of its various
organs and elements - every few milliseconds, hold on to that data
indefinitely, and correlate and interpret that data in ever-expanding radii of
causation.
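
To make that concrete, here's a minimal Python sketch of the kind of telemetry loop I mean. The channel names, the `read_sensor` stub, and the correlation threshold are all illustrative assumptions, not any particular vendor's API:

```python
import random
import statistics
from collections import deque

# Hypothetical sensor channels on one machine; names are illustrative.
CHANNELS = ["spindle_temp_C", "bearing_vibration_mm_s", "axis_position_mm"]
WINDOW = 500  # keep the last 500 snapshots (a few seconds at ms sampling)

history = {ch: deque(maxlen=WINDOW) for ch in CHANNELS}

def read_sensor(channel: str) -> float:
    """Stand-in for a real data-acquisition call."""
    return random.gauss(0.0, 1.0)

def snapshot() -> None:
    """Capture one full-state snapshot across all channels."""
    for ch in CHANNELS:
        history[ch].append(read_sensor(ch))

def pearson(xs, ys) -> float:
    """Plain Pearson correlation, stdlib only."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Collect snapshots, then flag channels that move together - a crude
# first pass at widening the "radius of causation" automatically.
for _ in range(WINDOW):
    snapshot()

for i, a in enumerate(CHANNELS):
    for b in CHANNELS[i + 1:]:
        r = pearson(history[a], history[b])
        if abs(r) > 0.8:
            print(f"possible coupling: {a} <-> {b} (r={r:.2f})")
```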

The best human operators (of any technology) not only respond to problems,
they anticipate and prevent or plan around them. Massive data, advanced
physics-based simulations, and "digital twinning" capabilities of
manufacturing equipment afford pre-emptive testing of virtually infinite
scenarios.

Not only can you simulate throwing a wrench in the works - you can simulate
the effect of the wrench entering the works at every possible angle!
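
As a toy illustration of what that sweep looks like: `simulate_wrench_impact` below is a made-up stand-in for a real physics-based twin, but the exhaustive loop over entry angles is the actual point:

```python
import math

def simulate_wrench_impact(angle_deg: float) -> float:
    """Hypothetical stand-in for a full digital-twin run.
    Returns a damage score; a real twin would run actual dynamics."""
    # Toy model: glancing entries (near 0 or 180 degrees) do less damage.
    return abs(math.sin(math.radians(angle_deg)))

# Sweep the scenario space instead of waiting for the one real incident.
results = {angle: simulate_wrench_impact(angle) for angle in range(0, 181, 5)}
worst = max(results, key=results.get)
print(f"worst-case entry angle: {worst} deg (damage score {results[worst]:.2f})")
```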

It's not infallible, and will for a long time still require a human in the
loop at some level, but as the author rightly put it near the end of the
paper:

"It would be rash to claim it as an irony that the aim of aiding human limited
capacity has pushed computing to the limit of its capacity, as technology has
a way of catching up with such remarks."

~~~
solotronics
Do you think that, with AI/ML techniques fully integrated into manufacturing
automation, people will become fully obsolete for this kind of work within our
lifetimes? As a cloud/software/network guy I am curious to hear the opinion of
someone who is knowledgeable in this area.

~~~
wigiv
Within our lifetimes (say, the next 40-60 years), no, personally I don't think
we'll see completely autonomous end-to-end manufacturing widely implemented
(as much as I'd like to, considering it's a problem space I focus on!).

Some pockets of industry are much further ahead than others, but it will take
A LOT of work to reach parity across the board. If not for technical reasons
(which I'm more optimistic about), then for political and social reasons, as
these systems and understandings adapt. That's a whole 'nother discussion...

AI/ML will play a huge role. Not only in machine resilience once commissioned
and operating, but upstream and downstream as well. Better (AI/ML-assisted)
tools for designing products and the factories/equipment that make them will
preempt some of the challenges caused by the currently disjointed process.

I disagree with the comment that AI/ML techniques are only useful once you've
physically built a plant. There are of course emergent behaviors that only
crop up when the dynamics of the whole unique factory are at play, but any
given problem that arises is almost always traceable to one or a small number
of subcomponent failures, for which better, more granular datasets are
becoming available to train AI on.

And, as I mentioned in my comment about throwing virtual wrenches in virtual
works - simulations can begin to generate training data sets as well!
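
A rough sketch of that idea - the fault modes, sensor effects, and magnitudes below are invented for illustration, and a real pipeline would drive a physics-based twin rather than a toy noise model:

```python
import csv
import random

FAILURE_MODES = ["bearing_wear", "belt_slip", "none"]  # illustrative labels

def simulate_run(failure: str) -> dict:
    """Hypothetical stand-in for a twin run with an injected fault."""
    vib = random.gauss(0.5, 0.1)    # baseline vibration, mm/s
    temp = random.gauss(40.0, 2.0)  # baseline temperature, C
    if failure == "bearing_wear":
        vib += random.uniform(0.5, 1.5)  # worn bearings vibrate more
        temp += random.uniform(5.0, 15.0)
    elif failure == "belt_slip":
        vib += random.uniform(0.1, 0.4)
    return {"vibration": vib, "temperature": temp, "label": failure}

# Generate a labeled dataset - exactly the thing that's scarce in the
# field, because real failures are (thankfully) rare events.
with open("synthetic_failures.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["vibration", "temperature", "label"])
    writer.writeheader()
    for _ in range(10_000):
        writer.writerow(simulate_run(random.choice(FAILURE_MODES)))
```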

------
mjb
This is one of my favorite papers. The core point that I think should get a
lot more attention is this one:

> When manual take-over is needed there is likely to be something wrong with
> the process, so that unusual actions will be needed to control it, and one
> can argue that the operator needs to be more rather than less skilled, and
> less rather than more loaded, than average.

The "remaining" operational work once automation has done it's job is more
complex and weirder and requires more knowledge and skill than the basic task.
On top of that, computes can (and do!) get systems into states that humans
wouldn't, so those skills may exceed the skills needed by manual operators.
This is something that people who work on things like driver aids talk about a
lot, but I don't see as much attention to in the systems observability field.

> One can therefore only expect the operator to monitor the computer's
> decisions at some meta-level, to decide whether the computer's decisions are
> 'acceptable'. If the computer is being used to make the decisions because
> human judgement and intuitive reasoning are not adequate in this context,
> then which of the decisions is to be accepted? The human monitor has been
> given an impossible task.

This is particularly visible with things like data integrity in a complex
database schema. Above a trivial scale, things both change too fast for a
human to monitor, and change in ways that don't make sense to humans. When a
human sees an anomaly, who's to know if it's expected?

Still a great, thought-provoking read after 37 years.

~~~
mjb
For example, one of the things that computers do a lot in cloud "control
planes" is placement optimization. I've got this compute job, or data, or
packet: where should I put it? Computers can solve these kinds of problems
much faster than humans, and much better when the dimensionality goes up. If
things aren't looking as rosy as usual, is it because the workload has shifted
to being harder to pack, or because something's gone wrong with the
automation?
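
For a flavor of the decision being automated, here's a toy best-fit placement sketch, assuming a two-dimensional (cpu, memory) resource model; real control planes pack across many more dimensions and constraints:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Host:
    name: str
    free_cpu: float
    free_mem: float

def place(job_cpu: float, job_mem: float, hosts: list) -> Optional[Host]:
    """Best-fit placement: pick the host that leaves the least slack,
    so large future jobs still have room to land somewhere."""
    fits = [h for h in hosts if h.free_cpu >= job_cpu and h.free_mem >= job_mem]
    if not fits:
        return None  # the fleet can't pack this job right now
    best = min(fits, key=lambda h: (h.free_cpu - job_cpu) + (h.free_mem - job_mem))
    best.free_cpu -= job_cpu
    best.free_mem -= job_mem
    return best

fleet = [Host("h1", 8, 32), Host("h2", 16, 64), Host("h3", 4, 16)]
for cpu, mem in [(4, 16), (8, 32), (2, 8)]:
    host = place(cpu, mem, fleet)
    print(f"job ({cpu} cpu, {mem} GB) -> {host.name if host else 'unplaceable'}")
```

The operator's dilemma is that when packing efficiency drifts, nothing in the output says whether the jobs got harder to pack or `place` itself went wrong.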

These problems, with humans operating numerical optimization processes, are
only going to get harder and more relevant as ML techniques become more
ubiquitous. At least we need to be prepared for some very tricky post-hoc
analysis of computer decisions. Years of work to understand a millisecond of
decisions might not be unusual.

------
dang
See also

2019
[https://news.ycombinator.com/item?id=19132724](https://news.ycombinator.com/item?id=19132724)

2018
[https://news.ycombinator.com/item?id=18230258](https://news.ycombinator.com/item?id=18230258)

2016 (1 comment)
[https://news.ycombinator.com/item?id=12749342](https://news.ycombinator.com/item?id=12749342)

2014
[https://news.ycombinator.com/item?id=7726496](https://news.ycombinator.com/item?id=7726496)

~~~
jacques_chester
Since we're talking about automation, isn't it time this function was
outsourced to some code that doesn't mind working on a holiday?

~~~
dang
You might be surprised at how much curation goes into those seemingly
mechanical lists. I try to exclude the cases that aren't interesting, and
unfortunately it's pretty hard to write software that distinguishes the
interesting from the uninteresting. In fact, if I knew how to write such
software, I could automate my job!

~~~
pvg
There's a bit of a difference there in that an automated list that's half as
good as what you'd manually curate is nearly as useful as your version,
especially if people know it's automated. This isn't at all the case for most
other aspects of moderating that depend on human input. The previouslies don't
really have to be dang-quality.

------
VHRanger
A useful heuristic for thinking about automation is a hand screwdriver
(manual) vs. a drill screwdriver (automated).

The latter dramatically enhances productivity, but also introduces some
overhead and complexity.

This is also as good a time as any to link to the [automation
FAQ](https://www.reddit.com/r/Economics/wiki/faq_automation)

