
Bang Bang Control - dedalus
https://en.wikipedia.org/wiki/Bang%E2%80%93bang_control
======
guscost
Auto racing is an example of this being the optimal strategy, not just a
simple or cheap implementation. Disregarding cases where you have to feather
the throttle or brakes to maintain traction, the best racing line is always (I
think) where you mash the throttle on a straightaway until the absolute last
possible instant, then immediately mash the brakes to slow down just enough to
squeak through the turn. It takes some getting used to.

~~~
thatcat
Braking is also used to control weight distribution before cornering, shifting
weight to the front wheels to allow for a sharper turn. The optimum occurs
when the centrifugal force is counterbalanced by the friction of the tires,
which varies with acceleration and braking. Optimal lines depend on the shape
of the turn, but maximum braking always occurs right before the apex, shifting
to hard acceleration after. Really, though, it's maximum braking/acceleration
without slipping, not just 1/0. Traction control and ABS simplify the
situation at the racer's input level, allowing the optimal strategy to
approach 1/0 by functioning as a micro-instance of the bang-bang strategy. The
ABS/TCS is itself 1/0 due to optimization, not simplicity, since a variable
input would not let the wheels return to a rolling state any more quickly.

~~~
guscost
Great point about shifting weight to the front wheels. Real life racing theory
is definitely much more complicated than my amateur understanding, especially
when you're talking about high-performance cars and trained drivers.

------
PhasmaFelis
I see some typically Wikipedian problems with over-technical description in
the first paragraph: it uses the word "plant" to mean (I think) "device" or
perhaps "system," and includes the somewhat baffling sentence "These
controllers may be realized in terms of any element that provides
hysteresis" -- and "hysteresis" itself is quite poorly defined in its own
article. (The best I can figure is that hysteresis refers to a system whose
output lags behind changes in input, but I'm still not clear how a typical
reader is supposed to get that from "Hysteresis is the time-based dependence
of a system's output on present and past inputs. The dependence arises because
the history affects the value of an internal state.")

~~~
styrophone
Recognizing that the spirit of your comment is about the utility of Wikipedian
descriptions in general (I agree!), I thought I'd build out some of these
concepts in case someone similarly frustrated also happens to be curious about
them.

"Plant" is a legacy term referring to the device or system acting on the state
you're trying to control. Sticking with thermostats, the plant is the heating
element.

A straightforward way to think of hysteresis is that the output depends on the
history of the input. A simple example of hysteresis in a temperature
controller would be to turn off heat if the temperature _rises_ above
threshold A, but only turn on heat if the temperature _falls_ below threshold
B.

"These controllers may be realized in terms of any element that provides
hysteresis" -- This is indeed awkward phrasing, since it could be read as
though the property of hysteresis is fundamental to create a bang-bang
controller. In fact, using hysteresis is "just" a best practice. Strictly
speaking, you could make a hot water heater that turns heat on and off based
on a single hard temperature threshold as measured by a thermometer, and it
would work. However, any noise in the system will make the output jittery and
cause unnecessary cycles. Having a "heat off" threshold higher than the "heat
on" threshold makes for smoother transitions.
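The two-threshold scheme described above can be sketched in a few lines of Python. The thresholds and the simple thermal model here are illustrative assumptions, not real thermostat tuning:

```python
# Minimal sketch of a bang-bang temperature controller with hysteresis.
# t_off and t_on are the "heat off" and "heat on" thresholds from the
# comment above; the values are made up for illustration.

def bang_bang_step(temp, heater_on, t_off=22.0, t_on=18.0):
    """Return the new heater state for the current temperature.

    Heat turns OFF only above t_off and ON only below t_on.
    Between the two thresholds the previous state is kept --
    that memory of past input is what makes the controller hysteretic.
    """
    if temp >= t_off:
        return False
    if temp <= t_on:
        return True
    return heater_on  # inside the dead band: hold the current state

# Tiny simulation: the heater adds heat, the room leaks it.
temp, heater_on = 15.0, False
history = []
for _ in range(100):
    heater_on = bang_bang_step(temp, heater_on)
    temp += 0.5 if heater_on else -0.3
    history.append(heater_on)
```

With a single hard threshold instead, any sensor noise near the setpoint would make `heater_on` chatter; the dead band between `t_on` and `t_off` is what absorbs that noise.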

~~~
PhasmaFelis
Great info, thanks! You should be a Wikipedia editor. :)

------
personjerry
I find that this is interesting to compare with Pulse-Width Modulation[0], in
which a system must alternate between OFF and ON in some proportion to
"simulate" a non-binary signal (i.e. if the signal is ON 60% of the time and
OFF 40%, then on average it looks like 0.6 rather than the discrete 1.0 and
0.0).

[0] [https://en.wikipedia.org/wiki/Pulse-width_modulation](https://en.wikipedia.org/wiki/Pulse-width_modulation)
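The 60/40 example above can be sketched as a software pulse train. The period and cycle counts are arbitrary choices for illustration:

```python
# Sketch: approximating a 0.6 analog level with a binary output
# by switching ON for 60% of each period, as in the comment above.

def pwm_samples(duty, period=10, cycles=10):
    """Generate a pulse train: ON for duty*period samples per period."""
    on_samples = round(duty * period)
    return ([1] * on_samples + [0] * (period - on_samples)) * cycles

train = pwm_samples(0.6)
average = sum(train) / len(train)  # the time-average recovers ~0.6
```

The output itself never takes an intermediate value; only the time-average (or a low-pass filter downstream) sees the "simulated" 0.6.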

~~~
snowwindwaves
Two examples where I use bang-bang control for inflate/deflate valves on
inflatable weirs[0] and open/close solenoids on hydraulic pressure units
controlling hydraulic rams attached to pelton turbine deflectors[1].

But I calculate a pulse width - how long the valve should be opened for in
order to correct the error.

For a turbine deflector loop it might execute every 1s with a minimum pulse
time of 50ms and a maximum pulse time of 1s.

For the weir control it might execute every 60s with a minimum pulse time of
300ms and a maximum pulse time of 20s, because the weir continues to move
after the output is de-energized and the response of the water level over the
weir depends on the area of the body of water behind the weir.

I set a gain that determines how long the output is energized (the pulse
width) for a given magnitude error.

In my case the actuators are still discrete so the signal stays as a pulse
train, it is not filtered up to be a continuous time signal.

[0]:
[https://www.google.ca/#tbm=isch&q=obermeyer+weir](https://www.google.ca/#tbm=isch&q=obermeyer+weir)
[1]:
[https://www.google.ca/#q=pelton+turbine+deflector&tbm=isch](https://www.google.ca/#q=pelton+turbine+deflector&tbm=isch)
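The proportional pulse-width scheme described above can be sketched as follows. The gain and limits mirror the deflector-loop numbers in the comment (50 ms minimum, 1 s maximum), but they are illustrative, not the author's actual tuning; skipping pulses below the minimum is also one of several plausible design choices:

```python
# Sketch: binary actuator, but the ON duration per control cycle is
# proportional to the error, clamped between a minimum useful pulse
# and the cycle length.

def pulse_width(error, gain, t_min=0.05, t_max=1.0):
    """Return how long (in seconds) to energize the valve this cycle."""
    width = gain * abs(error)
    if width < t_min:
        return 0.0          # error too small for a useful pulse
    return min(width, t_max)

# e.g. with an assumed gain of 0.1 s per unit of error:
pulse_width(3.0, 0.1)   # ~0.3 s pulse
pulse_width(0.2, 0.1)   # below the 50 ms minimum -> no pulse
pulse_width(20.0, 0.1)  # clamped to the 1 s maximum
```

The actuator still only sees open/closed, so the output remains a pulse train; the gain just sets how aggressively the pulse duration tracks the error.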

------
gaze
Control theory is a super fascinating field.

~~~
tantalor
Care to elaborate? What do you find fascinating about it.

~~~
Animats
Much of modern control theory is equivalent to machine learning. "Adaptive
feedforward control" is a form of machine learning. So is "automatic system
identification". Back in the 1990s, before machine learning had really got
going, I thought that adaptive feedforward control was a path to AI. In a
sense it was, but it took a lot of mathematical development in other fields to
get it there.

Feedforward control is unusual. In feedback control, you look at the error and
try to zero it out. Usually, there's some lag in the system, which leads to
undershoot, overshoot, oscillation, and all the usual problems of feedback
systems. Maxwell (yes, _that_ Maxwell) figured all this out in 1868, and his
little paper "On Governors" defined linear control theory for a century.[1]

Feedforward control, though, is about measuring disturbance inputs, predicting
what's going to happen, and adjusting controllable inputs so that the error
remains near zero. An example is a heating system with an input for outside
temperature as well as inside temperature. When the outside gets cold, the
heating plant will be cranked up in advance of the inside getting cold. This
is essential if the system being controlled has a lot of lag, like a big
building's heating system.

Adaptive feedforward control is automatically identifying how the inputs
affect the outputs by watching everything for a while. It's thus a form of
machine learning. For the heating system example, a good controller might
learn that a 10 degree drop outside means a 5 degree drop inside 1 hour later.
It might also learn that the heating hot water temperature rises at 30 degrees
an hour when the boiler is at full power. Such info can be used to decide to
crank up the boiler half an hour after the outside temperature drops, so
heating water will be available just when it's needed. This is a basic form of
machine learning.
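The heating example above can be sketched as a feedback term plus a feedforward term. The coefficients (a 10-degree outside drop predicting a 5-degree inside drop, unit gains) come straight from the comment's example; they stand in for values a real adaptive controller would identify from data:

```python
# Sketch of feedforward control as described above: act on a measured
# disturbance (outside temperature drop) before the inside temperature
# has had a chance to fall. All coefficients are illustrative.

def feedforward_boost(outside_drop, lag_gain=0.5):
    """Predicted future inside drop from an observed outside drop.

    lag_gain = 0.5 mirrors the example: a 10 degree drop outside
    predicts a 5 degree drop inside an hour later.
    """
    return lag_gain * outside_drop

def boiler_command(inside_error, predicted_drop, kp=1.0, kf=1.0):
    """Combine ordinary feedback with the feedforward prediction."""
    return kp * inside_error + kf * predicted_drop

# Outside drops 10 degrees: the boiler is boosted even though the
# inside error is still zero -- no waiting for the lagging feedback.
cmd = boiler_command(inside_error=0.0, predicted_drop=feedforward_boost(10.0))
```

The adaptive part would consist of fitting `lag_gain` (and the boiler's heating rate) from observed history, which is where this shades into machine learning.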

One big difference from the machine learning community is that controls people
demand error bounds on their control algorithms. So the machine learning
techniques used tend to be the ones amenable to error analysis. Support
vector machines, yes; deep neural nets, not so much.

[1]
[https://commons.wikimedia.org/wiki/File%3AOn_Governors.pdf](https://commons.wikimedia.org/wiki/File%3AOn_Governors.pdf)

------
nocarrier
It's also used on some rockets and guided bombs, which is where I first heard
about it. It leads to rougher trajectories but it's a more reliable control
surface since you only need to handle zero or full deflection on a given fin.

