What is a control system anyway? (feltrac.co)
361 points by feltarock 6 days ago | 76 comments





PID Without a PhD [1] is an excellent paper that gives a good intuition for PID controllers. In addition, it has concrete advice on how to go about implementing a practical control scheme, and how to tune the PID constants.

It's really my go-to reference for any practical control theory application.

[1]: https://www.wescottdesign.com/articles/pid/pidWithoutAPhd.pd...
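For anyone who hasn't read it yet, the core loop it builds up to is tiny. Here's a minimal textbook-style sketch (the names kp, ki, kd, dt are generic conventions, not taken from the paper):

```python
# A minimal textbook PID update loop.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # I: accumulate error
        derivative = (error - self.prev_error) / self.dt   # D: rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

The paper's practical advice (integrator clamping, derivative filtering, sample-time handling) is what turns this skeleton into something you'd actually ship.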


Love this! I had read another paper from Wescott about sampling[1] years ago and loved it, but I didn't realize there's a handful of other great articles[2] that they've published.

1: https://www.wescottdesign.com/articles/Sampling/sampling.pdf

2: https://www.wescottdesign.com/articles.html


It kind of bugs me that I understand PID quite well and I've coded quite a few of them. But trying to dive deeper into control systems, I always hit a wall caused by my weak mastery of mathematics.

I've heard someone say that PID is the deepest you can get in control systems without advanced mathematics, and I think they might be right.


Depends on what you define to be advanced and deep.

If you know linear algebra, that opens up a variety of state space techniques. If you know a little optimization, that opens up optimal control, including the incredibly common LQR strategy. This stuff is typically accessible for an advanced undergrad with a few of the right prereqs.

But control theory is a deep field. You can find papers that use differential geometry, topology, information theory, etc. for designing control systems.


Control systems overlap with other AI topics in that there's usually an option of a "brute force" method that reframes the problem as a constraint optimization, and then applies a mix of lookahead, filtering and backtracking - classical CS ideas - to rank the possible answers to control at any moment in time, pruning incorrect answers. All you need to start is a ranking function that expresses the amount of error, and a way to generate the tree of "all answers over time" - since with a problem like acceleration and overshoot, an answer that looks wrong when looking one step ahead will become correct one hundred steps later.
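As a toy illustration of that framing (everything here — plant, cost, horizon — is invented for this comment; real planners prune rather than enumerate): exhaustively score short sequences of discrete actions for a 1D double integrator, keep only the first action of the best-ranked sequence, and replan every step.

```python
# Toy receding-horizon brute force for a 1D double integrator (position, velocity).
import itertools

ACTIONS = (-1.0, 0.0, 1.0)   # accelerate left, coast, accelerate right
DT = 0.1

def simulate(pos, vel, seq):
    for a in seq:
        vel += a * DT
        pos += vel * DT
    return pos, vel

def best_action(pos, vel, target, horizon=6):
    def cost(seq):
        p, v = simulate(pos, vel, seq)
        # Ranking function: terminal error, plus speed (overshoot risk).
        return (p - target) ** 2 + 0.1 * v ** 2
    return min(itertools.product(ACTIONS, repeat=horizon), key=cost)[0]

pos, vel = 0.0, 0.0
for _ in range(200):                 # replan every step, apply one action
    pos, vel = simulate(pos, vel, [best_action(pos, vel, target=1.0)])
```

Braking actions score badly one step ahead but win over the full horizon, which is exactly the lookahead point above.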

And this in turn overlaps with machine learning techniques since a problem generalized in this way can subsequently be adapted into a trained system that approximates the same result as the brute force answer.

Ideally, you can use an analytic solution, of course, and that's what you really want to use to get a result that is both fast and precise, but to do that, the "advanced mathematics" are necessary to define the problem appropriately.


I took a linear control systems class in college, probably at the peak of my mathematical abilities. I hadn’t yet forgotten the fundamentals of Laplace, Fourier, and solving differential equations. Not to say I was some math wizard, but at least the prerequisites were fresh on my mind.

It was fascinating, but I just couldn’t grok it. Whatever weaknesses I had in the relevant maths, that class just seemed to amplify them x10. I still wish I could have absorbed more of the knowledge.


I think a practical lab component is necessary to really grok controls. The theory can be extremely abstract - to internalize it requires some hands-on experience too (in my opinion).

> I've heard someone say that PID is the deepest you can get in control systems without advanced mathematics and I think they might be right.

It's true that there is a lot of surprisingly deep math in control theory, but even learning just a little bit of the theory can be greatly illuminating.

The trouble with PID is that it is typically treated as a black box with three magic knobs that you twiddle until your system works. This can be very frustrating.

By contrast, by applying a little bit of theory, you will be able to calculate the PID gains and have your controller work the first time.
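For example (a standard textbook calculation, sketched here with generic symbols): for a first-order plant $G(s) = K/(\tau s + 1)$ with a PI controller $C(s) = K_p + K_i/s$, the closed-loop characteristic polynomial can be matched to a desired $s^2 + 2\zeta\omega_n s + \omega_n^2$:

```latex
% Closed-loop characteristic equation: 1 + C(s)G(s) = 0
\tau s^2 + (1 + K K_p)\, s + K K_i = 0
\quad\Longleftrightarrow\quad
s^2 + \frac{1 + K K_p}{\tau}\, s + \frac{K K_i}{\tau} = 0

% Matching coefficients against s^2 + 2\zeta\omega_n s + \omega_n^2
% gives the gains directly:
K_p = \frac{2\zeta\omega_n \tau - 1}{K},
\qquad
K_i = \frac{\omega_n^2 \tau}{K}
```

Pick ζ and ωn from your response-time and overshoot requirements and the gains fall out; no knob-twiddling required.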


Are Laplace transforms advanced mathematics? I say no, because they make the differential equations of linear control easy to solve. Once you learn them you can create optimal solutions for PID-type problems without having to spend days tuning the thing.

In lieu of my final in control systems class, the professor allowed anyone to make a PID controller and present it to the class instead. All the EEs went this route (seems unfair to the others in retrospect LOL) and it was a great experience. Many did it digitally, but I opted for purely analog with op-amps for P, I and D with a summing op amp circuit at the end. This was connected to a pair of power transistors to drive a DC motor (from my Lego Mindstorms) and some gears to move a stick on a board. The input was a potentiometer with a nice metal knob. Each of the PID stages also had a pot to control its own amplification before going into the summing circuit so you could adjust and see the behavior of each element. This was one of the most rewarding projects I think I have ever done; so cool to see it in action.

Reminds me of this project from MIT students where they built an analog Segway:

https://techtv.mit.edu/videos/687f4627e5c14a52b24c694f7eeecf...


That's amazing; there's something very legit about the analog approach, and I can imagine it was pretty thrilling to see it actually work.

PID loops are simple and effective control solutions and are relatively easy to incorporate into embedded controllers as well. Generally I have left out the integration step because it can easily cause instabilities, even with a wind-up preventer.

When things get more complicated, like with flight controllers or advanced motor drives, PID is not recommended. Another problem I've seen is people glossing over how to tune a PID controller properly.


>Generally I have left out the integration step because it can easily cause instabilities, even with a wind-up preventer.

Funny, in automotive we always left out the derivative component, not the integral, which IMHO is the most useful component of the controller.


Yeah I've usually heard that the derivative component is what gets left out, never heard of leaving out the integral

Yea, I built a ball-beam balancing device that torqued the beam using electromagnets (instead of the standard 2nd-order system with a motor in the center and fast response). The electromagnets cannot 'instantly' change the angle of the beam, and the ball's dynamics lead to overall instability as it rolls around and accelerates faster than the system can respond. Without a derivative term it would have been impossible to control the system. That being said, the project was deliberately contrived in a way where the effects of the P, I, and D terms were observable.

If you have a system without natural dampening and can't tolerate overshoot or oscillation you'd use a bit of D to slow things down.

It does depend on the system you're controlling, often there's a natural integral in there and so you effectively 'shift' everything down one (P acts more like I, D acts like P, and I is a double-integral). Like how PD is fine for the quadcopter control until wind is added to the simulation.

Definitely.

The main price of an integrator is 90 degrees of phase lag; this will harm the stability of many systems but as long as it's kept at a low bandwidth it's usually fine.

The main price of derivative gain is high frequency noise amplification, and so a well designed PID will have a limited bandwidth for the D term as well. For many systems, especially those that are already damped or don't need the extra phase margin, it's not needed or not worth the noise cost.
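One common way to band-limit the D term is to low-pass filter the raw finite difference; a sketch (the class and constants are invented for illustration, not a standard API):

```python
import math

# Band-limited D term: low-pass filter the raw derivative so sensor
# noise isn't amplified without bound.

class FilteredDerivative:
    def __init__(self, kd, cutoff_hz, dt):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = dt / (rc + dt)        # first-order low-pass coefficient
        self.kd, self.dt = kd, dt
        self.prev_input = 0.0
        self.state = 0.0

    def update(self, x):
        raw = (x - self.prev_input) / self.dt          # noisy raw derivative
        self.prev_input = x
        self.state += self.alpha * (raw - self.state)  # smooth it
        return self.kd * self.state
```

Above the cutoff, the filter rolls the gain off instead of letting it grow without bound, at the cost of some extra phase lag in the D path.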


A quick intuition for this: d/dx(e^(iwx)) = iw*e^(iwx) — that is, differentiation amplifies higher frequencies more than lower ones.

Right, an integral is a low-pass filter with infinite gain at DC and a derivative is a high-pass filter with gain of 0 (negative infinity dB) at DC; they both have 90 degrees of phase shift.

In high school we nearly always left out the I term because we didn't mathematically verify our controllers, couldn't risk breaking the systems we were controlling with windup, and cared more about smooth and predictable motion than accuracy.

Why did you guys drop the derivative part? I've always found that pretty useful.

> Another problem I've seen is people gloss over tuning a PID controller properly too.

This, a million times. My senior design project in college was building a quadcopter from scratch (including the control software)[1].

We basically built a test harness out of two poles with strings attached to the quadcopter so once we lifted it off, we could just test different PID parameters until it was stable. I think we eventually built an app to vary them in flight, which was definitely a terrible idea, and we crashed many, many quadcopters this way.

I think the answer is basically "build a simulated quadcopter with the same parameters as yours and test parameters in the simulation", but that probably would have been an equivalent senior project... IIRC, this is basically what you have to do for state-variable feedback, so it's much, much more effort than simply doing what we did (especially when the cost of failure is low).
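The workflow can be sketched with even a crude model. Here's a hypothetical 1D altitude-only "quadcopter" (nothing like real quadcopter dynamics — all numbers invented) with a grid search over PD gains, just to show the shape of simulate-then-tune:

```python
# Hypothetical 1D altitude model plus a gain grid search.

GRAVITY, MASS, DT = 9.81, 1.0, 0.01

def fly(kp, kd, seconds=5.0, target=1.0):
    alt, vel, score = 0.0, 0.0, 0.0
    for _ in range(int(seconds / DT)):
        thrust = MASS * GRAVITY + kp * (target - alt) - kd * vel
        vel += (thrust / MASS - GRAVITY) * DT
        alt += vel * DT
        score += abs(target - alt) * DT   # integrated absolute error
    return score

# Crash simulated quadcopters instead of real ones.
best = min(((kp, kd) for kp in (1.0, 5.0, 10.0, 20.0)
                     for kd in (1.0, 2.0, 5.0, 10.0)),
           key=lambda g: fly(*g))
```

The same scoring loop works for fancier models; the hard part (as the parent says) is getting a model whose parameters actually match your hardware.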

The control loop code is actually fairly straightforward [2]; the hard part was mapping what exactly the PID controller was controlling to the real-world application (turning motors on and off).

[1]: https://github.com/Rose-Hulman-ROBO4xx/1314-BeagleBone-Quadc... [2]: https://github.com/Rose-Hulman-ROBO4xx/1314-BeagleBone-Quadc...


I'm a PID newbie and am wondering if it's the right way to go for a hobby project. My control variable (heat input) and process variable (volume of vapor output) have a fairly laggy and non-linear relationship due to the changing mass and composition of the working fluid and phase change. Proportional just kind of runs out of steam (har har) near the set point, presumably because of the non-linearity, and that plus lag really messes with the integral tuning (I basically need two different values for when the pv is under or over the setpoint).

Should i keep tinkering or try something different (maybe multiple loops or something?) What's next after PID?

Also this is an ambient pressure system with multiple safety precautions to ensure it stays that way.


PID is popular because it has only three tunable parameters, and you can kind of just turn the knobs until it works. :-)

For a more systematic design, construct a simple mathematical model of your plant. If nothing else, this will allow you to run faster-than-realtime simulations to experiment with parameters. (Thermal time constants are typically annoyingly slow to fiddle with in real time.)

Simulink is an ideal tool for this, but you can just as well use Python or C++. One way to do it is to structure your model as a differential equation and then use a solver to integrate it.
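A sketch of that in plain Python, with a hand-rolled RK4 step standing in for an off-the-shelf solver, and all plant constants invented purely for illustration:

```python
# Thermal plant modeled as an ODE and integrated faster than real time.
# States: (temperature, controller integrator). Heater can only add heat.

T_AMB, TAU, KP, KI, T_SET = 20.0, 30.0, 4.0, 0.2, 60.0

def deriv(state):
    temp, integ = state
    error = T_SET - temp
    heat = max(0.0, KP * error + KI * integ)   # PI output, heater can't cool
    return ((heat - (temp - T_AMB)) / TAU,     # plant: first-order lag
            error)                             # controller integrator

def rk4_step(state, dt):
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, dt / 2))
    k3 = deriv(add(state, k2, dt / 2))
    k4 = deriv(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (T_AMB, 0.0)
for _ in range(6000):          # 10 simulated minutes in milliseconds of CPU
    state = rk4_step(state, 0.1)
```

Once this runs, sweeping KP/KI over thousands of simulated runs takes seconds instead of weekends.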

There are several typical lines of attack: write the plant as a transfer function and think about poles and zeros, or write it as coupled first-order linear differential equations and think about state space.

"Feedback systems: An intro for scientists and engineers" by Åström and Murray is a popular introductory text. "Feedback control of dynamic systems" by Franklin and Powell is a superb, slightly more advanced college textbook. (The current edition is wildly expensive but older editions are just as good and nearly free.) I also like "Control System Design: An Intro to State-Space Methods" by Friedland (an inexpensive paperback published by Dover).


This is awesome and has got a few ideas flowing already (and a few terms to google). Not only are the simulations faster, I can mess around with them whenever I feel like it. Setting up the 'real deal' is a weekend affair. The fidelity of any simulation I put together will be very questionable for a while, but at least it can help me understand how the parameters affect the response.

I really appreciate it! Thanks!


I had to do PID in firmware for some electro-mechanical system that was continuously self-calibrating and ended up only using P and I parts. No derivative.

I've had the chance to have had a teacher who exposed us to the design of robust RST controllers and optimal control (Pontryagin, Lyapunov, Bellman, Hamilton, Jacobi), Evans' work. Getting into an ML shop and looking at what my colleagues who do DL were doing made me think: hmmm, this backpropagation-thingy looks really familiar.

There was, and still is, a comparative scarcity of documentation on these, as most of the content online only tackles PID when it comes to "control systems".

Fascinating stuff.


So, if I wanted to look deeper into the maths for these RST controllers, where should I be looking? As I've been confessing in another comment in this thread, I am quite stumped on how to proceed to understand more complex controllers. This is all my fault, because of my weak grasp of advanced mathematics.

The best book I know of is pretty old by this point, but it's very readable! Modern Control Systems by Dorf & Bishop. Probably pretty expensive by now, too!

There's also the Control Systems Design Guide, but that's more of a cookbook-style. Dorf and Bishop is a theory book, but a good one.


Beautiful introduction.

I remember my PID lab in college - we had two plexiglass towers with water in them and a pump - our task was to calculate the PID parameters so that one of the columns reached a certain level. I think it was measured by a buoy, so we had a pretty noisy input to begin with, but that was part of the problem at hand.

It was a great introduction because the lab was 2h and the pump rather slow, so we couldn't just half-ass it.


We had the same! Our towers leaked water as well (not sure if a feature or a bug in the lab, there was a non-trivial amount of duct tape involved), so you really needed to take "external forces" into consideration.

These visualizations are very satisfying and provide some wonderful insight.

A few minor bugs:

- The gray area is always too small, so there's plenty of area of the canvas that never gets cleared.

- Perhaps this is intended, or maybe it's caused by the demo running at 144Hz, but the drone is always swept away by the wind for me. It doesn't even stand a chance to fight it.


Same issue at 120 Hz, the drone had no chance against the wind. Confused me up until reading your comment.

Works fine after setting refresh rate to 60 Hz. (Btw, amazing how bad scrolling looks at 60 Hz after getting used to 120 Hz! Never noticed it before nor did I think it could matter...)


OT anecdote

If the cruise control in my vehicle is using PID, it still needs some tweaks. It's super aggressive when it's below the set speed by 2 mph, and often overshoots.

My mom's is horrid: takes forever to get up to speed, then allows downhill drift to drive the vehicle nearly 5mph over the set speed.

I feel like this kind of stuff would be fun to tweak in a simulator to help me understand why it doesn't work better, or show that I'm not entirely crazy and it can indeed be improved.


In cars with manual transmissions, stepping on the clutch turns off the cruise control. In one '96 Honda Civic I was driving, I decided to see what happens if you slip it out of gear while cruise control is on without stepping on the clutch.

Answer: it revs the engine up. I hit the clutch before I got the chance to find out whether the cruise control or the rev limiter has ultimate authority. My guess is the rev limiter, but I didn't want to bet on it.

Still, it's surprising that a computer that clearly knows vehicle speed (well, wheel speed) and engine speed (presumably, for engine control reasons) doesn't know that the vehicle is out of gear (or suffering some kind of serious mechanical malady) and respond appropriately.


What's often hard about this situation is tuning to one problem case (fixing that downhill drift) often makes a different scenario worse. For example, fixing the downhill drift can make that overshoot worse.

I can imagine tuning something like this for a cruise control scenario would be tricky, because there's probably hundreds of different cases to test for.

I think a simulator here could be particularly fun to toy with, because you could have "unit tests" with 200 different scenarios that you try to optimize for. You could set your parameters, and it could run the 200 different scenarios, and show you where your parameters did well and where they did badly.


There's a lot of literature spread across a bunch of fields. The one I first saw was "Active Nonlinear Tests"[0] from the system dynamics / complex systems area. The idea is that you perform searches of parameter space for a simulation to find the places where it exhibits dramatic nonlinearities.

I've since seen something similar but not identical in the Statistical Process Control world: robust process (or product) design[1]. The idea is that any given system can be thought of as a function that represents X = F(a, b, c, ...) + E, where E is some measure of error or noise that can't be removed. The goal is to find settings for a, b, c etc that minimise the effect of E. So that even if uncontrolled noise swirls around, your process remains stable. You can imagine flipping that to look for maxima instead.
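A toy version of that search (the process function and search grid are invented purely to illustrate): pick the parameter settings whose output varies least while the noise E swirls around.

```python
# Robust design as a search: X = F(a, b) + E; find (a, b) minimizing the
# output variance induced by the uncontrolled noise E.
import random

def process(a, b, noise):
    # Noise enters multiplied by a, so small |a| makes the process robust.
    return 10.0 + a * noise + b

def robustness(a, b, trials=500):
    rng = random.Random(0)              # fixed seed: repeatable score
    outputs = [process(a, b, rng.gauss(0, 1)) for _ in range(trials)]
    mean = sum(outputs) / trials
    return sum((x - mean) ** 2 for x in outputs) / trials   # variance

best = min(((a, b) for a in (0.1, 1.0, 5.0) for b in (0.0, 1.0)),
           key=lambda g: robustness(*g))
```

Flip the `min` to a `max` and the same machinery does the active nonlinear test: hunting for the settings where the noise does the most damage.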

[0] https://www.santafe.edu/research/results/working-papers/acti...

[1] Web search results are underwhelming, I refer you instead to Douglas Montgomery's Introduction to Statistical Process Control.


My wife's Kia downshifts when you go more than a certain amount over the set-speed, which lets it apply "negative" throttle with engine braking.

Adaptive cruise control (ACC) is a common toy control problem. The dynamics can be modeled in a very straightforward way. A little bit of googling can probably tell you how to implement one.
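A minimal sketch of the toy version (real ACC is far more involved; all gains and limits here are invented): a following car regulates its gap to a lead car with a PD law on the gap error.

```python
# Toy ACC: PD control of the gap to a lead vehicle, with actuator limits.

DT, DESIRED_GAP = 0.1, 20.0
KP, KD = 0.5, 1.5

lead_pos, lead_vel = 50.0, 25.0   # lead car cruising at 25 m/s
pos, vel = 0.0, 30.0              # we start fast, well behind

for _ in range(1000):             # 100 s of simulated driving
    gap = lead_pos - pos
    accel = KP * (gap - DESIRED_GAP) + KD * (lead_vel - vel)
    accel = max(-5.0, min(2.0, accel))    # brake/throttle limits
    vel = max(0.0, vel + accel * DT)
    pos += vel * DT
    lead_pos += lead_vel * DT
```

The asymmetric accel limits are where the interesting tuning problems live: generous braking authority but weak "negative throttle" is exactly the downhill-drift situation described upthread.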

That accounts for catching up to other vehicles. It doesn't deal with the problems I mention.

Hrm... I wonder if they use a feed-forward control taking into account the angle of the dangle?

That's an example of non-linearity: probably the car only applies throttle to regulate speed, and not the brakes.

Home heating systems are similar in that they usually only add heat.

In both cases, the PID would be generating the proper output, but it doesn't have a means to drive the system consistent with that output; e.g., it can't turn on the brakes or the air conditioner.


Both vehicles do have some mechanism to slow the vehicle. It feels like a variation on downshifting; it doesn't feel exactly the same as when I'd downshift (whether a manual transmission, or an automatic transmission in 'let the human decide the gear' mode.) When they're "too far gone" (in mine, it's 2mph over, in mom's it's 5mph over), it starts actively reducing speed in this odd way.

If it's a newer vehicle, it's possible there's actually a CVT rather than a traditional automatic. Some manufacturers will put in 'fake gears' in a CVT because drivers feel like the car is going faster when it shifts (even though it actually doesn't; the CVT itself can keep the vehicle in an optimum power band all the time.)

Actually, CVT Management is an interesting control system question in and of itself, especially when fake gears are involved.


Very nice! I've been working as a controls engineer for 20+ years and this is one of the best visualizations I've come across.

As others have said, this page provided more understanding than more than one university course I've taken.

Oh, and anecdotally I'll state that while Ziegler-Nichols was great for my lab exercises, it was, ahem, quickly dropped when I started tuning heavy lifting equipment. Increasing proportional gain until stable oscillation was achieved with 160 metric tons in the test tower outside was, ahem, not boring. Not in the least.


I think I got a better insight into PID from this one page than from my entire control engineering course in university.

I was thinking the same thing.

In my class, at no point was it explained why we would want to do this. It was "This is the math for a P controller. Here's some homework. Now test. This is the math for a PID controller....". Things didn't click until years later.

Looking back, it was a shortcoming in so many classes I took. Actually discussing the problem we were trying to solve adds so much intuition and understanding to the mathematics.


Yeah, that's the best explanation I think I've ever seen of the I and D terms.

This was the controls lab final project when I was back in college [0] (video is not mine). It was a super cool project.

Our task was to tune two PID controllers (one for each motor) to move the "head/pickup tip" to the color balls on the screen and then drive back to "drop" the ball in its respective "bucket". We were given the forward and inverse kinematic models, as well as a black box piece of code to actuate the motors, so we could focus on just tuning the PIDs. Lots of fun!

[0] https://www.youtube.com/watch?v=PLk7nn2hWN0

Edit: found a video of a project from a different year, also really cool and about tuning PIDs:

[1] https://www.youtube.com/watch?v=ohwVLDmCSns


Huh, I once plugged a 2D physics engine into Tkinter and made a little floaty PID drone sim. I added balls that spawned from above it and fell on it, and it would stay on point. I'll see if I can find it later today if anyone's interested?

Very cool! On my device, the actual canvas is a bit bigger than the area you're clearing, resulting in this[1].

[1]: https://i.imgur.com/orrXwZT.png


That glitch is proper oldschool lo-fi trippy fun, though. Would consume substances and play with it for half an hour if I had anything right now.

PID-controlled sous vide with Clojure transducers: https://blog.adafruit.com/2014/10/06/boiling-sous-vide-eggs-...

Really like the visualization, this does a great job of explaining the concepts of PID in an easy to understand way!

The visualization used here could also work as a nice foundation for more complex controls algorithms for 2D space like path planning splines and pure pursuit.


I'm working on this exact problem except for an autonomous boat, so it's probably completely transferable; nice timing. For now though I think I'm going to use a much simpler autopilot system (that I've already coded): accelerate towards the target at maximum speed; if within a certain distance of the target, slow to approach speed (10%); if within "arrived at target" distance, cut speed to 0 (station keeping). This seems to work "well enough" in my JavaScript simulator, but I really wonder how it will do on actual water.
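Roughly the rule set described above, sketched in Python with invented thresholds rather than the poster's actual JavaScript:

```python
# Threshold autopilot: full throttle far away, 10% in the approach zone,
# stop on arrival. Distances (meters) and fractions are invented.

APPROACH_DIST, ARRIVE_DIST = 50.0, 5.0
FULL_THROTTLE, APPROACH_THROTTLE = 1.0, 0.1

def throttle(distance_to_target):
    if distance_to_target <= ARRIVE_DIST:
        return 0.0                    # arrived: station keeping
    if distance_to_target <= APPROACH_DIST:
        return APPROACH_THROTTLE      # approach zone: creep in
    return FULL_THROTTLE              # far away: full speed
```

On real water, current and momentum will carry the hull past the arrival threshold, which is where a PID (or at least a D term) starts earning its keep.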

Great explanation, we often end up writing the same rules in video games.

Any "drone" logic to do with some kind of builder unit or even a literal drone in something like COD needs rules like this.


Every time I see such great, simple, and intuitive explanations I always remember how these topics were presented in uni and want to tear up a little: a monotone-voiced prof draws some nonsensical formulas and crap on the blackboard, explaining zero things, giving zero context and examples. After a lecture you'd go to an exercise and start applying these formulas to artificial isolated problems, like a symbol-manipulating machine. I always imagined myself being a person in that famous "Chinese room" experiment: a mindless monkey, manipulating symbols in equations with zero underlying understanding, just as it was "taught". brrr.

I second that. University education in Electronic Engineering tends to favour rote learning over understanding.

People could be genuinely interested in this stuff if engineering tried to focus more on creativity than memorization.


Same here. Give some interesting, non-artificial problems to solve. Tune a PID on an actual (real or simulated) pole balancing machine. Tune a controller on a (simulated!) rocket chasing a target in the sky. Give problems that students can relate to and understand, and in context that allows for exploration and interactively verifying solutions.

The CS course I did had a lot of the maths content of the related electrical engineering courses - admittedly with a focus on discrete maths for the third and fourth years. This did give us a lot of maths, but with no real idea as to what people actually did with it.

Fortunately, for me at least this was fixed when as a postgraduate I joined a research group that was largely control engineers!


My memory of learning control theory at university was learning some super complicated stuff that was applicable to 99% of control problems. PID controllers, which are good for probably 95% of control problems, were effectively "an exercise for the reader". So trivial compared to the broader theory we were studying that nobody even thought to mention them. The expectation seemed to be that when you needed them, they'd just drop out of the maths and they'd need no further explanation.

We'd already done a form of proportional control with op-amps. It would have been enormously helpful to start control theory with discrete PI and PID to give us a practical grounding and something we could actually use before leaping off into the wider theory, but that's not how academia works...


This problem persists even in applied classes. I was taking a set of courses in Mechatronics for engineers who were already working in industry. We went deep into the theory of various types of controls, but when one of the students (an engineer with probably 20+ years of experience) asked about PID, it was dismissed as a "special case" of (I think) a phase-lead control.

Happily the guy teaching the lab portion of the class was himself an experienced controls engineer who actually knew how to use the math to accomplish something useful. We learned far more from him than from the lecture classes.


As someone who's only worked with PID controllers, do you have any pointers to the more complicated stuff you're thinking about? A wikipedia article maybe?

It's been a while, but I remember https://en.wikibooks.org/wiki/Control_Systems/MIMO_Systems#S... as a fair example. State-space representation is very powerful, and totally unnecessary for PID.

Try Model predictive control (MPC) and linear quadratic regulator (LQR)

Maybe I shouldn't be so hard on the teaching at my uni then. There was a pretty decent amount of attempts at getting an intuitive explanation of how the systems worked, and quite a few demos of different control schemes and how they worked and might fail (as well as tons of maths).

Those are great controllable animations/simulations! Thanks for sharing that!

I had always wanted to get a physical intuition for how PID changes led to different behaviors. (And for example, in practical life, how an autopilot doesn't wildly overshoot and oscillate around a VOR radial when trying to intersect it).


It's interesting, but that's not how thermostats work: sure, they're on-off, but they have hysteresis, so they don't keep switching.

Also, in the bang-bang demo you can use inertia to orbit the target by moving the point along the tangent at the fastest approach, which is awesome.


The thermostat he describes is not stateless; the key phrase is "When the temperature is too high *by some amount*" (emphasis mine). The thermostat will turn on outside a particular range.

This matches my understanding of a simple thermostat, is it wrong?
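The hysteresis behavior being described can be sketched like this (thresholds invented): the heater latches on below the low threshold and off above the high one, so it never chatters inside the band.

```python
# Bang-bang thermostat with a hysteresis band around the setpoint.

class Thermostat:
    def __init__(self, setpoint, band=1.0):
        self.low = setpoint - band / 2
        self.high = setpoint + band / 2
        self.heating = False

    def update(self, temp):
        if temp < self.low:
            self.heating = True
        elif temp > self.high:
            self.heating = False
        # Inside the band: keep the previous state (no rapid switching).
        return self.heating
```

The state (`self.heating`) is exactly what makes it not stateless: inside the band, the output depends on which threshold was crossed last.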


The JS simulation doesn't demonstrate this, though; the bang-bang controller keeps oscillating in very short steps around the goal.

The great idea in the article is that we control acceleration, not velocity. We do not deal with the target directly but with the error, and the response has to be proportional to the error.

Why can't it apply to a real drone? Is an interrupt needed?


I built a volume control system (hardware + software) using mainly a PID controller: https://wallfly.webflow.io/

A great example of this is a tweet from Max Howell (creator of Homebrew) regarding interviewing at Google.

Google: 90% of our engineers use the software you wrote (Homebrew), but you can’t invert a binary tree on a whiteboard so fuck off.

https://twitter.com/mxcl/status/608682016205344768?lang=en



