Introduction to Model Predictive Control 68 points by shorts_theory 6 days ago | 13 comments

 Can anyone more familiar with control theory recommend some simple projects / applications / simulations that might help learn more about this field? I've been interested in control theory for a while and I have a rough understanding of the math involved, but I feel like I need to tinker with something to really understand it!
 First, control the speed of a DC motor. Here you can learn about transfer functions and system dynamics, PID controllers, pole placement, phase-lead/phase-lag compensation, and so on. Then try to control the angular position of a DC motor using a cascade loop. After that, you can try to control an inverted pendulum on a cart using state feedback; here you can use pole placement or other methods to find your controller, such as optimal control or even MPC. I have a toy program in C to simulate and control an inverted pendulum. It is a literate program, and you can use it as a guide to implement the same thing in other languages: https://github.com/Accacio/pendulum. If you are interested and have doubts, use https://math.stackexchange.com and https://www.reddit.com/r/controlengineering/; we are eager to help.
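As a minimal illustration of that first step (my own sketch, not from the linked repo), here is a toy PID speed loop around a first-order DC motor model. The motor constants and gains are made up, chosen only so the loop settles:

```python
# Toy PID speed control of a DC motor modeled as a first-order system
# J*domega/dt = K*u - b*omega.  All constants below are illustrative,
# not from any real motor.
def simulate_pid(setpoint=100.0, dt=0.001, steps=5000):
    J, b, K = 0.01, 0.1, 0.05        # inertia, friction, torque constant (made up)
    kp, ki, kd = 2.0, 5.0, 0.01      # PID gains, hand-tuned for this toy model
    omega, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - omega
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv   # control voltage
        # Euler step of the motor dynamics
        omega += dt * (K * u - b * omega) / J
    return omega
```

With proportional gain alone the speed would settle at a steady-state error; the integral term is what drives it to the setpoint, which is a good first thing to observe by zeroing `ki` and re-running.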
 I really like the Control System Bootcamp videos by Prof. Steve Brunton on YouTube: https://www.youtube.com/watch?v=Pi7l8mMjYVE&list=PLMrJAkhIeN... Good books to accompany the videos are:
 - Linear Systems Theory by Joao Hespanha (has some MATLAB code)
 - Linear State-Space Control Systems by Williams and Lawrence (has some MATLAB code)
 - Control System Design by Bernard Friedland ($20 Dover paperback available)
 - Data-Driven Science & Engineering by Steve Brunton (the video guy above) and Kutz
 Some other interesting books:
 - Control System Engineering: Design and Implementation Using ARM Cortex-M Microcontrollers (some MATLAB and C code)
 - Linear Feedback Control: Analysis and Design with MATLAB by Xue, Chen and Atherton (MATLAB, obviously)
 ...a lot of the basic MATLAB control system functions are also available in Octave, Scilab, Python, and Julia packages.
 There's someone on reddit who is teaching a class on control theory looking for suitable references which may be of interest to you
 The best demonstrations of control theory I've seen are water filling a tank (or water filling a tank that has a hole that fills another tank), balancing a ball on a rotating stick or tilting board, and car cruise control (especially with hills added).

 In my opinion, the biggest obstacle is being able to sense your target. For the pendulum, you'll need a magnetic angle sensor, an encoder, or a good/fast OpenCV angle identifier. For the water tank, you have different feedback options: resistive, floating ball (which can be tied to a regular sensor or identified by OpenCV), weight (although most household scales don't have a clean method of exporting to your controller), and more. I would only do a ball balancing demo with OpenCV.

 The difficulty with OpenCV is having to run it on a computer and communicate with your motor controller, OR having to run it on a fairly powerful evaluation board (e.g. Raspberry Pi). For this reason, I wouldn't recommend it unless you're comfortable connecting to your controller via USB/serial port.

 Changing your goals will really highlight the differences between algorithms: getting to a stationary point, getting to a stationary point as accurately as possible, getting to a stationary point as quickly as possible, following a steadily changing predefined path as accurately as possible, traversing the same path as quickly as possible, following the path without overshooting any boundaries. Just replace "path" with the appropriate physical system, e.g. filling the double tank as quickly as possible without spilling from either tank, or speeding up a motor with inertia as quickly as possible without exceeding some angular velocity.

 When you want to move to really cool but reasonably difficult demos, look for videos and papers (Google Scholar) on "ball and plate", such as https://www.youtube.com/watch?v=wqmP-y-a2qY
 Simulating an inverted pendulum on a cart is a good starting point; it's introduced in basic control theory courses.
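A bare-bones version of that simulation (my own sketch, using the standard frictionless cart-pole equations from Barto and Sutton's classic formulation; the masses and lengths are arbitrary):

```python
import math

# Nonlinear cart-pole simulation with Euler integration.  With zero
# force the upright equilibrium is unstable, so a small initial tilt
# grows over time -- the control problem is to prevent exactly that.
def simulate_cartpole(force=0.0, theta0=0.05, dt=0.001, steps=1000):
    g, M, m, l = 9.8, 1.0, 0.1, 0.5   # gravity, cart mass, pole mass, half pole length
    x, x_dot, theta, theta_dot = 0.0, 0.0, theta0, 0.0
    for _ in range(steps):
        sin_t, cos_t = math.sin(theta), math.cos(theta)
        tmp = (force + m * l * theta_dot**2 * sin_t) / (M + m)
        theta_acc = (g * sin_t - cos_t * tmp) / (
            l * (4.0 / 3.0 - m * cos_t**2 / (M + m)))
        x_acc = tmp - m * l * theta_acc * cos_t / (M + m)
        x += dt * x_dot; x_dot += dt * x_acc
        theta += dt * theta_dot; theta_dot += dt * theta_acc
    return theta   # pole angle after steps*dt seconds
```

Running this open-loop (force = 0) and watching the angle diverge is a useful sanity check before you wire in any feedback controller.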
 Cool to see the author did F1Tenth (on his projects page). Great project with lots of material on the web for anyone motivated to build a self driving RC race car.
 Early in the article it introduces A, the state transition matrix, and B, the input matrix. What is the intuition behind those matrices, and where do you get them from? It seems to assume a background in optimal control. The article would be better if it explained those matrices A and B in more detail with a few simple examples. (Maybe it did, but I stopped reading after that, since it builds on top of those undefined concepts.)
 It comes from representing your system dynamics as a set of linear equations in your state variables (which might be position, velocity, etc). The A matrix is the linear map from the previous state at time t to the next state at time t+1. The B matrix is similar, except it determines the impact of your control input u on the next state of the system.
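A concrete example (mine, not from the article): a point mass sliding on a line, with state [position, velocity] and input u an acceleration. Discretizing with timestep dt gives A and B you can write down by hand:

```python
# x_{t+1} = A x_t + B u_t for a point mass with acceleration input.
dt = 0.1
A = [[1.0, dt],
     [0.0, 1.0]]          # position gains dt*velocity; velocity carries over
B = [[0.5 * dt**2],
     [dt]]                # effect of a constant acceleration over one step

def step(x, u):
    # x_{t+1} = A x_t + B u_t, written out by hand to stay dependency-free
    pos, vel = x
    return [A[0][0] * pos + A[0][1] * vel + B[0][0] * u,
            A[1][0] * pos + A[1][1] * vel + B[1][0] * u]

x = [0.0, 0.0]
for _ in range(10):       # accelerate at u = 1 for 1 second
    x = step(x, 1.0)
print(x)                  # position ~0.5 (= 1/2*a*t^2), velocity ~1.0
```

Each column of A answers "if this state component were 1 and everything else 0, what would the next state be?", and B answers the same question for the input.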
 Thanks. For an end effector on a robot arm, the articulated motion, with friction, backlash, gearing, Coriolis terms, contact, etc., can be very complicated. Would/could you use a physics engine to compute the state at t and t+1, and then compute the A matrix from that? (State as in world-space position and orientation of the end effector.)
 auxym 5 days ago

 Generally this process would be called linearization.

 You can create a complex, nonlinear, numerical model of the full system in as much detail as you want (e.g. implemented in a block sim like Simulink, or MBD software like ADAMS, or just A and B matrices derived manually but incorporating nonlinearities like sin, etc.).

 From that you can evaluate a whole bunch of finite differences around an arbitrary state (the linearization point), which are the components of the A and B matrices. This gives an approximate, linear model which is "close enough" in behaviour to your real model for states that are "close" to your linearization point.

 In practice, if you're moving far away from your linearization point (e.g. a 6 DOF robot arm with large displacements of all joints), you'll need to re-compute the linearization each timestep, around the current state, and re-compute the LQR or MPC solution from this "new" approximate model.

 Here's some linearization code I wrote for a library that's used to teach controls and robotics in undergrad. It takes as an input a "model" that's an arbitrary function that generates the next state from a given state, and computes the Jacobian matrices by finite differences: https://github.com/SherbyRobotics/pyro/blob/43dcb112427978ff...
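A sketch of the finite-difference idea (the function names here are mine, not the pyro library's API): perturb each component of the state and input in turn, and each column of A and B is a difference quotient of the next-state function:

```python
def linearize(f, x0, u0, eps=1e-6):
    """Jacobians A = df/dx, B = df/du of a next-state function f(x, u),
    by forward finite differences around the point (x0, u0)."""
    n, m = len(x0), len(u0)
    f0 = f(x0, u0)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):                      # column j of A: perturb state j
        xp = list(x0); xp[j] += eps
        fp = f(xp, u0)
        for i in range(n):
            A[i][j] = (fp[i] - f0[i]) / eps
    for j in range(m):                      # column j of B: perturb input j
        up = list(u0); up[j] += eps
        fp = f(x0, up)
        for i in range(n):
            B[i][j] = (fp[i] - f0[i]) / eps
    return A, B
```

Applied to a model that happens to be linear it recovers A and B exactly (up to rounding); applied to a nonlinear cart-pole or robot-arm step function it gives the local linear model that LQR or MPC needs at each timestep.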
 Thanks, the article could benefit from a bit more background like this, with a few simple examples: going from the (double) pendulum or cartpole equations of motion to the A matrix, and the same for B.
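For the cart-pole case, that worked example looks like the following (a sketch of the standard textbook derivation, not from the article). Take a frictionless cart of mass M with a point-mass pole m at length l, state x = (p, ṗ, θ, θ̇), and force u on the cart; linearize about upright (sin θ ≈ θ, cos θ ≈ 1, θ̇² ≈ 0):

```latex
% Nonlinear equations of motion (frictionless cart-pole):
%   (M+m)\ddot{p} + m l \ddot\theta\cos\theta - m l \dot\theta^2\sin\theta = u
%   l\ddot\theta + \ddot{p}\cos\theta - g\sin\theta = 0
% Linearized about \theta = 0:
\ddot{p} = \frac{u - m g\,\theta}{M},
\qquad
\ddot\theta = \frac{(M+m)\,g\,\theta - u}{M l}
% which in state-space form \dot{x} = A_c x + B_c u reads
A_c = \begin{pmatrix}
  0 & 1 & 0 & 0 \\
  0 & 0 & -\tfrac{mg}{M} & 0 \\
  0 & 0 & 0 & 1 \\
  0 & 0 & \tfrac{(M+m)g}{Ml} & 0
\end{pmatrix},
\qquad
B_c = \begin{pmatrix} 0 \\ \tfrac{1}{M} \\ 0 \\ -\tfrac{1}{Ml} \end{pmatrix}
% A simple discrete-time pair then follows from an Euler step:
A = I + A_c\,\Delta t, \qquad B = B_c\,\Delta t
```

The positive (M+m)g/(Ml) entry is what makes the upright equilibrium unstable, and it's exactly the term a state-feedback or MPC controller has to fight.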
