Then try to control the angular position of a DC motor using a cascade loop.
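A minimal sketch of what that cascade looks like: an outer P loop on position feeding a velocity setpoint to an inner PI loop on velocity, driving a crude first-order motor model. The inertia, friction, and gains here are made-up numbers for illustration; the electrical (current) dynamics are neglected, so tune everything for your own motor.

```python
# Cascade position control of a simplified DC motor (assumed parameters).
# Outer loop: P on position -> velocity setpoint. Inner loop: PI on velocity -> torque.

J, b = 0.01, 0.1           # rotor inertia, viscous friction (assumed values)
dt, t_end = 1e-3, 3.0

theta, omega = 0.0, 0.0    # angle [rad], angular velocity [rad/s]
theta_ref = 1.0            # position setpoint [rad]

Kp_pos = 5.0               # outer (position) loop: P only
Kp_vel, Ki_vel = 1.0, 5.0  # inner (velocity) loop: PI
vel_int = 0.0

for _ in range(int(t_end / dt)):
    omega_ref = Kp_pos * (theta_ref - theta)    # outer loop output = inner setpoint
    err_vel = omega_ref - omega
    vel_int += err_vel * dt
    tau = Kp_vel * err_vel + Ki_vel * vel_int   # inner loop output = torque command
    omega += (tau - b * omega) / J * dt         # plant: J*domega/dt = tau - b*omega
    theta += omega * dt

print(round(theta, 3))     # should have settled near theta_ref
```

The point of the cascade is that you tune the fast inner loop first, then treat it as a near-ideal velocity source when tuning the outer loop.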
Then you can try to control an inverted pendulum on a cart using state feedback. Here you can use pole placement, or other methods to find your controller such as optimal control, or even MPC. I have a toy program in C to simulate and control an inverted pendulum. It is a literate program and you can use it as a guide to implement in other languages: https://github.com/Accacio/pendulum.
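To make the pole-placement route concrete, here is a hedged sketch (not from the repo above) using the standard frictionless small-angle linearization of the cart-pendulum about the upright equilibrium; the masses and length are arbitrary assumed values.

```python
# Pole placement for the linearized cart-pendulum (upright equilibrium).
import numpy as np
from scipy.signal import place_poles

M, m, l, g = 1.0, 0.2, 0.5, 9.81   # cart mass, pole mass, pole length (assumed)

# state = [cart position, cart velocity, pole angle, pole angular velocity]
A = np.array([[0, 1, 0, 0],
              [0, 0, -m*g/M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m)*g/(M*l), 0]])
B = np.array([[0], [1/M], [0], [-1/(M*l)]])

# place the closed-loop poles somewhere stable (arbitrary choice)
poles = [-2.0, -2.5, -3.0, -3.5]
K = place_poles(A, B, poles).gain_matrix

# with u = -K x, the closed-loop matrix A - B K has the requested eigenvalues
print(np.sort(np.linalg.eigvals(A - B @ K).real))
```

This is MATLAB's `place` in scipy clothing; swapping the pole list for a Riccati solve gives you the optimal-control (LQR) variant.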
If you are interested and have doubts, ask on https://math.stackexchange.com and https://www.reddit.com/r/controlengineering/; we are eager to help.
Good books to accompany the videos are:
- Linear Systems Theory by Joao Hespanha (has some MATLAB code)
- Linear State-Space Control Systems by Williams and Lawrence (has some MATLAB code)
- Control System Design by Bernard Friedland (available as a ~$20 Dover paperback)
- Data-Driven Science & Engineering by Steve Brunton (video guy above) and Kutz
Some other interesting books:
- Control System Engineering: Design and Implementation Using ARM Cortex-M Microcontrollers (some MATLAB and C code)
- Linear Feedback Control: Analysis and Design with MATLAB by Xue, Chen and Atherton (MATLAB, obviously)
...a lot of basic MATLAB control-system functions are also available in Octave, Scilab, Python, and Julia packages
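For example, the usual MATLAB workflow (`tf`, `step`, and friends) maps almost one-to-one onto scipy.signal; here is a quick sketch with a first-order lag, which should settle at its DC gain of 1 with a 1 s time constant.

```python
# MATLAB's tf([1],[1 1]) + step(), translated to scipy.signal.
import numpy as np
from scipy import signal

sys = signal.TransferFunction([1], [1, 1])        # G(s) = 1 / (s + 1)
t, y = signal.step(sys, T=np.linspace(0, 8, 400))  # step response over 8 s
print(round(float(y[-1]), 3))                      # approaches the DC gain of 1
```

The python-control package gets you even closer to MATLAB syntax if you want `ss`, `lqr`, and Bode plots by name.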
In my opinion, the biggest obstacle is being able to sense your target. For the pendulum, you'll need a magnetic angle sensor or an encoder or a good/fast OpenCV angle identifier. For the water tank, you have different feedback options: resistive, floating ball (that can be tied to a regular sensor or identified by OpenCV), weight (although most household scales don't have a clean method of exporting to your controller), and more. I would only do a ball balancing demo with OpenCV.
The difficulty with OpenCV is having to run it on a computer and communicate with your motor controller OR having to run it on a fairly powerful evaluation board (e.g. Raspberry Pi). For this reason, I wouldn't recommend it unless you're comfortable connecting to your controller via USB/serial port.
Changing your goals will really highlight the differences between algorithms:
- getting to a stationary point
- getting to a stationary point as accurately as possible
- getting to a stationary point as quickly as possible
- following a steadily changing predefined path as accurately as possible
- traversing the same path as quickly as possible
- following the path without overshooting any boundaries
Just replace "path" with the appropriate physical system, e.g. filling the double tank as quickly as possible without spilling from either tank, or speeding up a motor with inertia as quickly as possible without exceeding some angular velocity.
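A toy illustration of the "same plant, different goal" idea: the same second-order servo tuned two ways, one underdamped gain set that gets there fast but overshoots, and one critically damped set that never does. The natural frequency is an arbitrary assumed value.

```python
# Same plant, two tunings: speed vs. no overshoot.
import numpy as np
from scipy import signal

wn = 5.0                                  # natural frequency (arbitrary)
t = np.linspace(0, 3, 600)

peaks = []
for zeta in (0.4, 1.0):                   # underdamped vs critically damped
    sys = signal.TransferFunction([wn**2], [1, 2*zeta*wn, wn**2])
    _, y = signal.step(sys, T=t)
    peaks.append(float(y.max()))
    print(f"zeta={zeta}: peak={peaks[-1]:.3f}")
```

Which tuning is "better" depends entirely on which of the goals above you picked, which is the whole point.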
When you want to move to really cool but reasonably difficult demos, look for videos and papers (Google Scholar) on "ball and plate", such as https://www.youtube.com/watch?v=wqmP-y-a2qY
The article would be better if it explained those matrices A and B in more detail with a few simple examples. (Maybe it did, but I stopped reading at that point, since everything after builds on those undefined concepts.)
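For anyone else who stalled there, here is the kind of simple example I mean: for a mass-spring-damper m·x'' + c·x' + k·x = F, with state [position, velocity], the A and B matrices just restate that one equation row by row, so x' = A·x + B·u. The parameter values are made up.

```python
# A and B for a mass-spring-damper: m*x'' + c*x' + k*x = F.
import numpy as np

m, c, k = 1.0, 0.5, 2.0            # made-up mass, damping, stiffness

A = np.array([[0.0, 1.0],           # d(position)/dt = velocity
              [-k/m, -c/m]])        # d(velocity)/dt = (-k*x - c*v + F)/m
B = np.array([[0.0],
              [1.0/m]])             # how the force F enters the velocity equation

x = np.array([[0.1], [0.0]])        # stretched 0.1 m, at rest
u = np.array([[0.0]])               # no external force

xdot = A @ x + B @ u                # state derivative: [velocity, acceleration]
print(xdot.ravel())                 # velocity 0, acceleration -k/m * 0.1 = -0.2
```

Each row of A is one first-order ODE; B is just the column telling you where the input shows up.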
You can create a complex, nonlinear, numerical model of the full system in as much detail as you want (e.g. implemented in a block-diagram simulator like Simulink, or multibody dynamics (MBD) software like ADAMS, or just A and B matrices derived manually but incorporating nonlinearities like sin, etc.).
From that you can evaluate a whole bunch of finite differences around an arbitrary state (linearization point), which are the components of the A and B matrices. This is an approximate, linear model which is "close enough" in behaviour to your real model for states that are "close" to your linearization point.
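A minimal sketch of that finite-difference step, assuming nothing but a nonlinear f(x, u) you can call: perturb each state and input component, and collect the resulting slopes into A and B. A simple torque-driven pendulum stands in for the "full model" here.

```python
# Finite-difference linearization of a nonlinear model f(x, u) around (x0, u0).
import numpy as np

def f(x, u):
    """Nonlinear pendulum: x = [angle, angular velocity], u = [torque]."""
    g, l, m = 9.81, 1.0, 1.0
    return np.array([x[1], -(g / l) * np.sin(x[0]) + u[0] / (m * l**2)])

def linearize(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    f0 = f(x0, u0)
    for i in range(n):                       # one column of A per state perturbation
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):                       # one column of B per input perturbation
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

# around the hanging equilibrium, d/dtheta of -(g/l)*sin(theta) is -g/l
A, B = linearize(f, np.zeros(2), np.zeros(1))
print(np.round(A, 3), np.round(B, 3))
```

Central differences and a sensible choice of eps improve accuracy, but this one-sided version already gets you a usable A and B near the linearization point.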
In practice, if you're moving far from your linearization point (e.g. a 6-DOF robot arm with large displacements of all joints), you'll need to re-compute the linearization each timestep around the current state, and re-solve the LQR or MPC problem from this "new" approximate model.
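The "re-solve the LQR" half of that loop is one Riccati solve, assuming you already have the A and B at the current state (here the small-angle pendulum-about-upright linearization is used as a stand-in). scipy has no `lqr()` function, but its Riccati solver gets you the same gain.

```python
# One LQR solve from a given (A, B), via the continuous algebraic Riccati equation.
import numpy as np
from scipy.linalg import solve_continuous_are

# linearized pendulum about upright: theta'' = (g/l)*theta + u  (unstable open loop)
g, l = 9.81, 1.0
A = np.array([[0.0, 1.0], [g / l, 0.0]])
B = np.array([[0.0], [1.0]])

Q = np.eye(2)            # state cost (tuning knobs)
R = np.array([[1.0]])    # input cost

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal gain, u = -K x

# the closed loop A - B K must be stable (all eigenvalues in the left half plane)
print(np.linalg.eigvals(A - B @ K).real)
```

In the receding-horizon scheme described above, you would call `linearize(...)` and then this solve once per timestep, always around the current state.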
Here's some linearization code I wrote for a library that's used to teach controls and robotics in undergrad. It takes as input a "model", an arbitrary function that generates the next state from a given state, and computes the Jacobian matrices by finite differences: https://github.com/SherbyRobotics/pyro/blob/43dcb112427978ff...