It does a pretty good job of maintaining system responsiveness and latency when there's sustained memory pressure, at least much better than the simpler hysteresis loops that are commonly used for this sort of thing.
I'll be the first to admit that I hastily presumed you were conflating the control theory PID acronym with "process ID", then I looked at the code and had a genuine WT-actual-glorious-F moment.
It's an OK explanation of a PID controller. They're easy to code; tuning is the hard part. There are lots of approaches to tuning. This author offers a simple one.
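To make the "easy to code" part concrete, here's a minimal sketch of a positional-form PID loop driving a made-up first-order plant. The gains, time step, and plant model are all illustrative, not from the article:

```python
# Minimal positional-form PID. Gains and setpoint come from whatever
# tuning procedure you use -- the loop itself is only a few lines.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy first-order plant (y' = u - y) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):          # simulate 20 seconds
    u = pid.update(1.0, y)
    y += 0.01 * (u - y)        # Euler step of the plant
print(round(y, 2))             # settles at the setpoint, ~1.0
```

The integral term is what removes the steady-state error here; with P only, `y` would settle short of the setpoint.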
I note that he cites "James Clerk Maxwell: “On Governors”. Proceedings of the Royal Society, no. 100, 1868".[1]
Yes, 1868. Yes, that Maxwell, of Maxwell's Equations. That paper is worth reading. Right there, linear control theory was born. And that's about where the state of the art was until 1950 or so.
Modern control theory is closely tied to machine learning. Except that the control theory people want solutions that don't suddenly do funny stuff for some inputs. The math is way beyond me. Even control theory PhDs have trouble now.
Modern control theory (not counting RL, which is rarely, if ever, used on live controllers) is essentially two camps: PID with tuning and MPC (model predictive control). There is some optimal control theory not in either of these camps, but it’s not the norm.
The latter is often studied under the umbrella of optimization theory (convex optimization mostly), but most aspects of it are well understood for most systems we care about (with some notable exceptions, of course). I wouldn’t quite say that “even control theory PhDs have trouble now” as there is plenty of work done on the field and many cases are quite useful (but they certainly have many topics to choose from for their theses!) :)
I think there are some important steps between Maxwell and 1950 that are worthy of mention. Black's paper on negative feedback amplifiers, for one; the invention of the PID controller itself, for another...
Maxwell had P and I, but not D. Derivatives are hard to do mechanically, tend to be noisy, need filtering, and thus have lag. An aircraft vertical speed indicator is one of the few widely used mechanical differentiators.
Enough gain to allow throwing it away with negative feedback to get linearity was a ways off in 1868.
One of my coworkers (colmmacc@ on twitter) has been giving talks [1][2] for years on how PID loops and control systems theory apply to software, and I'm totally convinced that it's an unmined vein of excellent thought. I'm working on applying this to a piece of software I'm building and I'm very excited.
He’s not wrong, but the fundamental problem is that control theory has a very limited view of “correct behavior”: stability around a desired point or trajectory. If you can define your desired system state as a fixed or time varying value that can be determined ahead of time, and you would like to prove that your system stays close to that point for some definition of “close”, then this tool may be for you.
But if you can’t, it’s much harder to see how to apply it. And a lot of software problems can’t be defined in such strictly mathematical terms.
Also, if your system is linear or approximately linear then we have good tools. If your system is really honestly nonlinear it gets more dicey. You have to either hope that somebody has already derived a controller for systems of the class you are using, or you have to do novel numerical analysis in order to derive a controller that will stabilize it.
I did my PhD in controls too, and I'd pretty much agree with all of that. There is a ton of really heavy math flying around with little to show for it. If your system is anything beyond weakly nonlinear, you are basically screwed. The best and effectively only tool we have otherwise was invented over 100 years ago by Lyapunov. And using it requires, I'm not joking, nearly divine inspiration.
Lyapunov functions indicate Lyapunov stability. A system that is asymptotically stable in the Lyapunov sense will settle into a minimum state and stay there indefinitely. Kind of like a ball in a bowl, where all gradients point in to the middle. Nonlinear systems can "refuse to settle". I may be misremembering something from my controls courses.
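As a toy illustration of the idea: for the nonlinear system x' = -x^3, V(x) = x^2 is a Lyapunov function, and a quick numerical check (arbitrary initial condition and step size) confirms V never increases along a trajectory:

```python
# Numerical sanity check of a Lyapunov argument: for the nonlinear
# system x' = -x**3, the function V(x) = x**2 decreases along every
# trajectory, so the origin is stable (asymptotically, in fact).
dt = 1e-3
x = 2.0                      # arbitrary initial condition
V_prev = x * x
monotone = True
for _ in range(20000):       # simulate 20 seconds
    x += dt * (-x**3)        # Euler step of the dynamics
    V = x * x
    if V > V_prev + 1e-12:   # V must never increase along the trajectory
        monotone = False
    V_prev = V

print(monotone, abs(x) < 0.2)   # True True
```

The "divine inspiration" part is finding V for a system you actually care about; checking a candidate V, as here, is the easy direction.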
This is an interesting point. I was recently working on research for how to apply model predictive control theory to TCP congestion control. We basically wanted to minimize latency while maximizing throughput. One of the challenges was that, while we could match latency and throughput set-points if known apriori, actually determining what the set-points should be was very difficult.
Here's an early paper on it https://arxiv.org/abs/2002.09825.
Note that if some of our control theory concepts seem a bit screwy, it's because we come from networking, not control theory :).
I know little about web services orchestration, but I feel that a lot of algorithms from control theory / signal processing / optimization / continuous math / etc will apply to software contexts if one “squints right”. And such solutions will often end up being more robust and better behaved (and arguably more intuitive) — an ounce of math often saves a thousand lines of clumsy code. As Peter Naur argued, writing software is primarily about building a functional theory of the domain.
I personally feel process control is a significantly under-appreciated discipline.
I've been at an academic lecture where a process control engineer used the example of air conditioning to explain that when your system is controlled, you have to be extremely careful about correlation and causation. If you're not, you can very easily conclude that because the room is hot whenever the AC is running hard, that the AC running hard makes the room hot. He then showed that biologists were making this mistake with respect to certain biomarkers and cancer.
Process control is interesting even in philosophy. For example, there is a process control theorem (Conant and Ashby's "good regulator" theorem) which proves that the maximally efficient regulator of a system must contain an accurate model of that system. Therefore, anyone who argues that evolution gives us no reason to expect our brains give us an accurate picture of the world must be ignorant of process control. After all, our brains are regulators of our environment for our survival; therefore the maximally efficient way for our brain to regulate our environment is for our brain to have an accurate model of our environment.
But the brain clearly doesn't have an accurate model of the (whole) environment. It falls for all kinds of stupid illusions and cognitive biases.
Does this mean evolution is bunk? No, clearly not. In fact, it's evolution's fault. Evolution is path-dependent and only maximizes one thing: reproductive fitness. Your eye, for example, has a blind spot because the rods and cones are behind the fibers that project to the brain. This is an objectively stupid "design" and the eye would be better without it (as octopus eyes are), but...it's there because it was there in your ancestors, and theirs before them, and it was good enough for all of them, and even better than whatever their competitors had at some point. Evolution doesn't improve anything but reproductive success.
The point is that the brain is a kind of controller, and evolution does in fact select for the controller's fitness. The theorem asserts that the maximally efficient controller is one which has an accurate model of the environment. That evolution hasn't produced a perfect controller isn't an argument against the point that evolution will select for brains with accurate models.
So you're correct: environmental physics which have never been survival-relevant will have no selection pressure.
But I think you are very incorrect to say that because our brain ignores the nose part of our eyesight that we have an inaccurate model. A model in process-control-speak means some kind of mathematical description, not the inputs to the controller.
Evolution will select for brains with more accurate models subject to other constraints, and as a result, I suspect it will rarely (if ever) reach a global maximum.
More accurate models presumably have costs (e.g., bigger brains are metabolically expensive). The fitness landscape has local maxima, and they may be hard to move away from (as in the eyes I mentioned above). There are lots of caveats like those.
We know that a lot of internal models are not particularly accurate. The visual system only acquires a tiny bit of high-resolution data at a time (with a hole in it[0]) and interpolates the rest. Loss aversion, the gambler's fallacy, and the rest suggest we're not great at modelling uncertain outcomes. Even mental models of physics differ from reality in some key ways [1].
It's possible that these are still slowly improving, but I think it's more likely that much better models have an unfavorable cost-benefit profile.
[0] The blind spot is different from the nose. There's a hole in your representation of the world, a few degrees away from the center of gaze. Your brain fills it in in surprising ways. Here's a fun demo that's a big hit with kids too: http://people.whitman.edu/~herbrawt/classes/110/blindspotdem...
> I've been at an academic lecture where a process control engineer used the example of air conditioning to explain that when your system is controlled, you have to be extremely careful about correlation and causation. If you're not, you can very easily conclude that because the room is hot whenever the AC is running hard, that the AC running hard makes the room hot. He then showed that biologists were making this mistake with respect to certain biomarkers and cancer.
I’d enjoy hearing more about this. Go on, or point to more info?
Not a process engineer, but I'm guessing that if you're looking at correlation between temperature data vs. AC power, and AC is reacting to changes in temperature (i.e. derivative, D in PID), you'll see AC ramping up power before the temperature meaningfully increases. A naive analysis could then conclude that it's AC that causes high temperature.
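A toy simulation makes the point: even with a plain proportional "AC" (no derivative at all), logged power and temperature are positively correlated, because the controller only works hard when the room is already warm. All parameters here are made up:

```python
# Closed-loop correlation trap: a proportional "AC" fights a slowly
# varying heat disturbance.  In the logged data, AC power and room
# temperature are strongly positively correlated -- a naive reading is
# "AC power causes heat", when of course the AC is cooling.
import numpy as np

rng = np.random.default_rng(0)
dt, a, b = 0.1, 0.1, 0.5       # step, heat-loss rate, AC effectiveness
kp, T_set = 2.0, 20.0
T, d = T_set, 0.0
temps, powers = [], []
for _ in range(5000):
    d = 0.99 * d + 0.1 * rng.standard_normal()   # slowly varying heat load
    u = max(0.0, kp * (T - T_set))               # AC power (cooling only)
    T += dt * (-a * (T - T_set) + d - b * u)
    temps.append(T)
    powers.append(u)

corr = np.corrcoef(temps, powers)[0, 1]
print(corr > 0.3)   # True: hot room and hard-running AC co-occur
```

The causal story (disturbance drives both) is invisible in the pairwise correlation, which is exactly the lecture's point.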
Or more simply than that, if you only ever look at "cold" and "hot" rooms (e.g. analysis of cancerous and non-cancerous patients) you're never going to see the dynamics at all.
But more fundamentally, I believe there is a theorem which asserts that it is in general impossible to extract the open loop behavior of a system (i.e. a model of how the system would respond without a controller) solely from closed loop data (i.e. data collected from the system when the controller is on.)
It was several years ago, so I don't remember the details, but I have heard echoes of the sentiment in the recent criticism of Alzheimer's research. I.e. that amyloid plaques may actually be the equivalent of a cellular AC that's been misidentified as the reason the room is hot.
>“The current approach is based on the idea that amyloid-β is bad,” says Perry. “My idea is the opposite.” He suggests that amyloid-β and tau accumulation is actually a protective response to age-related metabolic pressures in the cell.
> will apply to software contexts if one “squints right"
Not just "squint right." All systems can be represented as a pair of functions that map current state to next state and the observation of current state, in response to some perturbation. Aka state evolution/observation.
F: (s_n, x_n) -> s_{n+1}
G: (s_n, x_n) -> y_n
Everything from Markov Chains, stochastic processes, ANNs, controls, signal processes, electrical systems, finite automata, Turing machines, etc can all be formulated in those terms. The cool part is when you break apart those specific formulations into the abstract notion of a signal flow graph (a directed graph that represents the operations of F and G) you start thinking a lot about homological algebra, category theory, and the general nature of things.
Abstractions in programming that express this nature cleanly are often the ones that we find the most beautiful, at least for me. For example, an iterator has a void perturbation, F is the next() function, and G is just the observation of the current element (the actual state may be hidden behind a getter of some kind). The abstractions that we build on top of iterators through combinators like map/flatten/fold, etc are then homomorphisms between various iterators by chaining the state evolution/observation functions with other operations.
So if you go one degree higher than an iterator you get a generator, where your perturbation might be non-void. In controls then, you can express your plant and controller (even the "hardware" stages) as one generator composed of smaller ones and combinators across them.
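Here's a quick sketch of that framing as a Python generator (my own toy example, not a standard formulation): F evolves the hidden state from the incoming perturbation, G maps state to the observed output. The "system" here is just a running maximum.

```python
# The F/G framing as a Python generator: F is state evolution,
# G is observation, and send() delivers the perturbation x_n.
def system(F, G, s0):
    s = s0
    y = G(s, None)
    while True:
        x = yield y              # receive perturbation x_n
        s = F(s, x)              # s_{n+1} = F(s_n, x_n)
        y = G(s, x)              # y_n = G(s_n, x_n)

running_max = system(F=lambda s, x: max(s, x),
                     G=lambda s, x: s,
                     s0=float("-inf"))
next(running_max)                # prime the generator
outputs = [running_max.send(x) for x in [3, 1, 4, 1, 5]]
print(outputs)                   # [3, 3, 4, 4, 5]
```

Combinators over such generators (map, compose, zip) are then exactly the homomorphisms between systems the parent is describing.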
A lot of the math behind this in the general case is kind of heady, at least for me (just an undergrad experience here). There's a lot of ground to be covered in particular understanding for machine learning and other nonlinear dynamical systems where analysis/synthesis of a state-space formulation is currently lacking in formal methods.
> Everything from Markov Chains, stochastic processes, ANNs, controls, signal processes, electrical systems, finite automata, Turing machines, etc can all be formulated in those terms. The cool part is when you break apart those specific formulations into the abstract notion of a signal flow graph (a directed graph that represents the operations of F and G) you start thinking a lot about homological algebra, category theory, and the general nature of things.
Very interesting! Do you have any references where such reformulations are done? Or where control problems are formulated in terms of algebra or category theory?
I'm not really an academic, this is mostly something I read about in bits and pieces. It's my own observation that roughly everything that needs a formal model can be expressed that way, the formulation of those two functions can be pulled out with varying degrees of difficulty.
In controls/linear dynamical systems for example, the canonical forms express it directly [1]. Same goes for anything with a clean state-space representation, like Markov Processes.
The notion of "observation" could also be called "doing something useful with state." For a lot of simple state models, like finite automata, the observation is the identity function.
As for the algebra/category theory, I'm not aware of any formal works that get into it (but also I don't keep up with the literature at that level). Various people mention that Shannon was the first to use topological representations of controls (the work itself was classified). Mason published these as "signal flow graphs", where multiplication/addition/differentiation/integration operations are represented by the edges and vertices of a directed graph, which he used to derive his famous gain rule [2].
My own interest in the abstract algebra related to this was the fact that the edge of a SFG is a multiplication by a state variable and constant (g * s|z^k) and vertices are summations. SFGs themselves are algebraic structures inside what I understand to be an Abelian group (I'm self taught on this, so forgive terminology/notation).
In the pro-audio niche there's a transform on SFGs called the Topology Preserving Transform and has various derivations (Andy Simper, Will Pirkle, Vadim Zavalishin all have writings on it). It's a fairly manual approach, so about a year ago I wrote an algorithm to do it by reformulating Zavalishin's algebraic derivation [3] as a graph transformation. The interesting bit is that when you look at it that way, the TPT is a homomorphism between Abelian groups (the former in the continuous time, the latter discrete time). At least that's how I understand it.
Anyway, this is mostly my self study. I like the abstraction and it makes composing systems very simple. I'm using these approaches in a Rust crate with some combinators on state evolutions/observations like one would iterators, I find it makes things very composable and elegant [4].
> All systems can be represented as a pair of functions that map current state to next state and the observation of current state, in response to some perturbation.
If you are assuming the next state depends on the current state, doesn't this work for only Markov system i.e where you can make predictions for the future based solely on its present state?
There's a whole bunch of systems where this doesn't apply, no?
No, because a Markov process only depends on current state. All Markov processes are stateful systems, but not all stateful systems are Markov processes.
For example, an exponential moving average:
s_n+1 = a x_n + (1 - a) s_n
y_n = s_n
where a, x, s, y in Reals
The state at any time depends on all past inputs, including the initial condition.
Conceptually, stateful systems are the notion "where I am going depends on where I am." Markov processes are stateful systems where "where I am going depends on where I am, but not upon where I came from."
I do have all the material from "PID Without a PhD" in there -- just watch the three videos with "PID" in the title.
I also recommend Brian Douglas's channel: https://www.youtube.com/user/ControlLectures if for no other reason than because I haven't posted a new video for three years -- but then, he's been posting on some Matlab channel, so you'll want to start with his and then move to that one.
What would you recommend if the feedback is noisy enough that a derivative term would be mostly noise, but using a simple FIR filter to remove the noise introduces a delay which makes the system less responsive?
FIR filters are the last thing you want in closed loop -- for the amount of filtering they do, they have way excessive phase delay.
I'd use a derivative term with bandlimiting, and I'd accept that if I couldn't get the response I needed using feedback in that case then I'd either investigate using feedforward, or I'd work on changing the sensors on the plant so that they're less noisy.
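For reference, one common shape for a bandlimited derivative term (sometimes called a "dirty derivative") is a raw difference followed by a one-pole low-pass. This is a generic sketch, not the parent's specific recommendation:

```python
# Bandlimited D term: raw difference through a one-pole low-pass.
# tau sets the filter time constant; larger tau = heavier filtering,
# but more lag added to the loop.
class FilteredDTerm:
    def __init__(self, kd, dt, tau):
        self.kd, self.dt, self.tau = kd, dt, tau
        self.prev_error = 0.0
        self.d = 0.0                        # filtered derivative state

    def update(self, error):
        raw = (error - self.prev_error) / self.dt
        self.prev_error = error
        alpha = self.dt / (self.tau + self.dt)
        self.d += alpha * (raw - self.d)    # one-pole low-pass
        return self.kd * self.d

# Feed it a ramp of slope 1: the filtered derivative settles to 1.
dterm = FilteredDTerm(kd=1.0, dt=0.01, tau=0.05)
out = 0.0
for k in range(1000):
    out = dterm.update(0.01 * k)
print(round(out, 3))   # 1.0
```

Unlike an FIR, the one-pole filter trades much less phase delay for a given amount of noise rejection, which is why it's the usual choice in-loop.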
This is a well-explained and concise guide to both the theory and the implementation behind it.
But like many things, there's no replacement for getting your hands on a real life system and gaining real life intuition.
The best way to learn this? Get/make a test bench with a safe-ish motor and encoder system. Play with values and measure the performance of the system. Increasing the P gain makes the system "stiffer", but at the expense of stability. Put your hand on the flywheel and "fight it" safely. You can feel the I (integral) term winding up against your steady-state errors. It's quite magical.
As you start to chase the highest performance for your system, you start to think about alternative control strategies. Eventually you will dream up the concept of a feedforward control loop. Try to automate the picking of PID values, and you start to learn about the Ziegler–Nichols tuning method. Gain scheduling. Nonlinear control theory. It never stops.
(If you do any of this, please make sure the motor/flywheel won't kill you if it goes unstable, and please wire an E-stop that's easy to reach when things go wrong.)
This is how I teach middle school and high school students basic controls. They implement and tune a proportional controller, then the derivative part, and then I give them a feedforward model for the plant (usually a brushed DC motor.) The PD controller and feedforward is very easy to write and is quite approachable once you see how simple it is, and it can be very exciting to see a motor “do as you command” before your eyes.
I avoid integral control because it’s less effective and much harder to deal with than good feedforward. Most of the mechanisms we use (in the FIRST Robotics context) have a feedforward model and good system-identification tools so this works out well.
Real life systems don't have to be physical. You can do a lot by playing around with digital PID controllers, graphing the sensor value over time and seeing how the graph changes as a response to changes in the gains. The benefit of this approach is that anyone with some programming skills can do it right away without any hardware.
I designed control systems for brushless servo motors and more recently did the power control for very power hungry ASICs of a very well known company.
Tuning is important but when you come to design a control system, you need to look at the overall system - and this is something textbooks don't really tell you. Control systems are all about reacting quickly enough and accurately enough. Delay and poor incoming/outgoing signal quality are the enemies of the control system:
1. The delay from sensor to controller (e.g. SPI bus incurred delay between your gyro sensor and your processor)
2. The bandwidth of each component. If your gyro sensor has only 10 Hz of bandwidth, it'd be difficult to control anything reasonably fast. Note that bandwidth is often given as the frequency at 3 dB attenuation (half the power, about 71% of the amplitude) - that's not enough; you need to understand the phase distortion: in other words, as you approach the bandwidth limit, does your signal get distorted in time? Because that's very bad for a control system.
3. The resolution of your actuator - if you end up using an on-off actuator, it'd be impossible to do any precision control. Do you want to position a motor accurately? Make sure you use more than 8-bit PWM.
4. Static/dynamic response: things are usually not nicely linear. Things at standstill take a different amount of effort to move than once they are in motion.
My experience has shown time and time again that if you get the system fundamentals correct, implementing and tuning the control loop is reasonably trivial. The secret is having a system that's an order of magnitude faster (higher bandwidth) than what you're trying to control.
You really need to think about control systems in the frequency domain: "what I'm controlling requires a bandwidth of X, I need to sample at n * X, my processor is this fast, my output signal is that accurate, this is the end-to-end delay".
Someone once said "any non-linear system becomes linear when you sample it quickly enough". I don't know who said it or whether it's even remotely true for all parts of life - for control systems though, it's definitely a rule to go by.
It's called "PID Without a PhD" but it still assumes at least an undergrad level training.
I wish it had been around when I was studying EE-2-Control. That was not one of my favourite courses but is one of the bits of theory that I have reached for the most since and this guide has been a very useful reminder of much of it.
'The "PID" in "PID Control" stands for "Proportional, Integral, Derivative". These three terms describe the basic elements of a PID controller.'
Classic case of a really simple concept being obfuscated by jargon. If you've done any sort of game development and implemented smooth movement and animations, you've probably accidentally implemented a PID controller.
I bumped into PID controllers doing game dev. Trying to write an AI actor to drive a car that was subject to physics is what revealed the problem.
But, I had been writing smooth movement and animation for years and never needed a PID controller before, or implemented one accidentally. In fact, using animation smoothing is what got me into trouble with the AI driving. In retrospect, I would call basic animation smoothing a "P" controller: you do something in proportion to your distance from the goal. To make it smoother, you just turn down the proportion. I tried making the AI driver use a P-controller to steer in order to stay on the optimal racing line.
When you introduce a delay in the system between input and reaction - which is what happens when your car is subject to physics, and when steering changes are rate limited - this is when animation smoothing (P-controller) causes a really surprising behavior. The lower you turn down the P value, the more unstable the system becomes. I was confused at first... why would reducing the input cause the system to go from oscillating to divergent? But when someone suggested reading up on PID controllers, it all made sense, and I was a bit stunned that I hadn’t even heard about them during my undergrad CS years.
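A small simulation (illustrative numbers, not the parent's game) shows how an actuation delay changes the picture: plain smoothing always converges, but once a delay is in the loop, stability depends on the gain and the delay together.

```python
# Animation smoothing is a P controller: close a fraction kp of the gap
# each frame.  Add a fixed actuation delay (standing in for physics and
# rate limits) and the same loop can diverge for gains that were fine
# without it.
from collections import deque

def simulate(kp, delay, frames=200, target=10.0):
    pos = 0.0
    pending = deque([0.0] * delay)
    for _ in range(frames):
        pending.append(kp * (target - pos))  # command computed now...
        pos += pending.popleft()             # ...applied `delay` frames later
    return pos

print(round(simulate(kp=0.2, delay=0), 2))   # 10.0 -- plain smoothing converges
print(round(simulate(kp=0.2, delay=2), 2))   # 10.0 -- still stable with delay
print(abs(simulate(kp=1.0, delay=2)) > 1e6)  # True -- same delay, higher gain diverges
```

With delay in the loop the error obeys e_{k+1} = e_k - kp*e_{k-2}, and whether its roots stay inside the unit circle is no longer obvious from kp alone, which is where the control-theory toolbox starts to pay off.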
My prof spent 8 whole months teaching us systems modeling and PID mathematics only to tell us there's no reasonable way to set your PID values... You gotta guess and check!
There are various structured ways to set PID gains.
The mentioned use of optimal control is one, although if you go there you need to be careful about setting your costs -- LQR and other optimal schemes assume a perfect model of the plant; the higher you set Q the higher the probability that your system will ring or go unstable.
There are various robust control methods, all of which I haven't used in a Good Long While, because for the most part swept-sine measurements work nicely.
The one I've used most involves taking swept-sine measurements to get the plant response in Bode plot form, then using either Bode or Nyquist plots (or both) to tune my PID (or whatever controller I'm using).
For a large class of industrial problems, swept-sine measurements won't do, either because measurements need to be undertaken on production lines that are in operation, and the operators get cranky about things like noticeable sinusoidal variations in the product (think aluminum foil or paper), or because even when operated within safe limits, large machines undergoing swept-sine measurements can be downright scary. In such cases one usually ends up using random excitation or steps (often called "bump testing" if you're in an oil refinery or a paper plant) and some sort of system identification step like ARMA.
If you do end up doing testing followed by system ID, you'll most likely get an approximate plant transfer function -- so it's wise to either use a grain of salt when doing your optimal design, or to use some robust design method or other.
Hi! Just wanted to say thanks for the stuff you post on comp.dsp. I pop in and lurk every few months, and you stick out as a super valuable contributor. I might try to actually get my news client to be able to post now that I have so much idle time at home :)
What? If you have a good model of your system then you can (relatively easily) turn it into an optimization problem and use a LQR to choose gains for you. It’s true, this method still gives you knobs you might have to tune (Q and R), but these are easily conceptualized because they slide the cost of state excursions and control effort along a Pareto boundary. That means that you can get optimal PID gains just by saying how much you penalize a state excursion vs. how much you penalize control effort (e.g. distance from your reference point v.s. fuel use, or something like that.)
It’s also certainly true that a LQR won’t help you if you have no a priori knowledge of your system, but for many of the mechanisms we need to control this isn’t a problem.
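As a sketch of that workflow: a discrete LQR for a double integrator using SciPy. Q and R here are made-up weights, and choosing them is the only "tuning" left:

```python
# LQR gain computation for a discrete double integrator
# (position/velocity).  Q penalizes state excursion, R penalizes
# control effort; the Riccati solver does the rest.
import numpy as np
from scipy.linalg import solve_discrete_are

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # discrete double integrator
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.1])                 # made-up state weights
R = np.array([[0.01]])                  # made-up effort weight

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x

# Sanity check: closed-loop eigenvalues must lie inside the unit circle.
eig = np.linalg.eigvals(A - B @ K)
print(np.all(np.abs(eig) < 1.0))   # True
```

Sliding Q up relative to R moves you along the Pareto boundary the parent describes: tighter regulation, more control effort.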
Your whole argument is based around "if you have a good model...". This is where most all control theory falls on its face. Getting a good model is hard. For LQR, that model had better be mostly linear.
Oh, your system model isn't first order? Now you need an estimator/Kalman filter. Or more sensors. That's just more complexity for questionable benefit, which is why LQR is beloved by academics [0]. For anything that would be adequately controlled with PID, stick with that. After about 2 hours fiddling with the knobs, it'll be close enough.
This whole idea of optimality is based on bad intuition. Which states do you care about? Why? Is that more valuable than control effort? Why? Who is doing the economic analysis to determine what ultimately costs us more money? In the end, these things are tuned just like PID: you stop when the step response looks nice. Besides all that, for a lot of systems, and in particular flexible structures with a lot of states, "penalizing state excursions" isn't really useful intuition to begin with for almost all state-space representations. You are better off with a PID and a notch filter.
[0] I decided to complete my PhD in controls so I could make statements like that with at least marginal credibility.
If you have a good linear model for your system you can pull out an LQR. It does not account for how bad your linearization of the true dynamics might be.
Also, from my experience you still need to tune Q and R about as much as you would P, I, and D, and I'm not convinced that it is strictly better at all. In industry, at least, PID absolutely dominates.
It's also important to realize that just because the LQR is derived by solving an optimization problem, that doesn't mean it gives you the best possible gains for whatever you want. You got the optimal controller to minimize a quadratic cost you made up for a linearized approximation of your system.
Iterative design (guess and check) is absolutely still the state of the art for control design, and using an LQR does not escape that.
Huh? They must have been joking. There are many cases with models that are good enough that you can calculate the gains depending on the desired response. I have done it many times.
It is definitely true for a few simple cases where the model producing simulated gain values can get pretty close, but there's an entire sub-discipline of controls theory dedicated to the trade-offs of the various tuning methodologies [1][2] because for most real-world situations, you can't predict the system response for any given set of values accurately enough to be useful. You can get the initial values set from first principles by checking your poles to avoid clear regions of instability, but once you're past that point, I believe that it's mathematically true that you can't truly find the correct gains only from first principles.
If I'm missing something in the formal background, though, I'd love to hear it. My own implementation of digital PID controllers have been based on using Ziegler-Nichols for tuning bc I was under the impression above. Any literature or methods contrary to that would definitely be welcome!
(Edited because my first line didn't make any sense)
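For reference, the classic closed-loop Ziegler–Nichols recipe is simple enough to write down: find the ultimate gain Ku (where a P-only loop oscillates steadily) and the oscillation period Tu, then read gains off the table. The Ku/Tu values below are placeholders for what you'd measure on your own loop:

```python
# Classic (closed-loop) Ziegler-Nichols PID table:
#   Kp = 0.6*Ku,  Ti = Tu/2,  Td = Tu/8
# expressed as parallel-form gains kp, ki = kp/Ti, kd = kp*Td.
def ziegler_nichols_pid(Ku, Tu):
    kp = 0.6 * Ku
    ki = 1.2 * Ku / Tu     # = kp / (Tu/2)
    kd = 0.075 * Ku * Tu   # = kp * (Tu/8)
    return kp, ki, kd

# Placeholder measurements: Ku = 10, Tu = 0.5 s.
print(ziegler_nichols_pid(Ku=10.0, Tu=0.5))   # roughly (6.0, 24.0, 0.375)
```

ZN tends to give an aggressive, underdamped starting point; most people back the gains off from there, which is the guess-and-check the grandparent was complaining about.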
I think in the general case, say you made a black box PID controller then attach it to some system it is supposed to control then you have to tune it, which isn't so much "guessing" but a specific refining process. If you have a model for the system under control, then sure, it's math
Depends on your system!!! If you tune online and accidentally set P a little too big, it is quite easy to set off an unstable oscillation, and depending on your application you can get a rather expensive earth-shattering kaboom...
I usually end up saying "Screw it" and adjust the value under control to remove half the error at each timestep. Surprising how often that's good enough.
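That rule is just a P controller with gain 0.5, so the error shrinks geometrically:

```python
# "Remove half the error at each timestep" = proportional control, kp = 0.5.
value, target = 0.0, 100.0
for step in range(20):
    value += 0.5 * (target - value)   # halve the remaining error
print(value < target and abs(value - target) < 1e-3)   # True: within 1e-3 after 20 steps
```

With no delay or plant dynamics in the way, this can't overshoot or oscillate, which is exactly why it's so often good enough.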
Many processes like temperature controls want to settle, you just have to help them out a bit. Applying a full PID algorithm to every control problem is a bit like using TCP/IP to solve every networking problem. It will work, but you'll often do better with a domain-focused solution.
Temperature control problems are only first-order (heat->temperature), so they're much easier to control than a second order system like force->velocity->position.
This appears to be a really nice series of articles, but wow, what an annoying format. Just give me a .PDF or single .html page, don't make me click 'Next' thirty times. :(
He's not even doing that to show ads. He's just doing it because, hey, it's not like he wants you to read it.
In some cases, especially with hardware, it is essential that PID tuning is done safely.
I've implemented this reinforcement learning algorithm in C++ for safely tuning a PID controller on a hardware system with successful results i.e. has been successfully deployed at customer sites for in-situ tuning.
I assembled a controller for my smoker from a cheap PID from aliexpress, a relay and a fan. It works like a charm and it's an order of magnitude cheaper than buying a complete solution.
This guide is good at going in depth without being too heavy on the jargon. I've found that it's still a bit much for younger students learning about the concept and related math for the first time. George Gillard's intro to PID [1] has been a staple of VEX robotics instruction because it fills this niche -- it's an excellent resource for teaching high school students and younger.
I read this in a magazine years ago; super helpful. I learned the most when I sat down and worked through it by hand. In practice, I've never needed D; PI has been enough to do the job.
In the typical pedagogy, D is only necessary if you feel the PI isn’t responding fast enough (for rapid control). D, being very sensitive to high frequencies, helps with that.
And the reason you’d leave it out is because a PI controller is guaranteed to be stable with positive gains. Throw the D term in there and you have to start checking pole positions to make sure your system doesn’t blow up.
I'm currently in Georgia Tech's OMSCS program and taking Sebastian Thrun's AI for Robotics class on Udacity (free to access). I thought the lesson on PID controllers was really solid. Here's the first lecture.
https://www.youtube.com/watch?v=-8w0prceask&feature=emb_logo
Ah I remember reading this in high school. It's a recommended read/resource for kids learning to control their FIRST Robotics Competition (FRC) robots in autonomous situations or non-drivetrain mechanisms.
I wrote that article in 2000, and the really scary thing for me is that I'm now seeing code that obviously either came right out of it, or is only a few copy-pasta steps away from it that's just taken as "the way things are done".
For the record -- it's one way to do it, but the code in there was primarily written to be as easy to understand as possible, not to be the World's Best Controller Implementation.
Even disciplines which typically do cover process control in undergrad often treat process control with a bit of a "there be dragons" mentality. Not for nothing are PhDs mentioned in the title; if some industry employs process control engineers, their group certainly has a concentration of PhDs well above average.
There are a lot of things that aren't part of the usual computer science curriculum, control theory being one of them. Many topics would be interesting to have some minimal exposure to, such as signal processing, operations research, etc.
I mean, why not just read the article? It's on the first page of the article (third page of the pdf, after the cover page and the author's note): The "PID" in "PID Control" stands for "Proportional, Integral, Derivative".
PID is very related to CS, especially in the domain of digital signal processing. And yup, I would agree that there are a lot of non-CS engineers here, we are pretty common :)
I bumped this because it's an underappreciated topic in CS. I never really had any formal education but PID and MPC have been in my bag of systems hacks for decades now. I wish I had more lives to expend in enjoyment of the pursuit of control theory. Frankly computing is starting to really get interesting again, I feel like it's 1979 redux.
That's odd. To me control theory is really an EE topic (granted EE and CS are often part of the same department). I did EECS (and did some control theory as part of the mandatory subjects of study) and it feels like CS/SE people usually have no idea that control theory even exists..