I know little about web services orchestration, but I feel that a lot of algorithms from control theory / signal processing / optimization / continuous math / etc will apply to software contexts if one “squints right”. And such solutions will often end up being more robust and better behaved (and arguably more intuitive) — an ounce of math often saves a thousand lines of clumsy code. As Peter Naur argued, writing software is primarily about building a functional theory of the domain.
I personally feel process control is a significantly under-appreciated discipline.
I was at an academic lecture where a process control engineer used the example of air conditioning to explain that when your system is under control, you have to be extremely careful about correlation and causation. If you're not, you can very easily conclude that because the room is hot whenever the AC is running hard, the AC running hard makes the room hot. He then showed that biologists were making this mistake with respect to certain biomarkers and cancer.
Process control is interesting even in philosophy. For example, there is a process control theorem, the good regulator theorem, which proves that the maximally efficient regulator of a system must contain an accurate model of the system being regulated. Therefore, anyone who argues that evolution gives us no reason to expect our brains to give us an accurate picture of the world must be ignorant of process control. After all, our brains are regulators of our environment for our survival; therefore the maximally efficient way for our brain to regulate our environment is for it to have an accurate model of that environment.
But the brain clearly doesn't have an accurate model of the (whole) environment. It falls for all kinds of stupid illusions and cognitive biases.
Does this mean evolution is bunk? No, clearly not. In fact, it's evolution's fault. Evolution is path-dependent and only maximizes one thing: reproductive fitness. Your eye, for example, has a blind spot because the rods and cones sit behind the nerve fibers that project to the brain. This is an objectively stupid "design" and the eye would be better without it (as octopus eyes are), but it's there because it was there in your ancestors, and theirs before them, and it was good enough for all of them, and at some point even better than whatever their competitors had. Evolution doesn't improve anything but reproductive success.
The point is that the brain is a kind of controller, and evolution does in fact select for the controller's fitness. The theorem asserts that the maximally efficient controller is one which has an accurate model of the environment. That evolution hasn't produced a perfect controller isn't an argument against the point that evolution will select for brains with accurate models.
So you're correct: environmental physics that has never been survival-relevant will exert no selection pressure.
But I think you are very incorrect to say that because our brain edits the nose out of our eyesight, we have an inaccurate model. A model, in process-control-speak, means some kind of mathematical description of the process, not the inputs to the controller.
Evolution will select for brains with more accurate models subject to other constraints, and as a result I suspect it will rarely (if ever) reach a global maximum.
More accurate models presumably have costs (e.g., bigger brains are metabolically expensive). The fitness landscape has local maxima, and they may be hard to move away from (as in the eyes I mentioned above). There are lots of caveats like those.
We know that a lot of internal models are not particularly accurate. The visual system only acquires a tiny patch of high-resolution data at a time (with a hole in it [0]) and interpolates the rest. Loss aversion, the gambler's fallacy, and the rest suggest we're not great at modelling uncertain outcomes. Even mental models of physics differ from reality in some key ways [1].
It's possible that these are still slowly improving, but I think it's more likely that much better models have an unfavorable cost-benefit profile.
[0] The blind spot is different from the nose. There's a hole in your representation of the world, a few degrees away from the center of gaze. Your brain fills it in in surprising ways. Here's a fun demo that's a big hit with kids too: http://people.whitman.edu/~herbrawt/classes/110/blindspotdem...
> I was at an academic lecture where a process control engineer used the example of air conditioning to explain that when your system is under control, you have to be extremely careful about correlation and causation. If you're not, you can very easily conclude that because the room is hot whenever the AC is running hard, the AC running hard makes the room hot. He then showed that biologists were making this mistake with respect to certain biomarkers and cancer.
I’d enjoy hearing more about this. Go on, or point to more info?
Not a process engineer, but I'm guessing that if you're looking at the correlation between temperature and AC power, and the AC is reacting to changes in temperature (i.e. the derivative, the D in PID), you'll see the AC ramping up power before the temperature meaningfully increases. A naive analysis could then conclude that it's the AC that causes the high temperature.
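A toy simulation shows how easily that trap arises (all constants here are made up for illustration): the AC only ever cools, yet in the closed-loop data AC power and room temperature are strongly positively correlated, simply because the controller ramps power up with temperature.

```python
# Toy closed-loop simulation: a room with a varying heat load and a
# proportional AC controller. Every constant is invented for illustration.
import math

T = 22.0            # room temperature
setpoint = 21.0
temps, powers = [], []
for t in range(1000):
    load = 5.0 + 3.0 * math.sin(t / 50.0)    # external heat entering the room
    power = max(0.0, 2.0 * (T - setpoint))   # controller: cool harder when hotter
    temps.append(T)
    powers.append(power)
    T += 0.1 * (load - power)                # crude thermal dynamics

# Pearson correlation between room temperature and AC power
n = len(temps)
mt, mp = sum(temps) / n, sum(powers) / n
cov = sum((a - mt) * (b - mp) for a, b in zip(temps, powers))
corr = cov / math.sqrt(sum((a - mt) ** 2 for a in temps)
                       * sum((b - mp) ** 2 for b in powers))
print(round(corr, 2))   # strongly positive: "the AC makes the room hot"
```

A naive reading of that correlation is exactly the backwards conclusion above; the causal story only becomes visible if you know the control loop is there.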
Or more simply than that, if you only ever look at "cold" and "hot" rooms (e.g. analysis of cancerous and non-cancerous patients) you're never going to see the dynamics at all.
But more fundamentally, I believe there is a theorem which asserts that it is in general impossible to extract the open loop behavior of a system (i.e. a model of how the system would respond without a controller) solely from closed loop data (i.e. data collected from the system when the controller is on.)
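A minimal sketch of why (illustrative numbers, not a proof of the theorem): run a plant under pure feedback with no external excitation, then regress output on input. The fit recovers the inverse of the controller law and says nothing about the plant parameters.

```python
# Closed-loop data with no external excitation. Plant and controller
# gains (a, b, k) are invented for illustration.
import random

random.seed(1)
a, b, k = 0.9, 0.5, 1.2        # plant parameters and controller gain
y = 1.0
us, ys = [], []
for _ in range(500):
    u = -k * y                  # feedback law: u_n = -k * y_n
    ys.append(y)
    us.append(u)
    y = a * y + b * u + random.gauss(0, 0.01)   # true (open-loop) plant

# Least-squares slope of y on u. Because every recorded pair satisfies
# u = -k*y exactly, the slope is -1/k: we recover the inverse controller
# law and learn nothing about a or b.
slope = sum(yi * ui for yi, ui in zip(ys, us)) / sum(ui * ui for ui in us)
print(round(slope, 3))          # -0.833, i.e. -1/k
```

This is why system identification normally requires injecting an excitation signal (or opening the loop) before the open-loop dynamics can be estimated.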
It was several years ago, so I don't remember the details, but I have heard echoes of the sentiment in the recent criticism of Alzheimer's research, i.e. that amyloid plaques may actually be the equivalent of a cellular AC that's been misidentified as the reason the room is hot.
>“The current approach is based on the idea that amyloid-β is bad,” says Perry. “My idea is the opposite.” He suggests that amyloid-β and tau accumulation is actually a protective response to age-related metabolic pressures in the cell.
> will apply to software contexts if one “squints right”
Not just "squint right." All systems can be represented as a pair of functions that map current state to next state and the observation of current state, in response to some perturbation. Aka state evolution/observation.
F: (s_n, x_n) -> s_{n+1}
G: (s_n, x_n) -> y_n
Everything from Markov chains, stochastic processes, ANNs, controls, signal processing, electrical systems, finite automata, Turing machines, etc. can be formulated in those terms. The cool part is that when you break those specific formulations apart into the abstract notion of a signal flow graph (a directed graph that represents the operations of F and G), you start thinking a lot about homological algebra, category theory, and the general nature of things.
Abstractions in programming that express this nature cleanly are often the ones that we find the most beautiful, at least for me. For example, an iterator has a void perturbation, F is the next() function, and G is just the observation of the current element (the actual state may be hidden behind a getter of some kind). The abstractions that we build on top of iterators through combinators like map/flatten/fold, etc are then homomorphisms between various iterators by chaining the state evolution/observation functions with other operations.
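A minimal Python sketch of that reading of an iterator (the names are illustrative, not from any library): F advances a hidden counter state, G observes it, and map then composes a function with the observation.

```python
# An iterator as a state-evolution/observation pair.
class Counter:
    def __init__(self, start=0):
        self._s = start              # hidden state s_n
    def __iter__(self):
        return self
    def __next__(self):
        y = self._s                  # G: observe current state (y_n = s_n)
        self._s += 1                 # F: s_{n+1} = s_n + 1, void perturbation
        return y

# A combinator like map is then a homomorphism: it chains a function
# onto the observation without touching the state evolution.
doubled = map(lambda y: 2 * y, Counter())
print([next(doubled) for _ in range(4)])   # [0, 2, 4, 6]
```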
So if you go one degree higher than an iterator you get a generator, where your perturbation might be non-void. In controls then, you can express your plant and controller (even the "hardware" stages) as one generator composed of smaller ones and combinators across them.
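In Python terms (a sketch; the plant, gains, and setpoint are invented), the controller becomes a generator whose send() carries the perturbation: each measurement goes in, a control signal comes out, and the closed loop is just alternating send() calls with a plant update.

```python
# A proportional controller as a generator: the perturbation x_n is the
# measured plant output, delivered via send(); the yielded value is u_n.
def p_controller(gain, setpoint):
    u = 0.0
    while True:
        y = yield u                   # receive measurement, emit control
        u = gain * (setpoint - y)

ctrl = p_controller(gain=0.5, setpoint=10.0)
next(ctrl)                            # prime the generator
plant = 0.0
for _ in range(50):
    u = ctrl.send(plant)              # close the loop
    plant += u                        # trivial integrator "plant"
print(round(plant, 3))                # converges to the setpoint: 10.0
```

Composing this with the plant (and any sensor/actuator stages) is then just chaining generators, which is the combinator view described above.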
A lot of the math behind this in the general case is kind of heady, at least for me (just an undergrad experience here). There's a lot of ground to be covered, in particular in understanding machine learning and other nonlinear dynamical systems, where formal methods for analysis/synthesis of a state-space formulation are currently lacking.
> Everything from Markov chains, stochastic processes, ANNs, controls, signal processing, electrical systems, finite automata, Turing machines, etc. can be formulated in those terms. The cool part is that when you break those specific formulations apart into the abstract notion of a signal flow graph (a directed graph that represents the operations of F and G), you start thinking a lot about homological algebra, category theory, and the general nature of things.
Very interesting! Do you have any references where such reformulations are done? Or where control problems are formulated in terms of algebra or category theory?
I'm not really an academic, this is mostly something I read about in bits and pieces. It's my own observation that roughly everything that needs a formal model can be expressed that way, the formulation of those two functions can be pulled out with varying degrees of difficulty.
In controls/linear dynamical systems for example, the canonical forms express it directly [1]. Same goes for anything with a clean state-space representation, like Markov Processes.
The notion of "observation" could also be called "doing something useful with state." For a lot of simple state models, like finite automata, the observation is the identity function.
As for the algebra/category theory, I'm not aware of any formal works that get into it (but also I don't keep up with the literature at that level). Various people mention that Shannon was the first to use topological representations of controls (the work itself was classified). Mason published these as "signal flow graphs", where multiplication/addition/differentiation/integration operations are represented by the edges and vertices of a directed graph, which he used to derive his famous gain rule [2].
My own interest in the abstract algebra here came from the fact that an edge of an SFG is a multiplication by a state variable and a constant (g * s|z^k) and the vertices are summations. SFGs themselves are algebraic structures inside what I understand to be an Abelian group (I'm self-taught on this, so forgive the terminology/notation).
In the pro-audio niche there's a transform on SFGs called the Topology Preserving Transform, which has various derivations (Andy Simper, Will Pirkle, and Vadim Zavalishin all have writings on it). It's a fairly manual approach, so about a year ago I wrote an algorithm to do it by reformulating Zavalishin's algebraic derivation [3] as a graph transformation. The interesting bit is that when you look at it that way, the TPT is a homomorphism between Abelian groups (the former in continuous time, the latter in discrete time). At least that's how I understand it.
Anyway, this is mostly my self-study. I like the abstraction, and it makes composing systems very simple. I'm using these approaches in a Rust crate with some combinators on state evolutions/observations, as one would with iterators; I find it makes things very composable and elegant [4].
> All systems can be represented as a pair of functions that map current state to next state and the observation of current state, in response to some perturbation.
If you are assuming the next state depends on the current state, doesn't this work only for Markov systems, i.e. where you can make predictions about the future based solely on the present state?
There's a whole bunch of systems where this doesn't apply, no?
No, because a Markov process only depends on current state. All Markov processes are stateful systems, but not all stateful systems are Markov processes.
For example, an exponential moving average:
s_{n+1} = a x_n + (1 - a) s_n
y_n = s_n
where a, x, s, y in Reals
The next state depends on the entire history of inputs, including the initial conditions.
Conceptually, stateful systems are the notion "where I am going depends on where I am." Markov processes are stateful systems where "where I am going depends on where I am, but not upon where I came from."
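The EMA above, written directly as its (F, G) pair (a small sketch; `ema` and `step` are invented names):

```python
# The exponential moving average as state evolution F and observation G.
def ema(a, s0=0.0):
    s = s0
    def step(x):
        nonlocal s
        y = s                    # G: y_n = s_n
        s = a * x + (1 - a) * s  # F: s_{n+1} = a*x_n + (1-a)*s_n
        return y
    return step

f = ema(a=0.5)
print([f(x) for x in [4, 4, 4]])   # [0.0, 2.0, 3.0]
```

The current output is an exponentially weighted sum of every past input, yet the pair (F, G) only ever needs the single number s, which is exactly the "fold history into state" move that makes the state-space formulation general.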