In the article about PID controllers, we aimed to show how "analog" or "classic" feedback control works. A simple water heater was our guinea pig throughout the article. A careful read of that article allows us to reach a number of conclusions.
Feedback control is heuristic: it works even when it knows very little about the controlled system. But it needs to be tested, even more so in the case of PI and PID controllers, which depend on tuning to work well.
One way to test the control is to develop a simulator, in which case we need to know the controlled system very well in order to make a satisfactory simulator. Another way is to run empirical tests straight on the real-world system. Both alternatives negate part of the initial advantage (not having to know the system inside out).
Another important shortcoming of PID control: only one input and only one output. In the heater case, the input was the water temperature (actually, the deviation from the desired temperature) and the output was the gas flow for burning.
In order to control two outputs (e.g. if the heater has a fan as well as a burner) we would need two PID controllers and two inputs. Each control does not know anything about the other one. (The fan heater might be controlled by the air temperature.)
But this may not be enough; perhaps the two outputs should depend on two or more inputs. The fan speed should take the gas flow into account as well, since the fan is responsible for ensuring complete combustion as well as cooling the heater.
Or maybe there are more inputs than outputs. The heater might be equipped with a CO₂ sensor, a flow sensor, an input temperature sensor, etc. But we can't put these new inputs to good use because our PID controller is one-input.
Of course, we can add gimmicks and kludges to the controller. This is quite easy if it is implemented in software. But these kludges are not covered by the theoretical framework, so it will be very difficult to prove that the kludge will work well. Moreover, every software engineer would implement the kludge in a different way, each implementation would have a different set of bugs, etc.
Last but not least, PID control analysis is difficult, being based on Laplace transforms. Formula manipulation is complicated, engineers are not that used to it nowadays, and the numerical methods for Laplace are limited. Moreover, the PID framework is fundamentally analog (with continuous functions), but nearly all modern controllers are digital and "see the world" in discrete steps.
The state-space representation approaches the problem of control from a completely different angle.
A historical curiosity: state-space representation was first studied in the USSR, and was a factor in the head start that the USSR had in the Space Race. Despite the Cold War, Russian and Western researchers did share information and collectively developed the theory, which was a common necessity for the aerospace industry.
The first thing is to model the system as faithfully as possible to the "real thing". To do that, it is necessary to know the system physics very well. Compare this to the feedback approach, whose biggest advantage was to ignore completely the system physics.
In PID control we had to develop a separate simulator to test the controller. This will not be necessary here! Once the state-space formulas are found, the system simulation is already there, for free.
The system state is directly expressed by a differential equation, in the time domain. (In systems with PID control, the system was expressed by transfer functions, in the frequency domain.)
The "canonical" form of the state-space representation of any system is as follows:
    x' = A·x + B·u
    y  = C·x + D·u

    x: system state (S × 1 matrix)            S: number of states
    u: system inputs or controls (E × 1)      E: number of inputs or controls
    y: system outputs (T × 1 matrix)          T: number of outputs
    A: S × S matrix
    B: S × E matrix
    C: T × S matrix
    D: T × E matrix
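To make the canonical form concrete, here is a minimal sketch of one simulation step in Python with NumPy. The matrix values, sizes and dt below are arbitrary placeholders, not taken from any system in this article:

```python
import numpy as np

def state_space_step(x, u, A, B, C, D, dt):
    """One Euler step of x' = Ax + Bu, plus the output y = Cx + Du."""
    x_dot = A @ x + B @ u      # state derivative (S x 1)
    x_new = x + x_dot * dt     # integrate over one time step
    y = C @ x_new + D @ u      # output (T x 1)
    return x_new, y

# Arbitrary example with S=2 states, E=1 control, T=1 output
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x = np.array([[1.0], [0.0]])   # initial state
u = np.array([[0.0]])          # no control input
x, y = state_space_step(x, u, A, B, C, D, dt=0.1)
```

The matrix shapes follow the legend above: A is S × S, B is S × E, C is T × S, D is T × E.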
Yeah, the equations above do use matrixes, which is a nuisance for everyone (myself included) who did not pay attention in linear algebra classes in high school.
Matrixes are employed because they represent a "package" of numbers in a very compact form. In the model above, we can have any number of system inputs, controls, states and outputs, and they can be fully interdependent. This is a major advance over PID control.
Before saying anything else, let's model a simple, but complete, system: the water heater that has been our guinea pig since the PID control article.
The system state (matrix x) is the set of variables that we can measure and we want to control. In the heater, the only state is the outbound water temperature (q_{out}) in Celsius. For the sake of simplicity, we consider that all water inside the heater has the same temperature as the output.
The system input or control (matrix u) is the set of variables that actuate on the state. In our heater, we have three controls: inbound water temperature (q_{in}), set temperature (q_{tgt}), and the heating power (w) in kcal/min.
It may be difficult to decide whether something is a state (x), control (u), input (u), output (y) or system characteristic (A or B). The inbound water temperature cannot be controlled, but it certainly weighs on the outbound temperature. Heater power changes the state (it is what actually heats the water), but it is also the variable that we most want to control. So, is power a state or a control? In my opinion, it fits better as a control.
The system output is a simple translation of variables to the physical world. For example, the power control (w) in kcal/min must be converted to kBTU/h or Watts. To make our example easier, we will assume that our heater is electric (so the output unit is Watts). The output equation is not differential and, whatever the result, it does not influence the state.
Some other elements still need to fit into the state-space model: the thermal losses (l), the outbound water flow rate (f), and the water volume inside the heater (z).
The items above must be fitted into matrixes A and B, since they are system characteristics; they are neither controls nor states. "But the flow rate will change over time!" Indeed, but all elements of the state-space equation can change over time anyway.
The elements x, u, A, B, C and D are actually functions of time: x(t), u(t), A(t), B(t), C(t) and D(t). So there is no problem in fitting time-varying characteristics into matrixes A to D. There are systems whose A–D matrixes are indeed constant; such systems are named LTI (linear time-invariant). Our heater is not LTI.
We believe that we have a good enough description of our system, so here are the equations, one by one.
    x' = A·x + B·u

    q' = [ -(l + f/z) ]·q + [ l + f/z   0   1/z ]·[ qin  ]
                                                  [ qtgt ]
                                                  [ w    ]

    q'    temperature variation (derivative), Celsius/min
    q     outbound and stored water temperature, Celsius
    qin   inbound water temperature, Celsius
    qtgt  desired water temperature, Celsius
    l     thermal losses, % per minute per degree Celsius
    f     outbound water flow rate, liters/min
    z     water volume inside the heater, liters
    w     heater power, kcal/min
The above equation simulates the heater quite closely to the real thing. The temperature rises as a function of heater power, and falls (tending to q_{in}) as the warm water is drawn, or as it cools down. The desired temperature set by the user (q_{tgt}) still cannot influence the state, but we will fix that later.
The equation is still continuous. It is necessary to "discretize" it so it can be used in a spreadsheet or a program. There are two ways to do it: the exact one, which computes the discrete-time matrix through the matrix exponential, and the approximate one, which simply iterates the equation in small time steps dt.
In our heater, the time unit is the minute, so if we make dt=0.1, we are simulating the state every six seconds, which seems to be enough. It is important to remember to scale (multiply) the derivative by dt: if the state derivative is 5, we must add 5×0.1=0.5 to the state, since 5 is the variation rate for a whole minute.
Knowing the water volume inside the heater allows us to foresee the "lag" between heater power-up and the actual heating of the water up to the desired temperature. The simulation is faithful since the relationship between volume and flow is perfectly known. (In the PID control case, we tossed in a simple moving average to simulate this lag.)
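The discretized heater update can be written in a few lines of Python. The parameter values below are illustrative guesses, not the article's numbers:

```python
# Euler discretization of  q' = -(l + f/z)*q + (l + f/z)*qin + (1/z)*w
l = 0.01     # thermal loss, fraction per minute per degree C (assumed)
f = 6.0      # outbound flow rate, liters/min (assumed)
z = 10.0     # water volume inside the heater, liters (assumed)
qin = 20.0   # inbound water temperature, C
w = 300.0    # fixed heater power, kcal/min (no control yet)
dt = 0.1     # time step: 0.1 minute = 6 seconds

q = 20.0     # initial water temperature, C
for step in range(600):  # simulate 60 minutes
    q_dot = -(l + f/z) * q + (l + f/z) * qin + w / z
    q += q_dot * dt      # scale the derivative by dt, as noted above
```

With a fixed power the temperature settles at qin + (w/z)/(l + f/z), roughly 69 °C with these numbers; the volume z is what produces the heating lag.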
Now, the output equation:
    y = C·x + D·u

    y = 0·q + [ 0   0   4186 ]·[ qin  ]
                               [ qtgt ]
                               [ w    ]
As we said before, the output equation is a mere translation of calories to Watts. The result "y" drives the electric heater element directly.
The remaining issue of the model just developed is that we are not controlling the heater power (w), which is the whole point of a controller. The model is fine for a fixed-power heater, or if the user turns it on and off manually, but obviously we prefer an automatic control.
Let's appeal to the traditional proportional (P) feedback control for a simple and illustrative solution. The feedback can be implemented by recalculating the "u" matrix at every iteration of the state, according to the following equation:
    u = r − K·x

    u: control matrix (with all control variables)
    r: configuration matrix with ideal/desired values
    K: feedback control matrix
The matrix "r" can change over time, but it does not change in response to the state; it changes according to the user's desires, when she sets the thermostat temperature, or when the environment temperature changes.
To model a functioning heater, we need to define the "r" and "K" matrixes, which we do below.
    u = r − K·x

    u = [ qin    ]   [ 0 ]
        [ qtgt   ] − [ 0 ]·q
        [ qtgt·P ]   [ P ]

    P: proportional controller gain

    That is,

    u = [ qin           ]
        [ qtgt          ]
        [ P·(qtgt − q)  ]
At every state iteration, we recalculate the heater power, which becomes proportional to the difference between the current water temperature (q) and the desired temperature (q_{tgt}). It lacks some finishing touches, like limiting the power to positive values (since our heater cannot refrigerate), but the form above is enough for our purposes.
It is noteworthy that we had to establish an "artificial" value for the third row of the matrix "r" for the numbers to add up. This is a recurrent characteristic of state-space; from time to time we need to manipulate things to make them fit in the form.
This P controller has the same shortcomings that we already studied in PID control. There is one improvement: this controller works even when the flow rate is zero; the heater will warm up the water contained in it, and stop when it reaches the desired temperature.
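A closed-loop sketch of this P controller, with the power clamped to non-negative values as suggested above (all numeric values are illustrative assumptions):

```python
# P-controlled heater: u is recalculated at every state iteration.
l, f, z = 0.01, 6.0, 10.0   # loss, flow (L/min), volume (L) - assumed
qin, qtgt = 20.0, 40.0      # inbound and target temperatures, C
P = 5.0                     # proportional gain (assumed)
dt = 0.1                    # minutes per step

q = 20.0
for step in range(1200):    # two simulated hours
    w = max(0.0, P * (qtgt - q))   # third row of u; clamp: no refrigeration
    q_dot = -(l + f/z) * q + (l + f/z) * qin + w / z
    q += q_dot * dt
```

With a nonzero flow rate, q settles well below the 40 °C target (around 29 °C here): the steady-state droop that is a known shortcoming of pure proportional control.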
The following manipulation of formulas is very important:
    x' = A·x + B·u
    u = r − K·x

    x' = A·x + B·(r − K·x)
    x' = A·x + B·r − B·K·x
    x' = (A − B·K)·x + B·r
The final form omits the control matrix "u". This way, we have determined the system dynamics based solely on the configuration matrix "r". This equation is very useful to determine whether the feedback-controlled system is stable. In a real implementation, we would still need to calculate the matrix "u", since it controls the heater power.
Now we can use the equations directly in a simulator!

[Interactive simulation in the original page: water flow (L/min), cold water at 20.0 °C, target temperature of 40.0 °C, adjustable P and I gains, with live output temperature (°C) and power (W) readouts.]
The weights, or gains, P and I can be manipulated. We haven't discussed the PI control implementation yet; it will be approached later. In order to experiment with PI control, just give I a non-zero gain.
The source code can be found in the HTML page. We have employed the Sylvester library to do matrix arithmetic.
Systems described by the state-space equations are named linear systems, because the matrix arithmetic imposes a linear relationship between controls, states and outputs.
Every real-world machine and device is nonlinear. Even a water heater is not linear (the definition of the calorie is strictly true only for water at 19.5 °C). But the deviation is small enough to be ignored. In other systems, things are not so simple.
In general, it is desirable to use linearization techniques, so the original system is transformed to fit in the linear model and be controlled as if it were linear.
We can tell a lot about the system just by looking at the matrixes A, B, C, D directly, without the need to appeal to the Laplace transform.
This is one advantage of adopting the "canonical" form for the state equation. Whole libraries have been written about this subject, and reusing the established analysis techniques is much easier when everybody uses the same form.
A system is considered stable when all eigenvalues of matrix A have negative real parts. (Eigenvalues of a matrix are somewhat like determinants; likewise, they exist only for square matrixes. A 3×3 matrix has 3 eigenvalues, while the determinant is always unique. The eigenvalue and the determinant of a 1×1 matrix are both equal to its only element.)
In the heater case, matrix A is 1×1 and equal to −(l+f/z). Since l, f, and z are always positive, the eigenvalue is always negative and the system is stable.
It is not difficult to see why this is true. Consider the differential equation
x' = Ax
the derivative of "x" can only tend to zero — and "x" can only tend to a finite value — if A is negative.
We have forgotten a detail: our heater is feedback-controlled! So it does not make sense to analyse matrix A in isolation, since, in the presence of feedback,
    x' = (A − B·K)·x + B·r
We need to verify that the eigenvalues of A − B·K are negative as well.
    A − B·K = −(l + f/z) − [ l + f/z   0   1/z ]·[ 0 ]
                                                 [ 0 ]
                                                 [ P ]

            = −(l + f/z) − P/z
Fortunately, there are no stability problems caused by the feedback, because the P gain is always positive as well. Feedback is even capable of stabilizing a naturally unstable system, since the composition A − B·K takes the place of matrix A.
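The same verification can be done numerically. The sketch below assumes illustrative values for l, f, z and P, and checks that every eigenvalue of A − B·K has a negative real part:

```python
import numpy as np

l, f, z, P = 0.01, 6.0, 10.0, 5.0       # illustrative values
A = np.array([[-(l + f/z)]])            # 1x1 state matrix
B = np.array([[l + f/z, 0.0, 1.0/z]])   # 1x3
K = np.array([[0.0], [0.0], [P]])       # 3x1 feedback matrix
closed = A - B @ K                      # closed-loop dynamics matrix
stable = all(ev.real < 0 for ev in np.linalg.eigvals(closed))
```

For the heater, closed is the 1×1 matrix [−(l + f/z) − P/z], so stable is True for any positive gain.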
A given system is considered controllable when it is possible to drive the state(s) to any value(s), in finite time, given a suitable set of control values (matrix u). Our heater is controllable, since it is enough to manipulate the cold water temperature and the heater power to reach any output temperature. Controllability can be easily tested with mathematical tools on matrixes A and B.
A system with more states than controls might be controllable; it depends on the system. For example, a locomotive on a railway has at least two relevant states: position and speed. But only one control: acceleration. It is possible to reach any position in railway, at any desired speed, controlling only the acceleration (as any engineer knows). On the other hand, a plane cannot be controlled by the throttle alone (even though people have tried that).
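The standard numerical test builds the controllability matrix [B, AB, A²B, …] and checks that its rank equals the number of states. A sketch for the locomotive example (position and speed as states, acceleration as the single control):

```python
import numpy as np

# Locomotive: position' = speed, speed' = acceleration (the control)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Controllability matrix for a 2-state system: [B, A@B]
ctrb = np.hstack([B, A @ B])
controllable = np.linalg.matrix_rank(ctrb) == A.shape[0]
```

Here ctrb has rank 2, confirming that acceleration alone can drive both position and speed.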
A system is observable if we can determine the current system state by looking only at the outputs and the inputs/controls. Our heater is observable as long as we know the flow rate, using some sort of flow meter. Conversely, if we measure the output temperature, we can calculate the flow (and a temperature sensor is cheaper).
At this point, you may be wondering: controlling the heater using PID feedback was so much easier! It was enough to measure the output temperature. Here, we have to know the flow rate, the inbound temperature, etc., which seems to lead to a more expensive controller.
(At least, the heater internal volume and the heat loss are fixed values, easy to determine empirically; using these values won't make the controller more expensive at all.)
The good news is that we don't really need to implement the full equations in the realworld heater. The state (x) is known all the time, because we always have a temperature sensor (it would be unsafe not to have it). We only need the equations to verify our controller with mathematical rigour.
In the real thing, all it takes is to implement the feedback and output equations, and they can be "downsized" further. (The feedback equation is still necessary since it determines the heater power.)
    u = [ qin    ]   [ 0 ]
        [ qtgt   ] − [ 0 ]·q          →   u = P·(qtgt − q)
        [ qtgt·P ]   [ P ]

    y = 0·q + [ 0   0   4186 ]·[ qin  ]   →   y = 4186·u
                               [ qtgt ]
                               [ w    ]
As we can see, the system was reduced to a P controller, that is solely based on the temperature chosen by the user (q_{tgt}) and the output temperature (q). The output equation is nothing more than a unit conversion.
Of course, this is the cheapest way to implement the system, not necessarily the best one. We could have kept all equations, and compare the estimated state with the actual temperature sensor — and do something useful with the difference. We will get back to this possibility at the end of the text.
In order to implement a PI (proportional-integral) controller, which does not exhibit the "droop" at higher flow rates, we need to fit the error-integral term into our equations.
The best approach is to make this error integral part of the system state.
    x' = A·x + B·u

    [ q' ]   [ -(l + f/z)   0 ] [ q ]   [ l + f/z   0   1/z ] [ qin  ]
    [ e' ] = [ -1           0 ] [ e ] + [ 0         1   0   ] [ qtgt ]
                                                              [ w    ]

    e: error integral = accumulated sum of (qtgt − q) samples
In the equation above, we can see that the error variation is the instantaneous difference between q_{tgt} and q, so the error state itself is cumulative, as we wanted it to be.
The output equation of our heater does not change, since the output power is a function of the control (u), which has not changed at all. On the other hand, the feedback equations must change, since we need to put the error integral to good use:
    u = r − K·x

    u = [ qin    ]   [ 0    0 ] [ q ]
        [ qtgt   ] − [ 0    0 ] [ e ]
        [ qtgt·P ]   [ P   −I ]

    I: integral gain of the PI controller

    u = [ qin                 ]
        [ qtgt                ]
        [ P·(qtgt − q) + I·e  ]
The PI system is almost as simple as the P system, and we can even "downsize" it in the same fashion as we did before, if we want it to be as cheap as possible.
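The downsized PI loop is only a couple of lines longer than the P version. The numeric values are again illustrative; note that the error integral, like the state, is accumulated in steps scaled by dt:

```python
# Downsized PI-controlled heater (illustrative values).
l, f, z = 0.01, 6.0, 10.0
qin, qtgt = 20.0, 40.0
P, I = 5.0, 1.0            # proportional and integral gains (assumed)
dt = 0.1

q, e = 20.0, 0.0           # temperature and error integral
for step in range(3000):   # five simulated hours
    w = max(0.0, P * (qtgt - q) + I * e)   # third row of u
    q_dot = -(l + f/z) * q + (l + f/z) * qin + w / z
    e += (qtgt - q) * dt   # e' = qtgt - q, integrated
    q += q_dot * dt
```

Unlike the P version, the temperature now converges to the target itself, with no droop.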
The matrixes can express several parallel, independent and/or interdependent equations. More complex systems, described by multiple differential equations, still fit in the canonical form. Just make "x" a multi-row matrix, and adjust the other matrixes accordingly.
The canonical form uses a first-order differential equation, meaning a simple derivative. If the system involves second derivatives, third derivatives, etc., it is necessary to tweak the variables to make them fit the form. Either we use substitute variables, or we add the intermediate derivatives to the state, making a chain.
For example, consider a system where we just want to control the position of an object using a force (i.e. by accelerating the object). Acceleration is the second derivative of position, so it is not possible to express the position as a linear function of acceleration (the relationship between them is quadratic).
The solution is to include the speed as part of the state, because speed is a linear function of acceleration, and position is also a linear function of speed:
    x' = A·x + B·u

    [ speed        ]   [ 0   1 ] [ position ]   [ 0      ]
    [ acceleration ] = [ 0   0 ] [ speed    ] + [ 1/mass ]·force

    acceleration = force / mass

    Updating x based on x':
    position = position + speed·dt
    speed = speed + acceleration·dt
With speed as part of the state, we found a linear expression that relates position with acceleration, and we can treat the system as linear.
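A minimal simulation of this position/speed chain, with arbitrary mass and force values, following the update order given above:

```python
# Double integrator: state = [position, speed], control = force
mass = 2.0      # kg (assumed)
force = 1.0     # N, held constant (assumed)
dt = 0.01       # seconds per step

position, speed = 0.0, 0.0
for step in range(100):          # one simulated second
    acceleration = force / mass
    position += speed * dt       # position' = speed
    speed += acceleration * dt   # speed' = acceleration
```

After one second the speed is exactly a·t = 0.5 m/s, and the position approximates the exact ½·a·t² = 0.25 m (the small Euler error shrinks with dt).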
As we said before, we don't need to abolish the state-space equations in our final implementation of a heater. We can use them in conjunction with a temperature sensor to make a better system.
The state calculated (or estimated) by the equations is noisefree, but it is just an estimation that can and will deviate from the real state. It is like a boat sailing in fog, "dead reckoning" its position based on speed and heading. The calculated position is "exact" but it deviates more and more from the real position as time goes by.
On the other hand, the state reported by a sensor is the "real thing", but it is polluted by measurement noise and other limitations. For example, GPS includes a random error by design, and there is a lag of 10 minutes after a cold startup (cell phones don't have this lag thanks to AGPS, which gets part of the GPS data from the cell towers).
In a good system, we can combine both states (estimated and measured) in order to obtain a higherquality state estimation: more exact, with less noise, and minimum lag. Boat or aircraft navigation systems are like that: they combine dead reckoning with GPS to supply precise and steady information to the pilot.
To combine two or more state values, we could employ some sort of average, like moving average, but simple averages can't squeeze the best characteristics of each piece of data (the noiseless nature of estimation and the precision of a sensor). The tool for the job is the Kalman filter.
At the lowest level, a Kalman filter is a moving average, but the gain of this average is dynamically controlled, in order to increase the "quality" of the state. The basic formula is:
    x[t+dt] = K[t]·z[t] + (1 − K[t])·x[t]

    x[t+dt]: new state estimation
    K[t]:    Kalman gain (between 0 and 1) at moment t
    z[t]:    measured state (e.g. from a sensor) at moment t
    x[t]:    last state estimation
The Kalman filter is naturally discrete, and its recursive nature uses very little memory in a computer implementation.
To calculate the Kalman gain at every iteration, with some simplification:
    Pm = Ad·P·Adᵀ + Q
    K  = Pm / (Pm + R)
    (calculate the new state estimation at this point!)
    P  = (I − K)·Pm

    P:   system's estimated covariance, at this moment
    Q:   covariance of the estimation
    R:   covariance of the measurement (e.g. the sensor's)
    K:   Kalman gain
    Ad:  matrix A from the state equation, in integral form
    Adᵀ: transpose of Ad
    I:   identity matrix
The formulas are intimidating but let's illustrate them with a good example, using again our Pcontrolled heater as guinea pig. Since the heater's state is a 1x1 matrix, all Kalman matrixes are 1x1 or simple numbers.
Let's estimate some characteristics of our heater, so we can test the filter:
    Standard deviation of the estimation: 0.4 °C
    Covariance = standard deviation squared
    Q = 0.4 × 0.4 = 0.16

    Standard deviation of the temperature sensor: 2 °C
    R = 2 × 2 = 4

    Water flow rate: zero
    Thermal loss (l): 1% per minute per degree C
    A = −(l + f/z) = −0.01

    Sampling interval (dt): 6 seconds, or 0.1 minute
    Ad ≈ 1 + A·dt = 1 − (0.01)·(0.1) = 0.999
The covariance of the estimation and of the sensor mean very different things. The sensor's covariance is indeed a measure of noise due to shortcomings of the sensor. On the other hand, estimate's covariance measures how much it can move. The movement renders it "less trustworthy", not because the math could be wrong, but rather because the model is not perfect.
We had to estimate the flow and the heat loss since, in our heater model, these values show up in matrix A, which is used (in integral form) in the Kalman filter. To calculate the integral version of A, we have employed the 1 + A·dt approximation instead of the exponential, which is acceptable when dt is relatively small (that is, when the sampling rate is relatively high).
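For the heater's numbers, the first-order approximation is easy to compare with the exact exponential:

```python
import math

A = -0.01    # heater's 1x1 matrix A with zero flow
dt = 0.1     # sampling interval, minutes

Ad_exact = math.exp(A * dt)   # exact discretization of a 1x1 system
Ad_approx = 1 + A * dt        # first-order approximation used above
# The two values differ by about 5e-7, so 0.999 is a fine Ad.
```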
Some other initial variables:
    Real temperature of the water: 40 °C
    Estimated temperature: 40.5 °C
    Initial P: 0.5
Also, we will ignore the feedback control dynamics, even though it would react as the Kalman filter updates the system state.
First round:
    Pm = Ad·P·Adᵀ + Q
    Pm = 0.999 × 0.5 × 0.999 + 0.16
    Pm = 0.659

    K = Pm / (Pm + R)
    K = 0.659 / (0.659 + 4)
    K = 0.1414

    z = 42.5047 (read from sensor)

    x = K·z + (1 − K)·x
    x = 0.1414 × 42.5047 + (1 − 0.1414) × 40.5
    x = 40.7834 (new state estimation)

    P = (I − K)·Pm
    P = (1 − 0.1414) × 0.659
    P = 0.5658
Second round:
    Pm = Ad·P·Adᵀ + Q
    Pm = 0.999 × 0.5658 × 0.999 + 0.16
    Pm = 0.7246

    K = Pm / (Pm + R)
    K = 0.7246 / (0.7246 + 4)
    K = 0.15336

    z = 39.7925 (read from sensor)

    x = K·z + (1 − K)·x
    x = 0.15336 × 39.7925 + (1 − 0.15336) × 40.7834
    x = 40.6314 (new state estimation)

    P = (I − K)·Pm
    P = (1 − 0.15336) × 0.7246
    P = 0.6134
Third round:
    Pm = Ad·P·Adᵀ + Q
    Pm = 0.999 × 0.6134 × 0.999 + 0.16
    Pm = 0.772

    K = Pm / (Pm + R)
    K = 0.772 / (0.772 + 4)
    K = 0.16177

    z = 38.8512 (read from sensor)

    x = K·z + (1 − K)·x
    x = 0.16177 × 38.8512 + (1 − 0.16177) × 40.6314
    x = 40.3434 (new state estimation)

    P = (I − K)·Pm
    P = (1 − 0.16177) × 0.772
    P = 0.647
Since the sensor is noisy, the first round produced an estimation even worse than the original, but the third round already brings a state with better quality.
More rounds, simulated with a Python script:
t=0, Pm=0.659001, K=0.141447, z=42.504700, x=40.783558, P=0.565787 t=1, Pm=0.724656, K=0.153378, z=39.792500, x=40.631552, P=0.613510 t=2, Pm=0.772284, K=0.161827, z=38.851200, x=40.343443, P=0.647307 t=3, Pm=0.806013, K=0.167709, z=38.981818, x=40.115086, P=0.670837 t=4, Pm=0.829496, K=0.171756, z=39.364760, x=39.986213, P=0.687025 t=5, Pm=0.845652, K=0.174518, z=38.033263, x=39.645389, P=0.698071 t=6, Pm=0.856675, K=0.176391, z=39.438286, x=39.608857, P=0.705565 t=7, Pm=0.864155, K=0.177658, z=42.294169, x=40.085924, P=0.710631 t=8, Pm=0.869210, K=0.178512, z=40.801745, x=40.213706, P=0.714046 t=9, Pm=0.872619, K=0.179086, z=39.490366, x=40.084166, P=0.716345
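A sketch equivalent to that script, reproducing the table above. The sensor readings z are simply copied from the runs (in the real script they were presumably generated with the sensor's 2 °C noise around the true temperature):

```python
# Scalar Kalman filter, replaying the sensor readings from the table.
z_readings = [42.504700, 39.792500, 38.851200, 38.981818, 39.364760,
              38.033263, 39.438286, 42.294169, 40.801745, 39.490366]
Ad, Q, R = 0.999, 0.16, 4.0
x, P = 40.5, 0.5                   # initial estimate and covariance

for t, z in enumerate(z_readings):
    Pm = Ad * P * Ad + Q           # predicted covariance
    K = Pm / (Pm + R)              # Kalman gain
    x = K * z + (1 - K) * x        # blend measurement and estimate
    P = (1 - K) * Pm               # updated covariance
    print(f"t={t}, Pm={Pm:.6f}, K={K:.6f}, z={z:.6f}, x={x:.6f}, P={P:.6f}")
```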
Given the simplicity of this system, the Kalman gain tends quickly to its final value (around 0.18), and the filter behaves identically to a moving average with weight 0.18. But we have been spared from calculating this weight ourselves.
(Some simple systems actually use fixedgain Kalman filters, the gain being determined offline by some sort of simulation like we did above; this is actually a good alternative when the microcontroller that implements the software is extremely limited.)
A major advantage, not shown in the example above, is that the Kalman filter continues to work well even in the face of a highly dynamic system, where all variables (including the noise levels) change as time goes by. This is the kind of feature that puts linear control in a higher league than "analog" PID controllers.
The Kalman filter is the "bread and butter" of every GPS receiver, whose readings are very noisy. The filter allows the user to obtain a stable position, whose precision increases as time goes by. More sophisticated GPS units employ speed and heading to estimate position change, improving even more the quality of the position, even if the signal is lost at times when the receiver passes through tunnels or "urban canyons".
The defining characteristic of oscillatory systems, like mass-spring systems, pendulums, etc., is a second-order differential equation whose solution involves trigonometry. See the elementary equation below:
    v'' = −v
    v = cos(t)
The statespace equation for this cosine wave, in the absence of any control, is:
    x' = A·x

    [ v'  ]   [  0   1 ] [ v  ]
    [ v'' ] = [ −1   0 ] [ v' ]
It is almost obvious that this system is marginally stable, since it oscillates forever, with neither damping nor amplification. (In practice, a digital oscillator implemented this way would be flat-out unstable due to the accumulation of small rounding errors. This article brings up a better technique for digital sinusoid oscillators.)
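The instability of the naive digital version is easy to demonstrate: under an Euler iteration, the discrete update matrix is I + A·dt, whose eigenvalues 1 ± j·dt have magnitude √(1 + dt²) > 1, so the amplitude grows a little at every step:

```python
import math

# Naive Euler iteration of v'' = -v; the true solution keeps amplitude 1.
dt = 0.01
v, vd = 1.0, 0.0                      # v(0) = cos(0) = 1, v'(0) = 0
for step in range(int(2 * math.pi / dt) * 10):   # about ten full cycles
    v, vd = v + vd * dt, vd - v * dt  # simultaneous update of v and v'
amplitude = math.hypot(v, vd)         # would stay 1.0 for the exact cosine
# After ten cycles the amplitude has grown to roughly 1.37.
```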
It is worth verifying the eigenvalues of matrix A for this system, which we already know to be marginally stable. There is a formula to calculate eigenvalues that is useful when the matrix is small:
    determinant(A − λI) = 0

    λ: an eigenvalue of matrix A
    I: identity matrix
Let's calculate A − λI and find the determinant equation:
    [  0   1 ]   [ λ   0 ]   [ −λ    1 ]
    [ −1   0 ] − [ 0   λ ] = [ −1   −λ ]

    determinant: λ² + 1
    eigenvalues satisfy the quadratic equation: λ² + 1 = 0
    solutions: λ = +j, −j
The eigenvalue equation happens to be quadratic, and its solutions are the imaginary numbers λ = −j, +j. The "signature" of an oscillatory system is the presence of complex-conjugate eigenvalues, which is analogous to the complex poles of the Laplace transform. Purely imaginary eigenvalues or poles mean a marginally stable system.
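The hand calculation can be confirmed with a numerical eigenvalue routine:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
eig = np.linalg.eigvals(A)   # expect the conjugate pair +j, -j
# Zero real parts: no damping, no amplification - marginal stability.
```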