I have written a review of the book "Feedback Control" by Philipp K. Janert. It is quite an inspiring book; I keep rereading pages of it all the time. The book and its subject deserve more than a simple review, and this article tries to fill that gap.

Feedback is a powerful control mechanism. For starters, our own body relies on any number of feedback loops. We are hungry when blood glucose falls, and feel satisfied when food reaches the bloodstream and raises glucose again. We feel thirst when some organic sensor detects salinity above normal, and our kidneys work harder when blood salinity falls below normal.

And it goes on and on. God and/or Darwin like feedback and were employing it well before engineers did. The biggest merit of feedback is that it functions even when the system under control is not perfectly known. The hunger mechanism works almost as well for a baby as for a giant adult.

Since biology is not my field, a better illustration is a very simple mechanical system: a water tank with a float that controls a valve. Regardless of valve flow rate, water consumption, tank size, or the capacity of the supply pipes, the mechanism generally works well.

It is difficult to think of a better control system. Perhaps the float could open the valve proportionally to water consumption, open the valve only at certain hours, etc. But such improvements don't pay off.

Still related to water, we have a more interesting device to analyze, in terms of control: the gas-powered water heater.

The device at Figure 1 is the one I have at home. It is a midrange model; its biggest shortcoming is that it does not work during electricity blackouts (ironic!). More advanced models have an internal rechargeable battery to handle this case.

Old-time gas heaters and electric showers delegate the feedback to the user: by controlling the water flow, she controls the temperature. The heating capacity of those devices is fixed; the only automatic action is turning on and off in response to water flow.

The heater in Figure 1 allows the user to choose the exact desired temperature. It is a very nice feature: you set it to the maximum temperature you need (e.g. 105F or 38C), and there is no need to mix in cold water at the tap. There is no risk of a child getting burnt by opening the hot tap.

The heater is not that precise, though. The peculiarity is: in summer, setting the thermostat to 38C works as expected. In winter, it is necessary to raise the thermostat to 39, 40, or even 41C, otherwise the water comes out a bit cold. Why is that?

There are two basic approaches to implement a controller for a heater:

One approach, with no feedback, is to measure the inbound water temperature and the flow rate, and burn exactly enough gas to warm the water to the desired level. The gas's caloric power must be known beforehand, and the gas flow rate must be perfectly controlled, which implies that the gas line pressure must be known precisely, or measured. The bill of materials already has four or five sensors, plus a precision gas valve. An outbound temperature sensor is still necessary to guarantee that the water is not too hot due to some failure.

The other approach, with feedback, is to simply measure the outbound temperature. If it is too hot, throttle the gas valve. If too cold, open the valve a bit more. Adjust until it stabilizes. We only need one sensor (two, counting a safety net against excessive temperature) and a simpler variable-flow gas valve.

Certainly the second approach is cheaper, and it will work almost as well as the first. Perhaps better, if something goes wrong (e.g. abnormal gas line pressure).

In order to implement feedback control, we need to define two crucial variables: the **error**
and the **error response.**

In a heater, the error is the **difference** between outbound temperature and the value
chosen by the user. For example, if water leaves the heater at 36C and the user wants 38C,
the error is +2. Since the heater can't refrigerate, it is more practical to define "water too cold"
as a positive-signed error.

The error response is the gas flow to the burner. The bigger the error, the bigger the gas flow. When the error is negative (e.g. error -1 for outbound water at 39C) the gas valve is throttled or even closed.

Most probably, my heater's control is based on the P controller model.

P = weight, or gain, of the proportional response (constant)

response = error x P
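As a sketch in Python (names are hypothetical; the 60 kBTU/h-per-degree gain and the 200 kBTU/h ceiling mirror the simulator's stated numbers), the entire logic of a P controller fits in a few lines:

```python
P_GAIN = 60.0    # kBTU/h of gas per degree C of error (simulator's unitary P)
MAX_POWER = 200.0  # kBTU/h, the simulator's stated maximum burner power

def p_response(target_c, measured_c, gain=P_GAIN):
    """Proportional response: gas flow in kBTU/h.
    'Water too cold' is defined as a positive-signed error."""
    error = target_c - measured_c
    # The heater cannot refrigerate, so never command a negative gas flow;
    # also clamp at the burner's maximum power.
    return min(MAX_POWER, max(0.0, gain * error))
```

For the example in the text, `p_response(38, 36)` yields an error of +2 and a response of 120 kBTU/h, while an error of -1 closes the valve entirely.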

In the box below, there is a heater simulator, where you can control the water flow and inbound temperature.

[Interactive simulator: controls for water flow (L/min), cold water temperature (20.0°C) and target temperature (38.0°C); displays for output temperature (°C) and gas burnt (kBTU/h).]

The temperature graph shows only values between 35 and 41 degrees (3 degrees above or below the target). Values outside this range are very uncomfortable for the user, and the controller should never allow them to occur.

The simulator above was configured to work well for a water flow between 5 and 15 liters per minute. The P gain is unitary; that is, the response is to burn 1 kBTU of gas per minute (60 kBTU/h) for every degree of error.

By the way, the BTU is a unit of energy still popular for gas, thermal systems and HVAC. BTU/h and kBTU/h have the dimension of power. Burning 60 kBTU/h is equivalent to about 18 kW. The simulator is limited to a maximum power of 200 kBTU/h.
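The conversion is easy to double-check (a Python sketch; the 1055 J constant is the standard definition of the BTU):

```python
BTU_IN_JOULES = 1055.06  # 1 BTU is about 1055 joules

def kbtu_per_hour_to_kw(kbtu_h):
    """Convert a power expressed in kBTU/h to kilowatts."""
    joules_per_hour = kbtu_h * 1000 * BTU_IN_JOULES
    return joules_per_hour / 3600 / 1000  # J/h -> W -> kW

# 60 kBTU/h comes out near 17.6 kW, i.e. roughly 18 kW.
```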

If we play hard enough with the simulator, we will find some problems:

- It oscillates too much when the water flow is low (less than 3 L/min);
- It settles on temperatures lower than 38C; the higher the water flow rate (especially above 15 L/min), the lower the temperature.

My real-world water heater handles the first problem: it simply does not turn on when the water flow rate is too low. But it does have the second problem, and I assume it uses a P controller because of that.

This second problem, called "droop", is inherent to this type of controller. We could increase the gain P, which amplifies the error response. This would improve performance at higher flow rates, but would increase the oscillation at smaller flows.

The P weight must be well-calibrated for the "typical usage scenario", so that the droop is negligible in normal use while no accidents happen in atypical scenarios.

We need to talk about the simulation implementation. You can take a look at the page's source code; I will only mention some highlights.

Calculating the **expected** water temperature increase is easy; just divide power by
flow rate, converting BTUs to calories, etc. But the water inside a heater does not heat
up instantly. The real temperature changes smoothly, like a moving average.

This is certainly the weakest point of my simulator, since I threw an arbitrary factor into the moving average algorithm without much consideration. But yes, this is a lot more faithful to the real thing than no averaging at all.
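In Python terms, that kind of smoothing could look like the following (a sketch; the alpha factor here is as arbitrary as the simulator's):

```python
def smooth(previous_temp, instant_temp, alpha=0.1):
    """Exponential moving average: the measured temperature drifts
    toward the instantaneous 'expected' temperature instead of
    jumping to it, mimicking the thermal inertia of the water."""
    return previous_temp + alpha * (instant_temp - previous_temp)
```

Starting at 20°C with an expected temperature of 40°C, each call moves only 10% of the remaining distance, so the reading converges gradually.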

The temperature moving average is the cause of oscillations, because the response is based on a past error. Any real-world heater needs to deal with this problem as well.

Another feature of the simulation is the valve opening and closing. A heater cannot release or throttle gas abruptly, otherwise it could flame out. Moreover, no real valve operates instantly. In our simulator, gas flow variation is limited to 25 kBTU/h per second. This delay is also a source of oscillations, since the controller's response takes a while to be effective and the error takes even more time to decrease.
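A slew-rate limit like the simulator's 25 kBTU/h-per-second cap can be sketched as follows (hypothetical function names):

```python
MAX_SLEW = 25.0  # maximum change in gas flow, in kBTU/h per second

def limited_valve(current_flow, requested_flow, dt):
    """Move the gas flow toward the requested value, but never faster
    than MAX_SLEW per second, like a real valve that cannot open or
    close instantly (and would flame out if gas changed abruptly)."""
    max_step = MAX_SLEW * dt
    delta = requested_flow - current_flow
    if delta > max_step:
        delta = max_step
    elif delta < -max_step:
        delta = -max_step
    return current_flow + delta
```

So a controller asking for a jump from 0 to 100 kBTU/h gets only 25 kBTU/h after the first second; the rest of the response arrives late, which is exactly the source of oscillation described above.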

In theory, a P-type system should not oscillate, but the simulator above shows that it does oscillate in the face of real-world delays and under extreme conditions (e.g. a very small water flow rate).

Last but not least, the water tank system is kind of a "P" controller. The biggest difference is the lack of proportionality in the response: the valve is either on or off.

Now, we will solve the problem in my heater. I present you the PI-controlled heater, play with it as you like:

[Interactive simulator: controls for water flow (L/min), cold water temperature (20.0°C) and target temperature (38.0°C); displays for output temperature (°C) and gas burnt (kBTU/h).]

This heater also oscillates when the flow rate is too low, but it can deliver warm water at exactly 38C for any flow rate between 3 L/min and 46 L/min. (Higher flow rates overwhelm the heater's capacity of 200 kBTU/h.)

In a PI controller, the response is:

P = weight or gain of proportionality (constant)
I = weight or gain of integration (constant)
Σerror = error sum, or integral

response = error x P + Σerror x I

In digital systems we need to use a sum instead of an integral, since the error is calculated at discrete intervals (5 times per second in our simulator).
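A minimal discrete PI controller can be sketched like this (Python, hypothetical names; the 0.2 s interval matches the simulator's 5 samples per second):

```python
class PIController:
    """Discrete PI controller: the integral becomes a running sum of
    errors, each scaled by the sampling interval."""

    def __init__(self, p, i, dt=0.2):
        self.p, self.i, self.dt = p, i, dt
        self.error_sum = 0.0

    def step(self, target, measured):
        error = target - measured
        self.error_sum += error * self.dt  # discrete integral
        return self.p * error + self.i * self.error_sum
```

Note that for the same persistent error, the response keeps growing sample after sample; this is what eventually eliminates the droop.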

The PI controller can eliminate the "droop" since, even when current error is zero, it "remembers" persistent errors in the past. The integral tends to accumulate until the error vanishes. If it accumulates too much, the error becomes negative, decreasing the integral and adjusting back the response.

We mentioned before that the delay in water heating in response to gas burning gives rise to oscillations. The PI controller can and will oscillate as well, due to the integral factor. Empirical tests and/or rigorous analysis must be employed to find good values for the weights P and I.

In the case of our PI heater simulator, the factors are P=0.5 and I=0.1, chosen by trial and error. Generally, the P factor should carry most of the control burden. Even though the I part works almost miraculously, it also brings violent oscillations when it dominates over P.

The PI controller is employed in some 80% of cases, since it works well enough. It is difficult to justify the additional complexity of the PID controller (which we will see soon).

The PI controller has a big potential problem: locking up or "windup", caused by excessive accumulation in the integral. The symptom is a response completely inadequate to the present error. It only recovers after enough samples of the new error are accumulated. And this takes too long.

This problem can happen when:

- A very big error is found during a short time, e.g. when the heater is started up cold.
- A small but persistent error is accumulated over a long time span, e.g. when the heater capacity is exceeded, either because flow rate is too high or the inbound water temperature is too low.

A water tank with a badly implemented PI controller would overflow because the water supply was interrupted the day before. This is obviously unacceptable.

In our simulator, this problem can be provoked by increasing flow rate to more than 50L/min, leaving it alone for a minute or two, and then reducing it to 10L/min or less. The water will be too hot for a while, and too cold sometime later (due to a second, smaller windup).

The implementation solution that avoids the windup is to limit the maximum absolute value of the integral. The maximum value should be enough to eliminate the droop in typical usage scenarios.

An additional protection would be to not integrate errors that are too big, and even to make I=0 in abnormal situations ("gain scheduling": changing weights in certain scenarios). Making I=0 is the same as downgrading the controller from PI to P.
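Both protections can be sketched in a few lines (the limit and cutoff values below are hypothetical and would need tuning so the droop still vanishes in typical use):

```python
INTEGRAL_LIMIT = 50.0  # maximum absolute value of the accumulated error
ERROR_CUTOFF = 5.0     # don't integrate errors bigger than this (degrees C)

def update_integral(error_sum, error, dt):
    """Anti-windup integral update: skip abnormally large errors and
    clamp the accumulated sum to a maximum magnitude."""
    if abs(error) <= ERROR_CUTOFF:
        error_sum += error * dt
    return max(-INTEGRAL_LIMIT, min(INTEGRAL_LIMIT, error_sum))
```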

These protections were not implemented in our simulator, so you can provoke a windup, and also see the violent oscillations at "start up".

We have used the error and the error integral to calculate a response. Now, what's lacking? To use the error derivative!

P = weight or gain of proportionality (constant)
I = weight or gain of integration (constant)
D = weight or gain of differentiation (constant)
Σerror = error sum, or integral
Δerror = error difference, or derivative

response = error x P + Σerror x I + Δerror x D

The simulator below is a PID heater, with gains P=0.4, I=0.0833 and D=0.1.

[Interactive simulator: controls for water flow (L/min), cold water temperature (20.0°C) and target temperature (38.0°C); displays for output temperature (°C) and gas burnt (kBTU/h).]

Generally speaking, the PID heater is not *that* much better than the PI version. It also oscillates with low water flow (e.g. 1 L/min). However, at 3 L/min, the PID version manages to be stable while the PI version still oscillates.

The D gain works as a *speculator* that tries to guess where the error is going, and changes the response accordingly:

- If error is increasing, D increments the response, so the increase is contained. The PI section handles the persistent part of the error.
- If the error is decreasing too fast (which is good in a first moment, but the error may overshoot), D works as a brake, depressing the PI response.

In digital systems, just as we replaced the integral with a sum, we replace the derivative with the difference between the current error and the last observed one, divided by the sampling interval.
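Extending the PI sketch above, a discrete PID step looks like this (Python, hypothetical names; the gains are the ones quoted for the simulator):

```python
class PIDController:
    """Discrete PID: the integral becomes a sum, and the derivative
    becomes the difference between consecutive errors divided by
    the sampling interval."""

    def __init__(self, p, i, d, dt=0.2):
        self.p, self.i, self.d, self.dt = p, i, d, dt
        self.error_sum = 0.0
        self.last_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.error_sum += error * self.dt
        derivative = (error - self.last_error) / self.dt
        self.last_error = error
        return (self.p * error
                + self.i * self.error_sum
                + self.d * derivative)
```

With a steady error, the derivative term is non-zero only on the first sample; it reacts to *changes* in the error, not to its absolute level.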

Even though the D gain is intended to work as a brake, it can actually be a cause of wild response spikes as well. If some astronomical error shows up (e.g. due to sensor noise), the derivative/difference will be astronomical as well, causing an exaggerated (and wrong) response. PI controllers fare better in the face of noise, since the integral is not unlike a moving average.

In every practical PID implementation, the error derivative must be "smoothed out" by some kind of low-pass filter, like a moving average. This is a protection against noise and violent swings.

But then we are adding yet another variable to tweak: the time constant of this derivative filter. The time constant must not be too long, otherwise the usefulness of the D gain (making short-term predictions) is lost. Too short, and noise will pass through. The definition of a "good time constant" depends on the application.
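A filtered derivative term can be sketched as an exponential moving average over the raw difference (hypothetical names; the alpha value stands in for the time constant and is an assumption to be tuned per application):

```python
class FilteredDerivative:
    """Low-pass-filtered derivative: smooth the raw error difference
    so a single noisy sample cannot dominate the response."""

    def __init__(self, dt, alpha=0.5):
        self.dt, self.alpha = dt, alpha
        self.last_error = 0.0
        self.filtered = 0.0

    def step(self, error):
        raw = (error - self.last_error) / self.dt
        self.last_error = error
        # Move only a fraction of the way toward the raw derivative.
        self.filtered += self.alpha * (raw - self.filtered)
        return self.filtered
```

A sudden error jump of 10 degrees yields a filtered derivative of only 5 (with alpha=0.5) instead of the full 10, and the spike decays over the following samples.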

Besides that, calibrating a PID controller is complicated, much more so than a PI controller. It is employed only when really necessary, and yet a fair share of the PID controllers in operation are considered inadequate (ill-calibrated).

A "defect" of the previous simulators is the fixed gains for P, I and D. In the simulator below, you can fiddle with the weights, and you will discover by yourself that it is not trivial to find good values. In order to simulate a P or PI controller, just set I and D, or only D, to zero.

[Interactive simulator: controls for water flow (L/min), cold water temperature (20.0°C), target temperature (38.0°C) and the P, I and D gains; displays for output temperature (°C) and gas burnt (kBTU/h).]

Feedback-based systems, in which a PID controller takes part, are naturally described by differential equations, since their state depends on itself.

A simple cup of hot coffee is a system like that, since the cooling rate is proportional to coffee temperature.

The Laplace transform is a very useful tool for handling differential equations, since the transformed equation is often simpler to solve. Integrals are transformed into divisions, derivatives become multiplications, etc. Actually, the rigorous proof of many practical methods for solving differential equations depends on the Laplace transform.

In the old days, there were no computers to make simulations, and Laplace was all the engineers had to "test" a system. It still can be used to make a strong verification of a system.

Beginning with the coffee cup differential equation:

y(t) = coffee temperature (above the environment)
y0 = y(0) = initial coffee temperature (constant)
dy/dt = temperature variation
c = heat transfer constant (due to cup materials, etc.)

dy/dt = -y(t).c
dy/dt + y(t).c = 0

Laplace transform:

[ s.Y(s) - y0 ] + c.Y(s) = 0

Manipulating the transformed equation:

(s + c).Y(s) = y0
Y(s) = y0 / (s + c)

Inverse Laplace transform:

y(t) = y0.exp(-c.t)

We have solved the differential equation using nothing more than basic algebra. Even the initial condition (y0), that is often difficult to fit in the differential equation, showed up "magically" when the derivative was Laplace'd.
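We can double-check the analytic solution numerically, by integrating the original differential equation with small Euler steps (a sketch with hypothetical names and arbitrary constants):

```python
import math

def euler_cooling(y0, c, t_end, steps):
    """Integrate dy/dt = -c*y with the forward Euler method."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += dt * (-c * y)  # the cooling rate is proportional to y
    return y

# The Laplace-derived solution y(t) = y0*exp(-c*t) should match.
y0, c, t = 60.0, 0.1, 10.0
analytic = y0 * math.exp(-c * t)
numeric = euler_cooling(y0, c, t, 100000)
```

With a fine enough step, the two results agree to several decimal places, which is reassuring for both the algebra and the simulation approach.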

Note that the Laplace-transformed version has a "pole", that is, a point where it tends to infinity, for a certain negative value of "s":

Y(s) = y0 / (s + c)

if s = -c, Y(s) = y0 / (-c + c) = y0 / 0 = infinite

This pole became the negative exponential in the final result. This means that y(t) tends to zero, and the coffee tends to the environment temperature. If the pole were positive, the inverse transform would reveal a positive exponential.

The response of a feedback control depends on its own output. This can be modeled in the following form:

C = controller
R = device (e.g. heater)
y = output (outbound water temperature)
r = set temperature
e = error = r - y

y = R(C(e))
y = R(C(r - y))

Assuming that both controller and device are linear, that is, they only multiply the error by some value (even if this can change over time), the equation can be rewritten as:

y = R.C.r - R.C.y
y.(1 + R.C) = R.C.r
y = r . [ R.C / (1 + R.C) ]

This result already says a lot about how a feedback-based system works. For example, if the controller C is of type P, C is nothing more than a positive constant (the P weight), and the formula above can only tend to "r", never reaching it.

This is the theoretical explanation of why the P controller has the "droop". At least this controller is inherently stable for every weight.

In Laplace terms, the transfer function from r to y (that is, the calculation that happens between the user-set temperature and the warm water leaving the heater) is

T(s) = R(s).C(s) / (1 + R(s).C(s))

The transfer function of the heater R(s), from controller output to warm water, is very simple; it only depends on the water flow:

v = water flow rate

R(s) = 1/v

Let's see now how a PI controller works, and how it fits in the T(s) system:

PI controller in Laplace:

C(s) = P + I/s

T(s) = R(s).[P + I/s] / [ 1 + R(s).(P + I/s) ]

Removing R(s), since it is constant while the water flow is constant:

T(s) = [ P + I/s ] / [ 1 + P + I/s ]
T(s) = [ Ps + I ] / [ s + Ps + I ]

This system can emit responses above 1; that is, it can have positive feedback even when the current error is null. This is possible when "s" is negative (the "s" variable can take any complex value in the Laplace domain). In the P-controlled system, "s" did not show up in the denominator.

It is also possible to verify the stability of the system, by finding the poles of the above equation, that is, the points where the T(s) denominator tends to zero (and T(s) tends to infinite).

The PI-controlled system above has a pole where "s" equals -I/(1+P). Since I and P are always positive, the pole is always negative. A system is stable only when all its poles have negative real parts. The imaginary part may be positive or negative; a complex pole (with non-zero imaginary part) means that the system can oscillate.

Since the pole above is always negative and purely real, the system is always stable and does not oscillate.
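The pole of the denominator s + Ps + I can be double-checked numerically (a small sketch with hypothetical helper names; solving s(1+P) = -I gives s = -I/(1+P)):

```python
def pi_denominator(s, P, I):
    """Denominator of T(s) = (Ps + I)/(s + Ps + I) for the simple PI model."""
    return s + P * s + I

def pi_pole(P, I):
    """The single root of the denominator: s(1+P) = -I, so s = -I/(1+P)."""
    return -I / (1 + P)

# With the simulator's gains P=0.5 and I=0.1, the pole is negative
# and real, so this simplified model is stable and non-oscillating.
pole = pi_pole(0.5, 0.1)
```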

A positive-valued pole would mean an unstable system, because the inverse Laplace transform would bring a positive exponential, which grows without bounds over time. Complex-valued poles are inverse-transformed into sinusoidal functions, which oscillate periodically. Such oscillations are more often tolerated than desired.

Negative poles become negative exponentials, that converge to some value over time (like in the coffee equation). This convergence is what we value most in a controller.

Now, let's improve our system model, by improving the device model R(s). The original model was too simplistic. Let's take into account the delay between combustion and water heating. This delay is very simple to express in Laplace mode:

v = water flow rate
a = response delay

delaying by "a" in Laplace = multiplying by exp(-s.a)

R(s) = 1/v . exp(-s.a) = exp(-s.a)/v

Having this more faithful R(s) in hand, we revisit the transfer function of the P-controlled heater.

R(s) = exp(-s.a)/v

T(s) = [ P . exp(-s.a) / v ] / [ 1 + P . exp(-s.a) / v ]
T(s) = [ P . exp(-s.a) ] / [ v + P . exp(-s.a) ]

P is only a constant, but "s" showed up in the equation due to the delay factor. The system behavior is no longer trivial.

Looking at the denominator, it can only be zero if "s" is complex, with imaginary part equal to an odd multiple of π/a (so that exp(-s.a) becomes a negative real number). The real part depends on "a", "P" and "v", and unfortunately it can be made **positive.** So the system is unstable and oscillates if the constants are ill-chosen.

After playing with the denominator formula, we find out that bigger values of "a" (delay) make the pole less negative, which means decreased stability. Small values of "v" have the same effect. Making the gain "P" larger than "v" is also a sure way to push the pole to the positive side.
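This "playing with the denominator" can be done in a few lines of Python (a sketch with hypothetical names; fixing the imaginary part at π/a makes the exponential a negative real number, so the real part can be solved in closed form):

```python
import cmath
import math

def denominator(s, P, v, a):
    """Denominator of the delayed P-controlled heater: v + P*exp(-s*a)."""
    return v + P * cmath.exp(-s * a)

def delayed_p_pole(P, v, a):
    """A pole of the denominator: with Im(s) = pi/a, exp(-s*a) is
    -exp(-Re(s)*a), so v - P*exp(-Re(s)*a) = 0 gives the real part."""
    sigma = -math.log(v / P) / a
    return complex(sigma, math.pi / a)
```

Plugging in numbers confirms the observations above: a small gain relative to the flow (P < v) keeps the real part negative, while P > v pushes it positive, and a larger delay "a" shrinks its magnitude either way.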

Now we revisit the PI transference function, using the augmented R(s) device:

R(s) = exp(-s.a)/v

T(s) = R(s).[P + I/s] / [ 1 + R(s).(P + I/s) ]
     = [ exp(-s.a).(P + I/s) / v ] / [ 1 + exp(-s.a).(P + I/s)/v ]
     = exp(-s.a).(P + I/s) / [ v + exp(-s.a).(P + I/s) ]

The denominator of this function is more difficult to analyse. Playing a bit with a spreadsheet or with scripts, we also find positive poles for certain constants. So we conclude that both P and PI controllers need calibration in order to be stable when the device has a response delay.

Since a high "P" weight causes stability problems, the PI controller ends up being more stable: it allows lower "P" gains, and the increased droop that a lower "P" would cause is avoided by the "I" term.

Dealing with transfer functions is way more complicated than running simulations. Normally, PID controller weights are found by simulation, by empirical tests on the real apparatus, and/or by ready-made formulas. The point here is to show that it is possible to prove mathematically why a PID controller works, and how it will behave in every situation.