When is an Automation System too Slow or too Fast?

The usual concern is whether an automation system is too slow, but there are some applications where an automation system is disruptive by being too fast. Here we look at what determines whether a system should be faster or slower, what the limiting factors are, and thus how to meet a speed-of-response objective. In the process, we will find there are a lot of misconceptions. The good news is that most of the corrections needed are within the realm of the automation engineer’s responsibility.

The more general case, with possible safety and process performance consequences, is when the final control element (e.g., control valve or variable frequency drive), transportation delay, sensor lag(s), transmitter damping, signal filtering, wireless update rate, or PID execution rate is too slow. The question is what the criteria and priorities are for increasing the speed of response.

The key to understanding the impact of slowness is to realize that the minimum peak error and the minimum integrated absolute error are proportional to the deadtime and the deadtime squared, respectively. The exception is deadtime-dominant loops, which basically have a peak error equal to the open-loop error (the error if the PID is in manual) and thus an integrated error that is proportional to deadtime. It is important to realize that this deadtime is not just the process deadtime but the total loop deadtime: the summation of all the pure delays and the equivalent deadtime from lags in the control loop, whether in the process, valve, measurement, or controller.
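This scaling can be sketched numerically. A minimal illustration of the proportionalities stated above; the constants of proportionality are placeholders set by the disturbance magnitude and process dynamics, not values from the text:

```python
# Sketch of how the minimum achievable errors scale with total loop
# deadtime for a lag-dominant loop. The constant k is an illustrative
# placeholder, not a value from the text.
def min_peak_error(deadtime_s, k=1.0):
    """Minimum peak error is proportional to total loop deadtime."""
    return k * deadtime_s

def min_integrated_error(deadtime_s, k=1.0):
    """Minimum integrated absolute error scales with deadtime squared."""
    return k * deadtime_s ** 2

# Doubling total loop deadtime doubles the best-case peak error
# but quadruples the best-case integrated absolute error.
print(min_peak_error(2.0) / min_peak_error(1.0))              # 2.0
print(min_integrated_error(2.0) / min_integrated_error(1.0))  # 4.0
```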

These minimum errors are only achieved by the aggressive tuning seen in the literature but not used in practice, because of the inevitable changes and unknowns in gains, deadtime, and lags. There is always a tradeoff between minimization of errors and robustness. Less aggressive and more robust tuning, while necessary, results in a greater impact of deadtime, in that the gain margin (ratio of ultimate gain to PID gain) and the phase margin (degrees of additional phase lag the loop can tolerate before instability) are achieved by setting the tuning to be a greater multiple of the deadtime. For example, to achieve a gain margin of 6 and a phase margin of 76 degrees, lambda is set to 3 times the deadtime.

The actual errors get larger as the tuning becomes less aggressive. The actual peak error is inversely proportional to the PID gain. The actual integrated error is proportional to the ratio of the integral (reset) time to the PID gain. Consider the use of lambda integrating-process tuning rules for a near-integrating process, where lambda is an arrest time. If the deadtime used in setting the PID gain and reset time triples, then to maintain a gain margin of about six and a phase margin of 76 degrees (lambda still three times the new deadtime), you decrease the PID gain by about a factor of three and increase the reset time by about a factor of three, increasing the actual integrated error by a factor of about nine, consistent with the deadtime-squared dependence of the integrated error.
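A hedged sketch of this scaling, using the common lambda tuning rules for an integrating process (PID gain Kc = (2λ + θ)/(Ki (λ + θ)²) and reset time Ti = 2λ + θ) with lambda set to three times the deadtime as in the text; the integrating process gain Ki below is an assumed illustrative value:

```python
# Lambda tuning for a near-integrating process, with lambda (the
# arrest time) set to 3x the total loop deadtime. Ki is an assumed
# integrating process gain (%/s per % of output), for illustration.
def lambda_integrating_tuning(deadtime, Ki, lam_factor=3.0):
    lam = lam_factor * deadtime                               # arrest time
    Kc = (2 * lam + deadtime) / (Ki * (lam + deadtime) ** 2)  # PID gain
    Ti = 2 * lam + deadtime                                   # reset time
    return Kc, Ti

Ki = 0.05                                        # assumed, %/s per %
Kc1, Ti1 = lambda_integrating_tuning(1.0, Ki)    # original deadtime 1 s
Kc2, Ti2 = lambda_integrating_tuning(3.0, Ki)    # deadtime tripled to 3 s

# Actual integrated error per unit load scales with Ti / Kc.
ratio = (Ti2 / Kc2) / (Ti1 / Kc1)
print(round(ratio, 2))  # 9.0: tripled deadtime -> ~9x integrated error
```

Note that Ti/Kc reduces to Ki(λ + θ)², so with λ proportional to θ the integrated error grows with the square of the deadtime, matching the deadtime-squared relationship stated earlier.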

Consequently, how fast automation system components need to be depends on how much they increase the total loop deadtime. The components to make faster are first chosen based on ease of change, such as decreasing the PID execution time, wireless update time, signal filtering, and transmitter damping, assuming these are more than ten percent of the total loop deadtime. Next you need to decrease the largest source of deadtime, which may take more time and money, such as a better thermowell or electrode design, location, and installation, or a more precise and faster valve. The deadtime from PID execution and wireless update rates is about ½ the time between updates. The deadtime from transmitter damping or a sensor lag increases logarithmically from about 0.28 to 0.88 times the lag as the ratio of the lag to the largest open-loop time constant decreases from 1 to 0.01. The deadtime from backlash, stiction, and poor sensitivity is the deadband or resolution limit divided by the rate of change of the controller output. Fortunately, deadtime is generally easier and quicker to identify than the open-loop time constant and open-loop gain. See the Control Talk Blog “Deadtime, the Simple Easy Key to Better Control.”
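These rules of thumb can be combined into a rough estimator of total loop deadtime. The logarithmic interpolation of the 0.28 to 0.88 lag fraction below is an assumed fit to the endpoints given in the text, not an exact formula, and all the numbers in the example are invented for illustration:

```python
import math

def lag_deadtime_fraction(lag, largest_tc):
    """Fraction of a lag that appears as equivalent deadtime: ~0.28 when
    the lag equals the largest open-loop time constant, rising to ~0.88
    when the ratio is 0.01, interpolated on a log scale (assumed fit)."""
    r = min(max(lag / largest_tc, 0.01), 1.0)
    # log10(r) runs from 0 (ratio 1) to -2 (ratio 0.01)
    return 0.28 + (0.88 - 0.28) * (-math.log10(r)) / 2.0

def total_loop_deadtime(process_delay, update_intervals, lags,
                        largest_tc, deadband_pct=0.0, out_rate_pct_s=1.0):
    dt = process_delay
    dt += sum(t / 2.0 for t in update_intervals)   # PID, wireless updates
    dt += sum(lag_deadtime_fraction(l, largest_tc) * l for l in lags)
    if deadband_pct:
        dt += deadband_pct / out_rate_pct_s        # backlash / stiction
    return dt

# Invented example: 2 s process delay, 1 s PID execution + 8 s wireless
# update, 4 s sensor lag and 2 s damping against a 100 s largest time
# constant, 0.5 % deadband with the output moving at 0.25 %/s.
print(round(total_loop_deadtime(2.0, [1.0, 8.0], [4.0, 2.0],
                                100.0, 0.5, 0.25), 1))  # ~12.9 s
```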

For flow and pressure processes, the process deadtime is often less than one second, making the control system components by far the largest source of deadtime. For compressor, liquid pressure, and furnace pressure control, the control valve is the largest source of deadtime even when a booster is added. Transmitter damping is generally the next largest source, followed by PID execution rate.

There is a common misconception that the wireless update time only needs to be less than a fraction (e.g., 1/6) of the response time. For the more interesting processes such as temperature and pH, the time constant is much larger than the deadtime. A well-mixed vessel can have a process time constant more than 40 times the process deadtime. If you use the criterion of 1/6 the response time, assuming the best-case scenario of a 63% response time, the added deadtime from the wireless update rate can be as large as 3 times the process deadtime. Fortunately, wireless update rates are never that slow. Another reason not to focus on response time is that in integrating processes, where there is no steady state, a response time is irrelevant.
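The arithmetic behind this caution, under the stated assumptions (time constant 40 times the deadtime, response time taken as the 63% response time, and added deadtime about half the update interval):

```python
# Why a response-time criterion for wireless update rate misleads:
# with a time constant 40x the deadtime, 1/6 of the 63% response time
# is a long update interval, and half of it becomes added deadtime.
deadtime = 1.0                 # process deadtime (arbitrary units)
tau = 40.0 * deadtime          # well-mixed vessel time constant
t63 = deadtime + tau           # 63% response time = 41 units
update_interval = t63 / 6.0    # the 1/6-of-response-time criterion
added_deadtime = update_interval / 2.0

print(round(added_deadtime / deadtime, 1))  # 3.4x the process deadtime
```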

The remaining question is: when is the automation system too fast? The example that most comes to mind is when a faster system causes greater resonance or interaction. You want the most important loops to see oscillations from less important loops whose period is at least four times the important loop’s ultimate period, to reduce resonance and interaction. Hopefully this is done by making the more important loop faster, but if necessary it is done by making the less important loops slower. A less recognized but very common case of needing to slow down an automation loop is when it creates a load disturbance to other loops (e.g., a feed rate change). While step changes are what are analyzed in the literature as disturbances, in real applications there are seldom any step changes, due to the tuning of the PID and the response of the valve. This effect can be approximated by applying a time constant to the load disturbance and realizing that the resulting errors are reduced compared to the step disturbance by a factor of one minus e raised to the negative ratio of lambda to the disturbance time constant, i.e., 1 − e^(−λ/τd).
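A minimal sketch of this attenuation factor, with illustrative numbers (not values from the text):

```python
import math

def disturbance_attenuation(lam, tau_d):
    """Factor by which errors are reduced, relative to a step load,
    when the disturbance passes through a time constant tau_d and the
    loop has a closed-loop arrest time lam: 1 - e^(-lam/tau_d)."""
    return 1.0 - math.exp(-lam / tau_d)

# A disturbance time constant much larger than lambda greatly reduces
# the error; a near-step disturbance (small tau_d) barely helps.
print(round(disturbance_attenuation(10.0, 100.0), 2))  # ~0.1
print(round(disturbance_attenuation(10.0, 1.0), 2))    # ~1.0
```

This is why slowing down the loop that originates the disturbance (increasing its effective time constant) can improve the performance of the loops it upsets.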

Overshoot of a temperature or pH setpoint is extremely detrimental to bioreactor cell life and productivity. Making the loop response much slower, with much less aggressive tuning settings and a PID structure of Integral on Error and Proportional-Derivative on Process Variable (I on E and PD on PV), is greatly needed and is permitted because the load disturbances from cell growth rate or production rate are incredibly slow (effective process time constant in days). In fact, the fast disturbances are the result of one loop affecting another (e.g., pH and dissolved oxygen control).

In dryer control, the difference between inlet and outlet temperatures, used as the inferential measurement of dryer moisture, is filtered by a time constant greater than the moisture controller’s reset time. This is necessary to prevent a spiraling oscillation from positive feedback.

Filters on setpoints are used in loops whose setpoint is set by an operator or a valve position controller to change the process operating point or production rate. Such a filter can provide synchronization in ratio control of reactant flows while maintaining the ability of each flow loop to be tuned to deal with supply pressure disturbances and positioner sensitivity limits. However, a filter on the lower (secondary) loop setpoint in cascade control is generally detrimental because it slows down the ability of the primary loop to react to disturbances.

Finally, more controversial but potentially useful is a filter on the pH at the outlet of a static mixer for a strong acid and base controlled in the neutral region. Here the filter acts to average the inevitable, extremely large oscillations due to nearly nonexistent back mixing and the steep titration curve. The result is a happier valve and operator. The averaged pH setpoint should be corrected by a downstream pH loop on a well-mixed vessel that sees a much smoother pH on a much narrower region of the titration curve. A better solution is signal characterization: the static mixer controlled variable becomes the abscissa of the titration curve (reagent demand) rather than the ordinate (pH). This linearization greatly reduces the oscillations from the steep portion of the titration curve and enables a larger PID gain to be used. The titration curve need not be extremely accurate, but it must include the effect of absorption of carbon dioxide from exposure to air and the change in dissociation constants, and consequently actual solution pH, with temperature, which is not addressed by a standard temperature compensator that simply addresses the temperature effect in the Nernst equation. You also need to be aware that the pH of process samples, and consequently the shape of the titration curve, can change due to changes in sample liquid-phase composition from reaction, evaporation, absorption, and dissolution. The longer the time between the sample being taken and titrated, the more problematic these changes are.
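A hedged sketch of the signal characterization idea: the titration curve is stored as a table of (pH, reagent-to-feed ratio) points and the measured pH is translated onto the abscissa, which becomes the controlled variable. The curve points below are invented for illustration, not real titration data:

```python
# Invented titration-curve table: pH (ordinate) vs reagent demand as a
# reagent-to-feed ratio (abscissa). pH values must be monotonically
# increasing for the interpolation below.
curve_ph    = [1.0, 2.0, 4.0, 6.8, 7.0, 7.2, 10.0, 12.0, 13.0]
curve_ratio = [0.0, 0.10, 0.30, 0.49, 0.50, 0.51, 0.70, 0.90, 1.0]

def reagent_demand(ph):
    """Piecewise-linear interpolation of measured pH onto the titration
    curve abscissa (reagent demand), the new controlled variable."""
    if ph <= curve_ph[0]:
        return curve_ratio[0]
    if ph >= curve_ph[-1]:
        return curve_ratio[-1]
    for i in range(1, len(curve_ph)):
        if ph <= curve_ph[i]:
            f = (ph - curve_ph[i - 1]) / (curve_ph[i] - curve_ph[i - 1])
            return curve_ratio[i - 1] + f * (curve_ratio[i] - curve_ratio[i - 1])

# Near neutrality a huge pH swing maps to a tiny change in reagent
# demand, linearizing the loop and permitting a larger PID gain.
print(round(reagent_demand(7.2) - reagent_demand(6.8), 2))  # 0.02
```

The design choice here is that the characterizer, not the PID, absorbs the nonlinearity of the steep titration curve; the loop then sees a roughly constant gain across the neutral region.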