Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.

Tips for New Process Automation Folks
  • Insights to Process and Loop Performance

    The post, Insights to Process and Loop Performance, originally appeared on ControlGlobal.com's Control Talk blog. Here we look at a myriad of metrics on process and control loop performance and show how to see through the complexity and diversity to recognize the commonality and underlying principles. We will see how dozens of metrics simplify to two classes each for the process and the loop. We also provide a concise view of how to compute and use these metrics and what affects them.

    Let’s start with process metrics because, while as automation engineers we are tuned into control metrics, our ultimate goal is improvement in the process and thus in process metrics. The improvement in profitability of a process comes down to improving process efficiency and/or capacity. These are often interrelated in that an increase in process capacity is often associated with a decrease in process efficiency. Also, an increase in the metrics for a particular part of a process may decrease the metrics for other parts of the process. The following example, cited in the April 2017 Control Talk column “An ‘entitlement’ approach to process control improvement”, is indicative of the need to have metrics and an understanding for the entire process: “In a recent application of MPC for thermal oxidizer temperature control that had a compound response complicating the PID control scheme, there was a $700K per year benefit clearly seen in reduced natural gas usage. However, the improvement also reduced steam make to a turbo-generator, reducing electricity generated by $300K per year. We reached a compromise of about $400K per year in net benefit because of lost electrical power generation from less steam to the turbo-generators. We spent many hours to align the benefit with measurable accounting for the natural gas reduction and the electrical purchases. Sometimes the loss of benefits is greater than expected. You need to be upfront and make sure you don’t just shift costs to a different cost area.”

    Process efficiency can be increased by reducing energy use (e.g., electricity, steam, coolant and other utilities) and raw materials (e.g., reactants, reagents, additives and other feeds). The efficiency is first expressed as a ratio of the energy use per unit mass of product produced (e.g., kJ/kg) or energy produced (kJ/kJ), and then ideally in terms of a ratio of cost to revenue by including the cost of energy used (e.g., $ per kJ) and the value of revenue for product produced (e.g., $ per kg) or energy produced (e.g., $ per kJ). The kJ of energy and kg of mass are running totals where the oldest value of mass flow or energy, multiplied by the time interval between measurements, is replaced in the total by the current value. A deadtime block can provide the oldest value. The time interval between measurements and the deadtime representative of the time period for the running total should both be chosen to provide a good signal-to-noise ratio. The deadtime block time period should also be chosen to help focus on the source of changes in process efficiency. For batch operations, the time period is usually the cycle time of a key phase in the batch and may simply be the totals at the end of the phase or batch. For continuous operations, I favor a time period that is an operator shift to recognize the key effect of operators on process performance.
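    As a concrete illustration of the running-total computation described above, here is a minimal sketch, assuming a fixed measurement interval and a shift-long total; the block and variable names are illustrative, not from any particular DCS:

```python
from collections import deque

class DeadtimeBlock:
    """Fixed-interval delay line: returns the input value from one
    running-total period ago."""
    def __init__(self, period_s, interval_s):
        n = int(period_s / interval_s)
        self.buf = deque([0.0] * n, maxlen=n)

    def update(self, value):
        oldest = self.buf[0]        # value from period_s seconds ago
        self.buf.append(value)      # pushes the oldest value out
        return oldest

# Running total of energy use over one operator shift (8 h), sampled every
# 60 s (an interval assumed here to give a good signal-to-noise ratio).
INTERVAL = 60.0                     # s between measurements
SHIFT = 8 * 3600.0                  # s, time period of the running total
delay = DeadtimeBlock(SHIFT, INTERVAL)
energy_total_kj = 0.0               # kJ used over the last shift

def update_energy_total(energy_flow_kj_per_s):
    """Add the newest increment and drop the one from a shift ago."""
    global energy_total_kj
    newest = energy_flow_kj_per_s * INTERVAL    # kJ in this interval
    oldest = delay.update(newest)               # kJ from one shift ago
    energy_total_kj += newest - oldest
    return energy_total_kj

# Efficiency metric: kJ per kg, using a like-for-like running total of mass:
# efficiency = energy_total_kj / mass_total_kg
```

    The deadtime block simply replays its input delayed by the running-total period, so the total never has to be recomputed from scratch.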
    An operator-shift time period is also suitable for evaluating other sources of variability, such as the effect of ambient conditions (day to night operation and weather) and of feeds, recycle and heat integration (upstream, downstream and parallel unit operations). The periods of best operation can be used as a goal, possibly achieved by smarter instruments, better installations less sensitive to ambient conditions, or smarter controls through procedural automation or state based control, as discussed in the Sept 2016 Control Talk column “Continuous improvement of continuous processes”.

    The metrics that affect process capacity are more diverse and complicated. Process capacity can be affected by feed rates, onstream time, startup time, shutdown time, maintenance time, transition time, the spectrum of products and their value, recycle, and off-spec product. An increase in off-spec product that can be recycled can be taken as a loss in product capacity if the raw material feed rate is kept the same, or as a loss in process efficiency if the raw material feed rate is increased. If the off-spec product can be sold as a lower revenue product, the $ per kg must be correspondingly adjusted.

    For batch operations, an increase in batch endpoint in terms of kg of product produced and a decrease in batch cycle time, including time in-between batches, can translate to an increase in process capacity. If a higher endpoint can be reached by holding or running the batch longer, there is a likely increase in process efficiency assuming a negligible increase in raw material, but there may be an increase or decrease in process capacity. The optimum time to end a batch and move on is best determined by looking at the rate of change of product formation (batch slope) and, if necessary, the rate of change of raw material and energy use. A deadtime block is again used to provide a fast update with a good signal-to-noise ratio to compute the slope of the batch profile and the prediction of batch endpoint. Of course, whether downstream units for recovery and purification are able to handle an increase in batch capacity must be considered, and their metrics included in the total picture. For example, in ethanol production, a reduction in fermenter cycle time may not translate to an increase in process capacity because of limitations in the distillation columns downstream or in the dryer for recovery of dried solids byproduct sold as animal feed. For more on the optimization of batch endpoints, see the Sept 2012 Control feature article “Getting the Most Out of your Batch”.
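    A rough sketch of the batch-slope calculation just described, reusing the DeadtimeBlock from the previous sketch (the threshold and names are illustrative assumptions):

```python
def batch_slope(delay_block, pv, period_s):
    """Slope of the batch profile (e.g., kg of product per second), computed
    as the change over a deadtime-block period chosen for a good S/N ratio."""
    oldest = delay_block.update(pv)
    return (pv - oldest) / period_s

def ready_to_end_batch(product_slope, min_product_rate):
    """End the batch and move on when the product formation rate no longer
    justifies continued raw material and energy use (threshold illustrative)."""
    return product_slope < min_product_rate
```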
    The metrics that indicate loop performance can be classified as load response and setpoint response metrics. The load response is often most important in that the desired setpoint response can be achieved for the best load response by the proper use of PID options. The load response should in nearly all cases be based on disturbances that enter as inputs to the process, whereas many academic and model-based studies are based on disturbances entering at the process output. For self-regulating processes where the process deadtime is comparable to or larger than the process time constant, the point of entry does not matter because the intervening process time constant does not appreciably slow down input disturbances in the time frame of the PID response (e.g., 2 to 4 deadtimes). However, most of the more interesting temperature and composition control loops in my career did not have a negligible process time constant and in fact had a near-integrating, true integrating or runaway open loop response.

    The load metrics are peak error and integrated error. The peak error is the maximum excursion after a load upset. The integrated error is most often an integrated absolute error (IAE) but can be an integrated squared error. If the response is non-oscillatory, the integrated error and IAE are the same. There are also metrics indicative of oscillations, such as settling time and undershoot. The ultimate and practical limits to peak error are proportional to the deadtime and inversely proportional to controller gain, respectively. The ultimate and practical limits to integrated error are proportional to the deadtime squared and to the ratio of controller reset time to controller gain, respectively.

    For setpoint metrics, there is the time to get close to setpoint, which I call rise time, important for process capacity. I am sure there is a better name because the metric must be indicative of the performance for an increase or decrease in setpoint. The other setpoint metrics are overshoot, undershoot and settling time, which can affect process capacity and efficiency. The use of a setpoint lead-lag or a PID structure that minimizes proportional and derivative action on setpoint changes can reduce overshoot, even with tuning set for good load disturbance rejection. A setpoint lag equal to the reset time (no lead) corresponds to a PID structure of proportional and derivative action on the process variable and integral action on the error (PD on PV and I on E). See the Sept and Oct 2016 Control Talk blog posts “PID Options and Solutions - Part 1” and “PID Options and Solutions - Parts 2 and 3” for a discussion of loop metrics in great detail, including when they are important and how to improve them. Also look at the presentation for the ISA Mentor Program WebExs “ISA-Mentor-Program-WebEx-PID-Options-and-Solutions.pdf”.

    My last bit of advice is to ask your spouse for metrics on your marriage. Minimizing the deadtime while still having a good signal to noise ratio is particularly important. For men, the saying “Happy wife, happy life” I think would work the other way as well. I just need a rhyme.
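    In symbols (a summary of the proportionalities stated above, with θo the total loop deadtime, Kc the controller gain and Ti the reset time; the coefficients depend on the tuning rule):

    $$E_{peak}^{ult} \propto \theta_o, \qquad E_{peak}^{prac} \propto \frac{1}{K_c}, \qquad IE^{ult} \propto \theta_o^{2}, \qquad IE^{prac} \propto \frac{T_i}{K_c}$$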
  • Webinar Recording: How to Get the Most out of Control Valves

    The post Webinar Recording: How to Get the Most out of Control Valves first appeared on the ISA Interchange blog site. This educational ISA webinar on control valves was presented by Greg McMillan in conjunction with the ISA Mentor Program . Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). The data that is really needed when selecting and sizing a control valve is rarely understood and specified, which leads to excessive variability originating from the valve. In this presentation, ISA mentor Greg McMillan discusses pervasive problems and rampant misconceptions. He then provides guidance—supported by test results—on how to select a good throttling control valve. He also explains PID tuning adjustments and a key PID feature that can be utilized to provide precise, smooth, and fast control. The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.
  • Fixes for Deadly Deadband

    While there are some cases where deadband is helpful, in most applications the effect is extremely detrimental and confusing. Deadband can arise from many sources, either intentionally or inadvertently. Deadband creates deadtime and, under certain conditions, excessive and persistent oscillations. The increase in loop deadtime is the deadband divided by the rate of change of controller output. The increase in deadtime can increase the peak error and integrated error from a load disturbance. If there are two or more integrators in the system due to integral action in the valve positioner, variable speed drive, controller(s), or process, a limit cycle will develop.

    The biggest and most troublesome source of deadband is backlash from an on-off or isolation valve (tight shutoff valve) posing as a throttling valve. The positioner, seeing feedback from the actuator shaft of such rotary valves, often does not realize the internal closure member (e.g., ball or disk) is not responding, due to backlash in the connections between the shaft, stem and ball or disk, or due to shaft windup from seal friction. The positioner diagnostics say everything is fine, even meeting the requirements set by the ISA-75.25.01 Standard for Measuring Valve Response. Creative storytelling develops to explain the oscillations in the process.

    An on-off or isolation valve offers a great advantage when used in series with a throttle valve. Besides achieving tight shutoff, the placement of a quickly stroked, completely open or closed on-off or isolation valve close-coupled to the connection into the process eliminates the deadtime and any unbalance between ratioed flows during the start and stop of reactants and reagents, enabling more precise composition and pH control. The throttle valve is located at a position that is more accessible for better maintenance and has some straight runs upstream and downstream. The throttle valve straight-run requirements are rather minimal but can give a more consistent relationship between valve position and flow.

    For the throttle valve, the best solution is to get rid of the excessive deadband. Given that you are, literally and figuratively, stuck with deadband (principally when the source is a big valve), an increase in the PID gain will reduce the peak error and integrated absolute error (IAE) by increasing the rate of change of the PID output and thus decreasing the additional deadtime from deadband. If there is a limit cycle, increasing the PID gain reduces the amplitude and period of the limit cycle, decreasing the persistent IAE and increasing the ability of downstream volumes to filter out the oscillations. Open loop step tests don’t reveal the additional deadtime but show a decrease in process gain upon a reversal of the direction of the step change. A filter time can be judiciously added that is less than 20% of the total loop deadtime seen in the test to prevent changes in the PID output from noise exceeding the deadband of the valve. For more on the effects of backlash, see the May 2016 Control article “How to specify control valves that don’t compromise control” and the YouTube recording to be posted in June on the “ISA Mentor Program Webinar Playlist” of my ISA Mentor WebEx “ISA-Mentor-Program-WebEx-Best-Control-Valve-Rev0.pdf”. The article, white paper and presentation also show that an increase in PID gain eliminates an oscillation from poor positioner sensitivity by making changes in the valve signal larger than the sensitivity limit.
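    The deadtime-from-deadband relationship above is simple enough to sanity-check in a couple of lines (a sketch; the names and numbers are illustrative):

```python
def added_deadtime_from_deadband(deadband_pct, output_rate_pct_per_s):
    """Extra loop deadtime (s) = deadband (%) / rate of change of PID output (%/s).
    A higher PID gain raises the output rate of change and so shrinks this
    added deadtime."""
    return deadband_pct / output_rate_pct_per_s

# Example: 2% backlash with the PID output moving at 0.1%/s adds 20 s of deadtime.
print(added_deadtime_from_deadband(2.0, 0.1))   # 20.0
```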
    A simple algorithm can be configured to increase the change in PID output by an amount slightly less than the deadband when the output changes direction and the change is greater than the noise band seen in the PID output. The kick of the output upon a change in direction eliminates the deadtime and lost motion from backlash. The practical issue is that the deadband may vary with valve position, time, operating conditions, and positioner tuning. These algorithms are often used for model predictive control as well as PID control. A lead-lag on the valve signal can reduce the effect of deadband, resolution and positioner sensitivity, but the valve movement can quickly become erratic for a lead much larger than the lag time and for noisy signals.

    Deadband is often a parameter in a variable speed drive (VSD) setup to reduce changes in speed from noise. It is often set too large because of a lack of understanding of the detrimental effect. The deadband should be just slightly larger than the noise band seen in the VSD setpoint. Dynamic simulation with a backlash-stiction block and a PID with external reset feedback can show this and much more. The virtual plant is my lab to rapidly explore, discover, prototype and test solutions.

    I recently went to a Grateful Dead tribute band concert. The “dead heads” were grateful the music of the band was not dead. Keep your control system alive by not succumbing to the deadly deadband.
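    A minimal sketch of the direction-change “kick” described above, assuming a fixed deadband estimate (real deadband varies with position and conditions, so a compensator should kick slightly less than the smallest expected deadband):

```python
def kick_for_backlash(out_pct, last_out_pct, last_dir, deadband_pct, noise_band_pct):
    """Add a kick slightly less than the deadband when the PID output reverses
    direction by more than the noise band; returns (new output, new direction)."""
    delta = out_pct - last_out_pct
    if abs(delta) <= noise_band_pct:
        return out_pct, last_dir                  # ignore changes within the noise band
    new_dir = 1 if delta > 0 else -1
    if last_dir is not None and new_dir != last_dir:
        out_pct += new_dir * 0.8 * deadband_pct   # kick ~80% of deadband (illustrative)
    return out_pct, new_dir
```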
  • How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements

    The post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements first appeared on the ISA Interchange blog site. This article was authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemicals). Wireless measurements offer significant life-cycle cost savings by eliminating the installation, troubleshooting, and modification of wiring systems for new and relocated measurements. Some of the less recognized benefits are the eradication of EMI spikes from pump and agitator variable speed drives, the optimization of sensor location, and the demonstration of process control improvements. However, loss of transmission can result in process conditions outside of the normal operating range. Large periodic and exception reporting settings to increase battery life can cause loop instability and limit cycles when using a traditional PID (proportional-integral-derivative) for control. Analyzers offer composition measurements key to a higher level of process control but often have a less-than-ideal reliability record, sample system, cycle time, and resolution or sensitivity limit. A modification of the integral and derivative mode calculations can inherently prevent PID response problems, simplify tuning requirements, and improve loop performance for wireless measurements and sampled analyzers.

    Wireless measurements

    The combination of periodic and exception reporting by wireless measurements can be quite effective. The use of a refresh time (maximum time between communications) enables the use of a larger exception setting (minimum change for communication). Correspondingly, the use of an exception setting enables a larger refresh time setting. The time delay between the communicated and actual change in process variable depends upon when the change occurs in the time interval between updates (sample time). Since the time interval between a measured and communicated value (latency) is normally negligible, on average the true change can be considered to have occurred in the middle of the sample time. This delay limits how quickly control action is taken to correct changes introduced by process disturbances.

    Analytical measurements

    Since ultimately what you often want to control is composition in a process stream, online analyzers can raise process performance to a new level. However, analyzers, such as chromatographs, have large sample transportation and processing time delays that contribute to the total loop deadtime, and are generally not as reliable or as sensitive as pressure, level, and temperature measurements. The sample transportation delay from the process to the analyzer is the sample system volume divided by the sample flow rate. This delay can be five or more minutes when analyzers are grouped in an analyzer house. Once the sample arrives, the processing and analysis cycle time normally ranges from 10 to 30 minutes. The analysis result is available at the end of the cycle time. If you consider that the change in the sample composition occurs in the middle of the cycle time and is not reported until the end of the next cycle time, the analysis delay is 1½ times the cycle time. This cycle time delay is added to the sample transportation delay, process deadtime, and final control element delay to get the total loop deadtime.
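    A worked example pulling these delays together (the numbers are illustrative, not from the article):

```python
# Sample transportation delay = sample system volume / sample flow rate.
volume_ml = 500.0                       # sample system volume (illustrative)
flow_ml_per_min = 100.0                 # sample flow rate (illustrative)
transport_delay = volume_ml / flow_ml_per_min       # 5 min

cycle_time = 10.0                       # analyzer processing/analysis cycle, min
analysis_delay = 1.5 * cycle_time       # 15 min: mid-cycle change, reported
                                        # at the end of the next cycle
process_deadtime = 1.0                  # min (illustrative)
valve_delay = 0.2                       # final control element delay, min (illustrative)

total_loop_deadtime = (transport_delay + analysis_delay
                       + process_deadtime + valve_delay)
print(total_loop_deadtime)              # 21.2 min
```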
    The sum of the 1½ analyzer cycle times plus the sample transportation delay will be referred to as the sample time.

    Smart PID

    Most of the undesirable reaction to discontinuous measurement communication is the result of integral and derivative action in a traditional PID. Integral action will continue to drive the output to eliminate the last known offset from the setpoint even if the measurement information is old. Since the measurement is rarely exactly at the setpoint within the A/D and microprocessor resolution, the output is continually ramped by reset. The problem is particularly onerous if the current error is erroneous. Derivative action will see any sudden change in a communicated measurement value as occurring all within the PID execution time. Thus, a change in the measurement causes a spike in the controller output. The spike is especially large for restoration of the signal after a loss in communication. The spike can hit the output limit opposite from the output limit driven to by integral action. A large refresh time can also cause a significant spike, because the rate-of-change calculation uses the PID execution time.

    A smart PID has been developed that makes an integral mode calculation only when there is a measurement update. The change in controller output from the proportional mode reaction to a measurement update is fed back through an exponential response calculation with a time constant equal to the reset time setting, to provide an integral calculation via the external reset method. For applications where there is an output signal selection (e.g., override control) or where there is a slowly responding secondary loop or final control element, the change in an external reset signal can be used instead of the change in PID output as the input to the exponential response calculation. The feedback of actual valve position as the external reset signal can prevent integral action from driving the PID output in response to a stuck valve. The use of a smart positioner provides the readback of actual position and drives the pneumatic output to the actuator to correct for the wrong position without the help of the process controller. For a reset time set equal to the process time constant, so that the closed loop time constant is equal to the open loop time constant, the response of the integral mode of the smart PID matches the response of the process. This inherent compensation of the process response simplifies controller tuning and stabilizes the loop. For single loops dominated by a large time in between updates (large sample time), whether due to wireless measurements or analyzers, the controller gain can be the inverse of the process gain.

    In the smart PID, the time interval used for the derivative mode calculation is the elapsed time from the last measurement update. Upon the restoration of communication, derivative action considers the change to have occurred over the time duration of the communication failure. Similarly, the derivative response to a large sample time or exception setting spreads the measurement change over the entire elapsed time. The reaction to measurement noise is also attenuated. This smarter derivative calculation combined with the derivative mode filter eliminates spikes in the controller output. The proportional mode is active during each execution of the PID module to provide an immediate response to setpoint changes.
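    A greatly simplified sketch of the smart PID update logic described above. This is not vendor code; the structure and names are assumptions based on the description of external-reset integral action and elapsed-time derivative action:

```python
class SmartPID:
    """Sketch of a PID whose integral and derivative calculations act only
    when a new measurement arrives (wireless update or analyzer result)."""
    def __init__(self, kc, reset_time_s, rate_time_s, dt_exec_s):
        self.kc = kc             # controller gain
        self.ti = reset_time_s   # reset time (ideally ~ process time constant)
        self.td = rate_time_s    # rate time
        self.dt = dt_exec_s      # module execution time (kept fast)
        self.reset_fb = 0.0      # integral contribution via external-reset filter
        self.last_pv = None      # PV at the last communicated update
        self.elapsed = 0.0       # time since the last measurement update, s
        self.out = 0.0

    def execute(self, sp, pv, new_update):
        self.elapsed += self.dt
        if self.last_pv is None:
            self.last_pv = pv
        # Proportional mode acts every execution for immediate setpoint response.
        p = self.kc * (sp - pv)
        d = 0.0
        if new_update:
            # Integral: exponential (positive feedback) filter of the output,
            # time constant = reset time, advanced by the elapsed time.
            alpha = self.elapsed / (self.ti + self.elapsed)
            self.reset_fb += alpha * (self.out - self.reset_fb)
            # Derivative: spread the PV change over the whole elapsed time,
            # eliminating output spikes after communication gaps.
            d = self.kc * self.td * (self.last_pv - pv) / self.elapsed
            self.last_pv = pv
            self.elapsed = 0.0
        self.out = p + self.reset_fb + d
        return self.out
```

    Between updates the integral contribution is frozen, so the output does not ramp on stale information; on an update, the filter is advanced by however much time actually passed.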
    The module execution time is kept fast so the delay is negligible for a corrective change in the setpoint of a secondary loop or the signal to a final control element. With a controller gain approximately equal to the inverse of the process gain, the step change in PID output puts the actual value of the process variable extremely close to the final value needed to match the setpoint. The delay in the correction is only the final control element delay and process deadtime. After the process variable changes, the change in the measured value is delayed by the measurement sample time. Consequently, the observed speed of response is not as fast as the true speed of process response, a common deception from measurements with large signal delay or lag times.

    Communication failure

    Communication failure is not just a concern for wireless measurements. Any measurement device can fail to sense or transmit a new value. For pH measurements, a broken glass electrode or broken wire will result in a 7 pH reading, the most common setpoint. The response of coated or aged electrodes and large air gaps in thermowells can be so slow as to show no appreciable change. Plugged impulse lines and sample lines can result in no new information from pressure transmitters and analyzers. Digitally communicated measurements can fail to update due to bus or transmitter problems. If a load upset occurs and is reported just before the last communication, integral action in the traditional controller drives the PID output to its low limit. The smart PID can make an output change that almost exactly corrects for the last reported load upset, since the controller gain is the inverse of the process gain.

    Sample time

    The wireless measurement sample time and the transport delay associated with sample analyzers must be taken into account when using these measurements in control. A minimum wireless refresh time of 16 seconds is significant compared to the process response for flow, liquid pressure, desuperheater temperature, and static mixer composition and pH control. The sample time of chromatographs makes nearly all composition loops deadtime dominant, except for industrial distillation columns and extremely large vessels. To eliminate excessive oscillations and valve travel caused by sample time and transport delay, a traditional PID controller is tuned for nearly an integral-only type of response by reducing the controller gain by a factor of 5. Increasing the reset time instead of reducing the gain could also provide stability, but the offset is often unacceptable, especially for flow feedforward and ratio control. The smart PID can be aggressively tuned by setting the gain equal to the inverse of the process gain for deadtime dominant loops. The result is a dramatic reduction in integrated absolute error and rise time (time to reach setpoint). The immediate response of the smart PID is particularly advantageous for ratio control of feeds to wild flows and for cascade and model predictive control by higher level loops. The advantage may not be visible in the wireless or analyzer reported value because of the large measurement delay. The improvement in performance is observed in the speed and degree of correction by the controller output and in reduced variability in upper level measurements and process quality. A similar deception also occurs for measurements with a large lag time relative to the true process response, due to large signal filters and transmitter damping settings, and slow sensor response times.
    An understanding of these relationships and the temporary use of fast measurements can help realize and justify process control improvement. The ability to temporarily set a fast wakeup time and tight exception reporting for a portable wireless transmitter could lead to automation system upgrades. Level loops on large volumes can use the largest refresh time of 60 seconds without any adverse effect because the integrating process gain is so small (the ramp rate is less than 1% per minute). Temperature loops on large vessels and columns can use an intermediate refresh time (30 seconds) and the maximum refresh time (60 seconds), respectively, because the process time constant is so large. However, gas and steam pressure control of volumes and headers will be adversely affected by a refresh time of 16 seconds, because the integrating response ramp is so fast that the pressure can move outside of the control band (allowable control error) within the refresh time. Furnace draft pressure can ramp off scale in seconds. Highly exothermic reactors (polymerization reactors) can possibly run away if the largest refresh time of 60 seconds is used. To mitigate the effect of a large refresh time, the exception reporting setting is lowered to provide more frequent updates.

    Measurement sensitivity

    Measurements have a limit to the smallest detectable or reportable change in the process variable. If the entire change beyond the threshold for detection is communicated, the limit is termed sensitivity. If a quantized or stepped change beyond the threshold is reported, the limit is termed resolution. Ideally, the resolution limit is less than the sensitivity limit. Often, these terms are used indiscriminately. Wireless measurements have a sensitivity setting called deadband that is the minimum change in the measurement from the last value communicated that will trigger a communication when the sensor is awake. The 8-second wakeup time in most wireless transmitters is expected to be reduced in the near future. pH transmitters already have a wakeup time of only 1 second, enabling more effective use on static mixers. A traditional PID will develop a limit cycle, whose amplitude is the sensitivity or resolution limit (whichever is larger), from integral action. The period of the limit cycle will increase as the gain setting is reduced and the reset time is increased. A smart PID will inherently prevent the limit cycle.

    Bottom line

    Wireless and composition measurements offer a significant opportunity for optimizing process operation. A smart PID can dramatically improve the stability, reliability, and speed of response for wireless measurements and analyzers. The result is tighter control of the true process variables and longer battery and valve packing life. A version of this article originally was published at InTech magazine.
  • Deadtime, the Simple Easy Key to Better Control

    Deadtime is the easiest dynamic parameter to identify and the one that holds the key to better control. Deadtime found visually or by a simple method can tell you what is limiting the ability of the loop and what the remedy is. In most loops, you as the automation engineer can gain a much greater understanding and make a dramatic improvement. You can become famous by Friday (assuming you read this on a Monday). You can make a small setpoint change in automatic or a small output change (e.g., 0.5%) by momentarily putting the loop in manual. The time to a change in the process variable in the correct direction is the deadtime. To detect the deadtime and noise visually, compression must be turned off. The remaining principal limit to identifying the deadtime in fast loops is the update time of the historian. For loops with a relatively large deadtime (e.g., greater than 10 sec), the deadtime can be visually identified assuming the update time is 1 sec or less. For loops with smaller deadtimes, you can put a few function blocks together in a module executing as fast as possible (e.g., 0.1 sec for many DCSs) to tell you the deadtime. Of course, good tuning software that is executing very fast can tell you the deadtime and a lot more. The point here is that just knowing the deadtime offers exceptional insight and power to do what is right. So without delay, let’s explore how we all can become more responsive.

    The sum of all the discrete update times, such as the PID module execution rate and wireless update time, plus all the signal filter times and transmitter damping time, should be less than 20% of the total loop deadtime to keep the deterioration in achievable loop performance appreciably less than 20%. The valve response time should also be less than 40% of the deadtime. This is going to be difficult to achieve and to measure if you don’t have a true throttling valve, and is nearly impossible in pressure systems because the process deadtime is usually so small (e.g., less than 1 sec). Almost as difficult, but perhaps less important, is the size of the valve response time compared to the process deadtime in level systems, assuming liquid flows into or out of the volume are manipulated for level control and the sensitivity limit and noise in the level measurement are extremely small, enabling detection of a small level change. The time for the level to get through a sensitivity limit or noise band is additional deadtime.

    Consider the case where a loop in manual has no oscillations and develops oscillations when the loop is put in automatic. If the period of oscillations is 3 to 4 times the deadtime, the PID gain is too high. If the period is 6 to 10 times the deadtime, the reset time is probably too small. If the period of a level, gas pressure, or temperature loop on a vessel or column is more than 20 times the deadtime and decaying, it is most likely due to too small of a PID gain. Actually, the product of the PID gain and reset time is too small, but this is most often caused by the PID gain needed for a normal reset setting being much greater than what is used, due to the comfort zone of operations (e.g., many of these loops should have a PID gain that is 50 to 100 unless the reset time is greatly increased). If the period is more than 20 times the deadtime and the amplitude is constant, indicating a limit cycle, the source is deadband, backlash, stiction, or a resolution limit. If the source is deadband or backlash, increasing the PID gain should be able to reduce the oscillation amplitude and period.
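    These diagnosis rules condense into a simple lookup (a sketch for a loop that oscillates only in automatic, using the period-to-deadtime ratios quoted above):

```python
def diagnose_oscillation_in_auto(period_s, deadtime_s, decaying):
    """Likely cause of an oscillation that appears only in automatic."""
    r = period_s / deadtime_s
    if 3 <= r <= 4:
        return "PID gain too high"
    if 6 <= r <= 10:
        return "reset time probably too small"
    if r > 20 and decaying:
        return "product of PID gain and reset time too small"
    if r > 20:
        return "limit cycle: deadband, backlash, stiction, or resolution limit"
    return "indeterminate; identify the dynamics with a step test"
```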
    Now let’s look at the situation where a loop in manual has an existing oscillation. If the oscillation period is less than the deadtime, it is essentially noise and the PID should not react to it. If the period is between 2 and 10 times the deadtime, the PID gain must be considerably reduced to prevent amplification of the oscillation due to resonance. If the period is more than 10 times the deadtime, the PID gain should be made as aggressive as possible to reduce the amplitude of the oscillation. Of course, the best solution is to find and eliminate the source of the oscillation.

    What the controller sees in the first four deadtimes is most important in terms of controller tuning, because unless the PID is seriously detuned, it should have reacted to arrest the response from a load disturbance. This corresponds to a lambda setting of 3 or less deadtimes. For a near-integrating, integrating, or runaway process, the maximum ramp rate in % of PV scale (%/sec) in the first four deadtimes, divided by the step % change in PID output, is approximately the integrating process gain (1/sec) that can be used with the deadtime to tune the PID using integrating process tuning rules (see the sketch at the end of this post). If there is a compound response, having the PID appreciably do its job within 4 deadtimes simplifies the tuning and reduces what the PID sees and has to deal with in terms of the consequences of a later response, typically due to recycle effects. The reset time should be greater than 3 deadtimes for a PID, with the exception being a truly deadtime-dominant process (a rather rare case). The more likely scenario, as mentioned before, is that the reset time must be increased because the product of the PID gain and reset time is too small for near-integrating, true integrating and runaway processes. For many loops on vessels and columns, the reset time is several orders of magnitude too small.

    Deadtime also determines the limit of loop performance even if the loop is tuned aggressively. The minimum peak error is proportional to the deadtime, and the minimum integrated absolute error is proportional to the deadtime squared. If a PID gain is detuned, the effect can be equated to an effective deadtime greater than the actual deadtime. In other words, if you spend money to decrease deadtime in the process by better equipment or piping design, or in the automation system by faster valves, measurements and discrete actions, but the PID is not tuned to match the decrease in actual deadtime, you will not see an improvement, due to the effective deadtime from sluggish PID control. For more on how deadtime limits performance, see slides 12-14 of ISA-Mentor-Program-WebEx-PID-Options-and-Solutions.pdf, and for much more see the associated ISA Mentor Program Webinars.

    The effect of an operating point nonlinearity is reduced by decreasing the total deadtime. My goal in pH control on difficult systems was to make the total deadtime as small as possible by better mixing and reagent injection, so that the excursion on the nonlinear titration curve from tighter pH control was as small as possible. In other words, an increase in deadtime causes an increase in the nonlinearity seen, which causes a further deterioration in control (a spiraling effect, literally and figuratively). If the deadtime were zero, the controller gain could theoretically be infinite and control perfect. Without deadtime, I would be out of a job.
    The good news is that negligible deadtime only exists in simulations without volumes in series, transport and mixing delays, heat transfer lags and automation system dynamics. Even if the process deadtime is extremely small, just having an automation system creates a deadtime that must be dealt with. The bad news is that deadtime is extremely detrimental and is not given proper consideration as to what it is telling you and the PID. Also, the misuse of the term “process deadtime” rather than “total loop deadtime” leads people to miss important opportunities to reduce deadtime in the valve, measurement and controller, which is usually more readily done, more in your realm of responsibility, and typically much less expensive than reducing deadtime in the process.
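    Pulling the short-cut identification and tuning threads above together, here is a sketch. The gain rule is the "half the inverse of the product of integrating process gain and deadtime" rule quoted in the tuning-rules post below; the reset relation is an assumed form of the gain-reset-product window discussed above, so treat both values as starting points:

```python
def near_integrating_tuning(max_ramp_pct_per_s, step_out_pct, deadtime_s):
    """Short-cut tuning from the initial ramp of a near-integrating response.
    Ki = max ramp rate in the first four deadtimes / step change in output."""
    ki = max_ramp_pct_per_s / step_out_pct     # integrating process gain, 1/s
    kc = 0.5 / (ki * deadtime_s)               # PID gain: half the inverse of Ki*deadtime
    ti = 4.0 / (kc * ki)                       # reset time, s: keeps the gain-reset
                                               # product large enough to avoid slow
                                               # oscillations (= 8 * deadtime here)
    return kc, ti

# Example: a 0.05 %/s ramp after a 5% output step with 10 s of deadtime
# gives Kc = 5 and Ti = 80 s.
print(near_integrating_tuning(0.05, 5.0, 10.0))
```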
  • PID Controller Tuning Rules

    The post PID Controller Tuning Rules first appeared on the ISA Interchange blog site. This article was authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Nearly every automation system supplier, consultant, control theory professor, and user has a favorite set of PID tuning rules. Many of these experts are convinced their set is the best. A handbook devoted to tuning has over 500 pages of rules. The enthusiasm and sheer number of rules is a testament to the importance of tuning and the wide variety of application dynamics, requirements, and complications. The good news is these methods converge for a common objective. The addition of PID features, such as setpoint lead-lag, dynamic reset and output velocity limits, and intelligent suspension of integral action, enables the use of disturbance rejection tuning to achieve other system requirements, such as maximizing setpoint response, coordinating loops, extending valve packing life, and minimizing upsets to operations and other control loops.

    Potential performance

    The purpose of a control loop is to reject undesired changes, ignore extraneous changes, and achieve desired changes, such as new setpoints. PID control provides the best possible rejection of unmeasured disturbances (regulatory control) when properly tuned. The addition of a simple deadtime block in the external reset path can enhance the PID regulatory control capability beyond that of other controllers with intelligence about the process dynamics built in, such as model predictive control. In plants, unknown and extraneous changes are a reality, and the PID is the best tool if properly tuned. The test time has been significantly reduced for the most difficult loops. Simple equations have been developed to estimate tuning and resulting performance for a unified approach. (Equation derivations and a simple tuning method are in the online version.)

    Control requirements

    The foremost requirement of a PID is to prevent the activation of a safety instrumentation system or a relief device, and to prevent an environmental violation (RCRA pH), compressor surge, or a shutdown from a process excursion. The peak error (maximum deviation from setpoint) is the most applicable metric. The most disruptive upset is an unmeasured step disturbance that would cause an open loop error (Eo) if the PID were in manual or did not exist. The fraction of the open loop error seen in feedback control is more dependent upon the controller gain than the integral time, since the proportional mode provides the initial reaction important for minimizing the peak error. Equation (1) shows that if the product of the controller gain (Kc) and open loop gain (Ko) is much greater than one, the peak error (Ex) is significantly less than the open loop error. The open loop gain (Ko) is the product of the final element, process, and measurement gains, and is the percent change in process variable divided by the percent change in controller output for a setpoint change. For most vessel and column temperature and pressure control loops, the process rate of change is much slower than the deadtime. Consequently, the controller gain can be set large enough that the peak error becomes simply the open loop error divided by the product of the gains. Conversely, for loops dominated by deadtime, the denominator approaches one, and the peak error is essentially the open loop error.
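    The referenced equations are in the online version of the article; a plausible reconstruction of Equation (1) consistent with the limits described above is

    $$E_x \approx \frac{E_o}{1 + K_c K_o}\qquad(1)$$

    so that Ex approaches Eo/(Kc Ko) when Kc Ko >> 1, with a deadtime-dependent factor (omitted here) that drives the denominator toward one for deadtime-dominant loops.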
    The peak error is critical for product quality in the final processing of melts, solids, or paste, such as extruders, sheet lines, and spin lines. Peak errors show up as rejected product due to color, consistency, optical clarity, thickness, size, shape, and, in the case of food, palatability. Unfortunately, these systems are dominated by transportation delays. The peak errors and disruptions from upstream processes must be minimized.

    The most widely cited metric is the integrated absolute error (IAE), which is the area between the process variable and the setpoint. For a non-oscillatory response, the IAE and the integrated error (IE) are the same. Since proportional and integral action are important for minimizing this error, Equation (2) shows the IE increases as the integral time (Ti) increases and the controller gain decreases. Equation (2) also shows how the IE increases with controller execution time (Δtx) and signal filter time (τf). The equivalent deadtime from these terms also decreases the minimum allowable integral time and the maximum allowable controller gain, further degrading the maximum possible performance. In many cases, the original controller tuning is slower than allowed and remains unchanged, so the only deterioration observed is from these terms in the numerator of Equation (2). Studies on the effect of automation system dynamics and innovations can lead to conflicting results because of a lack of recognition of the effect of tuning on the starting case and comparative case performance. In other words, you can readily prove anything you want by how you tune the controller. IE is indicative of the quantity of off-spec product, which can lead to a reduced yield and a higher cost ratio of raw material or recycle processing to product. If the off-spec cannot be recycled or the feed rate cannot be increased, there is a loss in production rate. If the off-spec is not recoverable, there is a waste treatment cost.

    A controller tuned for maximum performance will have a closed loop response to an unmeasured disturbance that resembles two right triangles placed back to back. The base of each triangle is the total loop deadtime and the altitude is the peak error. If the integral time (reset time) is too slow, there is a slower return to setpoint. If the controller gain is too small, the peak error is increased, and the right triangle is larger for the return to setpoint.
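    A hedged reconstruction of Equation (2), consistent with the dependencies just described (the exact grouping of terms may differ in the online version), is

    $$IE \approx \frac{T_i + \Delta t_x + \tau_f}{K_c\,K_o}\,E_o\qquad(2)$$

    The back-to-back triangle picture also gives a quick best-case estimate of the IAE: each triangle has a base of one total loop deadtime (θo) and an altitude of the peak error (Ex), so

    $$IAE_{min} \approx 2\cdot\tfrac{1}{2}\,\theta_o\,E_x = \theta_o\,E_x$$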
Unified approach

The three major types of responses have an initial period of no response that is the total loop deadtime (θ_o), followed by a ramp before the deceleration (inflection point) of a self-regulating response or the acceleration of a runaway response. The percent ramp rate divided by the change in percent controller output is the integrating process gain (K_i) with units of %/sec/%, which reduces to 1/sec. For at least 10 years, slow self-regulating processes with a long time to deceleration have been shown to be effectively identified and tuned as "near integrating" or "pseudo integrating" processes, leading to a "short cut tuning method" where only the deadtime and the initial ramp rate need to be recognized. The tuning test time for these "near integrating" processes can be reduced by over 90% by not waiting for a steady state. Recently, the method was extended to runaway processes and to deadtime dominant self-regulating processes by the use of a deadtime block to compute the ramp rate over a deadtime interval. Furthermore, other tuning rules were found to give the same equation for controller gain when the performance objective was maximum unmeasured disturbance rejection. For example, the use of a closed loop time constant (λ) equal to the total loop deadtime in Lambda tuning yields the same result as the Ziegler-Nichols (ZN) ultimate oscillation and reaction curve methods if the ZN gain is cut in half for smoothness and robustness. Equation (3) shows the controller gain is half the inverse of the product of the integrating process gain and the deadtime. The profession realizes that too large a controller gain will cause relatively rapid oscillations and can instigate instability (growing oscillations). Unrealized for integrating processes is that too small a controller gain can cause extremely slow oscillations that take longer to decay as the gain is decreased. Also unrealized for a runaway process is that a controller gain set less than the inverse of the open loop gain allows an increase in temperature to accelerate to a point of no return. There is a window of allowable controller gains. Also realized is that too small an integral time will cause overshoot and can lead to a reset cycle. Almost completely unrealized is that too slow an integral time will result in a sustained overshoot of the setpoint that gets larger and more persistent as the integral time is increased for integrating processes. Hence, a window of allowable integral times also exists. Equation (4a) provides the right integral time for integrating processes. If we substitute Equation (3) into Equation (4a), we end up with Equation (4b), which is a common expression for the integral time for maximum disturbance rejection. Equation (4a) is extremely important because most integrating processes have a controller gain five to 10 times smaller than allowed. The coefficient in Equation (4b) can be decreased for self-regulating processes as the deadtime becomes larger than the open loop time constant (τ_o) estimated by Equation (5). The tuning used for maximum load rejection can be used for an effective and smooth setpoint response if the setpoint change is passed through a lead-lag. The lag time is set equal to the integral time, and the lead time is set approximately equal to ¼ the lag time. For startup, grade transitions, and optimization of continuous processes and batch operations, the setpoint response is important.
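The equations referenced in this section are likewise in the online version. A reconstruction consistent with the verbal descriptions above ("half the inverse of the product of integrating process gain and deadtime" for Equation 3, and the widely used short cut rules for the integral time window) is:

$$K_c = \frac{0.5}{K_i\,\theta_o} \qquad (3) \qquad\qquad T_i = \frac{4}{K_c\,K_i} \qquad (4a)$$

Substituting (3) into (4a) gives the common expression for maximum disturbance rejection,

$$T_i = 8\,\theta_o \qquad (4b)$$

and the near-integrating approximation relating the self-regulating and integrating descriptions is

$$\tau_o \approx \frac{K_o}{K_i} \qquad (5)$$

The coefficients shown (0.5, 4, and hence 8) are the values implied by the surrounding text; treat the set as a sketch rather than the exact published forms.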
Minimizing the time to reach a new setpoint (rise time) can in many cases maximize process efficiency and capacity. The rise time (T_r) for no output saturation, no setpoint feedforward, and no special logic is the inverse of the product of the integrating process gain and the controller gain, plus the total loop deadtime (Equation 6: T_r = 1/(K_i K_c) + θ_o). Equation (6) is independent of the size of the setpoint change.

Complications, easy solutions

Fast changes in controller output can cause oscillations from a slow secondary loop or a slow final control element. The problem is insidious in that oscillations may only develop for large disturbances or large setpoint changes. Enabling the dynamic reset limit option and providing timely external reset feedback of the secondary loop or final control element process variable will prevent the primary PID controller output from changing faster than the secondary loop or final control element can respond, preventing oscillations. Aggressive controller tuning can also upset operations, disturb other loops, and cause continual crossing of the split range point. Velocity limits can be added to the analog output block, the dynamic reset limit option enabled, and the block process variable used as the external reset to provide directional move suppression to smooth out the response as necessary without retuning. Differences in the closed loop responses of loops can reduce coordination, which is especially important for blending, and can complicate the identification of models for advanced process control systems that manipulate these loops. Process nonlinearities may cause the response in one direction to be faster. Directional output velocity limits and the dynamic reset limit option can be used to equalize closed loop time constants without retuning. Final control element resolution limits (stick-slip) can cause a limit cycle if one or more integrators exist in the loop, and deadband (backlash) can cause a limit cycle if two or more integrators exist. The integrator can be in the process or in the secondary or primary PID controller via the integral mode. Increasing the integral time will make the cycle period slower but cannot eliminate the oscillation. However, a total suspension of integral action when there is no significant change in the process variable and the process is close to the setpoint can stop the limit cycle. The output velocity limits can also be used to prevent oscillations in the controller output from measurement noise that exceed the deadband or resolution limit of a control valve, preventing dither and further reducing valve wear.

Bottom line

Controllers can be tuned for maximum disturbance rejection by a unified method for the major types of processes. PID options in today's DCS, such as setpoint lead-lag, directional output velocity limits, dynamic reset limit, and intelligent suspension of integral action, can eliminate oscillations without retuning. Fewer oscillations reduce process variability, enable better recognition of trends, make identification of dynamics easier, and increase valve packing life.

About the Author

Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis.
Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part-time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions, specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching, and leading the ISA Mentor Program he founded in 2011. A version of this article also was published at InTech magazine.
  • When and How to use Pulse Width Modulation of Controller Outputs

The post, When and How to use Pulse Width Modulation of Controller Outputs, first appeared on ControlGlobal.com's Control Talk blog. In some applications, throttling of the manipulated flows is difficult or impossible. In the biochemical industry, precise (good resolution and sensitivity) throttling valves without any crevices (to meet sanitary requirements) are rather limited (there are exceptions, such as the Fisher Baumann 83000-89000 series). Often, pulse width modulation (PWM) is used instead to turn nutrient and reagent pumps on and off. In the chemical industry, PWM is used to open and close valves whose trim would plug or whose stem would stick if throttled. The sudden burst of flow from on-off action helps flush out the trim and wipe the stem clean. PWM is correspondingly used for small reagent flows, corrosive fluids, and slurries. It is also used to prevent flashing, by means of a valve position that ensures a pressure drop above the critical pressure drop. PWM is also used in temperature loops to turn heaters on and off. Here, it is commonly called "time proportioning control," but the principle is the same. Temperature loops for extruders, silicon crystal pullers, and environmental chambers often use this technique. All the applications of PWM have one thing in common: a capacity to filter or dilute the pulses so that they do not appear as measurement noise in the controlled variable. PWM provides a train of pulses that shows up as a sawtooth in the measurement unless attenuated. The mass of fluid and metal in a reactor, extruder, or crystal puller, or the mass of air in an environmental chamber, must be large enough and the maximum pulse width small enough that the amplitude of the sawtooth seen is negligible. The rangeability achieved by PWM is basically equal to the maximum pulse width divided by the minimum pulse width. Since a valve must reach a set position and a pump must reach a set speed during the pulse, the minimum pulse width is fixed by the pre-stroke deadtime and stroking time of the valve or by the rate limiting of the speed and the acceleration time of the pump. Usually, four seconds is adequate for small valves and pumps. The maximum pulse width is the pulse cycle time when the pulse is almost continuously on. Since the pulse cycle time also sets the time between successive pulses, it adds a maximum deadtime to the loop that is about equal to the cycle time when the pulse is almost continuously off. For an average controller output of 50%, the deadtime added is about ½ the cycle time. Thus, the cycle time chosen represents a compromise among the desires to maximize rangeability, minimize the sawtooth amplitude seen in the measurement, and minimize loop deadtime. An additional consideration is the wear and tear on the final control element. Pumps, agitators, and motor-driven valves have a maximum duty cycle that must not be exceeded. Also, heaters in the motor starter will trip for too short a cycle time because the temperature rise from lack of cool down is equated to an overload current. For valves, periodic opening and closing will eventually cause packing, seat, seal, or trim failure. The consequences and methods of mitigation of pulses are discussed in the 12/15/2014 Control Talk Blog "Controller Attenuation and Resonance Tips". A simple equation to predict the amplitude of pulses seen by the controller after attenuation by the process or a filter is discussed in the 12/02/2014 Control Talk Blog "Measurement Attenuation and Deception Tips". The generation of a pulse train is done by special output cards or by the configuration of function blocks on the PID output.
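As an illustrative worked example of these tradeoffs (the numbers are hypothetical, using the four-second minimum pulse width suggested above), take a 40-second cycle time:

$$\text{rangeability} \approx \frac{T_{cycle}}{T_{min}} = \frac{40\ \text{s}}{4\ \text{s}} = 10{:}1, \qquad \theta_{added} \approx \left(1 - \frac{CO}{100\%}\right) T_{cycle} = 0.5 \times 40\ \text{s} = 20\ \text{s at } CO = 50\%$$

The added-deadtime expression is simply one way to interpolate between the two cases given in the text (deadtime near the full cycle time when the pulse is almost continuously off, and about half the cycle time at 50% output). Halving the cycle time halves the added deadtime and the sawtooth amplitude but also halves the rangeability.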
The heart of the configuration is a ramp that resets itself periodically. An integrator (INT) function block is employed to generate the ramp. The configuration depends upon which version of Distributed Control System (DCS) or Programmable Logic Controller (PLC) is used. For an integrator that ramps up toward a setpoint, the input to the integrator is set equal to 100% divided by the desired cycle time. The integrator setpoint is set slightly larger than 100%. The "on" time or pulse width is determined by comparing the percent ramp value (integrator output) to the percent controller output via a high signal monitor (HSM) block. When the ramp value exceeds the controller output, a discrete is set equal to one (true), which opens a discrete output or transfers in an analog output value that corresponds to the closed valve position or minimum speed. If the controller output drops below the minimum pulse width, the pulse is turned off by transferring in a negative value before the ramp value is used as the operand of the HSM block on the output of the INT block. The PID low output limit should be set slightly less than this minimum pulse width. The functionality of blocks depends upon the DCS or PLC used, and any configuration must be extensively tested before being used online. For viscous fluids and slurries, a precise control valve may be continuously throttled until the valve position gets so small that laminar flow or plugging can occur. At this point (e.g., below 20% PID output), PWM starts. The throttling valve position then stays open (e.g., at 20%) and an inexpensive on-off valve in series with the control valve is opened and closed by PWM. There are many other applications of PWM. Pulsed flows have been shown to increase the yield of reactors, the separation in distillation columns, and the combustion efficiency of burners. Pulsed reagent flow has been very successfully used to mimic a titration for batch pH control. While many well-designed pulsed strategies can work for this application, PWM on a proportional-only pH PID controller retains a conventional operator interface via the PID operator faceplate and tuning via the PID gain setting. Also, the controller output can be transferred in for the analog output to reduce batch processing time by providing pulses that are not only longer but also larger (pulse width plus pulse amplitude modulation). The gain of the manipulated variable is now nonlinear and proportional to the controller output. However, for proportional-only control of batch pH processes, this gain change may be advantageous, offsetting the low pH process gain as the operating point moves from the flat portion of the titration curve at the beginning of the batch cycle to the steep part of the curve at the end of the batch. This is an example of how a continuous control technique is also useful for batch processing. PWM also dramatically reduces the effects of deadband and resolution limits in the control valve or variable speed drive, assuming the pulse amplitude is at least 5 times as large as the suspected deadband and resolution limit. This normally is the case if the amplitude is > 5%. However, for valves designed for tight shutoff, the backlash and stiction may be as large as 10%, requiring a 50% amplitude. You may want to check your pulse now to see how excited you are about PWM opportunities.
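A minimal sketch of the ramp-and-compare scheme described above, written here in Python purely for illustration. The class name, scan-based integrator, and default times are assumptions; an actual implementation uses the vendor's INT, HSM, and transfer blocks and, as noted above, must be extensively tested before use online.

```python
# Minimal sketch of PWM generation via a self-resetting integrator ramp
# compared against the PID output. Illustrative only; a real DCS/PLC
# configuration uses vendor function blocks (INT, HSM) and needs testing.

class PWMGenerator:
    def __init__(self, cycle_time=40.0, min_pulse_width=4.0, scan_time=0.1):
        self.cycle_time = cycle_time            # pulse cycle time, seconds
        self.min_pulse_width = min_pulse_width  # e.g., 4 s for small valves/pumps
        self.scan_time = scan_time              # block execution interval, seconds
        self.ramp = 0.0                         # integrator output, 0-100%

    def execute(self, controller_output):
        """One scan. Returns True while the pulse is 'on' (valve open / pump on)."""
        # Integrator input is 100% divided by the cycle time, so the ramp
        # spans 0-100% once per cycle, then resets itself.
        self.ramp += (100.0 / self.cycle_time) * self.scan_time
        if self.ramp >= 100.0:
            self.ramp = 0.0
        # Below the minimum pulse width the pulse is forced off, here by
        # substituting a value the ramp can never be below (the text's
        # "transferring in a negative value").
        min_output = 100.0 * self.min_pulse_width / self.cycle_time
        effective = controller_output if controller_output >= min_output else -1.0
        # The HSM comparison: ramp above the output means pulse off, so
        # the pulse is on while the ramp is still below the output.
        return self.ramp < effective
```

With the defaults above, a 50% controller output gives 20 seconds on and 20 seconds off per 40-second cycle, matching the behavior described in the text.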
  • Getting the most out of valve positioners

The post, Getting the Most Out of Valve Positioners, first appeared on ControlGlobal.com's Control Talk blog. Today’s smart digital valve positioners have incredible capability and flexibility as to tuning, performance, and diagnostics. Here we look at how to get the most out of these positioners by tuning and by making sure the valve assembly does not hinder performance and gives the position feedback needed. First of all, the positioner needs to know the actual position of the internal flow element (e.g., plug, ball, or disk). “High performance” control valves often have the lowest cost and least leakage, and often offer straightforward compliance with the piping specification for isolation valves. The appearance of a win-win situation is the root cause of poor performance that often cannot be fixed by even the best valve positioner. The feedback measurement is often on the actuator shaft. Since “high performance” valves tend to be rotary valves, there is backlash, and consequently deadband, in the linkages and connections that translate actuator movement into valve stem movement. Then, due to the high sealing friction, particularly near the closed position where the plug, ball, or disk rubs against the seal, there is considerable stiction and poor resolution. In many cases, the valve stem may be moving but the actual plug, ball, or disk is not. This stem windup (twisting) may eventually cause the internal flow element to jump to a new position, overshooting the desired position. A smart valve positioner that is measuring actuator shaft position doesn’t see what is really happening in terms of stem and internal flow element position. So what you need is a true throttling valve, not an isolation or on-off valve posing as a control valve. A true throttling valve has a diaphragm actuator, splined shaft-to-stem connections, a stem that is integrally cast with the internal flow element (no stem-to-element connections), and no rubbing of the element against the seal once it opens. To achieve isolation, you then install a cheap, low-leakage on-off valve in series downstream of the throttling valve. For pH control, the on-off valve should be close to the injection point to reduce the reagent delivery delay upon opening. Diaphragm actuators are now available with much higher actuation pressures, enabling their use on larger valves and higher pressure drops. If you still need to go to a piston actuator, the one with the least sliding friction, giving the best resolution (best sensitivity), is the best choice (provided the reliability is good). Again, the position shaft connections need to be splined (keylock and pinned connections have a surprising amount of play, causing backlash and shaft windup). The positioner feedback mechanism must be properly adjusted to give as accurate an indication of position as possible. Not all smart positioners have good sensitivity and sufficient air flow capacity. Spool type positioners and low air consumption positioners will cause long response times for small and large changes in signal, respectively. Most tests done for step changes in signal of 2% to 10% don’t reveal a problem. High performance valves will be lying to even the best positioners, making diagnostics and supposed step response capabilities invalid. For faster response on fast loops, volume boosters should not replace positioners but should be used on the output of the positioner with the booster bypass cracked open for stability. Given that you have a good actuator, throttling valve, and positioner, you are still not home free until you tune the positioner.
We know from PID control that a loop’s performance is only as good as the PID tuning. In fact, the integrated absolute error and peak error are functions of the tuning settings. A loop with great valve, process, and sensor dynamics will perform as badly as a loop with poor dynamics if the controller is poorly tuned. Positioners have traditionally been high gain proportional-only controllers. If a high gain, sensitive pneumatic relay is used in the positioner, position control can be quite tight, since the offset from setpoint for a proportional-only controller is inversely proportional to the controller gain. The offset is also of little consequence, since the effect is rather minor and short term, with the process controller correcting the offset. What the process controller needs is an immediate, fast total response. There are much larger nonlinearities and offsets that the process controller has to deal with. The original idea of cascade control is to make the inner loop (in this case the positioner) as fast as possible by maximizing the inner controller gain, which means going to proportional or proportional plus derivative control. Integral action in the inner loop is hurtful unless we are talking about a secondary flow loop for ratio control or feedforward control. The advent of smarter positioners has led to much more complex control algorithms that include integral action. The use of integral action may make valve step response tests look better by the final position more closely matching the signal. Not realized is that the positioner gain has to be reduced and that integral action in the positioner increases the instances of limit cycles. In fact, with the process controller in manual (positioner signal constant), a limit cycle will develop from stiction in the valve unless an integral deadband is set. Also, the increase in the number of integrators in the control system means that a process controller with integral action will develop a limit cycle from backlash, since there are now two integrators. So here we have the common situation where, in an attempt to make appearances look better, we have created a problem due to a lack of fundamental understanding. Many positioners now come with the integral action turned on as a default. The solution is to omit integral action and use the most aggressive gain setting. For the Digital Valve Controller, this means going to travel control instead of pressure control. Overshoot of the setpoint is not a problem as long as the oscillation quickly settles out. Some overshoot helps in terms of working through deadband and resolution limits faster and increasing the size of the step seen by the positioner algorithm with its sensitivity limits. In fact, a lead/lag with the lead time greater than the lag time on the input signal is sometimes used to accomplish the same result. You should not get hung up on the exact change in position for a change in signal. For small signal changes, the apparent linearity is going to look bad because the resolution as a fraction of a small step is large. The really important thing is that the position changes quickly and the 86% response time is fast. Positioners with poor sensitivity and tuning may have a response time that is an order of magnitude larger than what is achievable. What the process loop really wants is for the manipulated variable to respond quickly to its demands and corrections. Also, for backlash, limit cycles can be eliminated, or at least their amplitude reduced, by a higher gain.
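A one-line reminder of why the proportional-only offset is tolerable (this is the standard proportional-controller relationship, with ΔCO standing for the change in relay or actuator output needed to hold the new position):

$$\text{offset} = \frac{\Delta CO}{K_c}$$

Doubling the positioner gain halves the steady-state offset, and whatever small offset remains is trimmed out by the process controller, as described above.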
For much more on how to get good valve rangeability and a sensitive and fast response see the 5/01/2016 Control Talk Blog “ Sizing up Valve Sizing Opportunities ” and the Control May 2016 feature article “ How to specify valves and positioners that don’t compromise control ”. Make sure this is not just talk. Put yourself into a position to get the most out of valve positioners.
  • Uncommon Knowledge for Achieving Best Feedforward and Ratio Control

Here are key, relatively straightforward principles and practices, not commonly known, that can make a world of difference in the performance of feedforward and ratio control systems. Also, guidance is offered for the setup and automatic identification of the needed parameters by the use of software designed to tune PID feedback controllers. Even though this capability may be well within the existing functionality of the software, often the instructions needed are insufficient or missing in action. The desired and undesired changes in process outputs can largely be traced back to changes in the flow rate of process or utility streams. These changes are relatively fast and can propagate downstream as changes in stream temperature and composition as well as flow. In comparison, the changes in raw material composition and temperature in storage tanks are relatively slow and small. Feedforward and ratio control seek to maintain the stream flow, and consequently the stream composition and temperature, at the values that would be seen in an accurate Process Flow Diagram (PFD). Unrealized is that most PFDs, and the feedforward gains and ratios used, could be updated online as the flow rates are corrected by the feedback controller, given better visibility of the correction. Also, a PI controller similar to a valve position controller (VPC) can simply be set up to automatically adapt the feedforward gain or ratio by seeking to gradually and smoothly minimize the feedback correction. We must first define some nomenclature. The “leader” flow leads the way and may be a disturbance flow or a manipulated flow that first sets the direction and production rate. Feedforward or ratio control sets the “follower” flow that seeks to stay in a given ratio to the “leader” flow. This simple strategy enables preemptive disturbance rejection to reduce load upsets, coordination for changes in production rate or state (e.g., startup or transition) to reduce unbalances, and decoupling to reduce interaction. It is most important that the feedforward or ratio correction in a manipulated “follower” flow arrive in the process at the same time and at the same point as the “leader” flow. The correction in the “follower” flow must not arrive too soon, causing inverse response, or too late, creating a secondary disturbance. The criterion for what is too soon is tight, and the criterion for what is too late is loose. Arriving just 10% of the loop deadtime too soon is significantly disruptive, whereas arriving 100% of the deadtime too late may just require a reduction in the ratio or feedforward gain. Overcorrection can cause the opposite response and is much worse than undercorrection. Since we never know the feedforward gain or ratio exactly, due to uncertainties in measurements and process stream conditions, and our feedforward timing is never perfect, the full theoretical feedforward gain or ratio is not used in the setting. Often the actual value used is less than 80% of the theoretical value. To get the timing of the “follower” flow right, dynamic compensation of the feedforward or ratio signal is done by inserting a deadtime block and a lead/lag block in the “follower” flow signal. The feedforward or ratio control block deadtime is set equal to the deadtime for a change in the “leader” or load flow minus the loop deadtime for a change in the “follower” flow, in terms of the resulting change in the controlled process variable (PV).
If the “leader” deadtime is significantly less than the “follower” deadtime, you are out of luck unless you can insert a deadtime in the setpoint changes to the “leader” flow. In this case, the feedforward or ratio control deadtime is the total deadtime from a “leader” setpoint change minus the deadtime from a “follower” flow change. The lead time in the lead/lag block is set equal to the lag from a change in the “follower” flow in terms of affecting the PV (lead time setting = “follower” lag). The lag time in the lead/lag block is set equal to the lag from a change in the “leader” flow resulting in a change in the PV (lag time setting = “leader” lag). The lag time in the lead/lag block should be at least 1/5 the lead time to prevent an erratic and noisy “follower” flow. The feedforward gain or ratio is set to make the contribution of the “follower” flow cancel out that of the “leader” flow. The feedforward scale of a feedforward summer for direct manipulation of a final control element should be set to match the final control element capacity, with signal characterization added to reduce nonlinearity. You can use software for tuning a PID controller to identify the “follower” and “leader” deadtimes, lag times, and process gains. The process gains should be expressed in terms of the change in the PV divided by the change in the “leader” flow and the “follower” flow, realizing that the feedforward scale or secondary controller takes care of the final control element gain. If there is no “leader” flow controller, a dummy controller can be set up as if there were such a flow controller and the tuning software used. Of course, changes in the “leader” flow must be made for identification of dynamics. Due to the lack of guidance on how to identify the feedforward gain and dynamic compensation for PID control, Model Predictive Control (MPC) may be used, because it automatically does this during the test and identification process, which is largely automated. If there are multiple feedforwards, complex or compound dynamics due to recycle streams or heat integration, or a need for full decoupling, MPC is a much better choice unless you have exceptional PID expertise. However, when ratio control is needed for visibility and accessibility, many of the same practices cited here for PID control are also needed for an MPC solution. Furthermore, the temptation to use a controlled variable that is a ratio of two flows only works over a narrow flow range because of the extreme nonlinearity created by the division of one flow by another. As the divisor significantly decreases, the process gain greatly increases. For fast processes where the process deadtime or secondary lag is small or nearly the same for both the “leader” and “follower” flows, the dynamic compensation largely depends upon the dynamic response of the secondary flow or speed loop and the final control element (control valve or variable frequency drive). While lags and delays in the process response of the PV may be similar or negligible for both “follower” and “leader” flows, the “follower” flow correction must work through the secondary flow or speed loop and a final control element with uncertain and non-ideal dynamics. Thus, poor secondary controller tuning and/or control valve or VFD nonlinearity, slow response time, deadband, and resolution limits deteriorate feedforward and ratio control besides feedback control.
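A minimal sketch of the dynamic compensation settings described above, in Python for illustration. The class name and the discrete first-order lead/lag implementation are assumptions; in a DCS these would be the vendor's deadtime and lead/lag blocks with the same settings.

```python
from collections import deque

class DynamicCompensation:
    """Deadtime plus lead/lag applied to the 'leader' flow before it is used
    for feedforward or ratio control, with the settings from the text:
    deadtime = leader deadtime - follower deadtime,
    lead = follower lag, lag = leader lag (lag kept >= 1/5 of lead)."""

    def __init__(self, leader_deadtime, follower_deadtime,
                 leader_lag, follower_lag, scan_time=1.0):
        self.dt = scan_time
        delay = max(leader_deadtime - follower_deadtime, 0.0)
        # Fixed-length buffer realizes the deadtime (at least one scan
        # of delay in this simple sketch).
        self.buffer = deque([0.0] * max(int(delay / scan_time), 1))
        self.lead = follower_lag                    # lead time setting
        self.lag = max(leader_lag, self.lead / 5.0) # lag >= 1/5 lead
        self.state = 0.0                            # lag filter state

    def execute(self, leader_flow):
        # Deadtime block: delay the leader flow signal.
        self.buffer.append(leader_flow)
        delayed = self.buffer.popleft()
        # Lead/lag block: first-order lag filter plus lead action
        # (output = filtered signal + lead time * filtered derivative).
        prev = self.state
        self.state += (self.dt / (self.lag + self.dt)) * (delayed - self.state)
        return self.state + self.lead * (self.state - prev) / self.dt

# The compensated leader flow then sets the follower flow setpoint, e.g.:
#   follower_sp = ratio * compensated_leader + feedback_bias
```

At steady state the lead/lag gain is one, so the compensation alters only the timing, not the feedforward gain or ratio itself.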
If there is no secondary flow loop and the correction is done by direct manipulation of the final control element, a feedforward summer is best despite what might be said in the literature about whether the intercept or slope of feedforward changes. These plots do not take into account that bias errors exist in valve positions and leader flow measurements and that uncertainty and nonlinearity is more robustly handled by a bias correction as seen in model predictive control and neural networks. A feedforward multiplier introduces a nonlinearity that is problematic if there is no secondary flow or speed loop. Most gas and steam pressure systems do not have a secondary loop and therefore use feedforward summers. For steam header pressure controller a feedforward summer can be used to provide decoupling and minimize upsets to upper headers from lower headers. The change in steam flow through a letdown valve to the respective header is a feedforward input to the letdown valve of a higher pressure header. If the valves are linear and the feedforward scale is set based on the valve flow capacity, the theoretical feedforward gain is one. If the letdown valves have similar dynamics, no dynamic compensation of this feedforward signal for decoupling is needed. This simple half decoupling can be carried up through multiple headers so that changes in the lowest pressure header steam use or generation does not upset any of the higher pressure headers.  Changes in steam use and generation are also a feedforward signal to the letdown valve for the respective header. Again, linear valves with a feedforward scale set based on valve flow capacity makes the theoretical feedforward gain one if consistent steam flow units are used. If the steam suppliers entering the header and the steam users exiting the header are within a few thousand feet of each other, the process deadtime and secondary lag is the same for both changes in supplier or user flow and letdown flow.  In this case, the dynamic compensation needed is simply based on the letdown valve response time.  For a letdown valve 86% response time (T86) of 4 seconds, a Lead time setting of 4 seconds and a Lag time setting of 1 second is a good starting point to help get through the resolution limit for the integrating or dominant lag process response of steam header pressure. If there is a secondary flow or speed loop, a ratio control block and bias/gain block is used to provide operator visibility and accessibility important for changes in process conditions and states. This is a significant consideration often lacking in the operator interface. For level, composition, pH and temperature control of back mixed volumes (e.g. liquid vessels with recirculation and/or agitation and columns), the bias is corrected by feedback control effectively being a feedforward summer. For composition, quality, temperature and pH control by manipulation of feeds for plug flow volumes (e.g., fluidized catalyst bed reactors, static mixers, clarifiers, inline polymer reactors, conveyors, settling tanks, extruders, and sheet lines), the ratio is corrected by feedback control effectively being a feedforward multiplier.  Whether the ratio or bias is corrected is determined by whether a significant process time constant is present due to back mixing or missing due to plug flow.  The manipulation of heating or cooling for control may necessitate the correction of the bias due to the introduction of a heat transfer lag. 
Whatever is not corrected by feedback control can be adapted by the setup of a simple PI controller with mostly integral action and effectively directional move suppression that seeks to make the feedback correction negligible (bias and ratio correction factor approach zero and one, respectively). For ratio control of reactants, dynamic compensation is not normally used. However, unbalances in the stoichiometric ratio of reactants can accumulate in back mixed volumes with prolonged delayed effects and promote more immediate and abrupt effects in plug flow volumes for changes in production rate, from correction by analyzers and changes in state. The more commonly cited solution is to tune all the reactant flow loops to have the same closed loop time constant of the reactant loop with the poorest and slowest response. However, this degrades the ability of the faster and better loops to deal with disturbances such as pressure, temperature or composition changes to mass flow loops particularly when coriolis meters are used. The better solution is to tune all the reactant flow controllers for the fastest disturbance rejection and use an equal setpoint filter on the reactant flow loops to make the “follower” loops respond in unison via ratio control with changes in the “leader” loop setpoint.  For ratio control of column process flows for temperature or composition control, the relative location that the process flow enters the column has a great effect on the Lead/Lag settings. For level control, the dynamic compensation of the ratio control is usually not needed if a process flow is manipulated that directly enters or exits the volume. However, if level control is achieved by the manipulation of heat transfer, overhead condensation or by the addition of a stream to another stream or volume introducing delays and lags in the “follower” response, dynamic compensation is necessary.  Final control element and flow measurement rangeability can be a severe limiting factor. Oversized meters and control valves and improper installation make the actual rangeability much less than the stated rangeability by the supplier. Sliding stem low friction packing control valves with diaphragm actuators and digital valve controllers and VFDs with pulse width modulated inverters and speed to torque control and low static head, and magmeters and coriolis flow meters offer the best rangeability.  If there is good final control element rangeability, insufficient flow meter rangeability can be dealt with by an inferential flow measurement computed from valve position or VFD speed at low flows. There must be a smooth transition from measured to inferred flow.  The feedforward gain or ratio factor should be reduced since the inferential measurement is less accurate. Note the VFD rangeability and linearity dramatically deteriorates as the static head increases and approaches the system pressure drop from friction. For a better understanding of this and much more see a preview of a future ISA Mentor Program WebEx presentation “ ISA-Mentor-Program-WebEx-Feedforward-and-Ratio-Control.pdf ” and of course, buy one of my books. I have found a book to update that will showcase a great mind. Now I just need to find the great mind.
  • Uncommon Knowledge for Achieving Best Feedforward and Ratio Control

    The post, Uncommon Knowledge for Achieving Best Feedforward and Ratio Control , first appeared on ControlGlobal.com's Control Talk blog. Here are key, relatively straightforward principles and practices, not commonly known, that can make a world of difference in the performance of feedforward and ratio control systems. Guidance is also offered for the setup and automatic identification of the needed parameters using software designed to tune PID feedback controllers. Even though this capability may be well within the existing functionality of the software, the instructions needed are often insufficient or missing in action.

The desired and undesired changes in process outputs can largely be traced back to changes in the flow rate of process or utility streams. These changes are relatively fast and can propagate downstream as changes in stream temperature and composition besides flow. In comparison, the changes in raw material composition and temperature in storage tanks are relatively slow and small. Feedforward and ratio control seek to maintain the stream flow, and consequently the stream composition and temperature, at the values that would be seen in an accurate Process Flow Diagram (PFD). Often unrealized is that most PFDs, and the feedforward gains and ratios used, could be updated online as the flow rates are corrected by the feedback controller, given better visibility of the correction. Also, a PI controller similar to a valve position controller (VPC) can simply be set up to automatically adapt the feedforward gain or ratio by seeking to gradually and smoothly minimize the feedback correction.

We must first define some nomenclature. The “leader” flow leads the way and may be a disturbance flow or a manipulated flow that first sets the direction and production rate. Feedforward or ratio control sets the “follower” flow that seeks to stay in a given ratio to the “leader” flow. This simple strategy enables preemptive disturbance rejection to reduce load upsets, coordination for changes in production rate or state (e.g., startup or transition) to reduce unbalances, and decoupling to reduce interaction.

It is most important that the feedforward or ratio correction in a manipulated “follower” flow arrive in the process at the same time and at the same point as the “leader” flow. The correction in the “follower” flow must not arrive too soon, causing inverse response, or too late, creating a secondary disturbance. The criterion for what is too soon and too late is tight and loose, respectively. Arriving just 10% of the loop deadtime too soon is significantly disruptive, whereas arriving 100% of the deadtime too late may just require a reduction in the ratio or feedforward gain. Overcorrection can cause the opposite response and is much worse than undercorrection. Since we never know the feedforward gain or ratio exactly, due to uncertainties in measurements and process stream conditions, and since our feedforward timing is never right, the full theoretical feedforward gain or ratio is not used in the setting. Often the actual value used is less than 80% of the theoretical value. To improve the timing of the “follower” flow, dynamic compensation of the feedforward or ratio control is done by insertion of a Deadtime block and a Lead/Lag block in the “follower” flow signal used for feedforward or ratio control.
The feedforward or ratio control block deadtime is equal to the deadtime for a change in the “leader” or load flow minus the loop deadtime for a change in the “follower” flow, each in terms of the resulting change in the controlled process variable (PV). If the “leader” deadtime is significantly less than the “follower” deadtime, you are out of luck unless you can insert a deadtime in the setpoint changes to the “leader” flow. In this case, the feedforward or ratio control deadtime is the total deadtime from a “leader” setpoint change minus the deadtime from a “follower” flow change. The Lead time in the Lead/Lag block is set equal to the lag from a change in the “follower” flow in terms of affecting the PV (Lead time setting = “follower” lag). The Lag time in the Lead/Lag block is set equal to the lag from a change in the “leader” flow resulting in a change in the PV (Lag time setting = “leader” lag). The Lag time should be at least 1/5 the Lead time to prevent an erratic and noisy “follower” flow. The feedforward gain or ratio is set to make the contribution of the “follower” flow cancel out the “leader” flow. The feedforward scale of a feedforward summer for direct manipulation of a final control element should be set to match the final control element capacity, with signal characterization added to reduce nonlinearity.

You can use software for tuning a PID controller to tell you the “follower” and “leader” deadtime, lag time and process gain. The process gains should be expressed as the change in the PV divided by the change in “leader” flow and “follower” flow, realizing that the feedforward scale or secondary controller takes care of the final control element gain. If there is no “leader” flow controller, a dummy controller can be set up as if there were such a flow controller and the tuning software used. Of course, changes in the “leader” flow must be made for identification of the dynamics. Due to the lack of guidance on how to identify the feedforward gain and dynamic compensation for PID control, Model Predictive Control (MPC) may be used because it does this automatically during its largely automated test and identification process. If there are multiple feedforwards, complex or compound dynamics due to recycle streams or heat integration, or a need for full decoupling, MPC is a much better choice unless you have exceptional PID expertise. However, when ratio control is needed for visibility and accessibility, many of the same practices cited here for PID control are also needed for an MPC solution. Furthermore, the temptation to use a controlled variable that is a ratio of two flows only works over a narrow flow range because of the extreme nonlinearity created by division of one flow by another: as the divisor significantly decreases, the process gain greatly increases.

For fast processes where the process deadtime or secondary lag is small or nearly the same for both the “leader” and “follower” flows, the dynamic compensation largely depends upon the dynamic response of the secondary flow or speed loop and final control element (control valve or variable frequency drive). While lags and delays in the process response of the PV may be similar or negligible for both “follower” and “leader” flows, the “follower” flow correction must work through the secondary flow or speed loop and a final control element with uncertain and non-ideal dynamics.
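As a concrete illustration, here is a minimal sketch in Python of the Deadtime and Lead/Lag blocks and the setting rules just described. The identified dynamics, ratio and execution interval are made-up illustrative values of the kind tuning software would report, not numbers from any particular application:

```python
# Minimal sketch of feedforward dynamic compensation: a Deadtime block and a
# Lead/Lag block applied to the "leader" flow signal. All identified dynamics
# below are hypothetical.

class DeadtimeBlock:
    def __init__(self, deadtime_s, dt_s):
        # FIFO buffer holding deadtime_s / dt_s past samples
        self.buf = [0.0] * max(int(round(deadtime_s / dt_s)), 0)

    def update(self, x):
        if not self.buf:
            return x                   # zero deadtime: pass through
        self.buf.append(x)
        return self.buf.pop(0)

class LeadLagBlock:
    def __init__(self, lead_s, lag_s, dt_s):
        self.lead, self.lag, self.dt = lead_s, lag_s, dt_s
        self.y = 0.0
        self.x_prev = 0.0

    def update(self, x):
        # Discrete (lead*s + 1)/(lag*s + 1): a lag filter plus a lead term
        # proportional to the input's rate of change.
        a = self.dt / (self.lag + self.dt)
        self.y += a * (x + self.lead * (x - self.x_prev) / self.dt - self.y)
        self.x_prev = x
        return self.y

dt = 0.5                                          # module execution interval, s
leader_deadtime, follower_deadtime = 12.0, 4.0    # identified deadtimes, s
leader_lag, follower_lag = 15.0, 6.0              # identified lags, s

ff_deadtime = max(leader_deadtime - follower_deadtime, 0.0)
lead_time = follower_lag                          # Lead = "follower" lag
lag_time = max(leader_lag, lead_time / 5.0)       # Lag = "leader" lag, >= Lead/5
ratio = 0.8 * 1.25    # ~80% of a theoretical ratio (here 1.25) for robustness

deadtime_block = DeadtimeBlock(ff_deadtime, dt)
leadlag_block = LeadLagBlock(lead_time, lag_time, dt)

def follower_flow_setpoint(leader_flow):
    """Compute the "follower" flow setpoint from the measured "leader" flow."""
    return ratio * leadlag_block.update(deadtime_block.update(leader_flow))
```

Note that the derating of the theoretical ratio and the Lag ≥ Lead/5 constraint are applied in the settings, not buried in the blocks, so they stay visible to whoever maintains the module.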
Poor secondary controller tuning and/or control valve or VFD nonlinearity, slow response time, deadband and resolution limits thus deteriorate feedforward and ratio control as well as feedback control. If there is no secondary flow loop and the correction is done by direct manipulation of the final control element, a feedforward summer is best, despite what might be said in the literature about whether the intercept or slope of the feedforward relationship changes. These plots do not take into account that bias errors exist in valve positions and leader flow measurements, and that uncertainty and nonlinearity are more robustly handled by a bias correction, as seen in model predictive control and neural networks. A feedforward multiplier introduces a nonlinearity that is problematic if there is no secondary flow or speed loop. Most gas and steam pressure systems do not have a secondary loop and therefore use feedforward summers.

For a steam header pressure controller, a feedforward summer can be used to provide decoupling and minimize upsets to upper headers from lower headers. The change in steam flow through a letdown valve to the respective header is a feedforward input to the letdown valve of a higher pressure header. If the valves are linear and the feedforward scale is set based on the valve flow capacity, the theoretical feedforward gain is one. If the letdown valves have similar dynamics, no dynamic compensation of this feedforward signal for decoupling is needed. This simple half decoupling can be carried up through multiple headers so that changes in the lowest pressure header steam use or generation do not upset any of the higher pressure headers. Changes in steam use and generation are also a feedforward signal to the letdown valve for the respective header. Again, linear valves with a feedforward scale set based on valve flow capacity make the theoretical feedforward gain one if consistent steam flow units are used. If the steam suppliers entering the header and the steam users exiting the header are within a few thousand feet of each other, the process deadtime and secondary lag are essentially the same for changes in supplier or user flow and letdown flow. In this case, the dynamic compensation needed is based simply on the letdown valve response time. For a letdown valve 86% response time (T86) of 4 seconds, a Lead time setting of 4 seconds and a Lag time setting of 1 second is a good starting point to help get through the resolution limit for the integrating or dominant-lag process response of steam header pressure.

If there is a secondary flow or speed loop, a ratio control block and a bias/gain block are used to provide the operator visibility and accessibility important for changes in process conditions and states. This is a significant consideration often lacking in the operator interface. For level, composition, pH and temperature control of back mixed volumes (e.g., liquid vessels with recirculation and/or agitation, and columns), the bias is corrected by feedback control, effectively creating a feedforward summer. For composition, quality, temperature and pH control by manipulation of feeds for plug flow volumes (e.g., fluidized catalyst bed reactors, static mixers, clarifiers, inline polymer reactors, conveyors, settling tanks, extruders, and sheet lines), the ratio is corrected by feedback control, effectively creating a feedforward multiplier. Whether the ratio or bias is corrected is determined by whether a significant process time constant is present due to back mixing or missing due to plug flow.
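To make the summer-versus-multiplier distinction concrete, here is a minimal sketch of a ratio station with a bias/gain trim. The class and variable names are hypothetical, and the scaling of the feedback correction into flow units is application specific:

```python
# Minimal sketch: feedback trims the bias for back mixed volumes (feedforward
# summer behavior) or the ratio for plug flow volumes (feedforward multiplier
# behavior). Numbers are illustrative.

BACK_MIXED, PLUG_FLOW = "back_mixed", "plug_flow"

class RatioStation:
    def __init__(self, base_ratio, volume_type):
        self.base_ratio = base_ratio     # operator-visible ratio, e.g., from the PFD
        self.bias = 0.0                  # trimmed by feedback for back mixed volumes
        self.ratio_factor = 1.0          # trimmed by feedback for plug flow volumes
        self.volume_type = volume_type

    def apply_feedback_trim(self, correction):
        """correction: output of the primary (temperature, pH, composition) PID."""
        if self.volume_type == BACK_MIXED:
            self.bias = correction               # summer: the intercept moves
        else:
            self.ratio_factor = 1.0 + correction # multiplier: the slope moves

    def follower_setpoint(self, leader_flow):
        return self.base_ratio * self.ratio_factor * leader_flow + self.bias

station = RatioStation(base_ratio=1.25, volume_type=PLUG_FLOW)
station.apply_feedback_trim(0.04)          # primary loop asks for +4%
print(station.follower_setpoint(100.0))    # 1.25 * 1.04 * 100 = 130.0
```

Keeping the base ratio, the ratio correction factor and the bias as separate, displayed quantities is what gives the operator the visibility and accessibility noted above.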
The manipulation of heating or cooling for control may necessitate correction of the bias due to the introduction of a heat transfer lag. Whatever is not corrected by feedback control can be adapted by the setup of a simple PI controller with mostly integral action and, effectively, directional move suppression that seeks to make the feedback correction negligible (the bias and ratio correction factor approach zero and one, respectively).

For ratio control of reactants, dynamic compensation is not normally used. However, unbalances in the stoichiometric ratio of reactants can accumulate in back mixed volumes with prolonged, delayed effects, and can cause more immediate and abrupt effects in plug flow volumes, for changes in production rate, corrections by analyzers and changes in state. The more commonly cited solution is to tune all the reactant flow loops to have the same closed loop time constant as the reactant loop with the poorest and slowest response. However, this degrades the ability of the faster and better loops to deal with disturbances such as pressure, temperature or composition changes to mass flow loops, particularly when Coriolis meters are used. The better solution is to tune all the reactant flow controllers for the fastest disturbance rejection and use an equal setpoint filter on the reactant flow loops to make the “follower” loops respond in unison via ratio control with changes in the “leader” loop setpoint.

For ratio control of column process flows for temperature or composition control, the relative location where the process flow enters the column has a great effect on the Lead/Lag settings. For level control, dynamic compensation of the ratio control is usually not needed if the manipulated process flow directly enters or exits the volume. However, if level control is achieved by the manipulation of heat transfer or overhead condensation, or by the addition of a stream to another stream or volume, introducing delays and lags in the “follower” response, dynamic compensation is necessary.

Final control element and flow measurement rangeability can be a severe limiting factor. Oversized meters and control valves and improper installation make the actual rangeability much less than the rangeability stated by the supplier. Sliding stem control valves with low friction packing, diaphragm actuators and digital valve controllers; VFDs with pulse width modulated inverters, speed-to-torque control and low static head; and magmeters and Coriolis flow meters offer the best rangeability. If there is good final control element rangeability, insufficient flow meter rangeability can be dealt with by an inferential flow measurement computed from valve position or VFD speed at low flows. There must be a smooth transition from measured to inferred flow. The feedforward gain or ratio factor should be reduced, since the inferential measurement is less accurate. Note that VFD rangeability and linearity dramatically deteriorate as the static head increases and approaches the system pressure drop from friction.

For a better understanding of this and much more, see a preview of a future ISA Mentor Program WebEx presentation “ ISA-Mentor-Program-WebEx-Feedforward-and-Ratio-Control.pdf ” and, of course, buy one of my books. I have found a book to update that will showcase a great mind. Now I just need to find the great mind.
  • Keys to Successful Process Control Technologies

    The post, Keys to Successful Process Control Technologies , first appeared on ControlGlobal.com's Control Talk blog. Why do many process control technologies fail to make prime time, being relegated to special applications that are few and far between? Here I give what I see as the keys to a technology being successful and widely used in plant applications. How, and to what degree, each of the major technologies achieves these keys is discussed, along with what is left on the table.

Keys to Success
(1) Technology addresses the actual dynamics of industrial process applications (open loop steady state or integrating process gain, open loop time constant(s) and deadtime, without which I could retire, per my last Control Talk Blog)
(2) Technology successfully makes automatic adjustments to the process without operator intervention
(3) Technology achieves benefits in terms of an increase in process efficiency and/or capacity
(4) Tools exist to identify the adjustments needed to achieve the best process performance
(5) Technology can be adjusted to perform well for different production rates and process conditions
(6) Operator can understand what the technology is doing to the process
(7) Application can be readily monitored for performance, and problems easily identified and fixed
(8) Technology can be applied, maintained and improved by the average process control engineer
(9) Technology is widely applicable
(10) Technology does not distract the operator and does not create false information
(11) Technology identifies unsuspected relationships
(12) Technology identifies unsuspected causes and effects

PID Control
The PID is widely applicable and can be applied by the average process control engineer. Studies have shown that for control of a single process variable, the PID is near optimal in terms of rejecting load disturbances, as documented by Bohl and McAvoy in a landmark 1976 paper. This has not stopped thousands of papers and millions of hours being spent on trying to invent a controller to replace the PID for single loop control. If the process has a runaway response (e.g., highly exothermic reactor temperature) or requires very fast execution (e.g., < 1 sec), the PID is the only safe solution. The PID can be readily applied, but it is amazing how many of these controllers are not properly tuned. Part of this is due to a lack of real understanding of the modes and options. Confusing to users, the integral mode gives the type of response seen in humans who are never satisfied, accentuated by digital displays with excessive resolution, and has no anticipation or understanding of the trajectory. The result is that most loops have an order of magnitude or more too much integral action (too small a reset time), except for deadtime dominant loops, as discussed in my last Control Talk Blog. Confusing the situation are over 100 tuning rules, with advocates having a closed mind to the relevance of other tuning rules, as discussed in my Control Global Whitepaper “ So Many Tuning Rules, So Little Time ”. While the average user can apply almost any set of tuning rules to get good performance, a consultant is often useful to get the best performance. Tuning software can automatically identify the actual dynamics for feedback control and provide recommended settings that can be scheduled online for different production rates and process conditions. PID output for startup, batch operation and abnormal conditions can be scheduled automatically by the use of the Track or Remote Output modes.
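To make concrete the point above about too much integral action, here is a minimal sketch simulating a PI loop on a self-regulating first-order-plus-deadtime process. The process model and tuning values are hypothetical, chosen only to illustrate the effect of a reset time an order of magnitude too small:

```python
# Minimal sketch: the same PI loop with a reasonable reset time versus a reset
# time ten times too small. Process assumed: gain 1, time constant 20 s,
# deadtime 5 s. All values are illustrative.

def simulate(reset_time_s, kc=2.0, dt=0.5, t_end=300.0):
    """Return the peak PV for a unit setpoint change on an FOPDT process."""
    kp, tau, theta = 1.0, 20.0, 5.0
    delay = [0.0] * int(theta / dt)   # simple deadtime buffer on controller output
    pv, integral, sp, peak = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        error = sp - pv
        integral += (kc / reset_time_s) * error * dt
        delay.append(kc * error + integral)   # PI output enters the deadtime buffer
        pv += (kp * delay.pop(0) - pv) * dt / tau   # first-order process response
        peak = max(peak, pv)
    return peak

print(simulate(reset_time_s=20.0))  # reset near the time constant: modest overshoot
print(simulate(reset_time_s=2.0))   # 10x too much integral action: large, growing oscillations
```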
Cascade control can be effectively used to compensate for fast secondary disturbances and nonlinearities. Advanced Regulatory Control techniques such as feedforward, override and valve position control can be quickly implemented to increase process capacity or efficiency. When override control is used, the selected controller is evident to the operator. The external reset feedback feature must be used so that the offset of an unselected PID output from the selected PID output is the unselected PID error multiplied by its gain, providing predictability as to when an unselected PID will take over control. An enhanced PID can be used to enable the PID to use at-line and even off-line analyzer results for composition control, with the sensitivity of tuning settings to cycle time eliminated for self-regulating process responses and minimized for integrating process responses, as described in the 7/06/2015 Control Talk Blog “ Batch and Continuous Control with At-Line and Offline Analyzers Tips ”. While the PID can be configured and tuned to provide automatic control for all sorts of situations, resulting in its use in over 99% of feedback control systems, there is a lack of a straightforward general approach to increase its effectiveness. While my previous blogs this past year supporting my YouTube Videos on PID Options and Solutions give a useful perspective and a lot of details, what is needed is for someone to step back and give a clear step-by-step approach that allows for different objectives and tuning methods. By the way, these videos are now posted as part of a very practical and useful playlist of ISA Mentor Program Webinars that should be of significant value to every automation engineer.

There are limitations to PID control that move users to consider Model Predictive Control (MPC). Most tuning software does not effectively and automatically identify the feedforward dynamic compensation needed by a PID. Also, more than half decoupling for a PID is confusing and too challenging for the average user. For compound or complex responses, where the initial response is different than the later response due to recycle or competing effects, tuning of the PID is difficult. Additionally, the simultaneous predictive honoring of multiple constraints is not within the normal capabilities of the PID. These situations all lead us to MPC to achieve greater process performance, especially since the MPC software may be integrated into the DCS and implemented by a simple configuration, making its use less expensive, faster and easier. Also, let’s face it, doing an article or presentation on a PID application is not going to get you as much recognition as doing one on an MPC application.

Model Predictive Control
MPC can potentially be used as a replacement for any PID control loop in continuous operation if the PID gain needed is less than ten, the execution rate does not need to be faster than 1 second, and derivative action is not essential for safe, tight control for unmeasured disturbances. The temperature control of many liquid reactors requires too high a PID gain and derivative action to be a good candidate for MPC. Here, ratio control of reactants (e.g., intelligent flow feedforward control) is often used without dynamic compensation beyond a simple equal filter applied to each reactant flow setpoint, as seen in the 7/26/2016 Control Talk Blog “ Control Strategies to Improve Process Capacity and Efficiency - Part 3 ”. For highly exothermic reactors, the positive feedback response can create a dangerous runaway condition with a point of no return. While technically processes should be designed so this cannot happen, the acceleration in the response of a batch polymerization reaction can exceed coolant capabilities, leading to relief valves popping and blowing over the reactants to a flare system. I have been in a control room when this occurred.
Outside of these liquid reactors, there are many potential MPC applications, especially when new MPC technology can offer tight control of integrating processes. MPC can be extended to fed-batch if models are switched as the batch progresses. The control of a temperature profile or product composition profile can be done by translation of the controlled variable from temperature or concentration to the rate of change of temperature or concentration, which is the slope of the batch profile. This creates a pseudo steady state and the ability to readily make changes in the controlled variable in both directions. The abilities to handle nonlinearities and provide operator understanding are more challenging for MPC than for PID control. Consultants and proficient users can address these needs by switching in different models for different production rates and plant conditions and by adding graphics that enable the operator to see future trajectories and understand the relative contribution of each controlled variable, disturbance variable and constraint variable. For multivariable applications, proficiency in knowing and improving the matrix condition number and in using the optimizer (e.g., linear program) is useful.

My experience, personally and after doing 14 years of Control Talk Columns with industry experts, is that the most successful and extensively used technologies are PID and MPC because they address Keys 1-10, with a track record for Keys 1 and 2 far exceeding the remaining technologies. Where PID and MPC fall short is in the ability to find unsuspected relationships and causes and effects, particularly when there are a large number of variables at play. The variables for PID and MPC are chosen and classified as controlled, manipulated, disturbance, constraint and optimization variables based on the knowledge of the person applying the technology. Also, regimented testing is used to identify the relationships. On the plus side, PID and MPC identify all of the necessary dynamics, whereas the other technologies leave a lot to the imagination. Thus, the PID and MPC solutions are narrowed dramatically compared to the remaining technologies to be discussed, which look at orders of magnitude more variables with few preconceptions and without regimented testing, as proposed in the Industrial Internet of Things (IIoT).

Multivariate Statistical Process Control (Data Analytics)
Data analytics can find relationships between process inputs and process outputs in very large sets of data. Data analytics eliminates relationships between process inputs (cross correlations) and reduces the number of process inputs by the use of constructed principal components that are orthogonal, and thus independent of each other, in a plot of a process output versus principal components. For two principal components, this is readily seen as an X, Y and Z plot with each axis at a 90-degree angle to the other axes. The X and Y axes cover the range of values of the principal components, and the Z axis is the process output. The user can drill down into each principal component to see the contribution of each process input. The use of graphics to show this can greatly increase operator understanding. Data analytics excels at identifying unsuspected relationships. For process conditions outside of the data range used in developing the empirical models, linear extrapolation helps prevent bizarre extraneous predictions. Also, there are no humps or bumps that cause a local reversal of process gain and buzzing.
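As an illustration of the principal component idea, here is a minimal sketch using scikit-learn, a tool choice that is my assumption rather than anything named in the post. The data is synthetic and the input names (feed, steam, reflux) are hypothetical:

```python
# Minimal sketch: reduce cross-correlated process inputs to orthogonal principal
# components, regress a process output on the components, and inspect the
# loadings (the "drill down" into each component's input contributions).

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = rng = np.random.default_rng(0)
feed = rng.normal(100.0, 5.0, 500)                  # process inputs; steam is
steam = 0.8 * feed + rng.normal(0.0, 1.0, 500)      # cross-correlated with feed
reflux = rng.normal(50.0, 2.0, 500)
X = np.column_stack([feed, steam, reflux])
y = 0.02 * feed - 0.05 * reflux + rng.normal(0.0, 0.1, 500)   # process output

pca = PCA(n_components=2)
scores = pca.fit_transform(X)              # orthogonal principal components
model = LinearRegression().fit(scores, y)  # output vs. principal components

print(pca.components_)          # rows: components; columns: feed, steam, reflux
print(model.score(scores, y))   # fit quality of the output prediction
```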
Batch data analytics does not need to identify the process dynamics because all of the process inputs are focused on a process output at a particular part of the batch cycle (e.g., endpoint). This is incredibly liberating. The piecewise linear fit to the batch profile enables batch data analytics to deal with the nonlinearity of the batch response. The results can be used to make mid-batch corrections. Potentially, batch data analytics can address most of the keys to success listed. The application of data analytics to continuous processes requires the synchronization of the process inputs with the process outputs predicted. This can be problematic because there are many delays and time constants of instrumentation, control valves and unit operations that are unknown. A deadtime is identified and applied to each process input to deal with these dynamics. As you can imagine, this may work on a single unit operation with negligible automation system dynamics and no recycle, but it is a big challenge for most plant applications.

A word of caution for data analytics and the next technology discussed, Artificial Neural Networks (ANN): what are identified are correlations, not causes and effects. Review by people who understand the process (e.g., process and research engineers), and possibly the use of first principle simulations, are needed to validate causes and effects. Both data analytics and ANN have been much more successful in predicting human responses, which tend to be much less deterministic and much more situational. There is possibly a greater future in predicting operator responses than process responses when it comes to continuous unit operations.

Neural Networks
ANNs offer predictive behavior that addresses nonlinearities much more effectively than data analytics, particularly for continuous processes. However, an ANN does not necessarily use principal components to prevent cross correlations and to reduce the number of inputs. Consequently, an ANN may have order(s) of magnitude more process inputs than data analytics, without the ability to determine numerically the contribution of each input beyond a relative sensitivity analysis. Also, the response may have humps or bumps within the test range and strange results outside of the test data range. Special techniques can be used to address these concerns. The future of ANN to me is more as a supplement rather than a replacement for other technologies. There has been considerable success in the use of a combination of an ANN and MPC. Nothing to me precludes the use of Principal Component Analysis (PCA), extensively employed by data analytics, to eliminate cross correlations and to dramatically reduce the number of process inputs to an ANN. The main problem is that the ANN and data analytics camps each see their technology as the best, taking an adversarial view rather than trying to get the best out of both.

Fuzzy Logic Control (FLC)
FLC has been successfully used where a process model cannot be obtained (e.g., the mining industry). FLC automates the best of the relative operator responses. Given a model, it has been shown that a PID gives as good or better performance for a single loop if all of the PID options are used as needed. I expect the same is true in terms of MPC being a better solution for multivariable control if models can be identified. FLC is undesirable from the standpoint of tuning, understanding and technical support. I implemented a successful FLC for a neutralization system.
The plant was afraid to touch the FLC because no one understood it. Decades later, I showed how an MPC could provide better optimization with a more continuously improvable and maintainable system, as documented in the November 2007 Control article “ Virtual Control of Real pH ”.

Expert Systems
Expert systems seem like a great idea, and maybe there is a future for them, but the company I spent my career with had 10 people working on expert systems for ten years with only limited success. Often the expert systems complained too much, and sometimes about the wrong things. Operators saw them as a distraction. False alarms totally undermined confidence in them. The expert systems fell into disuse and were turned off within a few years after the developer left the area. This was back in the 1990s. My hope is that the technology has advanced and addressed these issues. I strongly suggest checking how well they meet Keys 1-12. The expert system we used also had fuzzy logic. I remember entering a lot of rules and then wondering how they played together and what the order of execution was. It was very easy, maybe too easy, to add rules, but extremely difficult to analyze the consequences for all the possible conditions. Expert systems do not have dynamics and are thus not used to predict a process variable, but rather an abnormal condition and a possible steady state correction. I think there is a place for them in helping provide better diagnostics based on problems identified by other technologies.

As a closing note, the one New Year’s Resolution that I can guarantee I am going to keep is “I will not keep all my New Year’s Resolutions.” Happy New Year!
  • What Are Best Cutover Strategies for Upgrading an Industrial Plant?

    The post What Are Best Cutover Strategies for Upgrading an Industrial Plant? first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemicals). Greg will be posting questions and responses from the ISA Mentor Program , with contributions from program participants. Here we see how a general and open-ended question can lead to a very insightful, useful, and at times humorous discussion on a critical phase of migration projects, where automation is transferred from the old to a new system with many safety and performance implications. The question was posed by Marsha Wisely, a relatively new protégée. The answer is provided by Hunter Vegas, ISA Mentor Program co-founder, who has the most extensive practical experience of the program resources in automation project design and execution.

Marsha Wisely’s Initial Question
I am looking for general information on the different types of cutovers and what the impact of each type may be. If this topic is too big, I would appreciate even some quality documentation on the topic such that I can read up and maybe come back with more specific questions.

Hunter Vegas’s Initial Answers
It IS a pretty open-ended question, and the topic is rather huge. Books by practitioners are more useful because they reflect actual plant experience. (Greg adds that academics who have partnered with industry have expanded our understanding and principles through theory supporting practice.) However, you get good at cutovers by doing them – lots of them – and learning (sometimes the hard way) what works and what does not. Older folks in automation have lots of gray hair, or none at all, and plant startups are most likely to blame. There are several universal constants that you must know about startups:
a) Instrumentation and automation are almost always the last things to install and check out, because you can’t hook up instruments on pipes that don’t exist, and you can’t talk to instruments if there are no wires. Invariably the civil, mechanical, and electrical installations all get delayed on a cutover, but the start date does not slide, so instrumentation is ALWAYS critical path. You start off with three weeks to install your equipment and that usually gets whittled down to three hours. “You mean you aren’t DONE yet?!?!?!”
b) You can’t run the plant without instrumentation, so if your manager keeps hounding you about when you’ll be done, tell him that you absolutely promise that you’ll be done before he starts up.
c) Regardless of who is on what shift, the night shift is almost always half as productive as the day shift.
d) Good, Fast, and Cheap (pick any two).
e) There is no such thing as too much coffee on a startup.

Cutover types: The question of “cutover types” isn’t nearly as cut and dried or simple as it might seem. I’ll start by talking about a control system retrofit as opposed to a new plant or a major plant expansion.
Usually a retrofit is one of a few different types:
1) Software upgrade – same instruments, same hardware (or maybe a few upgraded computers), and a new revision of software.
2) Hardware upgrade – same field instruments, new hardware and software (maybe the same vendor, maybe not).
3) Total revamp – upgrade/replace a lot of field instruments and the entire control system.
All of those can be done in various ways that have different impacts on the production plant. Nearly all COULD be done without shutting down the plant. It would be expensive, and it would take a while, but it can certainly be done. I’ve converted three or four plants from pneumatic controllers on panelboards to full-blown distributed control systems (DCS) and never shut them down. Ultimately, the economics of plant production drive the decision. If they are making a million dollars of product a day, they can afford to pay an extra $200,000 to cutover on the fly. Similarly, if they are barely running eight hours a day, a two- or three-day outage doesn’t matter. Often the cutover is a mix – you do as much work as possible prior to shutting down and then you come down and finish the rest.

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

HOW do you cutover? That totally depends on the plant, the process, and the timing. When I am looking at a retrofit project, I start by talking to the plant and understanding a few things:
1) Their budget
2) Their schedule
3) The financial justification for the project
4) What their “pain points” are
Based on that, I then start going over the project – looking at panels, mulling my options, determining what I have to do and how I might go about it. I also investigate what options I have and what is required to change over the system. Is the I/O compatible? Are there interconnect options, and do they work? (Just because a company SAYS they offer interconnect options does NOT mean they actually work as advertised!) Eventually, I pull it all together and hatch a plan, figure out what it will cost, and present it to the client. If they can afford it and like it, we are off to the races. If they don’t, then we tweak it accordingly. Maybe they can afford to take a longer outage and save on installation costs. Maybe they are willing to pay more to bring the plant online faster. Maybe there is a way to jury-rig things to get the bulk of the benefit now and do the rest of the cutover later when they have more time. Well, that’s a start… let’s see what questions this generates.

Marsha Wisely’s Subsequent Questions
Thanks for the information! When you have only seen one of something, it’s hard to know what the norm is or what the other options are. Also, I really like your universal constants section! After reading your comments, I started thinking about how the information applied to my experience. Below I have a summary of my experience to give you some background on me, and it inspired some questions as well.
Type of project: brownfield/retrofit
My role was strictly software-oriented
Moving from a combination of a programmable logic controller (PLC) and legacy system (Provox) to a modern DCS platform (DeltaV)
The idea was for it to be a software-only upgrade; temporarily, Profibus DP served as the interface between the PLC and the modern DCS platform
A specialized interface (DeltaV Controller Interface for Provox I/O) was used to communicate with the legacy system’s I/O
To your point on the interconnect options: both solutions above were tested prior to startup, which saved us some headaches I’m sure
The long-term goal was a full conversion to electronic marshalling
The customer also had some existing control modules on the modern DCS platform they wanted to integrate once we were onsite.

The cutover and startup both took a good deal longer than anticipated. Loop checkouts were taking longer than expected; the plant discussed how to improve, but management never prioritized timeliness over safety and quality of work, which I respect and appreciate (I know that isn’t always the case). There were some unexpected equipment issues, namely valves that were backwards (fail open when they should be fail closed and vice versa). Startup was also slower due to some equipment issues – air lines to pneumatic valves got leaks, and equipment that was working before startup needed to be repaired before being brought back up. This led to some frustration and finger pointing at the software – “The valves aren’t working; they worked before. It must be the software.” Definitely a good lesson in patience. In those cases, we basically did an additional loop check, which quickly found the issue. Shutdowns are in the hands of operations and can be pretty hard on equipment. This leads to the following additional questions.

1) If you do a hardware and software cutover, are those types of equipment issues more likely to get caught? And are these types of equipment configuration errors common?

2) Are there any common issues caused by shutting down equipment that are good to check before starting back up?

3) In your notes, you mention asking the customer about their “pain points.” Could you provide some examples?

4) My next couple projects are for new plants – one of which I will be leading. Do you have any advice specific to new plants? It looks like your “HOW do you cutover” section covers a lot, but are there any additional challenges associated with new plants?

Thanks again for your help! I greatly appreciate your insight into the field since I haven’t had much field experience. My next couple projects, in addition to being new plants, also have a larger equipment and instrumentation component to my assignment, and I am looking forward to that exposure.

Hunter Vegas’s Subsequent Answers

Let me try to answer your questions:

1) If you do a hardware and software cutover, are those types of equipment issues more likely to get caught? And are these types of equipment configuration errors common?

If you are doing a hardware/software cutover, you have to be fanatical about the details. Picking up failure modes of valves, ranges, square root versus linear, interlocks, etc. is pretty much a given. Only our most experienced people generally do the decoding of the existing system because it is so crucial to success. If you toss it to some inexperienced engineers, you will pay a wicked price on startup and rework.

2) Are there any common issues caused by shutting down equipment that are good to check before starting back up?
One thing I learned very early on: if at all possible, obtain historical data from the running plant before you shut down and have it available for access during the startup. It is a very common trick for plant personnel to get the automation company to fix instrumentation that hasn’t worked for years. If you can point to the fact that the transmitter has been flat-lined since 2006, it is a pretty easy argument to say that fixing that transmitter is out of scope.

3) In your notes, you mention asking the customer about their “pain points.” Could you provide some examples?

Pain points are what keep them up at night. Are they struggling with quality? Production? Throughput? Reliability? Does something break and it takes the techs two days to figure it out? Is there a particular area in the plant that is always in manual because the controls never work? Is the messaging awful so the batch stops and nobody knows why? Etc. Often I can fix a lot of those things fairly easily and make them very happy. Or I can offer solutions that increase the work scope some but have a very big payback.

4) My next couple projects are for new plants – one of which I will be leading. Do you have any advice specific to new plants? It looks like your “HOW do you cutover” section covers a lot, but are there any additional challenges associated with new plants?

New plants can be very painful and difficult for a whole new set of reasons. Specifically:

a) How good is the engineering contractor? The large Architectural & Engineering (A&E) firms can be awful or wonderful; it all depends on what team you get. If you get the “A” team, things will be in pretty good shape. Unfortunately, they might bait you with the “A” team but swap in the “C” team later in the project, or you end up with the “C” team due to turnover. Either way, the engineering is just wrong. Wrong pipe, wrong materials, wrong instruments, wrong drawings, wrong…wrong…wrong. And it’s your job to make it work using BIFF principles (Big Improvements for Free) because the money has already been spent.

b) Was anything reviewed by someone other than the engineering contractor? The best option is for the plant personnel to review and approve everything; however, the plant often lacks either the expertise or the time to do that, and the engineering contractors are infamous for dropping 1,000 pages on your desk and demanding you approve it overnight or “the project will be delayed and it will be your fault.” The second-best option is for a third party to at least look things over and catch the worst stuff. If nobody looks it over, then you had better have the “A” team or it will be bad.

c) Does anyone even know how this is supposed to work? Is the plant a copy of an existing plant whose process is well understood (and ideally you have people from that plant helping you), OR is this Serial #1 and the only people who have a vague clue are some lab chemists who had a tiny pilot reactor running somewhere? Obviously, Serial #1 is going to be tricky because nobody knows what they don’t know and nobody has the answer.

d) What is the schedule/budget like? Was it stupid aggressive to start? If so, it will be tough, as nothing ever goes to plan on a new plant and both the budget and schedule are bound to suffer. I have no-bid projects that I knew were badly estimated because failure was virtually assured, and I was much happier having my competitor get the black eye than me!
Now don’t get me wrong – large projects are executed successfully all the time, and with the right team things can go very smoothly. But if the client goes with an unproven, low-bid entity, they often regret the decision. I was once part of a large project that was more than halfway completed, but things were going so badly that the client fired the A&E firm mid-project and went with a new one. They lost $10 million in engineering but probably saved $20 to $30 million in extra startup costs because the first firm was doing so poorly.

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine January/February 2013 feature article “Enabling new automation engineers” for candid comments from some of the original program participants. See the May 2015 Control Talk column “How to effectively get engineering knowledge” with the Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today. Discussion and answers are provided by Greg McMillan, Hunter Vegas (co-founder of the ISA Mentor Program and project engineering manager at Wunderlich-Malec), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (process systems automation group manager at the Midwest Engineering Center of Emerson Process Management), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont) and Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant).
  • Deadtime Dominance - Sources, Consequences and Solutions

The post, Deadtime Dominance - Sources, Consequences and Solutions, first appeared on ControlGlobal.com's Control Talk blog. If the total loop deadtime becomes larger than the open loop time constant, you have a deadtime dominant loop. There is a lot of misunderstanding about this difficult challenge, including a lot of prevalent myths. Here we look at how we get into this situation, what to expect and what to do. Deadtime is the ultimate limit to loop performance. A controller cannot correct for a disturbance until it starts to see a change in the process variable and can start to change the manipulated variable (e.g., flow). If the deadtime were zero, the controller gain could be set as large as you want and the peak and integrated errors made as small as you want, provided there is no noise. The PID gain could theoretically approach infinity and lambda approach zero. Without deadtime, I would be out of a job.

Sources

Fortunately, deadtime dominant loops, where the deadtime is much larger than the open loop time constant, are usually relegated to a few loops for processes involving liquids and gases. Deadtime dominance typically occurs due to transportation delays (deadtime = volume/flow), cycle times of at-line analyzers (deadtime = 1.5 x cycle time), or improper setting of PID execution rates or wireless update rates (deadtime = 0.5 x execution or update time interval). Deadtime dominance due to transportation delay is more common in the mining industry and other processes that primarily deal with solids. Large transportation delays result from conveyors, feeders or extruders, where the feed of solids is manipulated at the inlet and the quality or quantity is measured at the outlet (discharge), assuming the sensor lag is much less than the transportation delay (i.e., total loop deadtime). The use of a gravimetric feeder can eliminate the delay for feed flow control, making a much more precise and responsive loop. A kiln has a temperature sensor lag that creates an open loop time constant that is usually larger than the transportation delay. However, the use of an optical sensor (e.g., pyrometer) for temperature control eliminates this lag, possibly making the loop deadtime dominant.

In vessels and pipes where there is no axial mixing (also known as back mixing, created by agitation, recirculation or sparging), there is no mixing up and down the vessel or forward and backward along the pipe. We call this plug flow. There may be radial mixing to make the concentration or temperature in a cross section of the vessel or pipe more uniform or to keep solids dispersed. Radial mixing does not reduce transportation delays, whereas axial mixing eliminates plug flow, decreasing transportation delays. Any remaining delay from axial mixing depends upon the degree of mixing. For vessels with good geometry and axial agitators, the delay is approximately the turnover time, which is the volume divided by the sum of the agitator pumping rate, recirculation rate and feed rate. Deadtime dominance usually doesn’t occur in temperature loops except for inline systems, such as plug flow liquid or polymer reactors, where there is negligible heat transfer lag in the equipment or sensor.

Consequences

For a deadtime dominant system, the peak error for a step load disturbance approaches the open loop error (the error if the loop is in manual). Better tuning and feedback control algorithms cannot do much to improve the peak error.
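A rough quantitative sketch of these limits, using the back-to-back right triangle approximation of a well-tuned loop discussed in the PID key points later on this page, with θ the total loop deadtime, τ the open loop time constant, and E_o the open loop error:

```latex
E_{peak} \approx \frac{\theta}{\theta + \tau}\, E_{o} \qquad\qquad IE_{min} \approx E_{peak}\,\theta
```

For a deadtime dominant loop (θ much larger than τ), the peak error approaches the full open loop error, consistent with the statement above.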
There can be a significant improvement in the integrated error from better tuning and feedback algorithms for a load disturbance, but the ultimate limit to the integrated error is the peak error multiplied by the deadtime.

Solutions

The best solution is to eliminate the deadtime that is the source of deadtime dominance by better process, equipment or automation system design. The next best solution is feedforward control, given a reliable, timely measurement of the disturbance. The feedforward measurement does not need to be extremely accurate. An error of 20% will still result in an 80% improvement in integrated error if the feedforward timing is right. The correct timing cannot be achieved when the feedforward path deadtime is greater than the deadtime in the path of the actual disturbance to the process variable being controlled. There is no compensation for too much deadtime in the feedforward path. A feedforward correction arriving too late must have the feedforward gain reduced. If the feedforward correction arrives after most of the feedback correction has been made, a second disturbance is created, making the response worse than no feedforward at all. Even more disruptive is a feedforward correction that arrives too soon, due to the deadtime in the feedforward path being less than the deadtime in the disturbance path, because this creates inverse response that confuses the feedback controller. However, this situation is readily corrected by inserting a deadtime in the feedforward (FF) signal equal to the disturbance path deadtime minus the feedforward path deadtime. A lead-lag is used as well in the FF dynamic compensation, where the FF lead time is set equal to the lag in the feedforward path and the FF lag time is set equal to the lag in the disturbance path.

The solution often cited for deadtime dominant loops is a Smith Predictor deadtime compensator (DTC) or model predictive control (MPC). There are many counterintuitive aspects to these solutions. Not realized is that the improvement from the DTC or MPC is less for deadtime dominant systems than for lag dominant systems. Much more problematic is that both the DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for an overestimate as well as an underestimate of the deadtime. The consequences for the DTC and MPC are much greater for an overestimate. For a conventional PID, an overestimate of the deadtime just results in more robustness and slower control. For a DTC and MPC, an overestimate of deadtime by as little as 25% can cause a big increase in integrated error and an erratic response.

A better general solution, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external reset feedback path (PID+TD), with the deadtime updated to always be slightly less than the actual total loop deadtime. Turning external reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require the identification of the open loop gain and open loop time constant, as is required for a DTC or MPC. Please note that the external reset feedback should be the result of a positive feedback implementation of integral action.
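To make the PID+TD idea concrete, here is a minimal Python sketch (class and variable names are hypothetical) of a PI controller with the positive feedback implementation of integral action and a deadtime block in the external reset path:

```python
from collections import deque

class PIWithExternalResetTD:
    """Minimal sketch: PI with positive-feedback integral action and a
    deadtime block in the external reset path (the PID+TD scheme above)."""
    def __init__(self, kc, reset_time, deadtime, dt, out0=0.0):
        self.kc = kc            # PID gain (dimensionless)
        self.ti = reset_time    # reset time, s
        self.dt = dt            # execution interval, s
        # deadtime block in the reset path; keep its deadtime slightly
        # less than the actual total loop deadtime, and update it online
        n = max(1, int(deadtime / dt))
        self.delay = deque([out0] * n, maxlen=n)
        self.filt = out0        # state of the positive feedback filter

    def update(self, sp, pv):
        error = sp - pv
        # external reset signal: the controller output delayed by the block
        delayed_out = self.delay[0]
        # integral action: first-order filter of the reset signal with a
        # filter time equal to the reset time (positive feedback form)
        self.filt += (delayed_out - self.filt) * self.dt / self.ti
        out = self.kc * error + self.filt
        self.delay.append(out)  # push newest output, drop the oldest
        return out
```

In a real loop the reset signal would typically be the feedback of the actual manipulated variable (with the deadtime block inserted in that path), and for transportation delays the block deadtime would be recomputed as volume/flow changes, as noted above.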
There will be no improvement from a deadtime compensator if the PID tuning settings are left the same as they were before adding the DTC or the deadtime block in external reset feedback (PID+TD). In fact, the performance can be slightly worse even for an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in the deadtime. The PID gain should also be increased. The equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it would predict an error less than the ultimate limit, which is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making the deadtime disappear.

If the deadtime is due to analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, the effect of a new error upon an update is a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of the step disturbance in a single execution after an update in the PID process variable. The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the PID execution time interval), and the positive feedback implementation of integral action must be used with external reset feedback enabled. The enhanced PID greatly simplifies tuning besides putting the integrated error close to its ultimate limit. Note that you do not see the true error, which can have started at any time between updates; you only see the error measured after the update.

For more on the sensitivity to both underestimates and overestimates of the total loop deadtime and open loop time constant, see the ISA books Models Unleashed, pages 56-70, for MPC and Good Tuning: A Pocket Guide, Fourth Edition, pages 118-122, for DTC. For more on the enhanced PID, see the July-Aug 2010 InTech article “Wireless: Overcoming challenges in PID control & analyzer applications” and the July 6, 2015 Control Talk blog “Batch and Continuous Control with At-Line and Offline Analyzers Tips”. If you can make the deadtime zero, let me know so I can retire.
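As a postscript, here is a conceptual Python sketch of the enhanced PID idea for analyzer and wireless applications. This illustrates the concept only (it is not the supplier's PIDPlus algorithm, and the names are hypothetical): the controller acts only when a new measurement arrives, and the reset filter advances by the elapsed time since the last update.

```python
import math

class EnhancedPISketch:
    """Conceptual sketch of an enhanced PI for slow, irregular updates."""
    def __init__(self, kc, reset_time, out0=0.0):
        self.kc = kc          # for self-regulating processes, ~1/open loop gain
        self.ti = reset_time
        self.filt = out0      # positive feedback filter state
        self.out = out0       # output is held between measurement updates

    def on_measurement_update(self, sp, pv, elapsed):
        """Call only when a new PV value arrives; elapsed = time since
        the previous update, s (can be large and variable)."""
        error = sp - pv
        # advance the reset filter by the elapsed time, not the execution
        # interval, so integral action does not ramp between updates
        a = 1.0 - math.exp(-elapsed / self.ti)
        self.filt += (self.out - self.filt) * a
        self.out = self.kc * error + self.filt
        return self.out
```

With the PID gain set to the inverse of the open loop gain for a self-regulating process, a step disturbance is essentially corrected in a single execution after the update, as described above.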
  • PID Options and Solutions - Parts 2 and 3

Here we finish up the extensive presentation of how to get the most out of your PID. We start with a look at the contribution of each mode and show how to estimate performance metrics from tuning settings and how excessive integral action and insufficient proportional action create oscillations. The key roles of dead time, external reset feedback, and the different PID Forms and Structures are shown. The following key points support and augment the PID Options and Solutions - Part 2 and PID Options and Solutions - Part 3 recordings for the ISA Mentor Program.

2-1: To see the effect of each mode, a setpoint change can be made to a PID with a structure of “PID on Error” (no setpoint filter or lead-lag) with the process variable (PV) frozen so that there is no feedback; what is seen is then the PID response due solely to the setpoint change. The same results could be observed by stepping the PV via the Analog Input block Simulate input while keeping the SP constant.

2-2: The proportional mode provides a step change in output proportional to the step change in error. The derivative mode, with a built-in filter whose time constant is about 1/8 the rate time, provides a kick that decays out to zero. The integral mode ramps the output, and after an elapsed time equal to the reset time, the integral mode contribution to the output will equal the output step from the proportional mode. Hence integral time settings are in terms of seconds per repeat or minutes per repeat, where “per repeat” means the integral mode has repeated the contribution from the proportional mode. Often the term “per repeat” is dropped. Tuning settings that are the inverse of the integral time usually include the “repeat” term, such as “repeats per second” and “repeats per minute”.

2-3: The integral mode will continue to increase its contribution to the PID output in the direction to cross the setpoint (SP) as the process variable approaches the SP. It only reverses direction after the PV crosses the SP, which is too late considering all processes have deadtime. This causes overshoot and hunting. Integral action is never satisfied and is always ramping the output, since the error is never exactly zero.

2-4: The proportional mode will decrease its contribution to the PID output that is in the direction of driving the PV to cross the SP, effectively working to stop the PV as the PV gets closer to the SP. This anticipation helps prevent the PV from crossing the SP. Derivative action provides a contribution to the output that is proportional to the slope of the approach, in a direction to help stop the PV.

2-5: Most loops have too much integral action and not enough proportional action, resulting in SP overshoot when moving to a new setpoint or returning to an existing setpoint in the recovery from a load upset. If there is too much proportional action relative to integral action, the approach to setpoint will falter (the PV trend momentarily flattens out and then resumes the approach). This is a rare case, but it can occur for deadtime dominant processes that inherently need more integral and less proportional action.

2-6: Humans looking at digital displays of PV and SP are prone to advocate, and even manually do, what the integral mode does. Looking at the reactor temperature loop example faceplate or numerical values of PV and SP, humans expect for a PV value below SP that the steam valve should be open, whereas a trend chart with an intelligent time scale would show, based on the PV slope, that the cooling water valve should be open.
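A minimal Python sketch of the 2-1/2-2 experiment (all values are hypothetical): step the setpoint of an ISA standard form PI controller with the PV frozen and confirm that the integral contribution repeats the proportional step after one reset time.

```python
import numpy as np

# SP step with the PV frozen (no feedback), ISA standard form PI,
# structure "PID on error". All values are hypothetical.
kc, ti, dt = 2.0, 10.0, 0.1   # PID gain, reset time (s), execution interval (s)
sp, pv = 1.0, 0.0             # 1-unit setpoint step, PV frozen at 0

t = np.arange(0.0, 3 * ti, dt)
p_term = kc * (sp - pv)                                        # proportional step
i_term = np.cumsum(np.full_like(t, kc * (sp - pv) * dt / ti))  # integral ramp
out = p_term + i_term

k = int(ti / dt)  # index at one reset time
print(f"proportional step: {p_term:.2f}")
print(f"integral contribution after one reset time: {i_term[k]:.2f}")  # ~= p_term
```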
See the 6/28/2012 Control Talk blog “Future PV Values are the Future” for how to easily provide a PV value one deadtime into the future to help operators and engineers gain some anticipation and recognize that processes cannot stop on a dime.

2-7: The ultimate limit to the error from a step load upset at the process input for a perfectly tuned PID would approximate two right triangles placed back to back, where each base is the dead time. The amplitude of the triangles, which is the peak error, is proportional to the ratio of the dead time to the dead time plus the time constant. The area of the triangles, which is the integrated error, is proportional to this peak error multiplied by the dead time, and thus to the dead time squared. Dead time can come from the valve, process, piping, equipment, sensor, transmitter, PID execution time and filter time. Decreasing the dead time enables decreasing lambda and hence more aggressive PID tuning. A lambda equal to 3 dead times is typical unless there is significant nonlinearity or there are unknowns in the control loop response dynamics requiring a larger lambda. This lambda can tolerate an increase in total loop dead time or open loop gain of up to 300% without starting a significant oscillation, and it provides a gain margin of about 6 and a phase margin of about 76 degrees for self-regulating processes.

2-8: The practical limit to the error for a step load upset depends upon the PID tuning. Decreasing the PID gain affects both the peak error and the integrated error, whereas increasing the reset time mostly affects the integrated error. The performance for a load upset can be easily tested by momentarily putting the PID in manual, making a step change in the PID output, and immediately putting the PID back in automatic. The open loop error can be seen for self-regulating processes by leaving the PID in manual.

2-9: The ultimate limit to the rise time (time to reach setpoint) for a setpoint change is about 2 dead times and is achieved by a very large PID gain (lambda slightly less than the dead time) or setpoint feedforward, if output limits do not restrict the change in output. Usually, the prevention of overshoot is more important than minimizing the rise time. Tracking the PID output at its output limit, using a future PV value to determine when to put the output at the final resting value, and then releasing the PID from the track mode after one dead time can minimize the rise time to less than 2 dead times with no overshoot, as explained in the May 2006 Control article “Full Throttle Batch and Startup Response”.

2-10: The use of a setpoint filter equal to the reset time enables the use of load disturbance tuning without causing much overshoot for a setpoint change. The use of a lead time equal to about 1/4 the setpoint filter time will reduce the rise time but possibly create some overshoot. Alternatively, a Two Degrees of Freedom (2DOF) PID structure can be used with beta and gamma equal to about 0.25.

2-11: Near-integrating, true integrating and runaway processes will develop oscillations from a PID gain that is too small, with an amplitude and period greater than those of the oscillations from a PID gain that is too high, unless the reset time is increased. The proportional mode provides the negative feedback action missing in these processes that do not have sufficient self-regulation.
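A small Python sketch of one common form of the lambda tuning rules for a self-regulating process, with lambda set to 3 dead times as suggested in 2-7 (the process values are hypothetical, and other variants of the rules exist):

```python
# self-regulating process: open loop gain kp, time constant tau, dead time theta
kp    = 1.5     # dimensionless open loop gain
tau   = 40.0    # open loop time constant, s
theta = 10.0    # total loop dead time, s

lam = 3 * theta                  # closed loop time constant (lambda)
ti  = tau                        # reset time set equal to the time constant
kc  = ti / (kp * (lam + theta))  # PID gain
print(f"lambda = {lam:.0f} s, reset time = {ti:.0f} s, PID gain = {kc:.2f}")
```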
2-12: The amplitude and period of oscillations from backlash increase as the PID gain decreases, similar to what occurs when the low PID gain limit is violated for processes without sufficient self-regulation.

3-1: Lambda tuning prevents violation of the low PID gain limit by keeping the product of the PID gain and reset time greater than the inverse of the equivalent integrating process gain; this is critical for near-integrating, true integrating and runaway processes. Since the permissible PID gain is much higher than users are accustomed to (e.g., 80 for batch reactor temperature control), users tend to use a much lower PID gain, not realizing they need to proportionally increase the reset time. Often the reset time needs to be increased by two orders of magnitude for these processes. Not realized is that external reset feedback can enable the use of high PID gains without upsetting other loops, as noted in item 3-3. A quick numeric check of the low gain limit is sketched after this list of points.

3-2: The maximum desirable level error can be achieved by estimating the corresponding arrest time. For crystallizer, evaporator and reactor composition control and distillation overhead receiver reflux control, the error is minimized. For most other volumes, the error is maximized (the absorption of variability in the volume is maximized) so that feed changes to downstream equipment are minimized.

3-3: The power of external reset feedback is incredible. I find more and more potential applications and benefits, including directional move suppression (fast opening and slow closing surge valves, gradual smooth optimization and fast getaway for abnormal conditions, prevention of unnecessary crossings of the split range point), prevention of oscillations from deadband or a slow secondary loop or control valve, prevention of upsets to other loops from aggressive tuning settings, and proper positioning of the output of unselected PID controllers in override control (the difference in outputs between the selected and unselected controller is the unselected PID gain times its error, so that it takes over when its PV hits its setpoint).

3-4: The enhanced PID can tolerate variable and large analyzer cycle times. For self-regulating processes where the cycle time is much greater than the 63% response time, the PID gain can be increased to almost the inverse of the open loop gain. External reset feedback is turned on in the enhanced PID.

3-5: PID structure options help the PID do well in many diverse applications.

3-6: The positive feedback implementation of integral action is essential for external reset feedback. Many DCS suppliers think their systems have this capability, but they don’t. Unless the integral action is achieved by a filter whose input is the PID output or the feedback of the manipulated variable, and whose filter time is the reset time, with the filter output added to the output from the proportional mode (creating positive feedback), the PID does not really have external reset feedback. For more info see the May 2006 article by Greg Shinskey, “The power of external-reset feedback”.

This is “no repeat” day. I will not repeat a Control Talk blog today. I repeat that I will not repeat a Control Talk blog today.
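The low gain limit check referenced in 3-1, sketched in Python using the "twice the inverse of the product of the integrating process gain and reset time" form that also appears in the tuning mistakes post below (all values are hypothetical):

```python
ki = 0.0005   # open loop integrating process gain, (%/s per %) = 1/s
ti = 600.0    # reset time, s
kc = 4.0      # proposed PID gain

kc_min = 2.0 / (ki * ti)  # low gain limit for near/true integrating processes
if kc < kc_min:
    print(f"gain {kc} is below the low gain limit ({kc_min:.1f}); "
          "expect slow rolling oscillations unless the reset time is increased")
else:
    print(f"gain {kc} is above the low gain limit ({kc_min:.1f})")
```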
  • How to Avoid Common Tuning Mistakes

The post How to Avoid Common Tuning Mistakes first appeared on the ISA Interchange blog site. This article was written by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemicals). The proportional, integral, derivative (PID) controller is the common key component of all control loops. Basic control systems depend upon the PID to translate the measurement signals into setpoints of secondary loop controllers, digital valve controllers, and speed controllers for variable frequency drives. The success of advanced control, such as model predictive control, depends upon the basic control system foundation and hence the PID.

Elmer Sperry developed the first example of the PID in 1911, and Nicolas Minorsky published the first theoretical analysis in 1922. Ziegler and Nichols published papers on the ultimate oscillation method and reaction curve method for controller tuning in 1942 and 1943. While the parameters chosen as factors in the tuning settings provided overly aggressive control, the basic premise of an ultimate gain and ultimate period is essential to the fundamental understanding of the limits of stability. The identification of the slope in the reaction curve method is a key to the use of the near-integrator concept that we will find here to be critical for most composition, temperature, and pH loops to improve tuning settings and dramatically shorten test times. Bohl and McAvoy published a paper in 1976 that showed the PID can provide nearly optimal control for unmeasured load disturbances. Shinskey wrote many books detailing the knowledge of process dynamics and relationships essential to the best application of PID control. Shinskey developed the original equation for integrated error from disturbances as a function of tuning settings, as detailed in the January/February 2012 InTech article “PID tuning rules.” Shinskey also published a book dedicated to PID controllers that showed the simple addition of a dead time block in the external reset feedback path could further enhance PID performance by dead time compensation.

Internal model control (IMC) and lambda tuning rules were developed based on pole and zero cancellation to provide a good response to setpoints and disturbances at the process outlet. However, most of the improvement in setpoint response could have been achieved by a setpoint lead-lag or PID structure. Also, these tuning rules do not perform well for the more common case of disturbances at the process input (load upsets), particularly for lag dominant processes. Skogestad developed significant improvements to IMC tuning rules. Bialkowski showed that always using lambda rather than lambda factors, relating lambda to dead time, and treating lag dominant processes as near-integrators enable the PID to provide good nonoscillatory control for load upsets, besides dealing with the many different difficulties and objectives for which lambda tuning was originally designed. Not realized is that most methods converge to the same basic expression for the PID gain and reset time when the objective is load disturbance rejection, with a tuning parameter that is the closed loop time constant or arrest time set relative to the dead time.
Also not recognized is how PID features, such as structure, external reset feedback, the enhanced PID for analyzer and wireless applications, simple calculation of a future value, valve position controllers, and “full throttle” setpoint response, can increase process efficiency and capacity, as noted in the ISA 2013 book 101 Tips for a Successful Automation Career.

Overload

The user is confronted with considerable disagreement among tuning rules, as seen in the 400 pages of tuning rules in the 2006 book by O’Dwyer, not realizing most of them can be adjusted by factors or a near-integrator concept to achieve good control. The modern PID has many more options, parameters, and structures that greatly increase the power and flexibility of the PID, but most are underutilized due to insufficient guidance. Additionally, the ISA standard form used in most modern control systems is not the parallel form shown in most textbooks or the PID series form pervasively used in the process industry until the 1990s. All of this can be quite overwhelming to the user, particularly because tuning is often done by a generalist faced with rapid changes in technology and with many other responsibilities. My goal in my recent articles, books, and columns (including blogs), which are more extensive and less supplier-specific than white papers, is to provide a unified approach and more directed guidance based on the latest PID features that are missing in the literature.

The recently completed Good Tuning: A Pocket Guide, Fourth Edition seeks to concisely present the knowledge needed and to simplify tuning by switching between just two sets of tuning rules, largely depending upon whether the PID is a primary or secondary controller. A primary PID for vessel or column composition, gas pressure, level, pH, and temperature control uses integrating process tuning rules, where the lambda arrest time is set. A secondary PID for liquid pressure, flow, inline pH, and heat exchanger temperature control uses self-regulating process tuning rules, where the closed loop time constant is set. In both situations, lambda rather than a lambda factor is used and is chosen relative to the dead time to provide the degree of tightness of control and robustness needed. The best thing a user can do is to use good tuning software, attend supplier schools, and get a consultant into the plant for on-site solutions and practice. It is also important to take responsibility for avoiding common tuning mistakes. Here we step back to make sure we are not susceptible to oversights and misunderstandings. The following compilation has the most common, disruptive, and potentially unsafe mistakes first, but all can come into play and be important.

Mistakes

1. Using the wrong control action: In analog controllers and in many early distributed control systems (DCSs) and programmable logic controllers (PLCs), the valve action affected only the display of the output on the station or faceplate. The specification of an “increase-to-close” valve action for a fail-open valve reversed the display but not the actual output. Consequently, the control action had to take into account the valve action besides the process action. If the valve was “increase-to-open” (fail closed), the control action was simply the reverse of the process action (direct control action for a reverse-acting process and vice versa).
If the valve was “increase-to-close”, the control action was the same as the process action (direct control action for a direct-acting process and vice versa), if not reversed in the current-to-pneumatic (I/P) transducer or positioner. In today’s systems, the user can specify “increase-to-close” in the PID block or analog output block besides the digital valve controller, enabling the control action to be set as the opposite of the process action. The challenge is realizing this and making sure the increase-to-close valve action is only set in one place. If you do not get the control action right, nothing else matters (the PID will walk off to its output limit).

2. Using PID block default settings: The settings that come with a PID block as it is dragged and dropped into a configuration must not be used. When first applying PID to dynamic simulations of new plants, typical settings based on process type and scale span can be used as a starting point. However, tuning tests must be done and settings adjusted before operator training and loop commissioning.

3. Using parallel form and series tuning settings in the ISA standard form: A parallel form that uses integrator gain and derivative gain settings that are put into the ISA standard form as reset time and rate time settings can be off by orders of magnitude. A series form can provide good control with the rate time equal to or greater than the reset time. This is because interaction factors inherently reduce the PID gain and rate time and increase the PID reset time to prevent oscillations from the derivative mode contribution being greater than the contribution from the other modes. Using a rate time equal to or greater than the reset time in an ISA standard form can cause severe fast oscillations.

4. Using the wrong units for tuning settings: Here we consider just the series form and ISA standard form. Controllers can have a gain or proportional band setting for the proportional mode. The gain setting is dimensionless and is 100 percent divided by the proportional band. Some PID algorithms in control studies and actual industrial systems have the gain setting in engineering units, which leads to a very bizarre setting. The integral mode setting can be repeats per second, repeats per minute, minutes per repeat, or seconds per repeat. The units of these last two settings are commonly given as just minutes or seconds. The omission of the “per minute” can cause confusion in the conversion of settings. The conversion of the rate time is simpler, because the units are simply minutes or seconds.

5. Using the wrong units for output limits and anti-reset windup limits: In analog controllers and in many early DCS and PLC systems, the output, and consequently the output limits and anti-reset windup limits, were in percent. In modern control systems, the output is in engineering units, and the limits must be set in engineering units. For valves, the units are usually percent of valve stroke. For a primary (upper) PID that is sending a setpoint to a secondary (lower) PID, the primary PID output is in the engineering units of the secondary PID process variable.

6. Tuning level controllers: If you calculate the product of the valve gain, process gain, and measurement gain, where the process gain is simply the inverse of the product of the fluid density and vessel cross-sectional area, you realize the open loop integrating process gain is very small (e.g., 0.000001 1/sec), leading to a maximum PID gain for stability that is more than 100.
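A worked numeric sketch of that calculation in Python (the vessel and valve numbers are hypothetical):

```python
# open loop integrating process gain for level = valve gain x process gain
# x measurement gain, with process gain = 1 / (density x cross-sectional area)
valve_gain = 0.2                       # (kg/s of flow) per % of PID output
density    = 1000.0                    # kg/m^3
area       = 100.0                     # vessel cross-sectional area, m^2
level_span = 5.0                       # level measurement span, m

process_gain = 1.0 / (density * area)  # (m/s of level) per (kg/s of flow)
meas_gain    = 100.0 / level_span      # % of span per m of level
ki = valve_gain * process_gain * meas_gain   # (%/s per %) = 1/s
print(f"open loop integrating process gain = {ki:.7f} 1/sec")  # very small
```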
For surge tank level control, a PID gain closer to unity is desired to absorb fluctuations in inlet flows without passing them on as changes to a manipulated outlet flow that will upset downstream users. Users do not like a high PID gain even when tight level control is needed. Decreasing the level controller gain without a proportional increase in the reset time will cause nearly sustained slow rolling oscillations. Further decreases in the PID gain only make the oscillations worse. Most oscillations in production plants and poor performance of distillation columns can be traced back to poorly tuned level controllers. The solution is to choose an arrest time (lambda for integrating processes) to either maximize the absorption of variability (e.g., surge tank level control or distillate receiver level control where distillate flow is manipulated) or maximize the transfer of variability (e.g., reactor level for residence time control or distillate receiver level control where reflux flow is manipulated for internal reflux control). The integrating process tuning rules prevent the violation of the window of allowable PID gains by first setting the arrest time, using this time to compute the reset time, and finally computing the PID gain (a worked sketch follows the war stories below).

7. Violating the window of allowable controller gains: We can all relate to the fact that too high a PID gain causes oscillations. In practice, what we see more often is oscillations from too low a PID gain in primary loops. Most concentration and temperature control systems on well-mixed vessels are vulnerable to a PID gain that violates the low gain limit, causing slow rolling, nearly undamped oscillations. These systems have a highly lag dominant (near-integrating), integrating, or runaway process response. All of these processes benefit from the use of integrating process tuning rules to prevent the PID gain from being less than twice the inverse of the product of the open loop integrating process gain and reset time, preventing the oscillations shown in the figures. The oscillations in the figures could have been stopped by increasing the reset time. In industrial applications, the reset time in vessel control loops often needs to be increased by two or more orders of magnitude. Note that the oscillations get worse as the process loses internal self-regulation, going from a near-integrating (low internal negative feedback) to an integrating (no internal feedback) and to a runaway (positive feedback) open loop response. For runaway processes, there is also a minimum gain setting independent of reset time that is the inverse of the open loop runaway process gain. The identification of the open loop integrating process gain can generally be done in about four dead times, greatly reducing the test time and the vulnerability to load upsets.

8. Lacking recognition of sensor lag, transmitter damping, or filter-setting effect: A slow measurement response can give the illusion of better control. If the measurement time constant becomes the largest time constant in the loop, the PID gain can be increased, and the oscillations will be smoother as the measurement is made slower. This occurs all the time in flow control, pressure control, inline pH control, and temperature control of gas volumes, since the process time constant is less than a second. The real process variability has increased and can be estimated with a simple equation.
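The equation itself is not reproduced in this excerpt. For a roughly sinusoidal oscillation passing through a first-order lag, the standard amplitude ratio is presumably what is meant; below is a minimal Python sketch under that assumption, with an illustrative function name and numbers:

    import math

    def attenuated_amplitude(amp_in, period, tau):
        # Amplitude of a sinusoid of the given period after a first-order
        # lag of time constant tau (same time units as period). Inverting
        # the ratio (multiplying by the square root term) recovers the real
        # amplitude hidden behind a slow measurement; using the residence
        # time as tau gives the amplitude surviving a well-mixed volume
        # (used again in item 10 below).
        return amp_in / math.sqrt(1.0 + (2.0 * math.pi * tau / period) ** 2)

    # A 2 percent limit cycle with a 10 minute period entering a volume
    # with a 30 minute residence time emerges at roughly 0.11 percent.
    print(attenuated_amplitude(2.0, 10.0, 30.0))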
For more information about this widespread problem, see the 2 December 2014 Control Talk blog “Measurement Attenuation and Deception.” For details about how to prevent this in temperature control systems, see the 16 February 2015 ISA Interchange post “ Temperature Sensor Installation for Best Response and Accuracy .”

9. Failing to do tuning tests at different times, set points, and production rates: The installed characteristics of most control valves and most concentration, pH, and temperature processes are nonlinear. The process gain varies with operating point and process conditions, including relatively unknown changes in catalyst activity, fouling, and feed compositions. The valve gain varies with system resistances and the flow required. For operating point nonlinearities, the open loop process gain identified depends upon the step size and direction and the split-ranged valve being throttled. Temperature process time constants also tend to vary with the direction of the change. For more details, see the 19 January 2015 Control Talk blog “ Why Tuning Tests are Not Repeatable .”

10. Failing to increase PID gain to decrease backlash limit cycle amplitude: An attempt to decrease oscillation amplitude by decreasing gain will make the oscillation worse when the oscillation is a limit cycle from backlash (deadband). The amplitude from backlash is inversely proportional to the PID gain. The limit cycle period from backlash or stiction is also increased as the PID gain is decreased, reducing the attenuation from the filtering effect of process volumes. The same equation noted in item 8 can be used to estimate the attenuated amplitude at the outlet of a well-mixed volume by using the residence time (volume divided by throughput flow) as the filter time constant.

Having avoided mistakes, you are ready to take full advantage of the online addendum “ Top PID control opportunities .”

War stories

1. The trend charts of phosphorus furnace pressure looked worse after faster pressure transmitters were installed, even though the number of high-pressure reliefs had been dramatically reduced. Fortunately, the older, slower transmitters were left installed, showing that the amplitude of the pressure excursions had actually decreased after the faster transmitters were used for furnace pressure control.

2. A plant operated for several years with default tuning settings of gain and reset (repeats per minute) both equal to 1 for all of the PID controllers. Nearly every loop was oscillating, but the plant ingeniously managed to run by setting output limits to reduce oscillation amplitudes.

3. When a plant converted from analog controllers to a DCS, the plant was amazed at the improvement in the distillation column control. It turns out the configuration engineers did not realize the difference between PID gain and proportional band (PB). The analog controller for column overhead receiver level manipulating reflux had a PB of 100 percent that was then set as a gain of 100 in the DCS PID. The tight level control and consequential great internal reflux control stopped the slow rolling oscillations from violation of the low gain limit and rejected disturbances from “blue norther” cold rain storms.
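Here is the worked sketch promised in mistake 6: a minimal Python illustration of arrest-time tuning for a level loop. It assumes one common statement of the integrating process lambda rules (reset time Ti = 2λ + dead time, PID gain Kc = Ti / (Ki(λ + dead time)²)); all vessel, valve, and timing numbers are hypothetical.

    import math

    # Hypothetical vessel and instrumentation (illustration only)
    density = 1000.0            # kg/m3
    area = math.pi * 1.0 ** 2   # m2 cross section (2 m diameter vessel)
    valve_gain = 0.5            # (kg/s) per % of PID output (50 kg/s full stroke)
    meas_gain = 100.0 / 2.0     # % of level span per m (2 m span)

    # Open loop integrating process gain (%/s of PV per % of output).
    # Real vessels are often far slower (the article cites 0.000001 1/sec).
    ki = valve_gain * meas_gain / (density * area)   # ~0.008 here

    dead_time = 10.0     # s
    arrest_time = 300.0  # s; lambda chosen for the variability objective

    # Integrating process lambda rules (one common statement of them)
    reset_time = 2.0 * arrest_time + dead_time                      # ~610 s
    pid_gain = reset_time / (ki * (arrest_time + dead_time) ** 2)   # ~0.8

    # Low gain limit of the window of allowable gains (mistake 7):
    # the PID gain must exceed 2 / (Ki * reset time)
    assert pid_gain >= 2.0 / (ki * reset_time)

Note how the rules set the reset time first from the arrest time and only then the gain, which is what keeps the pair of settings inside the window of allowable gains.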
Addendum: Top PID control opportunities

Use cascade control so that secondary proportional, integral, derivative (PID) controllers (e.g., flow and jacket temperature controllers) isolate the primary PID controllers (e.g., composition, level, pH, and temperature) from the nonlinearities of the control valve installed flow characteristic, pressure disturbances, and process nonlinearities, and enable feedforward and ratio control. If the flowmeter does not have the rangeability needed, substitute an inferential flow measurement using the installed valve flow characteristic when the flow drops to the point where the meter signal is too noisy or erratic. (See Control Talk blog entries “ Best Control Valve Installed Flow Characteristic ” and “ Secondary Flow Loop and Valve Positioner Tips .”) The exception is that pressure controller outputs must usually go directly to the final control elements (e.g., control valve or variable frequency drive) to provide a faster response. Often the installed valve flow characteristic is linear for these pressure loops through the use of linear trim, because the pressure drop is relatively constant.

Use external reset feedback (e.g., dynamic reset limit) to ensure the primary PID output does not change faster than the secondary PID process variable can respond.

Use feedforward control, which nearly always ends up being ratio control, where the divisors and numerators are most often a flow rate but can be a speed or an energy rate. The ratio is corrected by a primary PID controller. The operator should be able to set the desired ratio and see the actual corrected ratio. Dynamic compensation should be applied as needed so that the manipulated flow arrives at the same point and at the same time in the process as the feedforward flow. Often this is done by inserting adjustable dead time and lead/lag blocks in the feedforward signal. To synchronize the timing of reactant flows or blend flows so that the stoichiometric ratio is maintained for changes in production rates and corrections in ratio, a leader set point is filtered and a ratio factor applied to become the set points of the other flow controllers. Each flow PID is tuned for a smooth response that is fast enough to deal with pressure disturbances and valve nonlinearities. The leader set-point filter is set large enough that all the flow loops respond in unison. (See “ Feedforward control enables flexible, sustainable manufacturing ,” March/April 2011 InTech .)

Use the right PID structure. The PI on error and D on error structure is often the right choice. If the process variable can only respond in one direction, which may be the case for batch processes with no reaction or phase change and no split-ranged opposing valve (e.g., temperature control with heating but no cooling, and pH control with base reagent but no acid reagent), a structure without integral action is needed (P on error and D on PV, no I). In these cases the bias is set to be the PID output when the PID process variable has settled out close to the set point. If overshoot of set point is critical and the time to reach set point and the load disturbance response are of no concern, a structure of I on error and PD on PV can be used. A more flexible approach uses a two-degrees-of-freedom PID structure, where the set point weight factors beta and gamma are set for the proportional and derivative modes, respectively, to optimize a compromise between the objectives for set point response and load response.
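To make the beta/gamma idea concrete, here is a minimal textbook-style sketch of one scan of an ISA standard form PID with set point weights. It is not any particular supplier's algorithm; PV filtering, output limits, and anti-reset windup protection are deliberately omitted.

    def pid_2dof(sp, pv, state, kc, ti, td, beta, gamma, dt):
        # One scan of an ISA standard form PID with set point weights.
        # beta/gamma = 1 puts P/D on error; 0 puts them on PV only.
        e_p = beta * sp - pv    # proportional term error
        e_i = sp - pv           # integral always acts on the full error
        e_d = gamma * sp - pv   # derivative term error
        state["integral"] += (kc / ti) * e_i * dt
        derivative = kc * td * (e_d - state["e_d_prev"]) / dt
        state["e_d_prev"] = e_d
        return kc * e_p + state["integral"] + derivative

    # Initialize the derivative memory at the current value to avoid a kick.
    sp, pv, beta, gamma = 55.0, 50.0, 0.5, 0.0
    state = {"integral": 0.0, "e_d_prev": gamma * sp - pv}
    out = pid_2dof(sp, pv, state, kc=0.8, ti=120.0, td=10.0,
                   beta=beta, gamma=gamma, dt=1.0)

With beta = 0.5 the proportional kick on a set point change is halved, and with gamma = 0 there is no derivative kick, at the cost of a slower approach to the new set point.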
Alternatively, a set point lead-lag can be used to achieve the desired set point response with a PID tuned for a good load disturbance response (minimum peak and integrated absolute errors). See Appendix C of Good Tuning: A Pocket Guide for details on what affects these errors. The set point lag is set equal to the PID reset time, and the lead is set to provide a faster set point response. A lead of zero is equivalent to a PID controller with no proportional or derivative action on error (e.g., beta and gamma equal to zero).

Tune all the loops in the right order using good software. Choose tuning rules (e.g., self-regulating versus integrating process) recognizing that self-regulating processes with time constant to dead time ratios greater than 4 can be considered to have a near-integrating response and should use integrating process tuning rules. Use tuning factors (e.g., lambda relative to dead time) based on different objectives (e.g., set point versus load response and maximizing transfer of variability versus maximizing absorption of variability) and difficult situations (e.g., resonance, interaction, and inverse response). See table D-1 in Appendix D of Good Tuning: A Pocket Guide for details. Tuning should in general proceed from the upstream to the downstream PIDs. The gas and liquid pressure PID controllers should be tuned first, followed by the secondary PID flow and utility system controllers. The level PID controllers should then be tuned for the right objective, which depends upon whether the level PID is responsible for enforcing a material balance (e.g., when the column temperature controller manipulates reflux flow) or just needs to keep the level in bounds because the manipulated flow upsets downstream unit operations (e.g., when the column temperature controller manipulates distillate flow). Finally, the primary concentration, pH, and temperature controllers should be tuned for the desired set point or load response and the allowed abruptness of movement of the manipulated flow when it can upset other users or come back to upset the respective loop (e.g., plug flow systems with heat integration and recycle streams). If the primary PID does not have a near-integrating, true integrating, or runaway response, and peak error and rise time are not a concern, an objective of minimizing overshoot of the primary PID output past the final resting value may be advantageous. Secondary PID or analog output set point rate limits with primary PID external reset feedback can prevent abrupt changes.

Use adaptive control. The PID controller tuning settings generally change with the split-ranged manipulated variable, with production rate, heat transfer surface fouling, catalyst activity, and set point, and with cycle time for batch processes (e.g., batch level, reaction rate, and concentration). Also, see the article “Overcoming challenges of PID controller and analyzer applications” for the opportunities of using an enhanced PID.

A version of this article originally was published at InTech magazine .
  • Why Do Most Vessel Control Loops Need the Reset Time Increased by Two or More Orders of Magnitude?

The post Why Do Most Vessel Control Loops Need the Reset Time Increased by Two or More Orders of Magnitude? first appeared on the ISA Interchange blog site. This excerpt from InTech magazine was written by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient, and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

The proportional, integral, derivative (PID) controller is the common key component of all control loops. Basic control systems depend upon the PID to translate the measurement signals into set points of secondary loop controllers, digital valve controllers, and speed controllers for variable frequency drives. The success of advanced control, such as model predictive control, depends upon the basic control system foundation and hence the PID.

Elmer Sperry developed the first example of the PID in 1911, and Nicolas Minorsky published the first theoretical analysis in 1922. Ziegler and Nichols published papers on the ultimate oscillation method and reaction curve method for controller tuning in 1942 and 1943. While the parameters chosen as factors in the tuning settings provided overly aggressive control, the basic premise of an ultimate gain and ultimate period is essential to the fundamental understanding of the limits of stability. The identification of the slope in the reaction curve method is a key to the use of the near-integrator concept that we will find here to be critical for most composition, temperature, and pH loops to improve tuning settings and dramatically shorten test times. Bohl and McAvoy published a paper in 1976 that showed the PID can provide nearly optimal control for unmeasured load disturbances. Shinskey wrote many books detailing the knowledge of process dynamics and relationships essential to the best application of PID control. Shinskey developed the original equation for integrated error from disturbances as a function of tuning settings, as detailed in the January/February 2012 InTech article “PID tuning rules.” Shinskey also published a book dedicated to PID controllers that showed the simple addition of a dead time block in the external reset feedback path could further enhance the PID performance by dead time compensation.

Internal model control (IMC) and lambda tuning rules were developed based on pole and zero cancellation to provide a good response to set points and disturbances at the process outlet. However, most of the improvement in set point response could have been achieved by a set point lead-lag or PID structure. Also, these tuning rules do not perform well for the more common case of disturbances on the process input (load upsets), particularly for lag dominant processes. Skogestad developed significant improvements to IMC tuning rules. Bialkowski showed that always using lambda rather than lambda factors, relating lambda to dead time, and treating lag dominant processes as near-integrators enable the PID to provide good nonoscillatory control for load upsets besides dealing with the many different difficulties and objectives for which lambda tuning was originally designed. Not realized is that most methods converge to the same basic expression for the PID gain and reset time when the objective is load disturbance rejection and that a tuning parameter that is the closed loop time constant or arrest time is set relative to dead time.
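The near-integrator shortcut mentioned above can be sketched in a few lines of Python: take the change in the PV ramp rate per change in controller output after waiting a few dead times (the article cites about four). This is a minimal, noise-free illustration with hypothetical names and numbers; commercial identification software does far more.

    def near_integrator_gain(pv, out_step, dt, t_step, settle):
        # Open loop integrating process gain (%/s of PV per % of output)
        # from a short step test: change in ramp rate / output step.
        # pv: PV samples in %, one every dt seconds; the output step of
        # out_step % is made at time t_step; settle is the wait (seconds,
        # roughly four dead times) for the new ramp rate to develop.
        i_step = int(t_step / dt)
        i_settle = int((t_step + settle) / dt)
        slope_before = (pv[i_step] - pv[0]) / (i_step * dt)
        slope_after = (pv[-1] - pv[i_settle]) / ((len(pv) - 1 - i_settle) * dt)
        return (slope_after - slope_before) / out_step

    # Synthetic test: a ramp of 0.01 %/s becomes 0.05 %/s after a 5 % step.
    dt, level, pv = 1.0, 0.0, []
    for t in range(180):
        level += (0.01 if t < 60 else 0.05) * dt
        pv.append(level)
    print(near_integrator_gain(pv, out_step=5.0, dt=dt,
                               t_step=60.0, settle=40.0))   # about 0.008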
Click here to continue reading Greg McMillan’s article at InTech magazine.