Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.
The post How to Measure pH in Ultra-Pure Water Applications first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
Danny Parrott is an instrumentation and controls specialist at Spallation Neutron Source. Danny is a detail-oriented instrumentation and controls professional experienced in the areas of electrical, electronics and controls specification, installation, maintenance, and project planning. Danny’s question is important in dealing with the many challenges of reliable and accurate pH measurement in ultra-pure water and, more generally, in streams with exceptionally low conductivity.
What are some opinions, thoughts, or practical experience relating to pH measurements in ultra-pure water applications?
Ultra-pure water applications pose special problems because of the exceptionally low conductivity of the fluid resulting from the absence of ions. The consequences are extreme sensitivity to fluid velocity and spurious ions, unstable reference junction potentials, sample contamination, and loss of electrical continuity between the reference and measurement electrodes. A functional electrical diagram showing the resistances and potentials gives an insightful view of nearly all of the sources of problems with pH measurements in general. Ultra-pure water and process fluids with exceptionally small, near-zero conductivity threaten the continuity of the electrical circuit between the reference and measurement electrode terminals at the transmitter through an extraordinarily large electrical resistance (R8 in Figure 1). Figure 1, a pH electrode functional electrical circuit diagram for a combination electrode, is a great way of recognizing the many potential sources of error in a pH measurement.
Figure 1: pH Electrode Functional Electrical Circuit Diagram
The solution for online measurements is to use a flowing junction reference electrode that provides a small fixed liquid junction potential in a low flow assembly for a combination electrode. The combination electrode assembly ensures a short, fixed-distance path of reference electrolyte to the measurement electrode and a small, fixed fluid velocity. The assembly also provides a mounting for an electrolyte reservoir that sustains a small, fixed reference junction flow, as shown in Figure 2. The flow of reference electrode electrolyte reduces the fluid velocity and the electrical resistance (R8) in the fluid path, and provides a much more constant liquid junction potential (E5) that does not jump or shift due to the appearance of spurious ions. The resistances and potentials in the diagram provide a wealth of information. The flow assembly also has a special cup holder for calibration with buffer solutions. A solution ground connection reduces the effect of ground potentials. Temperature compensation must be accurate and fast.
Figure 2: Low Flow Assembly With Flowing Reference Junction for Low Conductivity pH Applications
The pH measurement calibration needs to be checked and adjusted before installation and periodically thereafter by inserting the electrode(s) in buffer solutions. Making a pH measurement of a sample is very problematic because of contamination from glass beaker ions, absorption of carbon dioxide creating carbonic acid, and accumulation of electrolyte ions from the flowing junction. The sample volume needs to be large and the measurement made quickly to reduce the effect of accumulating ions. A closed plastic sample container is employed to minimize contamination. The same type of electrode(s) as in the online measurement should be utilized for the sample measurement so that reference junction potentials are consistent. Since these sample pH measurement requirements are rarely satisfied, buffers instead of process samples should be used for calibration checks.
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.
In exceptionally low conductivity process fluids, there is often not enough water content to keep the glass measurement electrode hydrated. Also, the activity of the hydrogen ion is severely decreased by the lack of water, and the extremely different dissociation constant of a non-aqueous solvent can cause a pH range that is outside of the normal 0 to 14 pH range. For these applications, a flowing reference electrode is also needed, but an automatically retractable insertion assembly is useful to periodically retract, flush, soak and calibrate the electrodes, reducing process exposure time and hydrating/rejuvenating the measurement electrode’s glass surface. For more on the challenges of semi-aqueous pH measurements, see the Control Talk article The wild side of pH measurement. For a much more complete view of what is needed for pH applications, see the ISA book Advanced pH Measurement and Control.
For pH measurements used for process control, I recommend three pH assemblies and middle signal selection. The lower lifecycle cost from less frequent and more effective maintenance, together with better process performance, more than pays for the cost of the three measurements. Middle signal selection will inherently ignore a single measurement failure of any type and dramatically reduce the effect of spikes, noise, and the consequences of slow or insensitive glass electrodes. The middle selection also eliminates unnecessary calibration checks and provides much more intelligent knowledge of electrode performance, enabling the optimum time for calibration and replacement.
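As a minimal sketch (not from the original text), middle signal selection of three redundant pH measurements is simply a median select:

```python
def middle_signal_select(ph_a, ph_b, ph_c):
    """Return the middle (median) of three redundant pH measurements.

    A single failure of any type (high, low, frozen, spiking) in one
    electrode is inherently ignored, because the failed signal becomes
    the highest or lowest of the three and is never selected.
    """
    return sorted([ph_a, ph_b, ph_c])[1]

# One electrode fails high (e.g., a spike toward 14 pH in an acidic
# stream): the middle value is still a valid measurement.
print(middle_signal_select(4.1, 4.2, 14.0))  # -> 4.2
```

Comparing how often and how far each electrode deviates from the selected middle value is also what provides the "intelligent knowledge on electrode performance" mentioned above.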
To download a free PDF excerpt from Advanced pH Measurement and Control, click here.
Connect with Greg:
The post How to Optimize PID Controller Settings and Options first appeared on the ISA Interchange blog site.
The following discussion is based on the ISA Mentor Program webinar recordings of the three-part series on PID options and solutions. The webinar is discussed in the Control Talk blog post PID Options and Solutions – Part 1 and the post PID Options and Solutions – Parts 2 and 3. Since the following questions from one of our most knowledgeable recent protégés Adrian Taylor refer to slide numbers, please open or download the presentation slide deck ISA Mentor Program WebEx PID Options and Solutions.
Figure 1: ISA Standard Form (see slide #42 from the presentation PDF)
Figure 1 depicts the only known time domain block diagram of the ISA Standard Form, with eight different PID structures set by setpoint weight factors and the positive feedback implementation of the integral mode that enables true external-reset feedback (ERF). Many capabilities, such as deadtime compensation, directional move suppression, and the elimination of oscillations from deadband, poor resolution, poor sensitivity, wireless update times, and analyzer sample times, are readily achieved by turning on ERF.
Adrian Taylor’s Question 1:
Slide 9 details a Y factor, which varies between 0.28 and 0.88, for converting the faster lags to apparent deadtime. You mentioned this Y factor can be looked up in charts given by Ziegler and Nichols (Z&N); are you able to provide me with a copy of these charts or point me to where I can get a copy?
Greg McMillan’s Answer 1:
The chart and equations to compute Y are on page 137 of my Tuning and Control Loop Performance Fourth Edition (Momentum Press 2015). The original source is Ziegler, J. G., and Nichols, N. B., “Process Lags in Automatic Control Circuits,” ASME Transactions, 1943.
Adrian Taylor’s Question 2:
On slide 17 you give recommendations for setting of Lambda, I’m presuming these recommendations are for integrating and near integrating systems only and wondered what your recommendations are for setting of Lambda when using the self-regulating rules?
Greg McMillan’s Answer 2:
I was focused on near-integrating and integrating processes, since these are the more important ones in the type of plants I worked in, but the recommendations for Lambda apply to self-regulating processes as well. A Lambda that is a fraction of the deadtime for extremely aggressive control is of theoretical value only, to show how good PID can be if you are publishing a paper. I would never advocate a Lambda of less than one deadtime unless you are absolutely confident that you know the dynamics exactly, that they never change, and that you can tolerate some oscillation. A Lambda that is a multiple of the deadtime for robust control is of practical value for dealing with changing or unknown dynamics and providing a smooth response.
Adrian Taylor’s Question 3:
On slide 17 where the recommended Lambda settings are given, it recommends a Lambda of 3 and 6 respectively for adverse changes in loop dynamics of less than 5 and 10. What do 5 and 10 refer to?
Greg McMillan’s Answer 3:
I should have said “factor of 5” and “factor of 10” instead of just “5” and “10”, respectively, in the statement on robustness. These factors are actually gain margins. I also should not have rounded up to a factor of 10, and instead should have said a factor of 9 for a Lambda of 6x deadtime. While this specifically indicates what increase in the self-regulating or integrating process gain, as a factor of the original, can occur without the loop going unstable, it can be extended to give an approximate idea of how much other adverse changes in loop dynamics can be tolerated if the process gain is constant. For example, the factor applies roughly to the increase in total loop deadtime for deadtime-dominant self-regulating processes, and to the decrease in process time constant for lag-dominant processes, that would cause instability. This extension assumes Lambda tuning where the Lambda in every case is a factor of deadtime, with the reset time proportional to the process time constant for deadtime-dominant processes and proportional to the deadtime for lag-dominant processes. The reasoning can be seen in the equations for PID gain and reset time on slides 30 and 32, without my minimum limits on reset time.
Adrian Taylor’s Question 4:
On slide 17 there is a statement “Adverse changes are multiplicative…”. I didn’t quite understand the context of this statement; are you able to expand a little more? (This probably goes hand in hand with question 3 above.)
Greg McMillan’s Answer 4:
An increase in process gain by a factor of 2 combines multiplicatively with other adverse changes toward the total factor of 9. For example, with a process gain increase of 2, a decrease in the process time constant by a factor of 4.5 for lag-dominant self-regulating processes, or an increase in loop deadtime by a factor of 4.5 for deadtime-dominant processes, reaches the combined factor of 9.
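The multiplicative bookkeeping above can be written out as a small sketch (the function name and framing are illustrative, not from the webinar): the design gain margin is consumed by the product of the individual adverse-change factors.

```python
def remaining_factor(gain_margin, *adverse_factors):
    """Given a design gain margin and the adverse changes already
    consumed (as multiplicative factors), return the factor of further
    adverse change that can be tolerated before instability."""
    used = 1.0
    for f in adverse_factors:
        used *= f
    return gain_margin / used

# A gain margin of 9 (Lambda = 6x deadtime) with a process gain
# increase of 2 leaves a factor of 4.5 for, e.g., a decrease in
# process time constant or an increase in loop deadtime.
print(remaining_factor(9.0, 2.0))  # -> 4.5
```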
Adrian Taylor’s Question 5:
On slide 31 when calculating arrest time we use a value Δ% which is described as the maximum allowable level change (%). Just to be sure I understand the value to be used here… If I had a setpoint high limit of 80% and the tank overflow is at 100% then the value of Δ% would be equal to 100-80=20%?
Greg McMillan’s Answer 5:
Yes, if the high level alarm is above the high setpoint limit. Δ% is the maximum allowable deviation, which is often the difference between an operating point and the point where there is an alarm.
Adrian Taylor’s Question 6:
On slide 31 when calculating the arrest time we use a value Δ% which is described as the maximum allowable PID output change. Is this just simply the difference between the output high and low limits? So if the output high limit was 100% and the output low limit was 0%, then the value of Δ% would be equal to 100-0=100%?
Greg McMillan’s Answer 6:
Yes. This term in the equation is counterintuitive, but it results from the derivation of the equation in Tuning and Control Loop Performance, Fourth Edition, using the minimum integrating process gain.
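The slide-31 equation is not reproduced here, so the following is only a hedged sketch of the arrest-time reasoning using a commonly cited form: the largest allowable arrest time follows from the maximum allowable level deviation, the maximum allowable output change, and the minimum integrating process gain, and the common lambda integrating-process relationships then give the tuning. All names, units and the exact formulas below are assumptions for illustration.

```python
def max_arrest_time(delta_pv_pct, delta_co_pct, ki_min):
    """Hypothetical arrest-time limit for an integrating (level) loop.

    delta_pv_pct : maximum allowable level deviation (%), e.g. the
                   distance from the setpoint high limit to overflow
    delta_co_pct : maximum allowable PID output change (%), e.g. the
                   output high limit minus the output low limit
    ki_min       : minimum integrating process gain (%/sec per % output)
    """
    return delta_pv_pct / (ki_min * delta_co_pct)

def lambda_integrating_tuning(ki, arrest_time):
    """Assumed form of the lambda integrating-process tuning rules
    (not taken verbatim from the slides)."""
    kc = 2.0 / (ki * arrest_time)   # PID gain
    ti = 2.0 * arrest_time          # reset time (sec)
    return kc, ti

# 20% allowable level change, 100% allowable output change,
# Ki = 0.001 %/sec per % gives a 200 sec arrest-time limit.
print(max_arrest_time(20.0, 100.0, 0.001))
```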
Adrian Taylor’s Question 7:
I am going to purchase a copy of your ‘Tuning and Control Loop Performance’ book shown at the end of the presentation. I am curious if you think it is also worth purchasing the tuning rules pocket book, or if all the content of the pocket book is also contained in the larger book I am already purchasing?
Greg McMillan’s Answer 7:
The ‘Tuning and Control Loop Performance’ book is much more complete and explanatory but can be overwhelming. The pocket guide provides a more concise and focused way of knowing what to do.
Adrian Taylor’s Question 8:
At the end of the webinar a question was posed about tuning loops where it is not possible to put the loops in manual. I am seeking more specifics based on my notes on the procedure:
Greg McMillan’s Answer 8:
A closed loop procedure, using your notes, to give approximate tuning that keeps the loop in automatic is as follows, if you cannot put the loop in manual or do not have software to identify the loop dynamics:
Inverse response, negative resistance, positive feedback and discontinuities can cause processes to jump, accelerate and oscillate, confusing the control system and the operator. Not properly addressing these situations can result in equipment damage and plant shutdowns, in addition to the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive.
We can appreciate how positive feedback causes problems with sound systems. We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control.
The compressor characteristic curves from the compressor manufacturer plot compressor pressure rise versus suction flow. Each curve, for a given speed or suction vane position, shows a decreasing pressure rise whose slope magnitude increases as the suction flow increases in the normal operating region. The pressure rise consequently decreases more as the flow increases, opposing additional increases in compressor flow and creating a positive resistance to flow. Not commonly seen is that the slope of the characteristic curve to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve. As the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow. When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign again, creating negative resistance. The compressor flow jumps in about 0.03 seconds from the start of negative resistance to some point of positive resistance. The result is a jump in 0.03 seconds to negative flow across the negative resistance, a slower transition along the positive resistance to zero flow, then a jump in 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance back to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds.
The following plot of a pilot plant compressor characteristic for a single speed shows the path 2 along the curve 1. When the operating point reaches point B, which is where the compressor characteristic curve slope is zero, the operating point jumps to point C due to the negative resistance. This jump corresponds to the precipitous drop in flow that signals the start of the surge cycle and the subsequent reversal of flow (negative acfm). After this jump to point C, the operating point follows the compressor curve from point C to point D as the plenum volume is emptied due to reverse flow. When the operating point reaches point D, which is where the compressor characteristic slope is zero again, the operating point jumps to point A due to negative resistance. If the surge valve is not opened, the operating point walks again from A to B starting the whole oscillation all over again.
Once a compressor gets into surge, the very rapid jumps and oscillations confuse the PID controller. Even a very fast PID and control valve are not fast enough. Consequently, the oscillation persists unless an open loop backup holds the surge valves open until the operating point has been sustained well to the right of the surge curve for about 10 seconds, at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID plus an open loop backup that detects a zero slope indicating an approach to surge, or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point.
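The open loop backup described above can be sketched as a simple state machine. The thresholds, units and the 10-second hold below are illustrative assumptions, not values from the original text, which specifies only the dip detection and the sustained-recovery transfer back to PID.

```python
def open_loop_backup(flow_samples, dt, dip_rate=-2.0,
                     safe_flow=50.0, hold_time=10.0):
    """Sketch of an open loop surge backup.

    Detects a precipitous dip in suction flow (rate below dip_rate,
    %/sec), holds the surge valve open, and transfers back to PID
    control only after the flow has stayed above safe_flow (a point
    assumed to be well right of the surge curve) for hold_time sec.
    Returns the backup state for each sample.
    """
    holding = False
    time_safe = 0.0
    prev = flow_samples[0]
    states = []
    for flow in flow_samples:
        rate = (flow - prev) / dt
        prev = flow
        if rate < dip_rate:          # rapid dip: actual surge detected
            holding = True
            time_safe = 0.0
        elif holding:
            if flow > safe_flow:
                time_safe += dt
                if time_safe >= hold_time:
                    holding = False  # bumpless transfer back to PID
            else:
                time_safe = 0.0
        states.append('open-loop hold' if holding else 'PID control')
    return states
```

A production implementation would of course also watch the characteristic curve slope to catch the approach to surge before the dip occurs.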
The same shape, but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users.
A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured.
Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat of reaction. However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential, where the time constants and deadtime associated with heat transfer in the cooling surface and thermowell and with the loop response are much less than the positive feedback time constant. Derivative action must be maximized and integral action minimized. In some cases a proportional-plus-derivative controller is used.
Positive feedback can also occur when parallel heat exchangers have a common process fluid input, each with an outlet temperature controller whose setpoint is close to the boiling point or to a temperature resulting in vaporization of a component in the process fluid. Each temperature controller manipulates a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating, and the resulting bubbles expand back into the exchanger, increasing the back pressure and hence further decreasing the process flow. The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration, and slugs of water can damage any discontinuity in their path. When all of the water is pushed out of the hot exchanger, its temperature drops, drawing feed from the cold heat exchanger, which then overheats, repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in one flow do not affect another, and a lower temperature setpoint.
Acceleration in the response of a process variable is also seen as the pH approaches neutrality in a strong acid and strong base system, due to the increase in process gain by a factor of 10 for each pH unit closer to neutrality (e.g., 7 pH). The result is a limit cycle across the steep portion of the titration curve to the flat portions where the slope, and thus the process gain, is much smaller. The solution is to use signal characterization, excellent mixing, and very precise reagent valves.
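A minimal sketch of why the gain changes so sharply for a strong acid and strong base: the titration curve follows from the charge balance with the water dissociation constant Kw = 1e-14. The concentrations and the numerical-slope comparison below are illustrative assumptions.

```python
import math

def ph_strong_acid_base(excess_base_normality):
    """pH from the charge balance for a strong acid / strong base
    system: [H+] - [OH-] = -x, where x is the net excess base (N).
    Solving [H+]^2 + x*[H+] - Kw = 0 for [H+], with Kw = 1e-14."""
    x = excess_base_normality
    h = (-x + math.sqrt(x * x + 4e-14)) / 2.0
    return -math.log10(h)

# Numerical slope (process gain, pH per normality of reagent)
# near neutrality versus out on the flat portion near 4 pH:
d = 1e-7
gain_at_7 = (ph_strong_acid_base(d) - ph_strong_acid_base(-d)) / (2 * d)
gain_at_4 = (ph_strong_acid_base(-1e-4 + d)
             - ph_strong_acid_base(-1e-4 - d)) / (2 * d)
print(gain_at_7 / gain_at_4)  # gain is orders of magnitude larger at 7 pH
```

Signal characterization works by translating the measured pH through this curve back to reagent demand, which linearizes the loop.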
Inverse response from changes in phase, commonly seen in boiler level control, occurs when an increase in cold feedwater flow causes a collapse of bubbles in the downcomers, making the drum level shrink as liquid in the drum falls back down into the downcomers. In the opposite direction, a decrease in cold feedwater flow causes a formation of bubbles in the downcomers, making the drum level swell as liquid rises up from the downcomers into the drum. Preheating the feedwater can greatly reduce shrink and swell. The control solution is normally a feedforward of steam flow to feedwater flow, with less feedback action, in a setup called three-element drum level control. However, the shrink and swell can be very large for drums that are too small as a result of a misguided attempt to save capital costs, or as a result of pushing plant capacity beyond the original design. In these cases, the direction of the very start of the feedforward signal change is reversed and then decayed out so it ends up in the right direction for the material balance. This counterintuitive action helps prevent a level shutdown but must be very carefully monitored. A warmer feedwater can make this the wrong action.
Limit cycles also develop from resolution limits, typically as a result of stiction in control valves, but they can also originate from the input cards used to change the speed of variable frequency drives (VFDs). Believe it or not, the standard VFD input card, perhaps even to this day, has a resolution of only about 0.4%. Resolution limits create limit cycles that cannot be stopped unless all integrating action is removed from the process and controllers. The limit cycle period can be reduced by increasing the controller gain, but the amplitude is set by the process gain.
If there are two or more sources of integrating action in a control loop, limit cycles also develop from deadband typically as a result of backlash in control valves but can also originate from deadband settings in the setup of variable frequency drives (VFDs) or configured in the split range point of controllers. The limit cycle period and amplitude can be reduced by increasing the controller gain.
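The resolution-limit amplitude claim above can be put into a back-of-the-envelope sketch. The relationship used (smallest realizable output step times open-loop process gain) and the example numbers are illustrative assumptions consistent with the text.

```python
def resolution_limit_cycle_amplitude(resolution_pct, open_loop_gain):
    """Approximate peak-to-peak limit-cycle amplitude in the process
    variable caused by a resolution limit (e.g., valve stiction or a
    coarse VFD input card): the smallest realizable output step times
    the open-loop process gain.  Note that the amplitude is set by the
    process gain, not by the controller tuning; tuning only changes
    the period."""
    return resolution_pct * open_loop_gain

# A standard VFD input card resolution of about 0.4% with an
# open-loop gain of 2 %PV per %speed gives a 0.8% PV limit cycle.
print(resolution_limit_cycle_amplitude(0.4, 2.0))  # -> 0.8
```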
Many processes have integrating action. Positioners may mistakenly have integral action. Most controllers have integral action, but it can be suspended by an integral deadband or by the use of external-reset feedback (dynamic reset limit) where there is a fast readback of the actual valve position or VFD speed. For a near-integrating, true integrating, or runaway process response, there is a PID gain window where oscillations result from too low as well as too high a PID gain. The low PID gain limit increases as integral action increases (reset time decreases).
To summarize: to eliminate oscillations, the best solution is a design that eliminates negative resistance, inverse response, positive feedback and discontinuities. When this cannot provide the total solution, operating points may need to be restricted, the controller gain increased, and the integral action decreased or suspended. Not covered here are the oscillations due to resonance and interaction. In these situations, better pairing of controlled and manipulated variables is the first choice. If this is not possible, see if the faster loop can be made faster and the slower loop slower, so that the closed loop response times of the loops differ by a factor of five or more. The suspension of integral action, best done by external-reset feedback, can also help. The same rule and solution work for cascade control. If pairing and tuning do not solve the interaction problem, then decoupling via feedforward of one controller output to the other is needed, or moving on to model predictive control.
If this gives you a headache from concerns raised about your applications, suspend thinking about the problems and use creativity and better tuning when you can actually do something.
The post Webinar Recording: Feedforward and Ratio Control first appeared on the ISA Interchange blog site.
This educational ISA webinar on control valves was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Feedforward control and ratio control that preemptively correct for load disturbances or changes in leader flow are greatly underutilized due to the lack of understanding of how to configure the control and determine the parameters needed. This webinar provides key insights on how feedforward control often simplifies to ratio control. It also explains how to identify the parameters so that the feedforward or ratio control correction does not arrive too late or too soon and the correction has the right sign and value to cancel out the load disturbance or achieve the right ratio to the leader flow.
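As a hedged sketch of the ideas in the webinar (the class and function names, the first-order lead/lag form, and the example numbers are assumptions for illustration, not taken from the recording), feedforward that simplifies to ratio control computes the follower flow setpoint as a ratio of the leader flow, with a lead/lag providing the dynamic compensation so the correction arrives neither too soon nor too late:

```python
class LeadLag:
    """Discrete first-order lead/lag for feedforward dynamic
    compensation (times in the same units as dt)."""
    def __init__(self, lead, lag, dt):
        self.a = dt / (lag + dt)   # lag filter coefficient
        self.lead = lead
        self.dt = dt
        self.y = None
        self.prev_u = None

    def update(self, u):
        if self.y is None:         # initialize bumplessly
            self.y = u
            self.prev_u = u
        du = (u - self.prev_u) / self.dt
        self.prev_u = u
        # lag filter applied to (input + lead * input derivative)
        self.y += self.a * ((u + self.lead * du) - self.y)
        return self.y

def follower_setpoint(leader_flow_filtered, ratio, feedback_trim=0.0):
    """Follower flow setpoint = ratio x (dynamically compensated)
    leader flow, trimmed by the feedback controller so the correction
    has the right sign and value."""
    return ratio * leader_flow_filtered + feedback_trim
```

With zero lead and a small lag the correction is delayed; with lead greater than lag it is advanced. The feedback trim corrects the residual error from an imperfect ratio.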
The post, Maximizing Synergy between Engineering and Operations, first appeared on ControlGlobal.com's Control Talk blog.
The operator is by far the most important person in the control room, having the most intimate knowledge and “hands on” experience with the process. Engineers who are most successful with process improvements realize they need to sit with operators and observe what they are doing to deal with a variety of situations. Process engineers tend to recognize this need more than automation engineers. Improvements in operator interfaces, alarms, measurements, valves, and control systems are best accomplished by a synergy of knowledge gained in meetings between research, design, support, maintenance and operations, where each group talks about what it thinks are the problems and opportunities. ISA Standards and Virtual Plants can provide a mutual understanding in these discussions.
The most successful process control improvement (PCI) initiative at Monsanto and Solutia used such discussions, with some preparatory work on what the process is actually doing and is capable of doing. An opportunity sizing detailed the gaps between current and potential performance, estimated by identifying the best performance found from cost sheets and from a design of experiments (DOE), most often done in a virtual plant due to the increasingly greater limitations on such experimentation in an actual plant. After completion of the opportunity sizing, a one- or two-day opportunity assessment was held, led by a process engineer, with input sought and given by operations, accounting and management, marketing, maintenance, field and lab analyzer specialists, instrument and electrical engineers, and process control engineers. Marketing provided the perspective and details on how the demand for and value of different products was expected to change. This knowledge was crucial for effectively estimating the benefits from increases in process flexibility and capacity. Opportunities for procedure automation and plantwide ratio control, making the transition from one product to another faster and more efficient, were consequently identified. Agreement was sought, and often reached, on the percentage of each gap that could be eliminated by the potential PCIs proposed and discussed during the meeting. A rough estimate of the cost and time required for each PCI implementation was also listed. The ones with the least cost and time requirements were noted as “Quick Hits”. To take advantage of the knowledge, enthusiasm and momentum, the “Quick Hits” were usually started immediately after the meeting or the following week.
Synergy can be maximized by exploring a wide spectrum of scenarios in a virtual plant that can run faster than real time, with the results discussed in training sessions. Every engineer, scientist, technician, and operator should be involved. If necessary, this can be done at luncheons. Any resulting webinar should be recorded, including the discussions. See the Control article “Virtual Plant Virtuosity” and the ISA Mentor Program Webinar Recordings for this and much more in terms of gaining and using operational, process and automation system knowledge.
Webinar recordings should focus on the level of understanding needed and achievable in the plant, not on what a supplier would like to promote. The ability of operators to learn the essential aspects and principles of process, equipment, and automation system performance should not be underestimated. We want to ensure the operator knows exactly and quickly what is happening and is able to get at the root cause of a problem, preemptively preventing poor process performance and SIS activation. Operators need to be aware of the severe adverse effect of deadtime. Fortunately, operators want to learn!
Finding the real causes of potential abnormal situations is critical for improving the HMI, alarm systems, engineering, maintenance and operations. Ideally there should be a single alarm of elevated importance identifying the root cause (e.g., a state-based alarm), and the operator should be able, in the HMI, to readily investigate the conditions associated with the root cause. Maintenance should be able to know what mechanical or automation component to repair or replace. Engineering should design procedure automation (state-based control) to automatically deal with the abnormal situation.
Often the very first abnormal measurement is an indication of the root cause. However, the abnormal condition should be upstream, and the measurement of the abnormal condition should be faster than the measurement of other problems that occur as a consequence or coincidence. This is a particular concern for temperature, because thermowell lags can be 10 to 100 seconds depending upon fit and velocity. For pH, the electrode lags can range from 5 to 200 seconds depending upon glass age, dehydration, fouling and velocity. There is also the deadtime associated with any transportation delay to the sensor. Finally, an output correlated with an input is not necessarily a cause-and-effect relationship. I find process analysis, some form of fault tree diagram, and the investigation of relevant scenarios in a virtual plant to be most useful.
Sharing useful knowledge is the biggest obstacle to success. The biggest obstacle can become the biggest achievement.
The post Webinar Recording: Strange but True Process Control Stories first appeared on the ISA Interchange blog site.
Greg McMillan presents lessons learned the hard way during his 40-year career, through concise “War Stories” of mistakes made in the field. Many of these mistakes are still being made today with some posing a safety risk, as well as potentially reducing process efficiency or capacity.
The post Webinar Recording: Temperature Measurement and Control first appeared on the ISA Interchange blog site.
Temperature is the most important common measurement that is critical for process efficiency and capacity because it not only affects energy use but also production rate and quality. Temperature plays a critical role in the formation, separation, and purification of product. Here we see how to get the most accurate and responsive measurements and the best control for key unit operations.
The usual concern is whether an automation system is too slow. There are some applications where an automation system is disruptive by being too fast. Here we look at what determines whether a system should be faster or slower, what the limiting factors are, and thus the solution to meeting a speed of response objective. In the process, we will find there are a lot of misconceptions. The good news is that most of the corrections needed are within the realm of the automation engineer’s responsibility.
The more general case, with possible safety and process performance consequences, is when the final control element (e.g., control valve or variable frequency drive), transportation delay, sensor lag(s), transmitter damping, signal filtering, wireless update rate or PID execution rate is too slow. The question is what the criteria and priorities are in terms of increasing the speed of response.
The key to understanding the impact of slowness is to realize that the minimum peak error and minimum integrated absolute error are proportional to the deadtime and the deadtime squared, respectively. The exception is deadtime dominant loops, which basically have a peak error equal to the open loop error (the error if the PID were in manual) and thus an integrated error that is proportional to deadtime. It is important to realize that this deadtime is not just the process deadtime but a total loop deadtime that is the summation of all the pure delays and the equivalent deadtime from lags in the control loop, whether in the process, valve, measurement or controller.
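One way to see these proportionalities is a minimal sketch for a near-integrating process: during the total loop deadtime the PID can do nothing, so the error ramps for that long, and the resulting transient area scales with deadtime squared. The integrating gain and load step values below are illustrative assumptions, not data from any real loop.

```python
# Minimum achievable errors for a load step on a near-integrating process.
# During the total loop deadtime the PID can do nothing, so the error ramps
# at ki*dL for one deadtime; the triangular transient area then scales with
# deadtime squared. ki (%/%/s) and dL (%) are assumed, illustrative numbers.

def min_peak_error(ki, dL, deadtime):
    # Error ramps for one total loop deadtime before correction can arrive.
    return ki * dL * deadtime

def min_integrated_error(ki, dL, deadtime):
    # Roughly triangular error transient: area scales with deadtime squared.
    return ki * dL * deadtime ** 2

ki, dL = 0.002, 10.0            # assumed integrating process gain and load step
for theta in (10.0, 30.0):      # total loop deadtime in seconds
    print(theta, min_peak_error(ki, dL, theta), min_integrated_error(ki, dL, theta))
```

Tripling the deadtime triples the minimum peak error and increases the minimum integrated error ninefold, which is why shaving even small deadtime contributions pays off.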
These minimum errors are only achieved by the aggressive tuning seen in the literature but not used in practice, because of the inevitable changes and unknowns concerning gains, deadtimes, and lags. There is always a tradeoff between minimization of errors and robustness. Less aggressive and more robust tuning, while necessary, results in a greater impact of deadtime, in that the gain margin (ratio of ultimate gain to PID gain) and the phase margin (degrees that a process time constant can decrease) are achieved by setting the tuning to be a greater multiple of the deadtime. For example, to achieve a gain margin of 6 and a phase margin of 76 degrees, lambda is set at 3 times the deadtime.
The actual errors get larger as the tuning becomes less aggressive. The actual peak error is inversely proportional to the PID gain. The actual integrated error is proportional to the ratio of the integral (reset) time to the PID gain. Consider the use of lambda integrating-process tuning rules for a near-integrating process, where lambda is an arrest time. If you triple the deadtime used in setting the PID gain and reset time to maintain a gain margin of about six and a phase margin of 76 degrees, you decrease the PID gain and increase the reset time each by about a factor of six (two times the ratio of the new to the original deadtime), increasing the actual integrated error by a factor of about thirty-six when the new deadtime is 3 times the original deadtime.
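A sketch of the commonly published lambda tuning rules for integrating processes shows how the integrated error metric (reset time divided by PID gain) inflates as tuning slows. The integrating gain and deadtimes are assumed numbers; how fast the metric grows depends on how lambda is set relative to the deadtime.

```python
# Lambda integrating-process tuning rules, with lambda treated as an arrest
# time. The integrated error for a load step is proportional to the reset
# time divided by the PID gain, so slower tuning inflates it quickly.
# ki is an assumed integrating process gain (%/% per second).

def lambda_integrating_tuning(ki, deadtime, lam):
    """Return (PID gain, reset time) per the common lambda rules."""
    reset = 2.0 * lam + deadtime                   # reset (integral) time
    gain = reset / (ki * (lam + deadtime) ** 2)    # PID gain
    return gain, reset

ki = 0.002
for theta in (10.0, 30.0):                         # total loop deadtime, s
    lam = 3.0 * theta                              # lambda = 3 x deadtime
    kc, ti = lambda_integrating_tuning(ki, theta, lam)
    print(f"deadtime={theta:5.1f}s  gain={kc:6.2f}  reset={ti:6.1f}s  "
          f"Ti/Kc={ti / kc:8.1f}")
```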
Consequently, how fast automation system components need to be depends on how much they increase the total loop deadtime. The components chosen first to make the loop faster are those easiest to change, such as decreasing the PID execution interval, wireless update interval, signal filtering and transmitter damping, assuming these are more than ten percent of the total loop deadtime. Next you need to decrease the largest source of deadtime, which may take more time and money, such as a better thermowell or electrode design, location and installation, or a more precise and faster valve. The deadtime from PID and wireless update rates is about ½ the time between updates. The deadtime from transmitter damping or a sensor lag increases logarithmically from about 0.28 to 0.88 times the lag as the ratio of the lag to the largest open loop time constant decreases from 1 to 0.01. The deadtime from backlash, stiction and poor sensitivity is the deadband or resolution limit divided by the rate of change of the controller output. Fortunately, deadtime is generally easier and quicker to identify than the open loop time constant and open loop gain. See the Control Talk Blog “Deadtime, the Simple Easy Key to Better Control.”
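The rules of thumb above can be collected into a small deadtime budget. The interpolation between the 0.28 and 0.88 endpoints is my assumption (a logarithmic blend consistent with the stated range), and all example numbers are illustrative.

```python
# Approximate deadtime contributed by common automation components, per the
# rules of thumb in the text. All times in seconds; example values assumed.
import math

def update_deadtime(update_interval):
    # PID execution or wireless update adds about half the time between updates.
    return 0.5 * update_interval

def lag_deadtime(lag, largest_open_loop_tc):
    # Equivalent deadtime from damping or a sensor lag rises from about 0.28
    # to 0.88 times the lag as lag/tc falls from 1 toward 0.01. The
    # logarithmic blend between those endpoints is an assumption.
    ratio = max(min(lag / largest_open_loop_tc, 1.0), 0.01)
    frac = math.log10(ratio) / math.log10(0.01)   # 0 at ratio=1, 1 at ratio=0.01
    return lag * (0.28 + frac * (0.88 - 0.28))

def resolution_deadtime(limit_pct, output_rate_pct_per_s):
    # Backlash/stiction: deadband or resolution limit divided by the rate of
    # change of the controller output.
    return limit_pct / output_rate_pct_per_s

total = (update_deadtime(1.0)              # 1 s PID execution
         + update_deadtime(8.0)            # 8 s wireless update
         + lag_deadtime(2.0, 100.0)        # 2 s damping, 100 s time constant
         + resolution_deadtime(0.5, 1.0))  # 0.5% stiction, 1 %/s output rate
print(round(total, 2))
```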
For flow and pressure processes, the process deadtime is often less than one second, making the control system components by far the largest source of deadtime. For compressor, liquid pressure and furnace pressure control, the control valve is the largest source of deadtime even when a booster is added. Transmitter damping is generally the next largest source, followed by PID execution rate.
There is a common misconception that the wireless update time should be less than a fraction (e.g., 1/6) of the response time. For the more interesting processes such as temperature and pH, the time constant is much larger than the deadtime. A well-mixed vessel could have a process time constant that is more than 40 times the process deadtime. If you use the criterion of 1/6 of the response time, assuming the best-case scenario of a 63% response time, the deadtime added by the wireless update rate can be as large as 3 times the process deadtime. Fortunately, wireless update rates are never that slow. Another reason not to focus on response time is that in integrating processes, where there is no steady state, a response time is irrelevant.
The remaining question is: when is the automation system too fast? The example that most comes to mind is when the faster system causes greater resonance or interaction. You want the most important loops to see oscillations from less important loops whose period is at least four times the important loop’s ultimate period, to reduce resonance and interaction. Hopefully this is done by making the more important loop faster, but if necessary it is done by making the less important loops slower. A less recognized but very common case of needing to slow down an automation loop is when it creates a load disturbance to other loops (e.g., a feed rate change). While step changes are what are analyzed in the literature as disturbances, in real applications there are seldom any step changes, due to the tuning of the PID and the response of the valve. This effect can be approximated by applying a time constant to the load disturbance and realizing that the resulting errors are reduced, compared to the step disturbance, by a factor of one minus e raised to the negative power of lambda divided by the disturbance time constant.
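That attenuation factor is easy to evaluate. In this sketch, lambda and the disturbance time constants are assumed values chosen only to show the trend: the slower the disturbance relative to lambda, the smaller the resulting errors compared to a step.

```python
# Attenuation of a load disturbance that is filtered by a disturbance time
# constant tau_d rather than arriving as a step. Per the text, the errors
# relative to a step are reduced by the factor 1 - exp(-lambda/tau_d).
# The lambda and tau_d values here are illustrative assumptions.
import math

def disturbance_attenuation(lam, tau_d):
    return 1.0 - math.exp(-lam / tau_d)

for tau_d in (1.0, 30.0, 300.0):   # fast step-like vs slow bioreactor-like loads
    print(tau_d, round(disturbance_attenuation(30.0, tau_d), 3))
```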
Overshoot of a temperature or pH setpoint is extremely detrimental to bioreactor cell life and productivity. Making the loop response much slower by much less aggressive tuning settings and a PID structure of Integral on Error and Proportional-Derivative on Process Variable (I on E and PD on PV) is greatly needed and permitted because the load disturbances from cell growth rate or production rate are incredibly slow (effective process time constant in days). In fact, fast disturbances are the result of one loop affecting another (e.g., pH and dissolved oxygen control).
In dryer control, the difference between inlet and outlet temperatures that is used as the inferential measurement of dryer moisture is filtered by a large time constant that is greater than the moisture controller’s reset time. This is necessary to prevent a spiraling oscillation from positive feedback.
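A minimal sketch of such a first-order (exponential) filter on the temperature difference follows. The filter time constant, sample interval, and temperature values are assumptions for illustration; the only requirement from the text is that the filter time constant exceed the moisture controller's reset time.

```python
# First-order (exponential) filter such as the one applied to the dryer
# inlet-outlet temperature difference used as an inferential moisture
# measurement. Time constant, sample time, and data are assumed values.

def first_order_filter(signal, tau, dt):
    alpha = dt / (tau + dt)        # discrete first-order filter coefficient
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)       # move a fraction of the way to the input
        out.append(y)
    return out

raw = [50.0] * 5 + [60.0] * 20                     # assumed step in delta-T
smoothed = first_order_filter(raw, tau=120.0, dt=10.0)
print(round(smoothed[-1], 1))
```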
Filters on setpoints are used in loops whose setpoint is set by an operator or a valve position controller to change the process operating point or production rate. This filter can provide synchronization in ratio control of reactant flow maintaining the ability of each flow loop to be tuned to deal with supply pressure disturbances and positioner sensitivity limits. However, a filter on a secondary lower loop setpoint in cascade control is generally detrimental because it slows down the ability of the primary loop to react to disturbances.
Finally, more controversial but potentially useful is a filter on the pH at the outlet of a static mixer for a strong acid and base controlled in the neutral region. Here the filter acts to average the inevitable extremely large oscillations due to nearly nonexistent back mixing and the steep titration curve. The result is a happier valve and operator. The averaged pH setpoint should be corrected by a downstream pH loop on a well-mixed vessel that sees a much smoother pH on a much narrower region of the titration curve. A better solution is signal characterization. The static mixer controlled variable becomes the abscissa of the titration curve (reagent demand) rather than the ordinate (pH). This linearization greatly reduces the oscillations from the steep portion of the titration curve and enables a larger PID gain to be used. The titration curve need not be very accurate, but it must include the effect of absorption of carbon dioxide from exposure to air and the change in dissociation constants, and consequently actual solution pH, with temperature, which is not addressed by a standard temperature compensator that simply addresses the temperature effect in the Nernst equation. You also need to be aware that the pH of process samples, and consequently the shape of the titration curve, can change due to changes in sample liquid phase composition from reaction, evaporation, absorption and dissolution. The longer the time between the sample being taken and titrated, the more problematic these changes are.
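The signal characterization described above amounts to a piecewise-linear interpolation of the titration curve, translating measured pH back to reagent demand. The curve points below are invented for illustration and shaped like a strong acid-strong base curve; any real application would use laboratory titration data.

```python
# Signal characterization for static mixer pH control: translate measured pH
# (ordinate) back to reagent demand (abscissa) by piecewise-linear
# interpolation of the titration curve. Curve points are assumed, not real.
import bisect

# (reagent demand in % of span, resulting pH), with pH monotonically increasing
TITRATION_CURVE = [(0.0, 2.0), (40.0, 3.0), (48.0, 4.5),
                   (50.0, 7.0), (52.0, 9.5), (60.0, 11.0), (100.0, 12.0)]

def ph_to_reagent_demand(ph):
    demands, phs = zip(*TITRATION_CURVE)
    if ph <= phs[0]:
        return demands[0]
    if ph >= phs[-1]:
        return demands[-1]
    i = bisect.bisect_left(phs, ph)
    f = (ph - phs[i - 1]) / (phs[i] - phs[i - 1])
    return demands[i - 1] + f * (demands[i] - demands[i - 1])

# A swing from pH 4.5 to 9.5 on the steep part of this curve is only
# 48% to 52% in reagent demand, so the characterized controlled variable
# oscillates far less and permits a larger PID gain.
print(ph_to_reagent_demand(7.0))
```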
The post, Solutions to Prevent Harmful Feedforwards, originally appeared on the ControlGlobal.com Control Talk blog.
Here we look at applications where feedforward can do more harm than good and what to do to prevent this situation. This problem is more common than one might think. In the literature we mostly hear how beneficial feedforward can be for measured load disturbances. Statements are made that the limitation is the accuracy of the feedforward and that consequently an error of 2% can still result in a 50:1 improvement in control. This optimistic view does not take into account process, load and valve dynamics. The feedforward correction needs to arrive in the process at the same point and the same time as the load disturbance. This is traditionally achieved by passing the feedforward (FF) through a deadtime block and a lead-lag block. The FF deadtime is set equal to the load path deadtime minus the correction path deadtime. The FF lead time is set equal to the correction path lag time. The FF lag time is set equal to the load path lag time. If the FF arrives too soon, we create inverse response; if the FF arrives too late, we create a second disturbance. Setting up tuning software to identify and compute the FF dynamics can be challenging. Even more problematic are the following feedforward applications that do more harm than good despite dynamic compensation.
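The deadtime-plus-lead-lag compensation can be sketched in discrete time as follows. The deadtime, lead, and lag settings are assumed example values following the rules above (FF deadtime = load path deadtime minus correction path deadtime, lead = correction path lag, lag = load path lag).

```python
# Classic feedforward dynamic compensation: pass the FF signal through a
# deadtime block and a lead-lag block. Discrete-time sketch; all settings
# below are assumed example values.
from collections import deque

class FFDynamicCompensator:
    """Deadtime plus lead-lag dynamic compensation of a feedforward signal."""
    def __init__(self, deadtime, lead, lag, dt):
        self.buf = deque([0.0] * int(round(deadtime / dt)))  # deadtime block
        self.lead, self.lag, self.dt = lead, lag, dt
        self.y = 0.0        # lead-lag output state
        self.x_prev = 0.0   # previous delayed input, for the lead term

    def step(self, x):
        self.buf.append(x)
        xd = self.buf.popleft()                   # delayed FF signal
        dx = (xd - self.x_prev) / self.dt         # derivative for lead action
        self.x_prev = xd
        a = self.dt / (self.lag + self.dt)        # first-order lag coefficient
        self.y += a * ((xd + self.lead * dx) - self.y)
        return self.y

# FF deadtime 5 s, FF lead = correction path lag 8 s, FF lag = load path
# lag 20 s, executed every 1 s (all assumed).
ff = FFDynamicCompensator(deadtime=5.0, lead=8.0, lag=20.0, dt=1.0)
out = [ff.step(1.0) for _ in range(40)]   # unit step in the measured load
```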
(1) Inverse response from the manipulated flow causes excessive reaction in the opposite direction of the load. The inverse response from a feedwater change can be so large as to cause a boiler drum high or low level trip, a situation that particularly occurs for undersized drums and missing feedwater heaters due to misguided attempts to save on capital costs. The solution here is to use a traditional three element drum level control, but added to the traditional feedforward is an unconventional feedforward with the opposite sign that is decayed out over the period of the inverse response. In other words, for a step increase in steam flow, there would initially be a step decrease in boiler feedwater feedforward added to the three element drum level controller output that is trying to increase feedwater flow. This prevents shrink and a low level trip from bubbles collapsing in the downcomers from an increase in cold feedwater. For a step decrease in steam flow, there would be a step increase in boiler feedwater feedforward added to the three element drum level controller output that is trying to decrease feedwater flow. This prevents swell and a high level trip from bubbles forming in the downcomers from a decrease in cold feedwater. A severe problem of inverse response can occur in furnace pressure control when the scale is a few inches of water column and the manipulated incoming air is not sufficiently heated. The inverse response from the ideal gas law can cause a pressure trip. An increase in cold air flow causes a decrease in gas temperature and consequently a relatively large decrease in gas pressure at the furnace pressure sensor. A decrease in cold air flow causes an increase in gas temperature and consequently a relatively large increase in gas pressure at the furnace pressure sensor.
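The unconventional opposite-sign, decaying feedforward can be sketched as below. The kick gain, inverse-response period, and exponential decay shape are my illustrative assumptions; a real implementation would tune them to the observed shrink/swell response.

```python
# Anti-shrink/swell feedforward sketch: on a step increase in steam flow,
# add an initial feedforward of the opposite sign that decays out over the
# inverse-response period, on top of the conventional three-element
# feedforward. Gains, times, and decay shape are assumptions.
import math

def shrink_swell_ff(steam_step, t, inverse_period, kick_gain=0.5):
    """Opposite-sign kick decaying over roughly the inverse-response period."""
    tau = inverse_period / 3.0          # decay ~complete after the period
    return -kick_gain * steam_step * math.exp(-t / tau)

steam_step = 10.0                       # % step increase in steam flow
for t in (0.0, 10.0, 30.0, 60.0):       # seconds; 30 s inverse-response period
    net = steam_step + shrink_swell_ff(steam_step, t, 30.0)
    print(t, round(net, 2))
```

The net feedforward starts well below the conventional value (holding back cold feedwater while bubbles collapse) and recovers to the full value as the inverse response dies out.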
(2) Deadtime in the correction path is greater than the deadtime in the load path. The result is a feedforward that arrives too late, creating a second disturbance and worse control than if there were no feedforward. This occurs whenever the correction path is longer than the load path. An example is distillation column control where the feed load upset stream is closer to the temperature control tray than the corrective change in reflux flow. The solution is to generate the feedforward signal for ratio control based on a setpoint change that is then delayed before being used by the feed flow controller. The delay is equal to the correction path deadtime minus the load path deadtime. The same problem can occur for a reagent injection delay, which often arises from conventionally sized dip tubes and small reagent flows. The same solution applies in terms of using an influent flow controller setpoint for feedforward ratio control of reagent and delaying the setpoint used by the influent flow controller.
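The delayed-setpoint trick is simple to implement. In this sketch, the deadtimes and the idea of seeding the buffer so the controller sees the current setpoint until the delay fills are assumptions for illustration.

```python
# When the correction path deadtime exceeds the load path deadtime, generate
# the feedforward from the feed flow *setpoint* immediately, and delay the
# setpoint seen by the feed flow controller by the difference. Times and
# variable names are assumed, illustrative values.
from collections import deque

class DelayedSetpoint:
    def __init__(self, delay_s, dt_s):
        self.buf = deque([None] * int(round(delay_s / dt_s)))
    def step(self, sp):
        self.buf.append(sp)
        old = self.buf.popleft()
        return sp if old is None else old   # pass through until buffer fills

correction_deadtime, load_deadtime, dt = 12.0, 4.0, 1.0   # seconds, assumed
delay = DelayedSetpoint(correction_deadtime - load_deadtime, dt)

feed_sp = 100.0
ff_for_reagent_ratio = feed_sp           # used immediately for ratio feedforward
delayed_feed_sp = delay.step(feed_sp)    # used by the feed flow controller
```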
(3) Feedforward correction makes the response from an unmeasured disturbance worse. This occurs in unit operations such as distillation columns and neutralizers where the unmeasured disturbance from a feed composition change is made worse by a feedforward correction based on feed flow. Often feed composition is not measured and its variation is large due to parallel unit operations and a combination of flows that become the feed flow. For pH, the nonlinearity of the titration curve increases the sensitivity to feed composition. Even if the influent pH is measured, the pH electrode error or uncertainty in the titration curve makes feedforward correction for feed pH do more harm than good for setpoints on the steep part of the curve. If the feed composition change requires a decrease in manipulated flow and there is a coincidental increase in feed flow that corresponds to an increase in manipulated flow, or vice versa, the feedforward does more harm than good. The solution is to compute the required rate of change of the manipulated flow from the unmeasured disturbance and combine this with the computed rate of change for the feedforward correction, paying attention to the signs of the rates of change. If the rate of change required by the unmeasured disturbance is in the opposite direction and exceeds the computed feedforward rate of change of the manipulated flow, the feedforward rate of change is clamped at zero to prevent making control worse. If the rates of change for the manipulated flow are in the same direction, the magnitude of the feedforward rate of change is correspondingly increased.
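One possible reading of that clamping logic is sketched below: combine the two rates, but never let the adjustment reverse the sign of the original feedforward. The function and its behavior at the boundaries are my interpretation, not a published algorithm.

```python
# Sketch (one interpretation) of the rate-of-change clamping described in
# the text: combine the feedforward's computed rate of change of manipulated
# flow with the rate implied by the unmeasured disturbance, clamping at zero
# when they oppose so the feedforward never does more harm than good.

def adjusted_ff_rate(ff_rate, unmeasured_rate):
    combined = ff_rate + unmeasured_rate  # same sign reinforces, opposite cancels
    # Never let the adjustment reverse the sign of the original feedforward.
    if ff_rate > 0:
        return max(combined, 0.0)
    if ff_rate < 0:
        return min(combined, 0.0)
    return 0.0
```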
I am trying to see how all this applies in my responses to known and unknown upsets to my spouse.
I have seen engineers and technologists thrown into the world of process instrumentation and control (PIC) with little or no
knowledge of this engineering specialty—and they were expected to perform immediately. At best, they may have taken a course in control theory, which is very rarely (if ever) used in a plant environment.
PIC typically represents a substantial cost to an average industrial project. It’s a high-tech discipline critical to the success and survival of a plant, and yet it is typically learned “on the job.” Many people working in the discipline lack the proper training needed to make appropriate decisions. An error could result in a very expensive or hazardous situation.
Many of them don't know the basics. Over the years, PIC personnel have come to me with questions such as, “How does an orifice plate work? With a square root output? Why?” and “How can I describe all this logic? In a logic diagram? What’s that?”
Worse, I have seen so-called experienced PIC personnel facing a ground loop problem because both the transmitter and receiver had their signals grounded. The solution they took? They went back to the vendor of the receiver and asked to have the equipment isolated from ground. In other words, a modification to an electrically-approved off-the-shelf product. The cost of modifying the circuit boards on these fancy receivers and obtaining the required approvals—and there were 20 of them—was astronomical. The experienced personnel and their supervisor had never heard of a loop isolator. Unbelievable, but true.
My examples could go on, filling a few pages. However, I will stop here as the topic of this article is not about listing my complaints. But you can understand what is typically encountered due to lack of knowledge, which is due to the lack of good training.
This is not the fault of the people doing the work. They were never properly trained. The result of this lack of training is poor performance and longer times to correctly implement control systems in a competitive environment squeezed by tight budgets and stiff competition.
There are two main problems facing the need for training: time and money. Time is a problem because organizations operate with a skeleton staff and therefore, it is very hard for a manager to let an employee take time off for training. Money is another problem as budgets are tight, and global competition does not leave much room for “extra” spending on training. In addition to course fees, there is also the cost of traveling and accommodation expenses to a location where face-to-face training is provided.
Besides the time and money issues, many engineering associations have now implemented a requirement for continuing professional development (CPD) for their members. Under such a requirement, members must provide a declaration of competence combined with a report of how they are maintaining competence in their discipline. So, adding to the time and money issues, we now have CPD requirements. What can be done?
Training is available in different formats, each with its advantages and disadvantages. It can be provided in the classical form of face-to-face in regular classrooms. However, face-to-face teaching is relatively expensive due to the student having to travel to a remote location (where the class is conducted). In addition, the employee is absent from his/her workplace.
A multitude of face-to-face courses are available from equipment vendors and manufacturers, training companies, universities and technical colleges. The majority of them are not in a sequential format to allow a person to start with the basics and move on to more complex topics. In addition, and quite often, these courses are either too theoretical, or are geared for someone who already has a reasonably good basic knowledge of PIC.
Self-teach programs are another format of training, available either from self-teach books or from software loaded onto personal computers, some which are interactive. This solution is probably the lowest in cost. However, without an instructor available to answer questions, it is up to the student to understand the information at hand and, more important, to have the self-discipline to proceed and complete the learning process independently. At the end, self-teach programs do not typically provide proof of successful completion and understanding by the student.
How about those who want to learn about PIC in an organized fashion, in a condensed time frame, from a practical point of view, with limited training funds and without the (almost impossible) absence from work? The solution is instructor-led quality online learning. This approach provides training without the student having to travel, keeping the personnel on site and costs reduced to a relatively affordable minimum.
Online education allows a student to progress at a relatively convenient pace. With good instructional material fit to the course, students learn and complete quizzes and exams to confirm their acquired knowledge. This approach, with an instructor to answer questions, provides an incentive to finish the study program. It is followed by a certificate obtained on successful completion of quizzes and exams, and is relatively low in cost while keeping the student available at work since the online sessions are typically held in the evening.
I have been teaching university-based online PIC courses for about eight years. I have learned through trial and error as well as through students’ comments and suggestions that the most effective approach for a quality PIC online course is to present it in three modules spread over a year. Such a course would cover the different facets of PIC from a non-mathematical and practical point of view. The spread over one year allows the students to gradually apply and practice some of the information learned. It also avoids students’ information overload.
Including theory such as Laplace Transform, Bode Plots and the like in a PIC practical course has little value in day-to-day plant operation. And speaking from personal experience, this type of theoretical information would be forgotten shortly after the course is completed.
To the best of my knowledge, such online, instructor-led, university-based PIC training programs are presently being taught in North America by three institutions. All three use the same award-winning reference book published by the ISA and titled, “The Condensed Handbook of Measurement and Control.”
In the United States, the course is offered by: Penn State University - Berks (phone# 610-396-6221) and University of Kansas Continuing Education (phone# 1-877-404-5823 or 785-864-5823).
In Canada, the course is offered by: Dalhousie University Continuing Education (phone # 800-565-1179 or 902-494-6079).
These three organizations offer a university certificate that is awarded after the successful completion of the three modules, including all quizzes and final exams. The three modules of these certificate programs amount to approximately 150 classroom hours. The universities recommend that participants attend Modules 1, 2, and 3 in sequential order; however, some students, due to their prior knowledge of PIC, have taken the modules in a different order and successfully passed all quizzes and exams.
I’ve successfully instructed face-to-face PIC courses for more than 10 years in many industrial plants, at ISA functions, and at several North American universities. Then, due to a substantial drop in student enrollment following the financial problems of 2018-2019, I started online training at two universities. At the beginning, I was hesitant about the potential effectiveness and success of online training. I have now changed my mind. In addition to avoiding the costs and the time lost away from the workplace, online training has proven to be effective and practical for the students. A five-fold increase in the number of students occurred when the online course replaced the face-to-face version, proving its success and benefits.
Online courses have their limitations. They can replace many face-to-face courses, but not all. For example, online learning can’t provide hands-on training such as control equipment maintenance. Dedicated training facilities provide such training, often at a vendor’s facility.
The main benefits of an on-line university-based and instructor-led certificate program are:
Online PIC training, when accompanied by a good reference book, quality course notes, quizzes, and exams, provides students with the knowledge and confidence needed to grasp this field of technology.
As a final note, if you think education is expensive, try ignorance. You'll find it more expensive.
N.E. (Bill) Battikha, PE, president, Bergotech / firstname.lastname@example.org
About the Author
N.E. (Bill) Battikha, P.E., has more than 40 years of experience in PIC, working mainly in the USA and Canada. He holds a Bachelor of Science in Engineering and is a member of the Delaware Association of Professional Engineers. Throughout his career, Bill has gained a lot of experience in management, engineering and training. Bill has generated and conducted training courses for many universities in the USA and Canada, including Penn State University, the University of Wisconsin, Kansas State University, the University of Toronto and Dalhousie University. He co-authored a patent and a commercial software package. He also wrote four books on PIC, all published by the ISA, with the third one (The Condensed Handbook of Measurement and Control) twice awarded the Raymond D. Molloy Award as an ISA best-seller. Bill is the president of Bergotech Inc., a firm specializing in teaching online engineering courses in a variety of disciplines as well as implementing university-based online programs. For more info, or to contact the author, please visit www.bergotech.com
The post What are New Technologies and Approaches for Batch and Continuous Process Control? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
What is the technical basis and ability of technologies other than PID and model predictive control (MPC)? These technologies seem fascinating and I would like to know more, particularly as I study for the ISA Certified Automation Professional (CAP) exam.
Michel Ruel has achieved considerable success in the use of fuzzy logic control (FLC) in mineral processing as documented in “Ruel’s Rules for Use of PID, MPC and FLC.” The process interrelationships and dynamics in the processing of ores are not defined due to the predominance of missing measurements and unknown effects. Mineral processing PID loops are often in manual, not only for the usual reasons of valve and measurement problems, but also because process dynamics between a controlled and manipulated variable radically change, including even the sign of the process action (reverse or direct) based on complex multivariable effects that can’t be quantified.
If the FLC configuration and interface are set up properly for visibility, understandability and adjustability of the rules, the plant can change the rules as needed, enabling sustainable benefits. In the application cited by Michel Ruel, every week metallurgists validate the rules and work with control engineers to make slight adjustments. A production record was achieved in the first week. The average use of energy per ton decreased by 8 percent, and the tonnage per day increased by 14 percent.
There have been successful applications of PID and MPC in the mining industry as detailed in the Control Talk columns “Process control challenges and solutions in mineral processing” and “Smart measurement and control in mineral processing.”
Because of my initial excitement about the technology, I successfully used FLC on a waste treatment pH system at a Pensacola, Fla. plant to prevent RCRA violations. It did very well for decades, but the plant was afraid to touch it. The 2007 Control magazine article “Virtual Control of Real pH” with Mark Sowell showed how you could replace the FLC with an MPC and PID strategy that could be better maintained, tuned and optimized.
We used FLC integrated into the software for a major supplier of expert systems in the 1980s and 1990s, but there were no real success stories for FLC. There was one successful application of an expert system for a smart level alarm, but it did not use FLC; a simple material balance could have done as well. There were several applications for smart alarms that were turned off. After nearly 100 man-years, we have little to show for these expert systems. You could add a lot of rules for FLC and logic based on the expertise of the application’s developer, but how these rules played together and how you could tell which rule needed to be changed was a major problem. When the developer left the production unit, operators and process engineers were not able to make the changes inevitably needed.
The standalone field FLC advertised for better temperature setpoint response cannot do better than a well-tuned PID if you use all of the PID options summarized in the Control magazine article “The greatest source of process control knowledge,” including PID structures such as 2 Degrees of Freedom (2DOF) or a setpoint lead-lag. You can also use gain scheduling in the PID if necessary. The problem with FLC is how you tune it and update it for changing process conditions. I wrote the original section on FLC in A Guide to the Automation Body of Knowledge, but it is omitted from the next edition by mutual agreement between me and ISA, since making more room to help readers get the most out of the PID was judged more generally useful.
FLC has been used in pulp and paper. I remember instances of FLC for kiln control but since then we have developed much better PID and MPC strategies that eliminate interaction and tuning problems.
As for artificial neural networks (ANNs), I have seen some successful applications in batch end point detection and prediction and for inferential dryer moisture control. The insertion of time delays on inputs to make them coincide with the measured output is required for continuous operations. For plug flow operations like dryers, this can be readily done since the deadtime is simply the volume divided by the flow rate. For continuous vessels and columns, the insertion of very large lag times, and possibly a small lead time, is needed besides the deadtime. No dynamic compensation is needed for batch operation end point prediction.
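For the plug-flow case, the input alignment is just a sample shift by volume over flow. The volume, flow, sample interval, and temperature series below are assumed, illustrative values.

```python
# Aligning an ANN input with the measured output for a plug-flow operation
# such as a dryer: the transport deadtime is simply volume divided by flow,
# so each input sample is shifted by that many sample intervals.
# All numbers below are assumed, illustrative values.

def align_input(series, volume, flow, dt):
    """Shift an input series by the plug-flow deadtime, in whole samples."""
    shift = int(round((volume / flow) / dt))
    # Pad the front with the first value; drop the tail to keep the length.
    return [series[0]] * shift + series[:len(series) - shift]

inlet_temp = [150, 152, 155, 154, 153, 151, 150, 149]   # sampled every 60 s
aligned = align_input(inlet_temp, volume=12.0, flow=0.1, dt=60.0)  # 120 s delay
print(aligned)
```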
You have to be very careful not to be outside of the test data range because of bizarre nonlinear predictions. You can also get local reversals of process gain sign, causing buzzing if the predicted variable is used for closed loop control. Finally, you need to eliminate correlations between inputs. I prefer multivariate statistical process control (MSPC), which eliminates cross correlation of inputs by virtue of principal component analysis and does not exhibit process gain sign reversals or bizarre nonlinearity upon extrapolation outside of the test data range. Also, MSPC can provide a piecewise linear fit to nonlinear batch profiles, a technique we commonly implement with signal characterizers for any nonlinearity. I think there is an opportunity for MSPC to provide more intelligent and linear variables for an MPC, as we do with signal characterizers.
For any type of analysis or prediction, whether using ANN or MSPC, you need to have inputs that show the variability in the process. If a process variable is tightly controlled, the PID or MPC has transferred the variability to the manipulated variable. Ideally, flow measurements should be used, but if only a position or speed is available and the installed flow characteristic is nonlinear, signal characterization should be used to convert the position or speed to a flow.
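As a sketch of that conversion, here an inherent equal-percentage valve characteristic stands in for the installed curve; a real application would use the installed characteristic (which also depends on the pressure drop available), and the rangeability and maximum flow below are assumptions.

```python
# Converting valve position to an inferred flow so that analytics inputs
# reflect flow rather than position. An inherent equal-percentage
# characteristic is used as a stand-in for the installed characteristic;
# rangeability and maximum flow are assumed, illustrative values.

def position_to_flow(position_pct, rangeability=50.0, max_flow=100.0):
    x = position_pct / 100.0
    return max_flow * rangeability ** (x - 1.0)   # equal-percentage curve

print(round(position_to_flow(100.0), 1))   # full open gives maximum flow
```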
I implemented a neural network some years ago on a distillation column level control. The column was notoriously difficult to control. The level would swing all over and anything would set it off, such as weather or feed changes. The operators had to run it in manual because automatic was a hopeless waste of time.
At the time (and this information might be dated) the neural network was created by bringing a stack of parameters into the calculation and “training” it on the data. Theoretically the calculation would strengthen the parameters that mattered, weaken the parameters that didn’t, and eventually configure itself to learn the system.
The process taught me much. Here are my main learning points:
1) Choose the training data wisely. If you give it straight line data then it learns straight lines. You need to teach it using upset data so it learns what to do when things go wrong. (Then use new upset data to test it.)
2) Choose the input parameters wisely. I started by giving it everything. Over time I came to realize that the data it needed wasn’t the obvious. In this case it needed:
3) Ultimately the system worked very well – but honestly by the time I had gone through four iterations of training and building the system I KNEW the physics behind it. The calculation for controlling the level was fairly simple when all was said and done. I probably could have just fed it into a feedforward PID and accomplished the same thing.
The experience was interesting and fun, and I actually got an award from ISA for the work. However, when all was said and done, I realized it wasn’t nearly as impressive a tool as all the marketing brochures suggested. (At the time it was all the rage – companies were selling neural network controller packages and magazine articles were predicting it would replace PID in a matter of years.)
Thank you, this is a lot more practical insight than I have been able to glean from the books.
I imagine the batch data analytics program offered by a major supplier of control systems is an example of the MSPC you mentioned. I think I have some papers on it stashed somewhere, since we have considered using it for some of our batch systems. What is batch data analytics and what can it do?
Yes, batch data analytics uses MSPC technology with some additional features, such as dynamic time warping. The supplier of the control system software worked with Lubrizol’s technology manager Robert Wojewodka to develop and improve the product for batch processes, as highlighted in the InTech magazine article “Data Analytics in Batch Operations.” Data analytics eliminates relationships between process inputs (cross correlations) and reduces the number of process inputs by constructing principal components that are orthogonal and thus independent of each other in a plot of a process output versus principal components. For two principal components, this is readily seen as an X, Y and Z plot with each axis at a 90-degree angle to each other axis. The X and Y axes cover the ranges of the principal component scores and the Z axis is the process output. The user can drill down into each principal component to see the contribution of each process input. The use of graphics to show this can greatly increase operator understanding. Data analytics excels at identifying unsuspected relationships. For process conditions outside of the data range used in developing the empirical models, linear extrapolation helps prevent bizarre extraneous predictions. Also, the use of a piecewise linear fit means there are no humps or bumps that cause a local reversal of process gain and buzzing.
Batch data analytics (MSPC) does not need to identify the process dynamics because all of the process inputs are focused on a process output at a particular part of the batch cycle (e.g., endpoint). This is incredibly liberating. The piecewise linear fit to the batch profile enables batch data analytics to deal with the nonlinearity of the batch response. The results can be used to make mid-batch corrections.
There is an opportunity for ANN to be used with MSPC to deal with some of the nonlinearities of inputs, but the proponents of MSPC and ANN often think their technology is the total solution and don’t work together. Some even think their favorite technology can replace all types of controllers.
Getting laboratory information on a consistent basis is a challenge. I think for training the model, you could enter the batch results manually. When choosing batches, you want to include a variety of batches but all with normal operation (no outliers from failures of devices or equipment or from improper operations). The applications noted in the Wojewodka article emphasize that the model should represent the average batch and not the best batch (not the “golden batch”). I think this is right for starting to detect abnormal batches, but process control seeks to find the best and reduce the variability from the best, so eventually you want a model that is representative of the best batches.
I like MSPC “worm plots” because they tell me, from tail to head, the past and future of batches, with the tightness of the coil adding insight. The worm plot is a series of batch end points expressed as a key process variable (PV1n) that is plotted as scores of principal component 1 (PC1) versus principal component 2 (PC2).
If you want to do some automated correction of the prediction by taking a fraction of the difference between the predicted result and the lab result, you would need to get the lab result into your DCS, probably via OPC or some lab entry system interfaced to your DCS. Again, the timing of the correction is not important for batch operations. Whenever the bias correction comes in, the prediction is improved for the next batch. The bias correction is similar to what is done in MPC, and the trend of the bias is useful as a history of how the accuracy is changing and whether there is possibly noise in the lab result or the model prediction.
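A minimal sketch of this kind of bias correction, where a fraction of the remaining prediction error is folded into a running bias each time a lab result arrives. The function name and the filter fraction of 0.3 are illustrative assumptions, not taken from any particular MSPC package:

```python
# Hedged sketch: correct a model prediction with a fraction of the
# difference between the lab result and the biased prediction.

def update_bias(bias, predicted, lab_result, fraction=0.3):
    """Move the bias a fraction of the remaining prediction error."""
    return bias + fraction * (lab_result - (predicted + bias))

# Example: the model prediction consistently reads 2 units low,
# so the bias converges toward 2.0 over successive batches.
bias = 0.0
for _ in range(20):
    bias = update_bias(bias, predicted=50.0, lab_result=52.0)
```

Trending the `bias` value over batches gives the accuracy history mentioned above; a noisy bias trend points at noise in the lab result or the model prediction.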
The really big name in MSPC is John F. MacGregor at McMaster University in Ontario, Canada. McMaster University has expanded beyond MSPC to offer a process control degree. Another big name there is Tom Marlin, who I think came originally from the Monsanto Solutia Pensacola Nylon Intermediates plant. Tom gives his view in the InTech magazine article “Educating the engineer,” Part 2 of a two-part series. Part 1 of the series, “Student to engineer,” focused on engineering curriculum in universities.
For more on my view of why some technologies have been much more successful than others, see my Control Talk blog “Keys to Successful Control Technologies.”
The post, Common Mistakes not Commonly Understood - Finale, first appeared on the Controlglobal.com Control Talk blog.
Here is our finale just in time to serve as a momentous process control gift for the Holidays. Just don’t try to re-gift this to anyone unless they are into the automation profession or owe you big time.
(21) Misuse and Missing Use of Setpoint Filter. The use of a setpoint filter on a secondary loop setpoint will slow down the ability of the primary loop to make a correction for changes in load or setpoint of the primary controller in cascade control. For this reason, there has been a general rule of thumb that a setpoint filter should not be used on secondary controllers. However, as with most rules of thumb, there are important exceptions derived from a deeper understanding. The setpoint filter on the secondary loop does not interfere with the ability of the secondary loop to reject disturbances and to deal with nonlinearities within the secondary loop, which is often its most frequent and important role. If the setpoint filter is judiciously applied so that it is less than 10% of the primary loop dead time, the effect on the ability of the primary loop to reject disturbances originating in the primary loop is negligible. A judicious setpoint filter can ensure there are no temporary unbalances from changes of multiple flows under ratio control by enabling all the flows to move in concert. This is critical for reactant flows and the inline mixing of any flows. Often this unbalance was prevented by tuning the secondary flow loops to have the same closed loop time constant. Unfortunately, this forces the tuning of the loops to be as slow as the slowest or most nonlinear flow loop. This detuning reduces the ability of the other loops to deal with their pressure disturbances and the nonlinearities of their installed flow characteristics. Also, the use of setpoint rate limits that are different up and down gives directional move suppression to provide a fast approach to a better operating condition and a fast getaway from an undesirable operating point in the process variable (PV) or manipulated valve position.
This is important to provide a fast-opening and slow-closing surge valve for compressor control, to optimize a user valve position to improve process efficiency or maximize production by valve position control, and to prevent oscillations across a split range point. For primary loops, a setpoint filter time equal to the reset time is the same as a PID structure of Proportional and Derivative on PV and Integral on Error (PD on PV, I on Error), so that the setpoint and load responses are the same. Adding a lead time of about 25% of the setpoint filter time, where the filter is the lag time of a lead-lag block, enables a faster setpoint response.
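The lead-lag setpoint filter described above can be sketched as a discrete first-order lead-lag (backward Euler). This is a minimal illustration, not vendor code; the class name and the 10 s lag with a 2.5 s (25%) lead are illustrative numbers:

```python
# Discrete lead-lag: Tlag*dy/dt + y = Tlead*dx/dt + x
# With lead = 0 this reduces to a pure first-order setpoint filter.

class LeadLag:
    def __init__(self, lead, lag, dt, y0=0.0):
        self.lead, self.lag, self.dt = lead, lag, dt
        self.x_prev = y0  # previous input (assumed at initial value)
        self.y = y0       # filter output

    def update(self, x):
        # Backward Euler discretization of the lead-lag transfer function
        self.y = (self.lag * self.y + self.dt * x
                  + self.lead * (x - self.x_prev)) / (self.lag + self.dt)
        self.x_prev = x
        return self.y

# Setpoint step from 0 to 1 with a 10 s filter lag and a 2.5 s lead:
# the lead gives an immediate partial jump, then the lag smooths the rest.
f = LeadLag(lead=2.5, lag=10.0, dt=0.1)
out = [f.update(1.0) for _ in range(600)]  # 60 s of execution steps
```

With the lead at 25% of the lag, the output jumps about a quarter of the way on the first step and then approaches the new setpoint smoothly, which is the faster setpoint response noted above.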
(22) Choosing and Achieving Level Control Objective. We are increasingly becoming aware that level loops are tuned too aggressively, causing rapid changes in the manipulated flow that upset downstream unit operations. The solution for level loops that need loose control is not to simply reduce the level controller gain, because this can cause slow rolling oscillations. The tuning objective is to minimize the transfer of level variability to the manipulated flow, more often stated as maximizing the absorption of variability. The solution is to first increase the reset time dramatically, by one or two orders of magnitude, and then decrease the PID gain so that the product of the PID gain and reset time is greater than twice the inverse of the integrating process gain whose units are %/sec/% (1/sec), as discussed in Mistake 2 in Part 1. For surge tank level control, the objective is obviously maximization of the absorption of variability. This objective has gained such popularity that the cases where the level controller must be tuned tightly are not recognized and addressed. In fact, some may say there are no such cases and that feedforward control can take care of providing tighter level control when needed. There are exceptions. The biggest one that comes to mind is the distillate receiver level controller that manipulates reflux flow. Tight level feedback control achieves internal reflux control, where changes in column top temperature, particularly from blue northers, cause a change in overhead distillate flow and hence distillate receiver level, resulting in a correction of the manipulated reflux flow in the direction that minimizes the disturbance. Another case is where tight residence time control in continuous reactors provides enough time to complete a reaction but not so much time as to cause side reactions or polymer buildup.
A change in production rate must result in a change in level setpoint that must be reached quickly by tight level control so that residence time (level/flow) is as constant as possible. A similar concern may exist for continuous crystallizers. For multiple effect evaporators, changes in discharge flow from the last stage to control product solids concentration must be translated to changes in feed coming into each stage by its level controller to affect product concentration. There are similar requirements whenever an upstream flow is manipulated to control a level, such as a raw material makeup flow to deal with changes in a recovered recycle flow. While feedforward flow and ratio control can help, good level control deals with the inevitable errors that cause unbalances in stoichiometry.
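The slow-rolling-oscillation rule above (product of PID gain and reset time greater than twice the inverse of the integrating process gain) can be sketched as a quick check. The function names and example numbers are illustrative assumptions:

```python
# Hedged sketch of the loose level tuning check: to avoid slow rolling
# oscillations, Kc * Ti must exceed 2 / Ki, where Ki is the integrating
# process gain in %/sec/% (1/sec).

def min_reset_time(pid_gain, ki):
    """Smallest reset time (sec) satisfying pid_gain * reset > 2 / ki."""
    return 2.0 / (pid_gain * ki)

def avoids_slow_rolling_oscillations(pid_gain, reset_time, ki):
    return pid_gain * reset_time > 2.0 / ki

# Example: a vessel with Ki = 0.0001 1/sec and a low PID gain of 0.5
# needs a reset time greater than 2 / (0.5 * 0.0001) = 40000 sec,
# showing why the reset time must be increased by orders of magnitude
# before the gain is lowered.
```

This illustrates why simply cutting the gain without first stretching the reset time violates the inequality and produces the slow rolling oscillations.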
(23) Misunderstanding of Load Disturbances. There is a huge disconnect between the literature and what really happens in a plant in terms of the supposed location of a disturbance. The literature, and consequently many tuning methods and new algorithms supposedly better than PID, are based on the disturbance being on the process output downstream of time constants and dead times in the process, in most cases even ignoring any time constants or dead times in the measurement. This view is convenient for thinking model predictive control and internal model control are best for disturbance rejection and that tuning for setpoint changes is sufficient, since a disturbance on the process output is as quick as a change in setpoint. The reality is that nearly all disturbances occur at a process input and are delayed by process dead times or slowed by process time constants. For lag dominant processes, this recognition is particularly important and is the basis of switching from self-regulating tuning rules, where lambda is a closed loop time constant for a setpoint change, to integrating process tuning rules, where lambda is an arrest time for rejection of a load disturbance on the process input. Of course, there are a few exceptions where the disturbance is on a process output that would benefit from a larger reset time, but this can be identified by tuning the controller for a setpoint response. Also, when in doubt, a larger reset time is always a good thing to try since integral action is destabilizing. More proportional action can be stabilizing as discussed in Mistakes 2 and 3 in Part 1.
(24) Missing Automated Startup. Often loops cannot simply be put in automatic for startup. The controller's approach to setpoint is often not smooth or consistent with the other controllers' approaches to setpoint. Often the operator manually positions valves to get the process to a reasonable operating state before going to automatic control. The best practices of the best operators can be automated and implemented with much better timing and repeatability, enabling continuous improvement by better recognition of what is left to be addressed. If operators say the situation is too complex or conditional on their expertise to be automated, that is an even greater opportunity and motivation for automation. For much more on how procedural automation can be used for startup and dealing with abnormal situations, see the Sept 2016 Control Talk column “Continuous improvement for continuous processes.”
(25) Missing Ratio Control. Nearly all process inputs are flows that have a specific ratio to each other for a unit operation, as seen on a Process Flow Diagram. The simple use of ratio control is inherently powerful, where a “leader” flow is chosen that is often a major feed flow and the other flow controller setpoints designated as “followers” are ratioed to the “leader” flow controller setpoint. If the flows need to work in concert with each other, a filter is applied to each flow setpoint including the “leader” flow, for reasons noted in Mistake 21. The actual ratio must be displayed for the operator based on measured flows, and the operator must be given the ability to change the ratio for startup and abnormal operating conditions via a ratio controller for each “follower” flow. The ratio often has a feedback correction by a primary temperature or composition controller output. For plug flow volumes, conveyors and sheet lines, the feedback correction changes the ratio setpoint. For back mixed volumes, the feedback correction biases the ratio controller output. For more on ratio control, see the 1/31/2017 Control Talk Blog “Uncommon Knowledge for Achieving Best Feedforward and Ratio Control.”
(26) Misleading response time statements. The term response time has no value unless a percentage of the final response is noted. For linear systems, the 63% response time is the dead time plus one time constant. The 86% response time often used for valve response is the dead time plus two time constants. The 95% and 98% response times are the dead time plus three and four time constants, respectively. Waiting for the 98% response takes a lot of time making the test vulnerable to changing conditions and disturbances. For large distillation columns, it could take days to see a 98% response.
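The percentage response times above follow directly from a first-order-plus-dead-time response: the fraction 1 - e^(-n) reaches roughly 63%, 86%, 95% and 98% at 1, 2, 3 and 4 time constants past the dead time. A small sketch (names and numbers are illustrative):

```python
import math

def response_time(dead_time, tau, fraction):
    """Time for a first-order-plus-dead-time response to reach a given
    fraction of its final value: t = dead_time - tau * ln(1 - fraction)."""
    return dead_time - tau * math.log(1.0 - fraction)

# With a 5 s dead time and a 20 s time constant:
t63 = response_time(5.0, 20.0, 1 - math.exp(-1))  # dead time + 1 tau = 25 s
t86 = response_time(5.0, 20.0, 1 - math.exp(-2))  # dead time + 2 tau = 45 s
t98 = response_time(5.0, 20.0, 1 - math.exp(-4))  # dead time + 4 tau = 85 s
```

The gap between t86 and t98 shows why waiting for a 98% response makes a test so much longer and more vulnerable to disturbances than stopping at 86%.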
(27) Not knowing the dead time to time constant ratio. The tuning and performance of a control loop for self-regulating processes depends heavily upon the dead time to time constant ratio. Most studies in the literature that are smart enough to include dead time use loops with a dead time to time constant ratio between 0.5 and 2, termed “balanced” self-regulating processes. When the dead time to time constant ratio is much larger than 1, the process is termed “dead time dominant”; the reset time can be significantly decreased and the PID gain should be decreased to reduce the reaction to noise and the abrupt response due to the lack of a significant time constant. For more on these processes, see the 12/1/2016 Control Talk Blog “Deadtime Dominance - Sources, Consequences and Solutions.”
When the dead time to time constant ratio is less than ¼, the process is termed “lag dominant” and “near-integrating.” Integrating process tuning rules should be used that increase the reset time and PID gain to account for the reduced degree of negative feedback in the process. In the time frame of a major PID reaction (four dead times), the process ramps and appears to be similar to an integrating process. The integrated absolute error (IAE) for all processes is proportional to the ratio of the reset time to controller gain. The peak error for “dead time dominant” processes approaches the open loop error (the error if the controller were in manual). The peak error for “lag dominant” processes is inversely proportional to the controller gain, since the PID gain can be quite high, dominating the initial response. For more on how the dead time to time constant ratio affects performance, see the 7/17/2017 Control Talk Blog “Insights to Process and Loop Performance.”
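The regimes above, plus the integrating-process tuning rules they call for, can be sketched as follows. The classifier thresholds come from the text; the tuning equations are the commonly cited lambda (arrest time) rules for integrating processes, Ti = 2λ + θ and Kc = Ti / (Ki·(λ + θ)²), so treat them as a sketch and verify against your tuning reference:

```python
def classify_process(dead_time, time_constant):
    """Rough classification by the dead time to time constant ratio."""
    r = dead_time / time_constant
    if r < 0.25:
        return "lag dominant (near-integrating)"
    if r > 2.0:
        return "dead time dominant"
    return "balanced"

def integrating_tuning(ki, dead_time, lam):
    """Commonly cited lambda (arrest time) rules for integrating or
    near-integrating processes, where ki is the integrating process
    gain (1/sec) and lam is the arrest time (sec):
    Ti = 2*lam + dead_time, Kc = Ti / (ki * (lam + dead_time)**2)."""
    ti = 2.0 * lam + dead_time
    kc = ti / (ki * (lam + dead_time) ** 2)
    return kc, ti
```

A loop with a 1 s dead time and a 10 s time constant classifies as lag dominant and should get integrating-process tuning rather than self-regulating rules.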
(28) Unnecessary crossings of split range point. Valve stiction, nonlinearity and process discontinuity are greatest when switching from the manipulation of one valve and stream to another. Once a controller output crosses a split range point, the tendency is to oscillate back and forth unless there is a predominant need for one stream versus the other. Putting a dead band into the split range point will cause oscillations if there are two integrators, whether due to integrating action in the process or from the integral mode in a cascade loop or a positioner. The best way to prevent an unnecessary crossing of the split range point is to put up and down rate limits on a valve or flow controller setpoint, where the rate limit is slow in the direction of going back to the split range point if there is no safety issue. For cooling and heating, the movement toward heating across the split range point may be slowed down for temperature control. For venting and gas inlet flows, the movement toward more inlet flow across the split range point may be slowed down for pressure control. For mammalian bioreactor pH control, the movement toward adding a base across the split range point, such as sodium bicarbonate, is slowed down to reduce sodium ion accumulation that increases cell osmotic pressure and cell lysis. External reset feedback of valve position or flow to the primary PID should be used so that the primary PID (temperature, pressure or pH) does not try to change a valve or flow faster than it can respond. This is a great feature in general for cascade control and for providing directional move suppression for valve position control and surge control. For more on external reset feedback, see the 4/26/2012 Control Talk Blog “What is the Key PID Feature for Basic and Advanced Control.” The original post includes a corrected and improved time-domain block diagram for a PID with the positive feedback implementation of integral action in the ISA Standard Form.
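The directional rate limiting described above can be sketched as a setpoint move limiter that is slow heading back toward the split range point and fast heading away from it. The function name, rates and 50% split point are illustrative assumptions:

```python
# Hedged sketch of directional setpoint rate limiting near a split
# range point: movement back toward the split point is slowed while
# movement away from it is fast.

def rate_limited_move(current, target, dt, rate_away=5.0,
                      rate_toward=0.5, split_point=50.0):
    """Limit the setpoint change per execution step (%/step), using the
    slow rate when the move heads toward the split range point."""
    step = target - current
    moving_toward = abs(target - split_point) < abs(current - split_point)
    limit = (rate_toward if moving_toward else rate_away) * dt
    return current + max(-limit, min(limit, step))

# Moving from 80% toward the 50% split point: limited to 0.5%/s.
# Moving from 55% away to 80%: allowed the full 5%/s.
```

This is the "fast getaway, slow approach" behavior; an actual implementation would use separate up and down rate limit parameters on the setpoint block with external reset feedback to the primary PID.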
(29) Minimizing instrumentation cost. We get hung up on saving a few thousand dollars while often putting millions of dollars at stake in terms of poor process performance and inadvertent shutdowns. A big mistake is allowing a packaged equipment supplier to choose the instrumentation, since the supplier seeks to win the low-bid contest. Similarly, allowing purchasing to decide which instruments to buy is fundamentally bad. We need to take our knowledge about instrumentation performance and insist on the best, even when the justification is not clear and pressure is put on to lower system cost. My cohort Stan Weiner would purposely increase the initial estimates for projects to give him the freedom to choose the best instrumentation. He favored inline meters such as magnetic flowmeters and Coriolis meters over differential head meters because of their better rangeability, accuracy, maintainability and insensitivity to the piping configuration. Similarly, for temperatures less than 400 degrees C, I use RTDs with “sensor matching” instead of thermocouples to reduce drift and improve accuracy and sensitivity by orders of magnitude despite a few hundred dollars more in cost.
(30) Lack of middle signal selection. The best way to avoid unnecessary shutdowns, eliminate the reaction to any possible type of single failure (including a measurement stuck at setpoint), reduce the reaction to noise, spikes, and drift, and provide intelligence as to what is wrong with a measurement is middle signal selection of three independent measurements. For pH this is almost essential. To me it is bizarre how multimillion-dollar bioreactor batches are put at risk by using two instead of three pH electrodes, resulting in anybody’s guess as to which electrode is right. Some batches can be ruined by a pH that is off by just a few tenths, yet engineers are reluctant to spend a couple of thousand dollars upfront, not realizing that even if you disregard the cost of a potentially spoiled batch, the reduction in unnecessary maintenance more than pays for the extra electrode. For one large intermediate continuous process, the use of middle signal selection on all of the measurements used by the Safety Instrumented System (SIS) reduced the number of shutdowns from two per year to less than one every two years, saving tens of millions of dollars each year. The risk of a disastrous operator mistake was also greatly reduced because startup is the most difficult and hazardous mode for operations.
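Middle signal selection is simply the median of the three measurements, which inherently rejects any single failure without needing logic to decide which sensor failed. A minimal illustration (the values are made up):

```python
# Middle signal selection: the median of three independent measurements
# rejects a single stuck, spiking or drifting sensor automatically.

def middle_signal(a, b, c):
    return sorted((a, b, c))[1]

# A spike on one pH electrode never reaches the controller:
selected = middle_signal(7.01, 7.03, 12.5)  # 7.03 is passed through
```

The deviation of each electrode from the selected middle value is also useful diagnostic intelligence: a persistently high or low deviation flags the electrode that needs maintenance.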
I could keep on talking but I think this is enough to start your “New Year Resolutions”. Hopefully, you will be better at keeping them than me.
The post What Are Best Practices and Standards for Control Narratives? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
At the place I work we are typically good at documenting how we configure our controls in the form of DDS documents but not always as good at documenting why they have been configured that way in the form of rigorous control narratives.
We now have an initiative to start retrospectively producing detailed control narratives for all our existing controls and I am looking for best practice, standards and examples of what good looks like for control narratives.
I wondered if you had any good resources in this regard or you could point me in any direction. (I did look at ANSI/ISA-5.06.01-2007 but this seems more concerned with URS/DDS/FDS documents rather than narratives).
We are mainly DeltaV now.
We do a lot of DeltaV systems and we use three different ways to “document” the control system. As a system integrator, “document” for me may mean something different than it does for you, so let me explain that these documents are my way to tell my programmers exactly how I want the system to be configured. These documents fully define the system’s logic so they can program it and I can test against it.
As I said there are three parts:
Obviously batch flowsheets do not apply if your system isn’t batch but the same flow sheets can be used to define an involved sequence.
The tag list is simply a large Excel spreadsheet that includes all of the key parameters – module name, IO name, tuning constants, alarm constants, etc. It also includes a “comment” cell that can hold relatively simple logic like “Man only on/off FC valve with open/close limits and 30 sec stroke,” “analog input,” or “Rev acting PID with man/auto modes and FO valve.” Most of the modules can be defined on this spreadsheet.
The logic notes are usually a couple of paragraphs each and explain logic that is more complicated. Maybe we have an involved set of interlocks or ratio or cascade logic. If I have a logic note I’ll reference it in the tag list so the programmer knows to look for it.
The flow sheets are the last part. I usually have a flow sheet for every phase which defines the phase parameters, logic paths, failures, etc. (See Figure 1 for an example of an agitate phase.) Then I create a flow chart for every recipe which defines what phases I am using and what parameters are being passed. (See Figure 2 for an example of a partial recipe.)
Figure 1: Control Narrative Best Practices Agitator Phase
Figure 2: Control Narrative Best Practices Recipe Sample
Hiten Dalal’s Pipeline Feed System Example
I find the American Petroleum Institute Standard API RP 554 Part 1 (R2016) “Process Control Systems: Part 1-Process Control Systems Functions and Functional Specification Development” and the ISA Standard ANSI / ISA 5.06.01-2007 Functional Requirements Documentation for Control Software Applications to be very useful. ANSI/ISA95 also offers guidance on “Enterprise-Control System Integration.” These types of documents in my opinion help include the opinion of all stakeholders in the logic without the stakeholder having to be familiar with flow charting or logic diagrams or specific control system engineering terminology. The functional specification in my opinion is a progressive elaboration of a simple process description done by the process engineer. Once finalized, the functional specification can be developed into a SCADA/DCS operations manual by listing normal sequence of operation along with analysis of applicable responsibility such as operator action/responsibility, logic solver responsibility, and HMI display. You may download my example of a pipeline control system functional specification: Condensate Feed Pump & Alignment Motor Operated Valves (MOVs).
The post When and How to Use Derivative Action in a PID Controller first appeared on the ISA Interchange blog site.
Derivative action is the least frequently used mode in the PID controller. Some plants do not like to use derivative action at all because they see abrupt changes in PID output and lack an understanding of benefits and guidance on how to set the tuning parameter (rate time). Here we have a question from one of the original protégés of the ISA Mentor Program and answers by a key resource on control Michel Ruel concluding with my view.
Is there a guideline in terms of when to enable the derivative term in a PID?
Derivative is most useful when the dead time is not pure dead time but instead a series of small time constants; using derivative “eliminates” one of those small time constants.
You should set the derivative time equal to the largest of those small time constants. Since we usually do not know the details, a good rule of thumb is to set the derivative time to half the dead time.
Adding derivative (D) will increase robustness (higher gain and phase margin) since D will reduce apparent dead time of the closed loop.
A good example is the thermowell in a temperature loop: if the thermowell represents a time constant of 10 s, using a D of 10 seconds will eliminate the lag of the thermowell.
Hence, the apparent dead time of the closed loop is reduced and you can use more proportional action and a shorter integral time; the settling time will be shorter and stability better.
When you look at formulas to reject a disturbance, you observe that in presence of D, proportional and integral can be stronger.
We recommend using derivative only if the derivative function contains a built-in filter to remove high frequency noise. Most DCSs and PLCs have this function but some do not or there is a switch to activate the derivative filter.
Why does having a higher phase margin increase the robustness?
Robustness means that the control loop will remain stable even if the model changes. The phase and gain margins represent the amount of change allowed before the loop becomes unstable, i.e., before reaching -180 degrees of phase with a loop gain above one.
To analyze this, we use the open loop frequency response, the product of the controller model and the process model. On a Bode plot, the gains are multiplied (or added if plotted in dB) and the total phase is the sum of the process phase and the controller phase.
Phase margin is the number of degrees required to reach -180 degrees when the open loop gain is 1 (0 dB). If this number is large (high phase margin), the system is robust meaning that the apparent dead time can increase without reaching instability. If the phase margin is small, a slight change in apparent dead time will bring the control loop to instability.
Adding derivative adds a positive phase and hence increases the phase margin (compared to adding a dead time or a time constant, which reduces the phase margin).
The use of derivative is more important in lag dominant (near-integrating), true integrating, and runaway processes (highly exothermic reactions). The derivative action benefit declines as the primary time constant (largest lag) approaches the dead time because the process changes become too abrupt due to lack of a significant filtering action by a process time constant.
Temperature loops have a large secondary time constant courtesy of heat transfer lags in the thermowell or the process heat transfer areas. Setting the derivative time equal to the largest of the secondary lags can cancel out almost 90 percent of the lag assuming the derivative filter is about 1/8 to 1/10 the rate time setting. Highly exothermic reactors can have positive feedback that causes acceleration of the temperature. Some of these temperature loops have only proportional and derivative action because integral action is viewed as unsafe.
If a PID Series Form is used, increasing the rate time reduces the integral mode action (increases the effective reset time), reduces the proportional mode action (decreases the effective PID gain or increases the effective PID proportional band) and moderates the increase in derivative action. The interaction factor moderates all of the modes, preventing the resulting effective rate time from being greater than one-quarter of the effective reset time. This helps prevent instability if the rate time setting approaches the reset time setting. There is no such inherent protection in the ISA Standard Form. It is critical that the user prevent the rate time from being larger than one-quarter of the reset time in the ISA Standard Form. While in general it is best to identify multiple time constants, a general rule of thumb I use is that the rate time should be the largest of an identified secondary time constant or one-half the dead time, and never larger than one-quarter of the reset time.
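The rule of thumb above can be written as a small helper. The function name is illustrative; the logic is exactly the stated rule: the larger of the identified secondary time constant or half the dead time, capped at one-quarter of the reset time:

```python
def rate_time(secondary_tau, dead_time, reset_time):
    """Rule-of-thumb rate time for the ISA Standard Form: the larger of
    an identified secondary time constant or half the dead time, but
    never more than one-quarter of the reset time."""
    td = max(secondary_tau, 0.5 * dead_time)
    return min(td, 0.25 * reset_time)

# With no secondary time constant identified, the fallback is
# half the dead time, still subject to the quarter-reset-time cap.
```

The cap matters most for short reset times, where an uncapped rate time set from a large secondary lag could approach the reset time and destabilize the loop.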
It is critical to convert tuning based on setting units and PID form used as you go from one vintage or supplier to another. It is best to verify the conversion with the supplier of the new system. The general rules for converting from different PID forms are given in the ISA Mentor Program Q&A blog post How Do You Convert Tuning Settings of an Independent PID with the last series of equations K1 thru K3 showing how to convert from a series PID form to the ISA Standard Form.
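One commonly stated set of equations for converting from the Series (interacting) form to the ISA Standard Form uses the interaction factor F = 1 + Td/Ti. This is a sketch of that conversion, not the referenced blog's exact equations, so verify it against your supplier's documentation before using it:

```python
def series_to_standard(kc, ti, td):
    """Convert Series form settings (gain, reset time, rate time) to
    ISA Standard Form using the interaction factor F = 1 + td/ti:
    Kc' = Kc*F, Ti' = Ti*F (= Ti + Td), Td' = Td/F."""
    f = 1.0 + td / ti
    return kc * f, ti * f, td / f

# Example Series settings: gain 1.0, reset 40 s, rate 10 s
kc_s, ti_s, td_s = series_to_standard(1.0, 40.0, 10.0)
```

Note that the converted rate time can never exceed one-quarter of the converted reset time (the maximum of Td·Ti/(Ti + Td)² is 1/4, reached when Td = Ti), which is the inherent Series-form protection described above.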
In general, PID structures should have derivative action on the process variable and not error unless the resulting kick in the PID output upon a setpoint change is useful to get to setpoint faster particularly if there is a significant control valve or VFD deadband or resolution limit.
A small setpoint filter in the analog output or secondary loop setpoint along with external reset feedback of the manipulated variable can make the kick a bump. A setpoint lead-lag on the primary loop where the lag time is the reset time and the lead is one-quarter of the lag or a two degrees of freedom structure with the beta set equal to 0.5 and the gamma set equal to about 0.25 can provide a compromise where the kick is moderated while getting to the primary setpoint faster.
The post, Common Mistakes not Commonly Understood - Part 2, appeared first on the ControlGlobal.com Control Talk blog.
Here we continue on in our exploration of what we should know but don’t know and how it hurts us.
(11) Ignoring actuator and positioner sensitivity. Piston actuators are attractive due to smaller size and lower cost but have a sensitivity that can be 10 times worse than diaphragm actuators. Many positioners look fine for conventional tests but increase response time to almost 100 times larger for step changes in signal of less than 0.2%. The result is extremely confusing, erratic, spiky oscillations that only get worse as you decrease the PID gain. My ISA 2017 Process Control and Safety Symposium presentation ISA-PCS-2017-Presentation-Solutions-to-Stop-Most-Oscillations.pdf shows the bizarre oscillations from poor positioner sensitivity. Slides 21 through 23 show the situation, the confusion and a simple tuning fix. While the fix helps stop the oscillations, the best solution is a better valve positioner to provide precise control (critical for pH systems with a strong acid or strong base due to amplification of oscillations from the sensitivity limit).
(12) Ignoring drift in thermocouples. The drift in thermocouples (TCs) can be several degrees per year. Thus, even if there is tight control, the temperature loop setpoint is wrong, resulting in the wrong operating point. Since temperature loops often determine product composition and quality, the effect on process performance is considerable, with the culprit largely unrecognized, leading to some creative opinions. Some operators may home in on a setpoint that gets them closer to the best operating point, but the next shift operator may put the setpoint back to what is defined in the operating procedures. Replacement of the thermocouple sensor means a previously adjusted setpoint becomes wrong again. The solution is a Resistance Temperature Detector (RTD), which inherently has two orders of magnitude less drift and better sensitivity for temperatures less than 400 degrees C. The slightly slower response of an RTD sensor is negligible compared to the thermowell thermal lags. The only reason not to use an RTD is a huge amount of vibration or a high temperature. Please don’t say you use TCs because they are cheaper. You would be surprised at the installed cost and lifecycle cost of a TC versus an RTD. See the ISA Mentor Program Webinar “Temperature Measurement and Control” for a startling table on slide 4 comparing TCs and RTDs and a disclosure of real versus perceived reasons to use TCs on slide 7.
(13) Not realizing the effect of flow ratio on process gain. The process gain of essentially all composition, pH and temperature loops is the slope of the process variable plotted versus the ratio of the manipulated flow to the main feed flow. This means the process gain is inversely proportional to feed flow besides being proportional to the slope of the plot. In order to convert the slope, which is the change in process variable (PV) divided by the change in ratio, to the required units of change in PV per change in manipulated flow, you have to divide by the feed flow. The plot of temperature or composition versus ratio is not commonly seen or even realized as necessary. The same sort of relationship holds true where the manipulated variable is an additive or reactant flow for composition or a cooling or heating stream flow for temperature. For temperature control, the slope of the curve is often also steeper at low flow, creating a double whammy as to the increase in process gain at low flow. Also, for jackets, coils and heat exchangers, the coolant flow may be lower, creating more dead time for a sensor on the outlet. Fortunately for pH, we have titration curves where pH is plotted versus the ratio of reagent volume added to sample volume, although often just the reagent volume added is used on the X axis. In this case, you need to find out the sample volume so you can put the proper abscissa on the laboratory curve. In the process application, the titration curve abscissa that is the ratio of reagent volume to sample volume is simply the ratio of volumetric reagent flow to volumetric feed flow if the reagent concentrations are the same. You can then use this plot with the application abscissa in terms of flow ratios to determine the process gain and the valve capacity, rangeability, backlash (deadband) and stiction (resolution) requirements.
An intelligent analysis of the amplification by the titration curve slope of the limit cycle amplitude from deadband and resolution limitations determines the number of stages and the size of the reagent valves needed. Often for strong acids or bases, two or three stages of neutralization, with the largest reagent valve on the first stage and the smallest reagent valve on the last stage, are needed due to valve rangeability or precision limitations. For more details, check out the 12/12/2015 Control Talk Blog “Hidden Factor in Our Most Important Control Loops”.
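A minimal sketch of the conversion described above, from a laboratory titration curve slope to an open-loop process gain in pH per percent valve signal, assuming a linear reagent valve and matching reagent concentrations (function name and the example numbers are illustrative, not from any specific application):

```python
def ph_process_gain(slope_ph_per_ratio, feed_flow, reagent_valve_capacity):
    """Open-loop pH process gain in pH per % of valve signal (sketch).

    slope_ph_per_ratio: local titration-curve slope, delta-pH per unit
    of (reagent volume / sample volume) ratio.
    feed_flow and reagent_valve_capacity: same volumetric flow units.
    Assumes a linear installed valve characteristic.
    """
    # slope / feed flow gives pH per unit of reagent flow; scale by
    # the reagent flow delivered per percent of valve signal
    return (slope_ph_per_ratio / feed_flow) * (reagent_valve_capacity / 100.0)
```

For example, a local slope of 2000 pH per unit ratio (a steep strong-acid curve), a 100 gpm feed and a 1 gpm reagent valve give a process gain of 0.2 pH per percent, showing how the same valve on a smaller feed flow would have a proportionally larger gain.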
(14) Ignoring the effect of temperature on actual solution pH. We are accustomed to using the built-in temperature compensator that has been in pH transmitters for 60 or more years to account for the effect of temperature seen in the glass electrode Nernst equation. What we don’t tend to do is quantify and take advantage of the solution pH compensation in smart transmitters. The dissociation constants for water, acids and bases are a function of temperature. If you express the water dissociation constant as a pKw and the acid or base dissociation constant as a pKa, whenever the pH is within 4 pH of the pKw or pKa there is a significant effect of temperature on actual pH. Physical property tables can detail the pKw and pKa as a function of temperature, but the best bet is to vary the temperature of a lab sample and note the change in pH after correction for the Nernst equation (some lab meters don’t even do Nernst temperature compensation).
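For reference, the electrode-slope compensation mentioned above (the part transmitters have done for decades, distinct from the solution pH compensation) can be sketched from the Nernst equation; the helper names are hypothetical and the isopotential point is assumed to be at pH 7, as is typical for combination electrodes:

```python
import math

R = 8.314462618    # gas constant, J/(mol*K)
F = 96485.33212    # Faraday constant, C/mol

def nernst_slope_mV(temp_c):
    """Theoretical glass-electrode slope in millivolts per pH unit."""
    t_k = temp_c + 273.15
    return 1000.0 * R * t_k / F * math.log(10.0)

def electrode_ph(measured_mv, temp_c, iso_ph=7.0):
    """pH from electrode millivolts with Nernst temperature compensation.

    Corrects only the electrode slope; it does NOT account for the
    temperature dependence of the solution pH itself (pKw, pKa),
    which is the often-ignored effect discussed in the text.
    """
    return iso_ph - measured_mv / nernst_slope_mV(temp_c)
```

The slope is about 59.16 mV per pH at 25 degrees C and grows with absolute temperature, which is why an uncompensated reading drifts with process temperature even when the actual solution pH is constant.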
(15) Replacing a positioner with a booster. Extensive guidelines dating back to Nyquist plot studies in the 1960s concluded that fast loops should use a booster instead of a positioner. I still hear this rule cited today. This is downright dangerous due to positive feedback from the high outlet port sensitivity of the booster and flexure of the diaphragm actuator causing the valve to slam shut. The volume booster should instead be placed on the output of the positioner with its bypass valve slightly open to stop any high frequency oscillations, as seen in the ISA Mentor Program Webinar “How to Get the Most out of Control Valves”. You can fast forward to slides 18 and 19 to see the setup.
(16) Putting VFD speed control in the DCS. We like putting controls and logic as much as possible in the control room for adjustment and maintenance. While this is normally a good idea, a speed loop in the VFD instead of the DCS is orders of magnitude faster, enabling much tighter control. In fact, if the speed loop is put in the DCS for flow and pressure control, you will violate the cascade rule that the secondary loop (speed) must be five times faster than the primary loop.
(17) Putting a deadband into the split range block for integrating processes or cascade control, or integral action in the positioner. The deadband creates a limit cycle, just like deadband from backlash in a control valve, when there are two or more integrators in the loop, whether in the process, PID or positioner.
(18) Not taking into account the temperature and pH cross sectional profile in pipelines. The temperature and pH vary extensively across a pipeline, especially for high viscosity feeds or reagents. The tip should be near the centerline. For small pipelines, this may require installing the sensor in an elbow, preferably facing into the flow unless the fluid is too abrasive. The pH sensor tip must, of course, be pointed down, preferably at about a 45 degree angle, so that the bubble in the internal fill of the electrode does not reside in the tip. The angle also prevents the bubble from residing at the internal electrode (a relatively low probability but possible).
(19) Not preventing measurement noise from phases and mixing. Thinking we need the sensor to see a process change as fast as possible, we fail to realize that a few seconds of transportation delay is better than a poor signal to noise ratio. To prevent a sensor from seeing bubbles or undissolved solids in a liquid, or droplets or condensate in a gas or steam, you need to locate the sensor sufficiently downstream of a static mixer, exchanger or desuperheater outlet, or wherever streams come together. You need to keep the sensor away from a sparger and avoid the top or bottom of a vessel or horizontal line. For temperature control of a jacketed vessel with split ranged manipulation of cooling water and steam, you should use the jacket outlet instead of the inlet temperature measurement to allow time for water to vaporize and for steam to condense. An even better solution is to use a steam injector to heat up the cooling water, eliminating the transition of phases back and forth between steam and cooling water in the jacket. The injector provides rapid and smooth transitions from cooling to heating over quite a temperature range, going from cold to hot water.
(20) Tuning to make a smooth approach of the PID output to the final resting value in near and true integrating chemical processes. The main task of composition, temperature and pH loops in chemical processes is to effectively reject load disturbances at the process input. This requires a maximization of controller gain and a significant overshoot by the controller output of the final resting value needed to balance the load. Many experts in tuning who worked mostly on self-regulating processes don’t realize this requirement and may even say you should never tune the controller output to overshoot the final resting value, failing to realize that near-integrating processes will take an incredibly long time to recover and true integrating processes will never recover from a load disturbance. To understand the necessity of overshoot in the PID output, think of a level loop where the level has increased because the flow into the vessel has increased. To bring the level back down to setpoint, the outlet flow manipulated by the level controller must be greater than the inlet flow until the level is back at setpoint, after which the outlet flow settles out to match the inlet flow (the final resting value of the PID output).
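The level loop thought experiment above can be sketched with a minimal PI simulation on an integrating process; the numbers are illustrative (chosen to satisfy the gain window discussed elsewhere in this post, with gain times reset times integrating process gain equal to 4), not from any particular vessel:

```python
def simulate_level(kc=5.0, ti=800.0, ki=0.001, dt=1.0, steps=8000):
    """Minimal PI level loop on an integrating process (sketch).

    The PID output is the outlet flow in percent. A +10% inlet flow
    load step has just occurred; the output must overshoot its final
    resting value (60%) to drive the level back to setpoint.
    ki: open-loop integrating process gain, %/sec per % flow imbalance.
    """
    level, sp, integral = 50.0, 50.0, 0.0
    inlet = 60.0                    # inlet flow stepped from 50% to 60%
    out_hist = []
    for _ in range(steps):
        err = level - sp            # direct acting: level up -> outlet up
        integral += err * dt / ti
        out = 50.0 + kc * (err + integral)
        out_hist.append(out)
        level += ki * (inlet - out) * dt   # level ramps with flow imbalance
    return out_hist, level
```

Running this, the outlet flow rises above 60 percent for a while before settling back at 60 percent with the level restored to setpoint; tuning that forbids that overshoot would leave the level permanently offset.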
Always remember to …………………………………………………………..………….. Oh shoot, I forget ... senior moment.
The post How to Manage Pipeline Valve Positioner and PID Tuning first appeared on the ISA Interchange blog site.
I have been trying to get a handle on small ripples in one of the pipelines by using a rule of thumb to successively reduce proportional action by 20 percent and integral action by 50 percent. Using that rule, I was able to stabilize the ripples on Friday. On Sunday, the product changed in the pipeline and with that back came those 4 percent ripples. There is one control valve that impacts line pressure. I could stretch the ripples out a bit but could not eliminate them. The output going to zero is a natural scheduled shutdown of the pipeline. I know it is a lot of information that I am providing, but perhaps you can glance through and pinpoint something that stands out. Since I started tuning the control valve, I am learning that it is product sensitive as well.
Since I don’t know if there is a trend of valve signal and valve flow, I am not sure what is happening. If the considerable decrease in gain does not help or makes it worse, I am wondering if there is some valve stiction or backlash, respectively. Is the valve the same for both products? Could a product be causing more stiction due to buildup or coating on valve seating or sealing surfaces or stem? Could the Sunday valve be closer to the shutoff where friction is greatest?
It sure looks like you have too much proportional (P) action for the new product. The integral action is already greatly reduced and most of the overcorrection is occurring very quickly due to proportional action. I would try decreasing the proportional mode action (proportional mode gain) by 50 percent (cut gain in half). If this helps, reduce the proportional gain again. Based on the very small integral (I) action, you may be able to increase integral action once you decrease proportional action. However, I reiterate that if decreasing the gain simply increases the period of the oscillation, you have backlash or stiction. If amplitude stays the same, you have stiction.
Please make sure there is no integral action in the digital valve controller.
When you say no integral action, do you mean in valve positioner or in controller? I don’t think our positioner has any PID setup. Only PID action is in controller. Since it is liquid pressure and flow, we use P&I. Are you suggesting we use only P action in my controller?
I meant no integral in the valve positioner, which for Fisher is called a digital valve controller (DVC). You should use integral action in most process controllers (e.g., flow and pressure). Integral action in the process controllers is essential for the PID control of many processes. As far as tuning the process controller for pipeline control goes, the integral time, also known as reset time (seconds per repeat), should generally be greater than four times the deadtime for an ISA Standard Form. You must be careful about what PID form, structure and tuning setting units are being used. If the integral setting is an integral gain, such as what is used in the “parallel” PID form depicted in textbooks and used in some PLCs, the integral setting may not just be a simple factor of the deadtime (e.g., four times deadtime) but will also depend upon other dynamics. Also, some integral settings are in repeats per minute instead of seconds per repeat.
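A small sketch of normalizing the common integral-setting units to seconds per repeat; the unit labels are assumptions for illustration, so check your controller's documentation for its actual units before trusting any conversion:

```python
def to_seconds_per_repeat(value, units):
    """Normalize common integral-setting units to seconds per repeat.

    Sketch only; the unit labels here are hypothetical. This does NOT
    handle an integral-gain setting from a parallel PID form, which
    also depends on the controller gain and other dynamics.
    """
    if units == "s/repeat":
        return value
    if units == "min/repeat":
        return value * 60.0
    if units == "repeats/min":
        return 60.0 / value       # reciprocal: faster reset = more repeats
    if units == "repeats/s":
        return 1.0 / value
    raise ValueError("unknown units: " + units)
```

For example, an integral setting of 0.5 repeats per minute is the same action as a 120 seconds per repeat reset time, which is why a setting copied between systems without unit conversion can be off by orders of magnitude.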
Please make sure you extensively test any tuning settings by making small changes in the setpoint with the controller in automatic, or in the controller output by momentarily putting the controller in manual. There should be little to no oscillation. The tests should be done at different valve positions, particularly if the valve installed flow characteristic is nonlinear. Oscillations may be most prone near the shutoff position where stiction is greatest from seat/seal friction.
If there is interaction between loops, the least important loop must be made slower or decoupling used by means of a feedforward signal. If you are going to do some optimization via a controller that seeks to minimize or maximize a valve position, the proportional gain divided by the reset time for this controller doing optimization must be an order of magnitude smaller than process controller to prevent interaction. These PID controllers used for optimizing a valve position are called “valve position controllers” (VPC). I hesitated to mention this to avoid confusion because these are not valve positioners and are only used for optimization. Also, nonlinear or notch gains and directional move suppression via external reset feedback are used to keep the VPC from responding too much or too little so the process controller does not oscillate or run out of valve.
Many newer smart positioners have added integral action to positioners in the last two decades. In some cases, integral action is enabled as the default. This prompted me to write the Control Talk blog post “Getting the Most Out of Positioners.” This blog does not address setting integral action in process controllers (e.g., flow and pressure controllers).
Do you teach a control valve tuning class? Is there a specific method you recommend for a pipeline control valve?
I do not offer a class on tuning positioners. Supplier courses on tuning positioners are good, but you will need to insist on turning off integral action. You can have them talk to me if they disagree. In general, you should make sure you do not use integral action and that you use the highest valve positioner gain that does not cause oscillation, since for pipeline flow and pressure control, oscillations are not filtered. If you have an Emerson Digital Valve Controller (DVC), I recommend “travel control” with no integral action and with the highest gain that still gives an overdamped response. The valve must be a true throttling valve and not an on-off valve posing as a throttling valve, as discussed in the Control Talk blog “Getting the Most out of Valve Positioners”. Note that in that blog we are going for a more aggressive response than what you need here. Because of the lack of a significant process time constant in a pipeline, you need a smooth valve response. In the blog, the valve positioner gain is described as being set high enough to cause a slight overshoot and oscillation that quickly settles out. Oscillations in the valve response are useful for getting a faster response for vessels and columns, since there is a large process time constant to filter out the oscillations. For a pipeline, you still want a high gain and no integral action in the positioner, but seek an overdamped (non-oscillatory) response of the valve position.
I have bought Tuning and Control Loop Performance Fourth Edition and reference tables from there for suggested PID values. I have removed derivative from several pressure and flow loops and observed them to be equally efficient. In the process of tuning, I have learned that installation details have an impact on loop tuning. I have made the following types of corrections:
(1) As installed, the logic had the PID getting initiated as soon as block valve #1 was fully opened, but block valve #2 was commanded to open after #1, causing the PID output to ramp off to the high output limit since the control valve was not seeing full flow. We solved this by setting a temporary upper clamp on the PID output at a safe limit to avoid overshoot until block valve #2 was fully opened.
(2) Transmitter range was high and margin of error was not acceptable by operations. Re-ranged transmitter to suitable range and brought error within acceptable margin.
(3) EIM Controls Electric and REXA electrohydraulic actuators have a limit on number of actuations. I added an acceptable dead band to reduce number of actuations.
The post, Common Mistakes not Commonly Understood - Part 1, first appeared on ControlGlobal.com's Control Talk blog.
There are many mistakes but some are repeated over and over again even though the automation engineer is attentive and experienced and has the best intentions. Part of the problem is overload in terms of tasks and the time crunch. It is highly unlikely engineers today read even a smattering of the thousands of pages in books, handbooks, white papers and articles. The knowledge to prevent the following mistakes may be buried in this literature but I am not so sure of even this. In any case, one probably could not find it. Here is my effort to get straight to the point of realizing and fixing mistakes.
(1) Reset time set too large for deadtime dominant processes. Most tuning algorithms don’t recognize what Shinskey found: that the reset time can be decreased by a factor of 8 or more, from 3 to 4 times the dead time down to 0.4 to 0.5 times the dead time, for the same controller gain setting in severely dead time dominant processes. Lambda tuning can accomplish a dramatic reduction in reset time, since the reset time is the time constant, which for dead time dominant processes is by definition less than the dead time. The controller gain is also proportionally reduced, providing stability despite a much smaller reset time. This is generally good since these processes are more likely to have noise and a jagged, not so smooth response due to the lack of a significant time constant. The criticism that the controller reset time and gain become too small, leading to integral-only type of control for severely dead time dominant processes, is avoided by simply putting a limit of ¼ the dead time on the reset time, which is then used in the equation for the controller gain, as discussed in the June 2017 Control Talk Column “Opening minds about controllers, part 1”. This column is also a good resource for understanding the next common mistake, where the reset time is set too small.
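A sketch of this tuning, assuming the common self-regulating lambda rules (reset time equals the time constant, gain equals reset time over process gain times lambda plus deadtime) with the one-quarter-deadtime floor applied as described above; the function name and default lambda choice are illustrative:

```python
def lambda_tuning_deadtime_dominant(kp, tau, theta, lam=None):
    """Lambda tuning with a reset-time floor for deadtime dominance.

    kp: open-loop process gain, tau: time constant, theta: deadtime
    (tau and theta in the same time units). lam defaults to theta,
    a common choice for deadtime dominant loops (an assumption here).
    """
    if lam is None:
        lam = theta
    ti = max(tau, 0.25 * theta)        # floor reset at 1/4 the deadtime
    kc = ti / (kp * (lam + theta))     # floored reset feeds the gain equation
    return kc, ti
```

For a severely deadtime dominant loop with a tiny time constant, the floor keeps a little proportional action in play instead of collapsing toward integral-only control.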
(2) Reset time set too small for lag dominant (near-integrating) processes, integrating and runaway processes. These processes lack self-regulation in the process and depend more upon the gain action in the PID to provide the negative feedback missing in the process. Since engineers are not comfortable with controller gains greater than 5 and operators object to sudden movements of PID output, the controller gain is often an order of magnitude or more too small. Since the product of the reset time and gain must be greater than twice the inverse of the integrating process gain to prevent the start of slow oscillations, the reset time is an order of magnitude or more too small. Since we are taught in control theory classes how too high a PID gain causes oscillations, the PID gain is typically decreased making the problem worse. For more on this pervasive problem and the fix, see the 9/14/2017 Control Talk Blog “Surprising Gains from PID Gain” which leads us to the next common mistake.
(3) PID gain set too small for valves with poor positioner sensitivity and excessive dead band from backlash and variable frequency drives with a large dead band setting. Not only is the PID gain too small per last mistake but is also too small to deal with valve problems as seen in my ISA 2017 Process Control and Safety Symposium slides ISA-PCS-Presentation-Solutions-to-Stop-Most-Oscillations.pdf that details a lot of cases where a counterintuitive increase in PID gain reduces or stops oscillations.
(4) Split ranged valves used to increase valve rangeability. The transition from the large to small valve is not smooth since the friction and consequently stiction is greatest near shutoff as plugs rub seats and balls or disks rub seals. Since the stiction in percent stroke translates to a larger abrupt change in flow and amplitude in the limit cycle, small smooth changes in flow are not possible especially near shutoff but also whenever the large valve is open. The better solution is a large and small valve stroked in parallel either where a Valve Position Controller manipulates the large valve with directional move suppression to keep the small valve near an optimum position or by simultaneous manipulation as detailed in the November 2005 Control feature article “Model Predictive Control can Solve Valve Problem.”
(5) Ignoring effect of meter velocity on flow measurement rangeability. The maximum velocity for a given meter size rarely corresponds to the velocity at the maximum flow in a process application. Often the maximum process velocity is less than half the maximum meter velocity for line size meters. Thus, the velocity at the minimum process flow is so far below the minimum flow for a good meter response that the actual rangeability is less than half what is stated in the literature.
(6) Ignoring the effect of noise on flow measurement rangeability. The signal to noise ratio often deteriorates before the meter reaches the low flow corresponding to its rangeability limit. The flow measurement can essentially become unusable for flow control making the actual rangeability much less than what is stated in the literature.
(7) Ignoring the effect of stiction, backlash and pressure drop on valve rangeability. There are many erroneous definitions of valve rangeability, such as those that define it as the ratio of maximum to minimum flow coefficient (Cv), where the closeness of the actual to the theoretical inherent flow characteristic determines the minimum Cv, leading to the conclusion that a rotary valve offers the greatest rangeability. The real rangeability should be the ratio of maximum to minimum controllable flow. Deadband from backlash and resolution from stiction near the shutoff position should determine the minimum position that gives a controllable flow. Since stiction is greatest as the plug moves into the seat, or the ball or disk moves into the seal, particularly for tight shutoff valves, the minimum controllable position can be quite large (e.g., 2% to 20%). The flow at this position needs to be computed based on the installed flow characteristic. A ratio of valve pressure drop to total system pressure drop less than 0.5 will cause a linear characteristic to distort toward quick opening, increasing the flow at the minimum controllable position and causing a significant loss in rangeability. For equal percentage valves there is also a loss in the minimum controllable flow due to excessive flattening of the installed characteristic. There may also be significant flattening of the installed flow characteristic for rotary valves, making any rotation past 50 degrees ineffectual, which shows up as the controller output wandering about above 50 degrees through integral action. My book Tuning and Control Loop Performance Fourth Edition, published in 2014 by Momentum Press, gives the equations to compute the real rangeability. It turns out sliding stem valves with diaphragm actuators and smart positioners have the best rangeability.
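The installed characteristic distortion can be sketched with the textbook constant-head model, where the valve shares a fixed total pressure drop with the rest of the piping system; this is an illustrative model with an assumed inherent rangeability, not the full equations from the book cited above:

```python
import math

def installed_flow_fraction(x, dp_ratio, rangeability=50.0):
    """Installed flow (fraction of max) for an equal-percentage valve.

    x: valve position, 0 to 1.
    dp_ratio: valve pressure drop at full flow divided by the total
    system pressure drop (the 0.5 threshold discussed in the text).
    Illustrative constant-total-head model, not any specific valve.
    """
    cv = rangeability ** (x - 1.0)   # inherent equal-percentage Cv fraction
    # fixed total head split between the valve and a fixed system resistance
    return cv / math.sqrt(dp_ratio + (1.0 - dp_ratio) * cv ** 2)
```

At a low dp_ratio the installed flow at mid-travel is several times the inherent value, which is the quick-opening distortion that inflates the flow at the minimum controllable position and erodes real rangeability.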
(8) Ignoring the effect of static head, motor and frame type, and inverter type and control algorithm on VFD rangeability. Since the inverter waveform is not purely sinusoidal, it is important to select motors that are designed for Pulse Width Modulation (PWM). These “inverter duty” motors have windings with a higher temperature rating (class F). Another option that facilitates operation at lower speeds to achieve the maximum rangeability offered by the PWM drive is a higher service factor (e.g. 1.15). To help prevent motor overheating at low speeds, larger frame sizes and line powered ventilation fans are used. In the process industry, totally enclosed fan cooled (TEFC) motors are used to provide protection from chemicals and ventilation by a fan that is run off the same power line as the motor. The fan speed decreases as the motor speed decreases. To reduce the problem from motor overheating at low speeds, an AC line power constant speed ventilation fan and a larger frame size to provide more ventilation space can be specified. Alternately, a separate booster fan can be supplied. For very large motors (e.g. 1000 HP), totally enclosed water cooled (TEWC) motors are used to deal with the extra heat generation. For low static head pump applications, the overheating at low speeds is not a problem because the torque load decreases with flow. Turndown also depends upon the control strategy in the variable frequency drive. All of the control strategies discussed here use pulse width modulation to manipulate the frequency and amplitude of voltage and current to each phase. Open loop voltage (volts/hertz) control has the simplest algorithm but is susceptible to varying degrees of slip. Most of the drives provided for pump control use this strategy in which the rate of change of flux and hence speed is taken as proportional to voltage. At low speeds the motor losses are larger making the difference between the computed and actual speed (slip) much larger. 
Some drives make a correction to the voltage to account for estimated motor losses. Ultimately these drives depend upon the DCS to correct for dynamic slip through proportional action and to correct for steady state slip through integral action in the process controller(s). The rangeability is normally 40:1 with 0.5% speed regulation. Closed loop slip control has a speed loop cascaded to a torque loop. Speed (tachometer) and torque feedback come from sensors; the torque feedback may instead be calculated from a current sensor. A DCS process controller output is the speed setpoint for the speed controller, whose output is the setpoint to a torque controller. PI rather than P-only controllers can be used since stiction and resolution limits are negligible, eliminating any concern about limit cycles from integral action. The control system in the VFD is analogous to the cascade control system in a digital positioner. The speed controller plays a role similar to the valve position controller and the torque controller serves a similar purpose as the relay controller. However, in the digital positioner the relay response is inherently much faster than the valve position response. In the VFD, the torque controller can have a relatively sluggish response. To prevent a violation of the cascade rule that requires the secondary loop (torque) to be 5x faster than the primary loop (speed), the speed loop is slowed by decreasing the speed controller gain and increasing the integral time. Since the speed setpoint comes from a process controller in the DCS, there is at least a triple cascade. In many cases there is a quadruple cascade control system, for example vessel temperature to jacket temperature to coolant flow to speed. The detuning of the speed controller causes detuning of the flow controller, which in turn may cause detuning of the temperature controller. As a result, the ability to reject fast process disturbances may be compromised.
The rangeability is normally 80:1 with 0.1% speed regulation. However, if the static head approaches the total pressure rise, the rangeability can deteriorate by an order of magnitude for all VFDs with a resulting installed flow characteristic that is quick opening.
(9) Ignoring changes in fluid composition in thermal mass flow meters. Changes in fluid composition cause a change in the assumed thermal conductivity and specific heat capacity of the fluid and the viscosity for liquids introducing a significant error. If you are trying to use a thermal flow meter on air/gas/vapors never install it in a service where it can ever see a gas/vapor approaching dew point. Fouling also causes an error due to thermal lags. Thermal mass flow meters are generally only successfully used on small dry pure gas flows such as oxygen or air for lab or pilot plant bioreactors (very controlled environment) where the fluid is clean single phase and composition is fixed and the remaining 1% or 2% error is corrected by a primary dissolved oxygen controller manipulating the secondary oxygen or air flow controller setpoint.
(10) Ignoring changes in emissivity in optical pyrometers. Two-color or ratio pyrometers measure the radiation at two wavelengths. If the change in emittance at each wavelength with temperature is identical (gray-bodies), the effect of emittance can be cancelled out by ratio calculations. In reality, the change in emittance with temperature varies with wavelength (non-gray-bodies). Additionally, the change in emittance with changes in surface, operating conditions, and the composition of the intervening space may vary with wavelength. In one comparison test on a blackbody, single-color and two-color pyrometers exhibited errors of 2 and 30 degrees C, respectively. Equal changes in emittance due to surface and operating conditions and intervening gases, particles, and vapors may make a two-color ratio pyrometer more accurate than a single-color pyrometer, but it puts into question any accuracy statements for two-color pyrometers that are much better than 30 degrees C.
Please take my advice. I am not using it anyway. I have mostly retired into the virtual world.
The post, Surprising Gains from PID Gain, first appeared on ControlGlobal.com's Control Talk blog.
We learned in control theory courses that too high a PID gain causes oscillations and can lead to instability. Operators do not like the large sudden changes in PID output from a high PID gain. Operators may see what they think is the wrong valve open in split range control as the setpoint is approached when PID gain dominates the response. Most tuning tests and studies use a setpoint response rather than a load response for judging tuning. A high PID gain that is set to give maximum disturbance rejection in the load response will show overshoot and some oscillation for a setpoint response. Internal Model Control, or any control algorithm or tuning method that considers disturbances on the process output, will see a concern similar to what is observed for the setpoint response, because the load and the setpoint change both appear immediately as inputs to the PID algorithm. There are many reasons why PID gain is unfavorably viewed. Here I try to show you that PID gain is undervalued and underutilized.
First, let’s realize that the immediate feedback correction based on the change in the process variable being controlled can be beneficial in some important cases. The immediate action reduces the deadtime and oscillations from deadband, resolution, and sensitivity limits in the measurement, control valve, and variable frequency drive. The preliminary draft of ISA-PCS-2017-Presentation-Solutions-to-Stop-Most-Oscillations.pdf for the ISA 2017 Process Control and Safety Symposium shows how important PID gain is for stopping oscillations from non-ideal measurements and valves and also from integrating processes.
PID gain does not play as important a role in the balanced self-regulating processes often shown in control theory courses and publications, where the primary process time constant is not very large compared to the deadtime. Consequently, there is more negative feedback action within these self-regulating processes. When the time constant becomes more than four times the deadtime, we consider the process to be near-integrating, in that in the time frame of the PID it appears to ramp, losing self-regulation. For these processes and true integrating processes, PID gain provides the negative feedback action missing in the process to halt or arrest the ramp. Integrating process tuning rules are used, where lambda is an arrest time. Not readily understood is that there is a window of allowable gains: too low a PID gain causes larger and slower oscillations than too high a PID gain. The problem is even more serious and potentially dangerous for runaway processes (highly exothermic reactors). Most loops on integrating and runaway processes have a reset time that is orders of magnitude too small and a PID gain that is an order of magnitude too low. A PID gain greater than 50 may be needed for a highly backmixed polymerization reactor. Many users are uncomfortable with such high gain settings.
For integrating and runaway processes, the PID output must exceed the load disturbance to return the process variable to setpoint. This is more immediately and effectively done by PID gain action. We can often see an immediate improvement in control by greatly increasing the reset time and then the gain. The gain must be greater than twice the inverse of the product of the open loop integrating process gain (%/sec/%) and reset time (sec) to prevent the start of slow oscillations from violation of the low gain limit.
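The low gain limit stated above is simple enough to compute. The following sketch uses made-up numbers for a slow integrating loop (the process gain and reset time are illustrative assumptions, not values from any particular plant):

```python
def min_gain_for_integrator(Ki, Ti):
    """Lower PID gain limit for an integrating process, per the stated rule:
    the gain must exceed twice the inverse of the product of the open-loop
    integrating process gain Ki (%/sec/%) and the reset time Ti (sec).
    Below this, slow rolling oscillations begin."""
    return 2.0 / (Ki * Ti)

# Hypothetical vessel level loop: slow integrator with a long reset time
Ki = 0.0002   # open-loop integrating process gain, %/sec per % output
Ti = 600.0    # reset time, sec
print(min_gain_for_integrator(Ki, Ti))  # ~16.7; tune the PID gain above this
```

Note how a larger reset time lowers the minimum allowable gain, which is why greatly increasing the reset time first, then the gain, often gives the immediate improvement described above.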
As the process variable approaches setpoint, there is an immediate reduction in the contribution to the PID output from the gain action. Reset has no sense of direction and will continue to change the output in the same direction, not reversing until the process variable crosses setpoint and the sign of the error reverses. Operators looking at digital displays waiting for a temperature to rise to setpoint will think a heating valve should still be open if the temperature is just below setpoint, when in fact the cooling valve should be open to prevent overshoot. If the loop waits till the PV crosses setpoint, the correction is too late due to deadtime. PID gain provides the anticipatory action missing in reset action.
A setpoint filter with a time constant equal to the reset time, or a PID structure with proportional action on the process variable instead of on the error, will eliminate overshoot of the setpoint for PID tuning that maximizes disturbance rejection, where the peak error and the integrated error are both inversely proportional to PID gain.
Finally, let’s realize that the use of external-reset feedback allows us to use setpoint rate-of-change limits on the valve signal or secondary loop signal that will prevent the large abrupt jump in PID output from a high PID gain that upsets operators and some related loops. External-reset feedback will also prevent the PID output from changing faster than a valve or secondary loop can respond. No retuning is needed.
These days we personally don’t have time to wait for taking corrective action in our lives and need to seek more anticipation based on where we are going. We also benefit from external-reset feedback. We need to realize the same for PID loops. There is a lot to be gained from PID gain.
The post, How to Motivate Management and Millennials (M&Ms), originally appeared on ControlGlobal.com's Control Talk blog.
A much brighter future for our profession depends upon management providing the funding and support, and upon millennials seeking to improve process performance through the best automation and process control. There are some common approaches to reaching these seemingly very different groups.
Management is focused on the bottom line that may be as short term as quarterly results. If management has only business degrees, this may be the primary and perhaps the only motivation. If management also has a technical degree, there can be an additional motivation to advance the knowledge and technology used to make processes safer and more productive. Technical people are intrigued and attracted to new more powerful developments in technology. An upcoming Control Talk column with Walt Boyes will give considerable insight into how management thinks.
Millennials choose engineering as a major because they are interested in technology and in having a positive impact on people and systems by using and advancing the latest technologies. Unfortunately, students have a negative image of industry as seemingly low tech, routine, and “down and dirty”. Peter Martin aptly discussed this negative view held by engineering graduates of working in industry, and the missing understanding of the opportunities that meet their altruistic motivation, in his July 2017 ISA Interchange post “The Challenges of Attracting Millennial to Industrial Careers”. Another goal of engineers may be to make money and seem important by moving on and becoming a manager.
The use of the best and highest tech hardware and software in automation and process control can yield impressive improvements in process safety and performance that are much faster and less expensive than changing process equipment. Unfortunately, management is often not aware of this as discussed in the May 2017 Control Talk column “The Invisibility of process control”.
What can possibly work to impress both management and millennials are demos that show the benefits and use of new technologies. The “before” and “after” cases should show the benefits of increases in process performance in terms of dollars, with a running quarterly total moving forward showing continuous improvement from knowledge gained. Since time is precious and attention spans are short, the summary of dynamic runs can be presented in terms of trend charts of benefits for changes in demand and supplier feeds. “Seeing is believing.” The reality of being able to adjust to a dynamic world is inspiring. For engineers, a chance to see live demos may increase interest because of the dynamics. The emphasis should be on how fast and beneficial the results are from using the best technology. See the August feature article “Virtual Plant Virtuosity” for how to develop high tech solutions that impress the M&Ms, satisfying altruistic and monetary motivations.
I love M&Ms and there are so many flavors now. See if you can enjoy the M&Ms that determine our profession as much as the M&Ms that satisfy your cravings. Maybe M&Ms can be a totally sweet deal.
The post Webinar Recording: How to Use Key PID Controller Features first appeared on the ISA Interchange blog site.
This educational ISA webinar on key PID controller features was introduced by Greg McMillan and presented by Hector Torres, in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Hector is a senior process and control engineer with Eastman Chemical. Hector is a recipient of ISA’s John McCarney Award for the article “Enabling new automation engineers”.
Héctor Torres, a protégé of the ISA Mentor Program from its inception, provides a detailed view of how to use key PID controller features that can greatly expand what you can achieve. Topics include the setting of anti-reset windup (ARW) limits, the dynamic reset limit, the eight different structures, integral deadband, and the setpoint filter. Feedforward and rate limiting are covered with some innovative application examples.
The post, Insights to Process and Loop Performance, originally appeared on ControlGlobal.com's Control Talk blog.
Here we look at a myriad of metrics on process and control loop performance and show how to see through the complexity and diversity to recognize the commonality and underlying principles. We will see how dozens of metrics simplify to two classes each for the process and the loop. We also provide a concise view of how to compute and use these metrics and what affects them.
Let’s start with process metrics because, while as automation engineers we are tuned into control metrics, our ultimate goal is improvement in the process and thus in process metrics. The improvement in profitability of a process comes down to improving process efficiency and/or capacity. Often these are interrelated, in that an increase in process capacity is often associated with a decrease in process efficiency. Also, an increase in the metrics for a particular part of a process may decrease the metrics for other parts of the process. The following example, cited in the April 2017 Control Talk column “An ‘entitlement’ approach to process control improvement”, is indicative of the need to have metrics and an understanding for the entire process:
“In a recent application of MPC for thermal oxidizer temperature control that had a compound response complicating the PID control scheme, there was a $700K per year benefit clearly seen in reduced natural gas usage. However, the improvement also reduced steam make to a turbo-generator, reducing electricity generated by $300K per year. We reached a compromise of about $400K per year in net benefit because of lost electrical power generation from less steam to the turbo-generators. We spent many hours to align the benefit with measurable accounting for the natural gas reduction and the electrical purchases. Sometimes the loss of benefits is greater than expected. You need to be upfront and make sure you don’t just shift costs to a different cost area.”
Process efficiency can be increased by reducing energy use (e.g., electricity, steam, coolant, and other utilities) and raw materials (e.g., reactants, reagents, additives, and other feeds). The efficiency is first expressed as a ratio of the energy used per unit mass of product produced (e.g., kJ/kg) or per unit energy produced (kJ/kJ), and then ideally in terms of the ratio of cost to revenue by including the cost of the energy used (e.g., $ per kJ) and the value of the product produced (e.g., $ per kg) or energy produced (e.g., $ per kJ). The kJ of energy and kg of mass are running totals, where the oldest value of mass flow or energy flow multiplied by the time interval between measurements is replaced in the total by the current value. A deadtime block can provide the oldest value. The time interval between measurements and the deadtime representative of the time period for the running total should both be chosen to provide a good signal-to-noise ratio. The deadtime block time period should also be chosen to help focus on the source of changes in process efficiency. For batch operations, the time period is usually the cycle time of a key phase in the batch and may simply be the totals at the end of the phase or batch. For continuous operations, I favor a time period that is an operator shift to recognize the key effect of operators on process performance. This time period is also suitable for evaluating other sources of variability, such as the effect of ambient conditions (day-to-night operation and weather) and of feeds, recycle, and heat integration (upstream, downstream, and parallel unit operations). The periods of best operation can be used as a goal to be achieved by smarter instruments, better installations less sensitive to ambient conditions, or smarter controls through procedural automation or state based control, as discussed in the Sept 2016 Control Talk column “Continuous improvement of continuous processes”.
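The deadtime-block running total described above can be sketched as follows. This is a minimal illustration, not a DCS function block: the window length, scan interval, and flow values are hypothetical, and a deque stands in for the deadtime block that supplies the oldest value.

```python
from collections import deque

class RunningTotal:
    """Running total over a fixed time window, mimicking the deadtime-block
    technique: each update adds the newest (flow * interval) increment and
    subtracts the increment that is one window (deadtime) old."""
    def __init__(self, window_sec, interval_sec):
        self.n = int(window_sec / interval_sec)     # samples in the window
        self.dt = interval_sec
        self.buf = deque([0.0] * self.n, maxlen=self.n)
        self.total = 0.0

    def update(self, flow):
        increment = flow * self.dt
        self.total += increment - self.buf[0]  # oldest increment drops out
        self.buf.append(increment)
        return self.total

# Hypothetical 8-hour operator shift window sampled every minute
energy = RunningTotal(8 * 3600, 60)    # kJ total from a kJ/s energy flow
mass = RunningTotal(8 * 3600, 60)      # kg total from a kg/s product flow
for _ in range(480):                   # one full shift of steady operation
    e = energy.update(500.0)           # 500 kJ/s steam duty
    m = mass.update(2.0)               # 2 kg/s product
print(e / m)  # shift efficiency metric in kJ per kg of product -> 250.0
```

Comparing this ratio shift by shift (and multiplying by $ per kJ and $ per kg) is what turns the running totals into the cost-to-revenue metric described above.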
The metrics that affect process capacity are more diverse and complicated. Process capacity can be affected by feed rates, onstream time, startup time, shutdown time, maintenance time, transition time, spectrum of products and their value, recycle, and off spec product. An increase in off spec product that can be recycled can be taken as a loss in product capacity if the raw material feed rate is kept the same or taken as a loss in process efficiency if the raw material feed rate is increased. If the off spec product can be sold as a lower revenue product, the $ per kg must be correspondingly adjusted.
For batch operations, an increase in batch endpoint in terms of kg of product produced and a decrease in batch cycle time, including the time in-between batches, can translate to an increase in process capacity. If a higher endpoint can be reached by holding or running the batch longer, there is a likely increase in process efficiency, assuming a negligible increase in raw material, but there may be an increase or decrease in process capacity. The optimum time to end a batch and move on is best determined by looking at the rate of change of product formation (batch slope) and, if necessary, the rate of change of raw material and energy use. A deadtime block is again used to provide a fast update with a good signal-to-noise ratio for computing the slope of the batch profile and the prediction of the batch endpoint. Of course, whether downstream units for recovery and purification are able to handle an increase in batch capacity must be considered, and their metrics included in the total picture. For example, in ethanol production, a reduction in fermenter cycle time may not translate to an increase in process capacity because of limitations in the distillation columns downstream or in the dryer for recovery of dried solids byproduct sold as animal feed. For more on the optimization of batch endpoints, see the Sept 2012 Control feature article “Getting the Most Out of your Batch”.
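The batch-slope calculation with a deadtime block might look like the minimal sketch below. The profile and delay values are hypothetical; the delay length is the tuning knob that trades update speed against signal-to-noise ratio, as described above.

```python
from collections import deque

def batch_slope(samples, delay_samples, dt):
    """Rate of change of a batch profile computed as (PV - delayed PV)/delay,
    the deadtime-block method: a longer delay smooths noise, a shorter one
    responds faster. Yields None until the delay buffer fills."""
    buf = deque(maxlen=delay_samples)
    for pv in samples:
        if len(buf) == delay_samples:
            yield (pv - buf[0]) / (delay_samples * dt)
        else:
            yield None
        buf.append(pv)

# Hypothetical product concentration profile rising 0.5 units per minute
dt = 1.0                                   # minutes per sample
profile = [0.5 * t for t in range(20)]
slopes = list(batch_slope(profile, 5, dt))
print(slopes[-1])  # -> 0.5; end the batch when this slope flattens out
```

In practice the same computation applied to raw material and energy flows gives the other rates of change needed to judge the optimum time to end the batch.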
The metrics that indicate loop performance can be classified as load response and setpoint response metrics. The load response is often most important, in that the desired setpoint response can be achieved for the best load response by the proper use of PID options. The load response should in nearly all cases be based on disturbances that enter as inputs to the process, whereas many academic and model-based studies are based on disturbances entering on the process output. For self-regulating processes where the process deadtime is comparable to or larger than the process time constant, the point of entry does not matter, because the intervening process time constant does not appreciably slow down input disturbances in the time frame of the PID response (e.g., 2 to 4 deadtimes). However, most of the more interesting temperature and composition control loops in my career did not have a negligible process time constant and in fact had a near-integrating, true integrating, or runaway open loop response.
The load metrics are peak error and integrated error. The peak error is the maximum excursion after a load upset. The integrated error is most often an integrated absolute error (IAE) but can be an integrated square error. If the response is non-oscillatory, the integrated error and the IAE are the same. There are also metrics indicative of oscillations, such as settling time and undershoot. The ultimate and practical limits to the peak error are proportional to the deadtime and inversely proportional to the controller gain, respectively. The ultimate and practical limits to the integrated error are proportional to the deadtime squared and to the ratio of the controller reset time to the controller gain, respectively.
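As a rough, hedged illustration of those proportionalities (not an exact formula for any particular loop), the practical limits can be sketched by scaling against the output shift the controller must make to reject the upset. The required output shift, gain, and reset time below are assumptions of my own for illustration:

```python
def practical_error_limits(dCO, Kc, Ti):
    """Back-of-envelope practical limits from the stated proportionalities:
    peak error is inversely proportional to controller gain, and integrated
    error is proportional to the ratio of reset time to gain.
    dCO: controller output shift (%) needed to reject the load upset
    Kc:  controller gain; Ti: reset time (sec)."""
    peak = dCO / Kc              # % of PV span
    integrated = dCO * Ti / Kc   # % * sec
    return peak, integrated

# Hypothetical upset requiring a 10% output shift, with Kc = 2 and Ti = 120 s
peak, ie = practical_error_limits(10.0, 2.0, 120.0)
print(peak, ie)  # -> 5.0 600.0; doubling Kc would halve both
```

The sketch makes the tuning implication concrete: raising the gain or shortening the reset time (within the allowable window) shrinks both load metrics together.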
For setpoint metrics, there is the time to get close to setpoint, which I call rise time, important for process capacity. I am sure there is a better name, because the metric must be indicative of performance for either an increase or a decrease in setpoint. The other setpoint metrics are overshoot, undershoot, and settling time, which can affect process capacity and efficiency. The use of a setpoint lead-lag or a PID structure that minimizes proportional and derivative action on setpoint changes can reduce overshoot even with good load disturbance rejection tuning. A setpoint lag equal to the reset time (no lead) corresponds to a PID structure of proportional and derivative action on the process variable and integral action on the error (PD on PV and I on E).
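A first-order setpoint filter with the lag set to the reset time can be sketched as below. The step size, reset time, and scan rate are illustrative assumptions; the point is that the controller sees a gradual approach instead of a step, removing the proportional kick and overshoot without touching the load-rejection tuning:

```python
def setpoint_filter(sp_steps, tau, dt):
    """First-order lag applied to the setpoint, with the time constant tau
    set equal to the reset time (the 'no lead' case described above)."""
    y = sp_steps[0]
    a = dt / (tau + dt)   # discrete lag coefficient
    out = []
    for sp in sp_steps:
        y += a * (sp - y)
        out.append(y)
    return out

# Hypothetical step from 50% to 60% with a 120 s reset time and 1 s scan
filtered = setpoint_filter([50.0] * 5 + [60.0] * 600, 120.0, 1.0)
print(round(filtered[-1], 2))  # approaches 60 gradually instead of stepping
```

The equivalence noted above means the same overshoot-free response can be had either with this filter or by selecting the PD-on-PV, I-on-E structure in the PID block.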
See the Sept and Oct 2016 Control Talk blogs “PID Options and Solutions - Part 1” and “PID Options and Solutions - Parts 2 and 3” for a discussion of loop metrics in great detail, including when they are important and how to improve them. Also look at the presentation for the ISA Mentor Program WebExs, “ISA-Mentor-Program-WebEx-PID-Options-and-Solutions.pdf”.
My last bit of advice is to ask your spouse for metrics on your marriage. Minimizing the deadtime while still having a good signal to noise ratio is particularly important. For men, the saying “Happy wife, happy life” I think would work the other way as well. I just need a rhyme.
The post How to Get the Most out of Control Valves first appeared on the ISA Interchange blog site.
The data that is really needed when selecting and sizing a control valve is rarely understood and specified, which leads to excessive variability originating from the valve. In this presentation, ISA mentor Greg McMillan discusses pervasive problems and rampant misconceptions. He then provides guidance—supported by test results—on how to select a good throttling control valve. He also explains PID tuning adjustments and a key PID feature that can be utilized to provide precise, smooth, and fast control.
The post Webinar Recording: How to Get the Most out of Control Valves first appeared on the ISA Interchange blog site.
The post, Fixes for Deadly Deadband, first appeared on ControlGlobal.com's Control Talk blog.
While there are some cases where deadband is helpful, in most applications its effect is extremely detrimental and confusing. Deadband can arise from many sources, either intentionally or inadvertently. Deadband creates deadtime and, under certain conditions, excessive and persistent oscillations.
The increase in loop deadtime is the deadband divided by the rate of change of controller output. The increase in deadtime can increase the peak error and integrated error from a load disturbance. If there are two or more integrators in the system due to integral action in the valve positioner, variable speed drive, controller(s), or process, a limit cycle will develop.
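The relationship in the first sentence above is simple enough to compute directly. The deadband and output rate below are hypothetical numbers chosen only to illustrate the arithmetic:

```python
def deadtime_from_deadband(deadband_pct, output_rate_pct_per_sec):
    """Extra loop deadtime created by deadband: the time the controller
    output takes to traverse the deadband at its current rate of change."""
    return deadband_pct / output_rate_pct_per_sec

# Hypothetical 0.5% valve backlash with the PID output moving at 0.1%/sec
print(deadtime_from_deadband(0.5, 0.1))  # about 5 seconds of added deadtime
```

Note that the added deadtime shrinks as the output moves faster, which is why a higher PID gain (discussed below) reduces the penalty from a given deadband.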
The biggest and most troublesome source of deadband is backlash from an on-off or isolation valve (tight shutoff valve) posing as a throttling valve. The positioner, seeing feedback from the actuator shaft of such rotary valves, often does not realize the internal closure member (e.g., ball or disk) is not responding, due to backlash in the connections between the shaft, stem, and ball or disk, or to shaft windup from seal friction. The positioner diagnostics say everything is fine, even meeting the requirements set by the ISA-75.25.01 standard for measuring valve response. Creative storytelling develops to explain the oscillations in the process.
An on-off or isolation valve offers a great advantage when used in series with a throttling valve. Besides achieving tight shutoff, the placement of a quickly stroked, completely open or closed on-off or isolation valve close-coupled to the connection into the process eliminates the deadtime and any unbalance between ratioed flows during the start and stop of reactants and reagents, enabling more precise composition and pH control. The throttling valve is located at a position that is more accessible for better maintenance and has some straight runs upstream and downstream. The throttling valve straight-run requirements are rather minimal but can give a more consistent relationship between valve position and flow.
For the throttling valve, the best solution is to get rid of the excessive deadband. Given that you are, literally and figuratively, stuck with deadband, principally when the source is a big valve, an increase in the PID gain will reduce the peak error and integrated absolute error (IAE) by increasing the rate of change of the PID output and thus decreasing the additional deadtime from deadband. If there is a limit cycle, increasing the PID gain reduces the amplitude and period of the limit cycle, decreasing the persistent IAE and increasing the ability of downstream volumes to filter out the oscillations. Open loop step tests don’t reveal the additional deadtime but show a decrease in process gain upon a reversal of the direction of the step change. A filter time judiciously set at less than 20% of the total loop deadtime seen in the test can be added to prevent changes in the PID output from noise exceeding the deadband of the valve. For more on the effects of backlash, see the May 2016 Control article “How to specify control valves that don’t compromise control” and the YouTube recording to be posted in June on the “ISA Mentor Program Webinar Playlist” of my ISA Mentor WebEx “ISA-Mentor-Program-WebEx-Best-Control-Valve-Rev0.pdf”. The article, white paper, and presentation also show that an increase in PID gain eliminates an oscillation from poor positioner sensitivity by making changes in the valve signal larger than the sensitivity limit.
A simple algorithm can be configured to increase the change in PID output by an amount slightly less than the deadband when the output changes direction and the change is greater than the noise band seen in the PID output. The kick of the output upon a change in direction eliminates the deadtime and lost motion from backlash. The practical issue is the deadband may vary with valve position, time, operating conditions, and positioner tuning. These algorithms are often used for Model Predictive Control besides PID control.
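One possible form of such an algorithm is sketched below. The 90% kick factor and the noise band are illustrative choices of mine, not a vendor implementation, and as noted above the real deadband varies with position, time, and operating conditions:

```python
class DeadbandKick:
    """Output compensator sketch: when the PID output reverses direction by
    more than the noise band, add a kick slightly smaller than the deadband
    so the closure member crosses the backlash without a deadtime penalty."""
    def __init__(self, deadband, noise_band):
        self.kick = 0.9 * deadband   # slightly less than the deadband
        self.noise = noise_band
        self.last_out = None
        self.direction = 0           # +1 rising, -1 falling, 0 unknown

    def compensate(self, out):
        if self.last_out is None:
            self.last_out = out
            return out
        delta = out - self.last_out
        adjusted = out
        if abs(delta) > self.noise:               # ignore noise-sized moves
            new_dir = 1 if delta > 0 else -1
            if self.direction != 0 and new_dir != self.direction:
                adjusted = out + new_dir * self.kick  # punch through backlash
            self.direction = new_dir
        self.last_out = out
        return adjusted

comp = DeadbandKick(deadband=0.5, noise_band=0.05)
outs = [comp.compensate(v) for v in [50.0, 51.0, 52.0, 51.0]]
print(outs)  # only the reversal at the last step gets kicked lower
```

The noise-band test is what keeps the compensator from chattering; the practical weakness remains estimating the deadband itself.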
A lead-lag on the valve signal can reduce the effect of deadband, resolution, and positioner sensitivity limits, but the valve movement can quickly become erratic from noise for a lead much larger than the lag time.
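A discrete lead-lag of the usual form (lag-filtered input plus a lead boost) can be sketched as follows. The 2:1 lead-to-lag ratio and step size are illustrative; a much larger ratio would amplify noise into the erratic movement warned about above:

```python
def lead_lag(signal, lead, lag, dt):
    """Discrete lead-lag applied to the valve signal: the output is the
    lag-filtered input plus (lead/lag) times the unfiltered difference,
    so changes are initially overdriven to punch through deadband and
    resolution limits, then settle back to the input value."""
    y = signal[0]            # lag filter state
    a = dt / (lag + dt)      # discrete lag coefficient
    out = []
    for x in signal:
        y += a * (x - y)
        out.append(y + (lead / lag) * (x - y))
    return out

# Hypothetical 1% step in the PID output with lead = 2 * lag
boosted = lead_lag([0.0] * 2 + [1.0] * 50, lead=4.0, lag=2.0, dt=1.0)
print(round(max(boosted), 3), round(boosted[-1], 3))  # -> 1.667 1.0
```

The initial overshoot of the valve signal past the final value is the feature, not a bug: it is what carries the actuator through the lost motion.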
Often deadband is a parameter in a Variable Speed Drive (VSD) setup, used to reduce changes in speed from noise. Often the deadband is set too large because of a lack of understanding of its detrimental effect. The deadband should be just slightly larger than one-half the noise band seen in the VSD setpoint.
Dynamic simulation with a backlash-stiction block and a PID with external reset feedback can show this and much more. The virtual plant is my lab to rapidly explore, discover, prototype and test solutions.
I recently went to a Grateful Dead tribute band concert. The “dead heads” were grateful the music of the band was not dead. Keep your control system alive by not succumbing to the deadly deadband.