Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.
The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control address how to improve evaporator concentration control and minimize steam consumption.
The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions about process testing to correctly generate process models for an MPC. I know that MPC process inputs must be perturbed to allow identification and modeling of each process input and output relationship.
Figure 1: Variables for model predictive control of a concentrator
Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or simply let it be as usual in the process since this disturbance is always present?
If it is not difficult, you can try to suppress the disturbance; that can help the model identification for the feed and steam. To get a model for the disturbance, you will want movement of the disturbance outside the noise level (four to five times the noise band is best). If possible, this may require making changes upstream (for example, to the LIC or FIC setpoint).
What about the steam flow? Should it be maintained at a fixed flow (FIC in manual with a fixed percent-open FCV) while perturbing the feed flow, and likewise should the feed flow be fixed while perturbing the steam flow? I know some MPC software packages excite their outputs with a PRBS (pseudo random binary sequence) practically simultaneously while the process test is being executed, and mathematically extract the input and output relationships to generate the model.
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.
Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it takes more setup effort to get the magnitudes and the average switching interval right. One option is to start with a manual test and switch to PRBS after you have a feel for the process and the right step sizes. Note that a pretest should already have been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in Section 9.3 of the McGraw-Hill Process/Industrial Instruments and Controls Handbook Sixth Edition.
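As a rough illustration of the setup effort mentioned above, a PRBS move sequence with a chosen magnitude and average switching interval can be sketched in a few lines of Python. This is not taken from any MPC package; the function name and parameters are hypothetical:

```python
import random

def prbs(n_samples, magnitude, avg_switch_interval, seed=0):
    """Pseudo random binary sequence of +/- magnitude moves.

    avg_switch_interval is the expected number of samples between
    switches (switch probability = 1 / avg_switch_interval).
    """
    rng = random.Random(seed)          # seeded so the test signal is repeatable
    level, seq = magnitude, []
    for _ in range(n_samples):
        if rng.random() < 1.0 / avg_switch_interval:
            level = -level             # toggle the move direction
        seq.append(level)
    return seq

# A 200-sample test sequence: +/-2% moves, roughly 10 samples between switches
signal = prbs(n_samples=200, magnitude=2.0, avg_switch_interval=10)
```

The magnitude would be set several times larger than the noise band, and the average switching interval on the order of the process response time, consistent with the manual-step guidance above.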
What are the pros and cons for process testing if the manipulated variables are perturbed through the FIC setpoints (closed loop) or through the FIC outputs (open loop)? Or simply: should it be done according to the MPC design? What are the pros and cons if, in the final design, the FCVs are directly manipulated by the MPC block or through FICs as the MPC’s downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them.
Correct – test according to the MPC design. Note that sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea as justifying a cascade: have the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn’t have to). The faster executing inner loop also helps with linearization (for example, valve position to flow).
See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part-time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.
The post Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Understanding the difficulties of batch processing, and the new technologies and techniques on offer, can lead to solutions through better automation and control that provide much greater increases in efficiency and capacity than are usually obtained for continuous processes. Industry veteran and author Greg McMillan discusses analyzing batch data, elevating the role of the operator, tuning key control loops, and setting up simple control strategies to optimize batch operations. The presentation concludes with an extensive list of best practices.
The post What Types of Process Control Models are Best? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Daniel Rodrigues.
Daniel Rodrigues is one of our newest protégés in the ISA Mentor Program. Daniel has been working in research and development for Norsk Hydro Brazil since 2016.
What is your take on process control based on phenomenological models (using first-principle models to guide the predictive part of controllers)? I am aware of the exponential growth of complexity in these, but I’d also like to have an experienced opinion regarding the reward/effort of these.
I prefer first principle models to gain a deeper understanding of cause and effect, process relationships, process gains, and the response to abnormal situations. Most of my control system improvements start with first principle models. The incorporation of the actual control system (digital twin) to form a virtual plant has made these models a more powerful tool. However, most first principle models use perfectly mixed volumes, neglecting mixing delays, and are missing transportation delays and automation system dynamics. For pH systems, including all of the non-ideal dynamics from piping and vessel design, control valves or variable speed pumps, and electrodes is particularly essential. I have consequently partitioned the total vessel volume into a series of plug flow and perfectly back mixed volumes to model the mixing dead times that originate from the agitation pattern and the relative location of input and output streams. I add a transportation delay for reagent piping and dip tubes due to gravity flow or blending. For extremely low reagent flows (e.g., gph), I also add an equilibration time in the dip tube after closure of a reagent valve, associated with migration of the reagent into the process followed by migration of process fluid back up into the dip tube. I add a transportation delay for electrodes in piping. I use a variable dead time block and time constant blocks in series to show the effect of velocity, coating, age, buffering and direction of pH change on electrode response. I use a backlash-stiction block and a variable dead time block to show the resolution and response time of control valves. The important goal is to get the total loop dead time and secondary lag right.
By having the much more complete model in a virtual plant, the true dynamic behavior of the system can be investigated and the best control system performance achieved by exploring, discovering, prototyping, testing, tuning, justifying, deploying, commissioning, maintaining and continuously improving, as described in the Control magazine feature article Virtual Plant Virtuosity.
Figure 1: Virtual Plant that includes Automation System Dynamics and Digital Twin Controller
Model predictive control is much better at ensuring you have the actual total dynamics, including dead time, lags and lead times, at a particular operating point. However, the models do not include the effect of backlash-stiction or actuator and positioner design on valve response time, and consequently on total loop dead time, because by design the steps are made several times larger than the deadband and resolution or sensitivity limits of the control valve. Also, the models identified are for a particular operating point and normal operation. To cover different modes of operation and production rates, multiple models must be used, requiring logic for a smooth transition or recently developed adaptive capabilities. I see an opportunity to use the results from the identification software used by MPC to provide a more accurate dead time, lag time and lead time by inserting these in blocks on the measurement of the process variable in first principle models. The identification software would be run for different operating points and operating conditions, enabling the addition of supplemental dynamics in the first principle models. This addresses the fundamental deficiency of dead times, lag times and lead times being too small in first principle models.
Statistical models are great at identifying unsuspected relationships, disturbances and variability in the process and measurements. However, these are correlations and not necessarily cause and effect. Also, continuous processes require dynamic compensation of each process input so that it matches, timewise, the dynamic response of each process output being studied. This is often not stated in the literature and is a formidable task. Some methods propose using a dead time on the input, but for large time constants the dynamic response of the predicted output is in error during a transient. These models are designed more for steady state operation, but this is often an ideal situation not realized due to disturbances originating from the control system due to interactions, resonance, tuning, and limit cycles from stiction, as discussed in the Control Talk blog The most disturbing disturbances are self-inflicted. Batch processes do not require dynamic compensation of inputs, making data analytics much more useful in predicting batch end points.
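To make the dynamic compensation idea concrete, here is a minimal sketch, under the assumption of evenly spaced samples, that delays a process input by a dead time and passes it through a first-order lag so it lines up in time with the output it influences. The function name and parameterization are illustrative, not from any analytics package:

```python
def dynamic_compensation(x, dead_time_samples, lag_samples):
    """Time-align a process input series with a process output by applying
    the dead time and first-order lag of the input-to-output response."""
    # Dead time: shift the series right by a fixed number of samples,
    # padding the front with the initial value.
    delayed = [x[0]] * dead_time_samples + list(x[:len(x) - dead_time_samples])
    # First-order lag as a discrete exponential filter.
    alpha = 1.0 / (lag_samples + 1.0)
    y, out = delayed[0], []
    for v in delayed:
        y += alpha * (v - y)
        out.append(y)
    return out

# A step at sample 5, seen through 3 samples of dead time and a 4-sample lag
compensated = dynamic_compensation([0.0] * 5 + [1.0] * 15, 3, 4)
```

As the text notes, a dead time alone is not enough when the time constant is large; the lag term above is what keeps the compensated input from leading the predicted output during a transient.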
I think there is a synergy to be gained by using MPC to find missing dynamics and statistical process control to help track down missing disturbances and relationships that are subsequently added to the first principle models. Recent advances in MPC capability (e.g., Aspen DMC3) to automatically identify changes in process gain, dead time and time constant, including the ability to compute and update them online based on first principles, have opened the door to increased benefits from using MPC to improve first principle models and vice versa. Multivariable control and optimization where there are significant interactions and multiple controlled, manipulated and constraint variables are best handled by MPC. The exception is very fast systems where the PID controller is directly manipulating control valves or variable frequency drives for pressure control. Batch end point prediction might also be better implemented by data analytics. However, in all cases the first principle model should be accordingly improved and used to test the actual configuration and implementation of the MPC and analytics, and to provide training of operators extended to all engineers and technicians supporting plant operation.
I would think for research and development, the ability to gain a deeper and wider understanding of different process relationships for different operating conditions would be extremely important. This knowledge can lead to process improvements and to better equipment and control system design. For pH and biological control systems, this capability is essential.
For a greater perspective on the capability of various modeling and control methodologies, see the ISA Mentor Program post with questions by protégé Danaca Jordan and answers by Hunter Vegas and me: What are the New Technologies and Approaches for Batch and Continuous Control?
The post, Many Objectives, Many Worlds of Process Control first appeared on ControlGlobal.com's Control Talk blog.
In many publications on process control, the common metric you see is integrated absolute error for a step disturbance on the process output. In many tests for tuning, setpoint changes are made and the most important criterion becomes overshoot of the setpoint. Increasingly, oscillations of any type are looked at as inherently bad. What is really important varies because of the different loops and types of processes. Here we seek to open minds and develop a better understanding of what is important.
– Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health
– Total amount of off-spec product to enable closer operation to the optimum setpoint
– Interaction with heat integration and recycle loops in hydrocarbon gas unit operations
– Batch cycle time, startup time, transition time to new products and operating rates
– Wasted energy-reactants-reagents, poor cell health (high osmotic pressure)
– Passing of changes in input flows to output flows upsetting downstream unit ops
– Resonance, interaction and propagation of disturbances to other loops
* FRV is the Final Resting Value of the PID output. Overshoot of the FRV is necessary for setpoint and load response in integrating and runaway processes. However, for self-regulating processes not involving highly mixed vessels (e.g., heat exchangers and plug flow reactors), aggressive action in terms of PID output can upset other loops and unit operations that are affected by the flow manipulated by the PID. Not recognized in the literature is that external-reset feedback of the manipulated flow enables setpoint rate limits to smooth out changes in manipulated flows without affecting the PID tuning.
– Fast self-regulating responses, interactions and complex secondary responses with sensitivity to SP and FRV overshoot, split range crossings and utility interactions.
– Important loops tend to have slow near or true integrating and runaway responses with minimizing peak and integrated errors and rise time as key objectives.
– Important loops tend to have fast near or true integrating responses with minimizing peak and integrated errors and interactions as key objectives.
– Fast self-regulating responses and interactions with propagation of variability into the product (little to no attenuation of oscillations by back mixed volumes) and extreme sensitivity to variability and resonance. Loops (particularly for sheets) can be dead time dominant due to transportation delays unless there are heat transfer lags.
– Most important loops tend to have slow near or true integrating responses with extreme sensitivity to SP and FRV overshoot, split range crossings and utility interactions. Load disturbances originating from cells are incredibly slow and therefore not an issue.
A critical insight is that most disturbances are on the process input, not the process output, and are not step changes. The fastest disturbances are generally flow or liquid pressure, but even these have an 86% response time of at least several seconds because of the 86% response time of valves and the tuning of PID controllers. The fastest and most disruptive disturbances are often manual actions by an operator or setpoint changes by a batch sequence. Setpoint rate limits and a 2 Degrees of Freedom (2DOF) PID structure with Beta and Gamma approaching zero can eliminate much of the disruption from setpoint changes by slowing down changes in the PID output from proportional and derivative action. A disturbance to a loop can be considered fast if it has an 86% response time less than the loop dead time.
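The 2DOF structure can be sketched as a discrete PID in which the setpoint weights Beta and Gamma apply only to the proportional and derivative modes, so with both near zero a setpoint step reaches the output only through slow integral action. The class and parameter names below are illustrative, not from any vendor implementation:

```python
class TwoDOFPID:
    """Discrete 2DOF (beta/gamma) PID sketch. With beta = gamma = 0,
    setpoint changes reach the output only through the integral mode,
    removing the proportional and derivative kick from SP steps."""

    def __init__(self, kc, ti, td, dt, beta=0.0, gamma=0.0):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.beta, self.gamma = beta, gamma
        self.integral = 0.0
        self.prev_d_err = None

    def update(self, sp, pv):
        p_err = self.beta * sp - pv    # proportional mode sees weighted SP
        i_err = sp - pv                # integral mode always sees full error
        d_err = self.gamma * sp - pv   # derivative mode sees weighted SP
        self.integral += i_err * self.dt / self.ti
        if self.prev_d_err is None:
            self.prev_d_err = d_err
        deriv = self.td * (d_err - self.prev_d_err) / self.dt
        self.prev_d_err = d_err
        return self.kc * (p_err + self.integral + deriv)
```

For a unit setpoint step with the PV unchanged, the beta = gamma = 0 controller moves its output only by the small integral contribution of one execution, while a conventional (beta = 1) controller immediately adds the full proportional kick.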
If you would like to hear more on this, check out the ISA Mentor Program Webinar Recording: PID Options and Solutions Part 1.
If you want to be able to explain this to young engineers, check out the dictionary for translation of slang terms in the Control Talk Column “Hands-on Labs build real skills.”
The post How to Get Rid of Level Oscillations in Industrial Processes first appeared on the ISA Interchange blog site.
Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on effectively reducing evaporator level oscillations from an upstream batch operation, so that the level controller can see the true level trajectory, represent a widespread concern in chemical plants where the front end for conversion has batch operations and the back end for separation has continuous operations.
For the MPC application I need to build a smoothed moving mean from a batch level to use as a controlled variable for my MPC, so the simple moving average is done as depicted below. However, I need to smooth the signal further (since there is still some signal ripple). I tried a low-pass filter, achieving some improvement as seen in Figure 1. But perhaps you know a better way to do it, or I simply need to increase the filter time.
Figure 1: Old Level Oscillations (blue: actual level and green: level with simple moving mean followed by simple moving mean + first order filter)
I use rate limiting when a ripple is significantly faster than a true change in the process variable. The velocity limit would be the maximum possible rate of change of the level. The velocity limit should be turned off when maintenance is being done and possibly during startup or shutdown. The standard velocity limit block should offer this option. A properly set velocity limit introduces no measurement lag. A level system (any integrator) is very sensitive to a lag anywhere.
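A minimal sketch of such a rate (velocity) limit, assuming evenly spaced samples and a hypothetical function name, shows how fast ripple is clipped while slower true changes pass through unchanged:

```python
def velocity_limit(signal, max_rate, dt):
    """Rate-of-change limiter: each sample may move from the previous
    output by at most max_rate * dt. Ripple faster than the maximum
    possible rate of change of the true level is clipped, while changes
    slower than the limit pass through with no added lag."""
    out = [signal[0]]
    for v in signal[1:]:
        step = max(-max_rate * dt, min(max_rate * dt, v - out[-1]))
        out.append(out[-1] + step)
    return out

# Ripple of +/-1 around 50 is flattened by a 0.1/sample limit
flat = velocity_limit([50.0, 51.0, 49.0, 51.0, 49.0], max_rate=0.1, dt=1.0)

# A slow ramp within the limit is passed through untouched
slow = velocity_limit([0.0, 0.05, 0.10], max_rate=0.1, dt=1.0)
```

This is the key contrast with a conventional filter: the limiter introduces no measurement lag for legitimate changes, which matters for the lag-sensitive integrating level response noted above.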
If the oscillation stops when the controller is in manual, the oscillation could be from backlash or stiction. In your case, the controller appears to be in auto with a slow rolling oscillation possibly due to a PID reset time being too small.
I did a Control Talk blog, What are good signal filtering tips, that discusses filtering advice from various experts besides my intelligent velocity limit.
In many cases, I’ve seen signals overly filtered. Often, if the filtered signal looks good to your eye, it’s too much filtering. As Michel Ruel states: if the period is known, a moving average (sum of the most recent N values divided by N) will nearly completely remove a uniform periodic cycle. So the issue is how much lag is introduced. Depending on the MPC, one may be able to specify variable CV weights as a function of the magnitude of the error, which will decrease the amount of MV movement when the CV weight is low; or the level signal could be brought in as a CV twice, with different tuning or filtering applied to each.
Since the oscillation is uniform in period and amplitude, the moving average as described by Michel Ruel is the best starting point. Any subsequent noise from nonuniformity can be removed by an additional filter, but nearly all of this filter time becomes equivalent dead time in near and true integrating processes. You need to be careful that the reset time is not too small as you decrease the controller gain, whether due to filtering or to absorb variability. The product of the PID gain and reset time should be greater than twice the inverse of the integrating process gain (1/sec) to prevent slow rolling oscillations that decay gradually. Slide 29 of the ISA WebEx on PID Options and Solutions gives the equations for the window of allowable PID gains. Slide 15 shows how to estimate the attenuation of an oscillation by a filter. The WebEx presentation and discussion are in the ISA Mentor Program post How to optimize PID controller settings.
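As a quick numerical illustration of Michel Ruel’s point, a moving average whose window equals one full period essentially cancels a uniform cycle. The signal values here are made up for illustration:

```python
import math

def moving_average(signal, window):
    """Simple moving average; a window equal to one full oscillation
    period nearly cancels a uniform periodic cycle."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A level signal of 50% with a uniform +/-5% cycle of 20-sample period
period = 20
raw = [50.0 + 5.0 * math.sin(2.0 * math.pi * i / period) for i in range(200)]
filtered = moving_average(raw, window=period)
```

After the first full window, the filtered value sits essentially flat at 50%, because any 20 consecutive samples of a 20-sample sinusoid sum to zero; the cost, as noted above, is the lag (and equivalent dead time) introduced by the window.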
If you need to minimize dead time introduced by filtering, you could develop a smarter statistical filter such as cumulative sum of measured values (CUSUM). For an excellent review of how to remove unwanted data signal components, see the InTech magazine article Data filtering in process automation systems.
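One common form of CUSUM, the tabular version, can be sketched as follows; the target and slack values are illustrative, and real applications would scale them from the measurement noise:

```python
def cusum(signal, target, slack):
    """Tabular CUSUM: hi accumulates deviations above target + slack,
    lo accumulates deviations below target - slack. Both hover near
    zero for noise around the target and grow steadily after a real
    sustained shift, with far less lag than heavy low-pass filtering."""
    hi = lo = 0.0
    hi_path, lo_path = [], []
    for v in signal:
        hi = max(0.0, hi + (v - target - slack))
        lo = max(0.0, lo + (target - slack - v))
        hi_path.append(hi)
        lo_path.append(lo)
    return hi_path, lo_path

# Flat at 10 for 10 samples, then a sustained shift to 12
hi_path, lo_path = cusum([10.0] * 10 + [12.0] * 10, target=10.0, slack=0.5)
```

A threshold on the accumulated statistic (rather than on the raw signal) is what flags the shift, which is why CUSUM can detect small sustained changes without the dead time penalty of a long filter.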
My experience is that most times a cycle in a disturbance flow is already causing cycling in other variables (due to the multivariable nature of the process). And advanced control, including MPC, will not significantly improve the situation and may make it worse. So it is best to fix the cycle before proceeding with advanced control. Making a measured cyclic disturbance a feedforward to MPC likely won’t help much. MPC normally assumes the current value of the feedforward variables stays constant over the prediction horizon. What you’d want is to have the future prediction include the cycle. Unfortunately this is not easily done with the MPC packages today.
Often, levels are controlled by a PID loop, not in the MPC. The exception can be if there are multiple MVs that must be used to control the level (e.g., multiple outlet flows), or the manipulated flow is useful for alleviating a constraint (see the handbook). Another exception is if there is significant dead time between the flow and the level.
Thank you for the support. I think the ISA Mentor Program resources are a truly elite support team. By the way, I have already read the blogs about signal filtering.
My comments and clarifications:
Figure 2: New Level Oscillations (blue: actual level and green: level with Ruel moving average)
The post Webinar Recording: Loop Tuning and Optimization first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program.
In this ISA Mentor Program presentation, Michel Ruel, a process control expert and consultant, provides insight and guidance as to the importance of optimization and how to achieve it through better PID control.
The post How to Optimize Industrial Evaporators first appeared on the ISA Interchange blog site.
Which criteria should I follow to define the final control strategy with model predictive control (MPC) in an existing PID strategy? Only one MPC for all existing PIDs? Or maybe 1 MPC + 1 PID, or 1 MPC + 2 PIDs? What are the criteria to make the correct decision? What is the step-by-step procedure to deploy the advanced control in the real process in the safest way? What are your hints, tips, advice and experiences regarding MPC implementations?
PID control of a double-effect evaporator
In general, you try to include all of the controlled variables (CVs), manipulated variables (MVs), disturbance variables (DVs), and constraint variables in the same MPC unless the equipment is unrelated, there is a great difference in time horizons, or there is a cascade control opportunity like we see with kiln MPC control, where a slower MPC with more important controlled variables sends setpoints to a secondary MPC for faster controlled variables. For your evaporator control, this does not appear to be the case.
We first discuss advanced PID control and its common limitations before moving on to MPC.
For optimization, a PID valve position controller could maximize production rate by pushing the steam valve to its furthest effective throttle position. As far as increasing efficiency in terms of minimizing steam use, this would generally be achieved by tight concentration control that allows you to operate closer to the minimum concentration spec. The level and concentration responses would be true and near integrating. In both cases, PID integrating process tuning rules should be used. Do not decrease the PID gain computed by these rules without proportionally increasing the PID reset time. The product of the PID gain and reset time must be greater than twice the inverse of the integrating process gain to prevent slow rolling oscillations, a very common problem. Often the reset time is two or more orders of magnitude too small because the user decreased the PID gain due to noise or thinking oscillations were caused by too high a PID gain.
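A back-of-the-envelope check of this rule can be sketched in a couple of lines; the integrating process gain value below is hypothetical, chosen only to show the arithmetic:

```python
def min_reset_time(pid_gain, integrating_gain):
    """Minimum reset time (seconds per repeat) for a PID on a near or
    true integrating process, from the rule that the product of PID
    gain and reset time should exceed twice the inverse of the
    integrating process gain (1/sec) to avoid slow rolling oscillations."""
    return 2.0 / (pid_gain * integrating_gain)

ki = 0.001  # integrating process gain, 1/sec (hypothetical value)
print(min_reset_time(pid_gain=5.0, integrating_gain=ki))  # roughly 400 s
print(min_reset_time(pid_gain=2.5, integrating_gain=ki))  # roughly 800 s
```

Note how halving the PID gain doubles the minimum reset time, which is exactly why decreasing the gain without proportionally increasing the reset time invites the slow rolling oscillations described above.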
I don’t see constraint control for a simple evaporator, but if there were constraints, an override controller would be set up for each. However, only one constraint would effectively govern operation at a given time via signal selection. Also, the proper tuning of override controllers and valve position controllers is not well known. Furthermore, the identification of dynamics for feedback and particularly feedforward control typically requires the expertise of a specialist. Often comparisons are done showing how much better model predictive control is than PID control without good identification and tuning of the feedback and feedforward control parameters.
While optimization limitations and typical errors in identification and tuning push your case toward the use of MPC, here are the best practices for PID control of evaporators.
Model predictive control software often does a good job of identifying the dynamics and automatically incorporating them into the controller. Also, it can simultaneously handle multiple constraints with predictive capability as to violation of constraints. Furthermore, a linear program or other optimizer built into the MPC can find and achieve the optimum intersection of the minimum and maximum values of controlled, constraint, and manipulated variables plotted on a common axis of the manipulated variables.
I have asked for more detailed advice on MPC from Mark Darby, a great new resource, who wrote the MPC sections for the McGraw-Hill handbook Hunter and I just finished.
It is normally best to keep PID controls in place for basic regulatory control if they perform well, which may require retuning or reconfiguration of the strategy. Your case is getting into advanced control and optimization, where the advantage shifts to MPC. Multiple interactions and measured disturbances are better handled by MPC than by PID decoupling and feedforward control. First principle models should be used to compute smarter disturbance variables, such as solids feed flow rather than separate feed flow and feed concentration disturbance variables. Override control and valve position control schemes are better handled by MPC. More general optimization is also better done with an MPC. Remember to include PID outputs to valves as constraint variables if they can saturate in normal operation. If a valve is operated close to a limit (e.g., 5% or 95%), it may be better to have the MPC manipulate the valve signal directly, using signal characterization based on the installed flow characteristic as needed to linearize the response.
Here are some MPC best practices from the Process/Industrial Instruments and Controls Handbook, Sixth Edition, by Gregory K. McMillan and Hunter Vegas (co-editors), scheduled to be published in early 2019. This sixth edition is revolutionary in having nearly 50 industry experts focus on the steps needed in all aspects of an automation project to maximize the return on investment.
The post How to Calibrate a Thermocouple first appeared on the ISA Interchange blog site.
Daniel Brewer, one of our newest protégés, has over six years of industry experience as an I&E technician. He attended the University of Kansas process instrumentation and control online courses. Daniel’s questions focus on aspects affecting thermocouple accuracy.
How do you calibrate a thermocouple transmitter? How do you simulate a thermocouple? When do you use a zero degree reference junction? What if your measuring junction temperature varies?
Most people use a thermocouple simulator to calibrate temperature transmitters. You can usually set them to generate a wide selection of thermocouple types. Just make sure the thermocouple lead you use to connect the simulator to the transmitter is the right kind of wire.
“Calibrating” the thermocouple is another matter, because realistically it either works or it doesn’t. You can pull it and put it in a bath, though very few people actually do that. However, if the measurement is critical, most will take the time to either put the thermocouple in a bath or dry block, or at least cross check the reading against another thermocouple or some other means.
The zero degree junction is a bit more complicated. Basically, any time two dissimilar metals are connected, a slight millivolt signal is generated. That is what a thermocouple is: two dissimilar metals welded together that generate varying voltages depending on the temperature at the junction. When you run a thermocouple circuit, you try to use the same metals as the thermocouple for the whole circuit; that is, you run thermocouple wire that matches the thermocouple and you use special thermocouple terminal blocks of the same kind. This eliminates any extra junctions because the same metal is always connected to itself. However, at some point you have to hook up to some kind of device that has copper terminal blocks (transmitter, indicator, etc.). Unfortunately, this creates another thermocouple junction where the copper touches the wires. That junction will impact the reading and will also fluctuate with temperature, so the error will be variable.
To fix this, most devices have a cold junction compensation circuit built in that automatically senses the temperature of the terminal block and subtracts the effect from the reading. Nearly every transmitter and readout device has it built in as a standard feature now; only older equipment would lack it.
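As a rough sketch of what that compensation circuit does, assuming a constant type K sensitivity of about 41 µV/°C (a real device uses the full NIST polynomial tables, so this linear version and its function names are only illustrative):

```python
# Illustrative sketch of cold junction compensation (CJC) for a type K
# thermocouple. A constant Seebeck coefficient is only a rough linear
# approximation; real transmitters use the NIST reference polynomials.

SEEBECK_UV_PER_C = 41.0  # approximate type K sensitivity, microvolts per deg C

def measured_emf_uv(hot_junction_c, terminal_block_c):
    """EMF seen at the copper terminals: the hot junction EMF minus the
    parasitic EMF generated where the TC wires meet the copper terminals."""
    return SEEBECK_UV_PER_C * (hot_junction_c - terminal_block_c)

def compensated_temperature_c(emf_uv, terminal_block_c):
    """Add back the terminal block (cold junction) temperature sensed by
    the transmitter's built-in CJC sensor."""
    return emf_uv / SEEBECK_UV_PER_C + terminal_block_c

emf = measured_emf_uv(hot_junction_c=300.0, terminal_block_c=25.0)
print(compensated_temperature_c(emf, 25.0))  # recovers 300.0
```

Without the compensation, the reading would be in error by whatever the terminal block temperature happens to be, and that error would drift with room temperature.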
The error from a properly calibrated smart temperature transmitter with the correct span is generally negligible compared to the noise and errors from the sensor, signal wiring and connections. The use of Class 1 special grade instead of Class 2 standard grade thermocouples and extension lead wires enables an accuracy that is 50 percent better. The use of thermocouple input cards instead of smart transmitters introduces large errors due to the large spans and the inability to individualize the calibrations.
Thermocouple (TC) drift can vary from 1 to 20 degrees Fahrenheit per year and the repeatability can vary from 1 to 8 degrees Fahrenheit depending upon the TC type and application conditions. For critical operations demanding high accuracy, the frequency of sensor calibrations needed is problematic. While a dry block calibrator is faster than a wet bath and can cover a higher temperature range, the removal of the sensor from the process is disruptive to operations, and the time required compared to a simple transmitter calibration is still considerable. The best bet is a single point temperature check to compensate for the offset due to drift and manufacturing tolerances.
In a distillation column application, operations were perplexed and more than annoyed at the terrible column performance when the thermocouple was calibrated or replaced. It turns out operations had homed in on a temperature setpoint that had effectively compensated for the offset in the thermocouple measurement. Even after realizing the need for a new setpoint due to a more accurate thermocouple, it would take months to years to find the best setpoint.
Temperature is critical for column control because it is an inference of composition. It is also critical for reactor control because the reaction rate that determines process capacity, and the selectivity that sets process efficiency and product quality, are greatly affected by temperature. In these applications where the operating temperature is below 400 degrees Fahrenheit, a resistance temperature detector (RTD) is a much better choice. Table 1 compares the performance of a thermocouple and RTD.
Table 1: Temperature Sensor Precision, Accuracy, Signal, Size and Linearity
Stepped thermowells should be specified with an insertion length greater than five times the tip diameter (L/D > 5) to minimize the conduction error from heat flowing between the thermowell tip and the pipe or equipment connection, and an insertion length less than 20 times the tip diameter (L/D < 20) to minimize vibration from wake frequencies. Supplier calculations on length should be done to confirm that heat conduction error and vibration damage are not a problem. Stepped thermowells reduce the error and damage and provide a faster response. Spring loaded grounded thermocouples as seen in Figure 1, with minimum annular clearance between the sheath and thermowell interior walls, provide the fastest response and minimize the errors introduced by the sensor tip temperature lagging the actual process temperature.
Figure 1: Spring loaded compression fitting for sheathed TC or RTD
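The L/D rules of thumb above can be captured in a simple screening check. The function name and units are ours, and this does not replace the supplier wake frequency and stress calculations:

```python
def thermowell_length_ok(insertion_length, tip_diameter):
    """Screen an insertion length against the rule-of-thumb window from
    the text: L/D > 5 to limit heat conduction error at the tip, and
    L/D < 20 to limit vibration from wake frequencies. A supplier
    calculation (e.g., per the wake frequency standard) is still required."""
    ratio = insertion_length / tip_diameter
    return 5.0 < ratio < 20.0

# Same units for both arguments (e.g., inches): L/D = 10 passes,
# L/D = 4 risks conduction error, L/D = 25 risks vibration damage.
print(thermowell_length_ok(10.0, 1.0))
```

This only screens candidates; the final length still depends on fluid velocity, density, and thermowell material per the supplier calculation.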
Thermowell material must provide corrosion resistance and, if possible, a thermal conductivity that minimizes conduction error or response time, whichever is most important. The tapered tip of the thermowell must be close to the center line of the pipe, and the tapered portion of the thermowell completely past the equipment wall including any baffles. For columns, the location showing the largest and most symmetrical change in temperature for an increase and decrease in manipulated flow should be used. Simulations can help find this, but it is wise to have several connections to confirm the best location by field tests. The tip of the thermowell must see the liquid, which may require a longer extension length or mounting on the opposite side of the downcomer to avoid the tip being in the vapor phase due to the drop in level at the downcomer.
For TCs above 600 degrees Celsius, ensure the sheath material is compatible with the TC type. For TCs above the temperature limit of sheaths, use the ceramic material with the best thermal conductivity and a design that minimizes measurement lag time. For TCs above the temperature limit of sheaths with gaseous contaminants or reducing conditions, use primary (outer) and secondary (inner) protection tubes, possibly purged, to prevent contamination of the TC element and provide a faster response.
The best location for a thermowell for small diameter pipelines (e.g., less than 12 inch) is in a pipe elbow facing upstream to maximize insertion length in center of the pipe. If abrasion from solids is an issue, the thermowell can be installed in the elbow facing downstream but a greater length is needed to reduce noise from swirling. If a pipe is half filled, the installation should ensure the narrowed diameter of the stepped thermowell is in liquid and not vapor.
The location of a thermowell must be sufficiently downstream of a joining of streams or a heat exchanger tube side outlet to enable remixing of the streams. The location must not be too far downstream due to the increase in transportation delay, which is the residence time for plug flow: the pipe volume between the outlet or junction and the sensor location divided by the pipe flow (volume/flow). For a length that is 25 times the pipe diameter (L/D = 25), the increase in loop dead time of a few seconds is not as detrimental as a poor signal to noise ratio from poor uniformity. For desuperheaters, to prevent water droplets from creating noise, the thermowell must provide a residence time greater than 0.3 seconds, which for high gas velocities can be much further than the distance required for liquid heat exchangers.
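The volume/flow relationship above is easy to put numbers on. A minimal sketch with hypothetical pipe dimensions and flow:

```python
import math

def transport_delay_s(pipe_id_m, length_m, flow_m3_per_s):
    """Plug-flow transportation delay: the pipe volume between the mixing
    point (junction or exchanger outlet) and the sensor, divided by the
    volumetric flow (volume/flow)."""
    volume_m3 = math.pi * (pipe_id_m / 2.0) ** 2 * length_m
    return volume_m3 / flow_m3_per_s

# Hypothetical example: 0.1 m ID pipe, sensor at L/D = 25 (so 2.5 m
# downstream), 0.01 m3/s of flow -- roughly 2 s of added dead time.
print(round(transport_delay_s(0.1, 25 * 0.1, 0.01), 3))
```

Doubling the distance doubles the dead time, which is the trade-off against better remixing noted above.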
For greater reliability and better diagnostics, dual isolated sensing elements can be used, but the more effective solution is redundant installations of thermowells and transmitters. The middle signal selection of three completely redundant measurements offers the best reliability and the least effect of drift, noise, repeatability and slow response. The measurement from middle signal selection will be valid for any type of failure of one measurement. There is also considerable knowledge gained to head off problems from comparison of each measurement to the middle value.
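Middle signal selection itself is trivial to implement, which is part of its appeal over voting schemes. A minimal sketch with illustrative values:

```python
def middle_of_three(a, b, c):
    """Middle (median) signal selection: inherently rejects a single
    measurement failure of any type (failed high, failed low, or frozen)
    and attenuates drift, noise, and a slow response in one sensor."""
    return sorted((a, b, c))[1]

# A sensor failed high (e.g., thermocouple burnout driving the reading
# upscale) is simply ignored:
print(middle_of_three(150.2, 150.4, 3276.7))  # -> 150.4
```

Logging the deviation of each measurement from the middle value provides the comparison data mentioned above for heading off problems before they matter.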
Drift in the sensor shows up as a different average controller output at the same production rate assuming there is no fouling or change in raw materials. Poor repeatability in the sensor shows up as excessive variability in temperature controller output. For very tight control where the controller gain is high, sensor variability is most apparent in the controller output assuming the controller is tuned properly and the valve has a smooth consistent response.
For much more on calibration and temperature measurement see the Beamex e-book Calibration Essentials and Rosemount’s The Engineer’s Guide to Industrial Temperature Measurement.
The post, When is Reducing Variability Wrong?, first appeared on the ControlGlobal.com Control Talk blog.
Having the blind wholesale goal of reducing variability can lead to doing the wrong thing, reducing plant safety and performance. Here we look at some common mistakes that users may not realize until they have a better concept of what is really going on. We seek to provide some insightful knowledge here to keep you out of trouble.
Is a smoother data historian plot or a statistical analysis showing less short term variability good or bad? The answer is that it can be bad in the following situations, misleading users and data analytics.
First of all, the most obvious case is surge tank level control. Here we want to maximize the variation in level to minimize the variation in manipulated flow, typically to downstream users. This objective has a positive name: absorption of variability. What this is really indicative of is the principle that control loops do not make variability disappear but transfer variability from a controlled variable to a manipulated variable. Process engineers often have a problem with this concept because they think of setting flows per a Process Flow Diagram (PFD) and are reluctant to let a controller freely move them per some algorithm they do not fully understand. This is seen in predetermined sequential additions of feeds or heating and cooling in a batch operation rather than allowing a concentration or temperature controller to do what is needed via fed-batch control. No matter how smart a process engineer is, not all of the situations, unknowns and disturbances can be accounted for continuously. This is why fed-batch control is called semi-continuous. I have seen where process engineers, believe it or not, sequence air flows and reagent flows to a batch bioreactor rather than going to dissolved oxygen or pH control. We need to teach chemical and biochemical engineers process control fundamentals including the transfer of variability.
The variability of a controlled variable is minimized by maximizing the transfer of variability to the manipulated variable. Unnecessary sharp movements of the manipulated variable can be prevented by a setpoint rate of change limit on analog output blocks for valve positioners or VFDs, or directly on other secondary controllers (e.g., flow or coolant temperature), and by the use of external-reset feedback (e.g., dynamic reset limit) with fast feedback of the actual manipulated variable (e.g., position, speed, flow, or coolant temperature). With external-reset feedback, there is no need to retune the primary process variable controller.
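A setpoint rate of change limit is conceptually simple: each execution, the requested setpoint movement is clamped to a maximum step. This hypothetical sketch is ours, not any particular analog output block:

```python
def rate_limited_sp(previous_sp, requested_sp, max_rate_per_s, dt_s):
    """Clamp setpoint movement to a maximum rate of change, the way an
    analog output block's SP rate limit would smooth sharp moves of the
    manipulated variable. dt_s is the block execution interval."""
    max_step = max_rate_per_s * dt_s
    step = requested_sp - previous_sp
    step = max(-max_step, min(max_step, step))  # clamp in both directions
    return previous_sp + step

# A 10% jump limited to 2%/s with 1 s execution moves only 2% per scan:
print(rate_limited_sp(50.0, 60.0, 2.0, 1.0))  # -> 52.0
```

External-reset feedback is what keeps the upstream PID from winding up while the limited setpoint catches up to its request.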
Data analytics programs need to use manipulated variables in addition to controlled variables to indicate what is happening. For tight control and infrequent setpoint changes to a process controller, what is really happening is seen in the manipulated variable (e.g., analog output).
A frequent problem is data compression in a data historian that conceals what is really going on. Hopefully, this is only affecting the trend displays and not the actual variables being used by a controller.
The next most common problem has been extensively discussed by me, so at this point you may want to move on to more pressing needs. This problem is the excessive use of signal filters, which may be even more insidious because the controller does not see a developing problem as quickly. A signal filter that is less than the largest time constant in the loop (hopefully in the process) creates dead time. If the signal filter becomes the largest time constant in the loop, the previously largest time constant creates dead time. Since tuning based on the largest time constant has no idea where that time constant resides, the controller gain can be increased, which combined with the smoother trends can lead one to believe the large filter was beneficial. The key here is a noticeable increase in the oscillation period, particularly if the reset time was not increased. Signal filters become increasingly detrimental as the process loses self-regulation. Integrating processes such as level, gas pressure and batch temperature are particularly sensitive. Extremely dangerous is the use of a large filter on the temperature measurement for a highly exothermic reaction. If the PID gain window (ratio of maximum to minimum PID gain) is reduced by measurement lag to the point of not being able to withstand nonlinearities (e.g., ratio less than 6), there is a significant safety risk.
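The delay a filter adds to what the controller sees can be demonstrated with a discrete first-order (exponential) filter. This is a generic sketch, not any particular transmitter's damping algorithm:

```python
def first_order_filter(signal, tau_s, dt_s):
    """Discrete first-order (exponential) filter with time constant tau_s
    sampled every dt_s seconds. Relative to the largest loop time
    constant, a filter acts mostly as equivalent dead time, delaying the
    controller's recognition of a disturbance."""
    alpha = dt_s / (tau_s + dt_s)
    out, y = [], signal[0]
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

# A unit step at t = 0 through a 5 s filter sampled at 1 s: the filtered
# value needs several samples just to cross half of the step, delay the
# PID (and the operator's trend) must wait through.
step = [0.0] + [1.0] * 20
filtered = first_order_filter(step, tau_s=5.0, dt_s=1.0)
```

Plotting `filtered` against `step` shows exactly the smoother, later trend that makes an excessive filter look beneficial while it is hiding a developing excursion.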
A slow thermowell response, often due to a sensor that is loose or not touching the bottom of the thermowell, causes the same problem as a signal filter. An electrode that is old or coated can have a time constant that is orders of magnitude larger (e.g., 300 sec) than that of a clean new pH electrode. If the velocity is slightly low (e.g., less than 5 fps), pH electrodes become more likely to foul, and if the velocity is very low (e.g., less than 0.5 fps), the electrode time constant can increase by one order of magnitude (e.g., 30 sec) compared to an electrode seeing the recommended velocity. If the thermowell or electrode is hidden behind a baffle, the response is smoother but not representative of what is actually going on.
For gas pressure control, any measurement filter including that due to transmitter damping generally needs to be less than 0.2 sec, particularly if volume boosters on a valve positioner output(s) or a variable frequency drive is needed for a faster response.
Practitioners experienced in doing Model Predictive Control (MPC) want data compression and signal filters to be completely removed so that the noise can be seen and a better identification of process dynamics, especially dead time, is possible.
Virtual plants can show how fast the actual process variables should be changing, revealing poor analyzer or sensor resolution and response time and excessive filtering. In general, you want measurement lags to total less than 10 percent of the total loop dead time or less than 5 percent of the reset time. However, you cannot get a good idea of the loop dead time unless you remove the filter and look for the time it takes to see a change in the right direction beyond the noise after a controller setpoint or output change.
For more on the deception caused by a measurement time constant, see the Control Talk Blog “Measurement Attenuation and Deception.”
The post Key Insights to Control System Dynamics first appeared on the ISA Interchange blog site.
Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks some questions about dynamics that play such a big role in improving control systems. The questions are basic but have enormous practical implications as seen in answers.
Is an increase/decrease in process gain, time constant, dead time, controller gain, reset, and rate good or bad in terms of effects on loop performance?
This is an excellent question with widespread significant implications. I offer here some key insights that can lead to better career and system performance. The first obstacle is terminology that over the years has resulted in considerable misconceptions and missing recognition of the source and nature of problems and the solutions needed. To overcome what is preventing a more common and better understanding see the Control Talk Blog Understanding Terminology to Advance Yourself and the Automation Profession. Also, for much more on how all of these dynamic terms affect what you do with your PID and the consequences in loop performance see the ISA Mentor post How to Optimize PID Settings and Options.
Increases in process gain can be helpful but challenging.
In distillation control, the tray that shows the largest temperature change for a change in reflux to feed ratio (largest process gain) in both directions has the best temperature to be used as the controlled variable. This location offers much better control because of the increased sensitivity of temperature that is an inferential measurement of column composition. Tests are done in simulations and in plants to find the best locations for temperature sensors.
In pH control, a titration curve (a plot of pH versus the ratio of reagent added to sample volume) with a slope that goes from flat to incredibly steep due to strong acids and strong bases can create an incredibly large and variable process gain. The X axis (abscissa) is converted to a ratio of reagent flow to influent flow taking into account engineering units. The shape stays the same, and if volumetric units are used and concentrations are the same in the lab and plant, the X axis has the same numeric values. The slope of the curve is the process gain. The slope, and thus the process gain, can theoretically change by a factor of 10 for every pH unit deviation from neutrality for a strong acid and strong base. The straight, nearly vertical line at 7 pH seen in a plot of a laboratory titration curve is actually another curve if you zoom in on the neutral region, as seen in Figure 1. If only a few data points are provided between 8 and 10 pH (a common problem), you will not see the curve. The lab needs to be instructed to dramatically reduce the size of the reagent addition as the titrated pH gets closer to 7 pH.
Figure 1: Titration Curve for Strong Acid and Strong Base
The steep slope provides incredible sensitivity to changes in hydrogen ion concentration, but less than ideal mixing will create enormous noise, and any stiction in the control valve will create enormous oscillations. The amplitude from stiction can be larger than 2 pH even for the best control valve. Even if we could have perfect mixing and a perfect control valve, we would not appreciate the orders of magnitude improvement in hydrogen ion control because we are only looking at what we measure, which is pH. Thus, for pH control we seek to have weak acids, weak bases and conjugate salts to moderate the slope of the titration curve. There is also a flow ratio gain that occurs for all composition and temperature control loops, as detailed in the Control Talk Blog Hidden Factor in our Most Important Control Loops.
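The factor-of-10 gain change per pH unit can be checked from first principles for a strong acid and strong base at 25 °C (Kw = 1e-14) by solving the charge balance. The function names and numbers here are illustrative:

```python
import math

def ph_strong_acid_base(acid_molar, base_molar):
    """pH of a strong acid / strong base mixture at 25 C (Kw = 1e-14),
    from the charge balance [H+] - Kw/[H+] = acid - base, solved as a
    quadratic in [H+]."""
    kw = 1e-14
    diff = acid_molar - base_molar
    h = (diff + math.sqrt(diff * diff + 4.0 * kw)) / 2.0
    return -math.log10(h)

def titration_slope(acid_molar, base_molar, d=1e-9):
    """Numerical slope of the titration curve (pH change per unit of
    added base), which is the process gain at that operating point."""
    return (ph_strong_acid_base(acid_molar, base_molar + d)
            - ph_strong_acid_base(acid_molar, base_molar)) / d

# Moving one pH unit closer to neutrality (pH 3 -> pH 4) increases the
# slope, and hence the process gain, roughly tenfold:
print(titration_slope(1e-4, 0.0) / titration_slope(1e-3, 0.0))
```

Sampling this curve finely near 7 pH reproduces the zoomed-in curvature of Figure 1 that sparse lab data points miss.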
Often the term “process gain” includes the effect of more than the process. The better term is “open loop gain,” which is the product of the manipulated variable gain (e.g., valve gain), process gain, and measurement gain (e.g., 100%/span). The valve gain (the slope of the installed flow characteristic, that is, the flow change in engineering units per signal change in percent) must not be too small (e.g., large disk or ball valve rotations where the installed characteristic is flat) or too large (e.g., quick opening characteristic) because the stiction or backlash expressed as a percent of signal translates to a larger amount of errant flow. Oversized valves cause an even greater problem because of operation near the closed position where stiction is greatest from seal and seat friction. Small measurement spans causing a high measurement gain may be beneficial when the accuracy of measurement is a percent of span. The use of thermocouple and RTD input cards, rather than transmitters with spans narrowed to the range of interest, introduces too much error. In conclusion, automation system gains must not be too small or too large. Too small a valve gain or measurement gain is problematic because of less sensitivity and greater error that reduces the ability to accurately see and completely correct a process change. Too high a valve gain is also bad from the standpoint of an increase in the size of the flow change associated with backlash and stiction. An increase in this flow change accordingly reduces the precision of a correction for a process change and increases the amplitude of oscillations (e.g., limit cycle).
An increase in the largest (primary) time constant in a self-regulating process (a process that reaches steady state in manual for no continuing upsets) is beneficial because it enables a large PID gain. The process time constant also slows down process input disturbances, giving the PID more time to catch up. While this proportionally decreases peak and integrated errors, a large time constant is perceived by some as bad. The tuning is more challenging, requiring greater patience and time commitment for open loop tests that seek to identify the primary time constant. The time for identification of the dynamics needed to tune the loop can be reduced by 80 percent or more for some well mixed vessel temperature loops by identifying the dead time and initial ramp rate (treating it like an integrating process). It has been verified by extensive test results that a loop with a process time constant larger than 4 times the dead time should be classified as near-integrating. Integrating process tuning rules are consequently used to enable more immediate feedback correction to potentially stop a process excursion within 4 dead times. The tuning parameter changes from a closed loop time constant for self-regulating process tuning rules to an arrest time for integrating process tuning rules in order to take advantage of the ability to increase the proportional and integral action to reject load disturbances.
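The near-integrating rule of thumb reduces to a one-line test. This is an illustrative helper of ours, not from any tuning package:

```python
def classify_process(process_tau_s, dead_time_s):
    """Classify per the rule of thumb in the text: a process time constant
    larger than 4x the total loop dead time is treated as near-integrating,
    so integrating-process tuning rules (arrest time instead of closed loop
    time constant) apply, enabling faster load disturbance rejection."""
    if process_tau_s > 4.0 * dead_time_s:
        return "near-integrating"
    return "self-regulating"

# A well mixed vessel temperature loop with a 600 s time constant and a
# 30 s dead time gets integrating-process tuning rules:
print(classify_process(600.0, 30.0))  # -> near-integrating
```

The practical payoff is the shortened open loop test: only the dead time and initial ramp rate need to be identified, not the full approach to steady state.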
While the largest time constant is beneficial if it is in the process, the second largest process time constant effectively creates dead time and is detrimental. It can be largely cancelled by a rate time setting. Going from a single loop to a cascade loop, where a secondary loop encloses a process time constant smaller than the largest time constant, converts a term with a bad effect (the secondary time constant increasing dead time in the original single loop) into a term with a good effect (the primary time constant slowing down disturbances in the secondary loop). The reduction in the dead time also decreases the ultimate period of the primary loop.
For true integrating and runaway processes, any time constant is detrimental. It becomes more important to cancel the time constant by a rate time equal to or larger than the time constant.
Any time constant in the automation system is detrimental. A measurement and control valve time constant slows down the recognition and correction, respectively, of a disturbance. An automation system time constant also effectively creates dead time. Signal filters and transmitter damping settings add time constants. See Figure 1 to help recognize the many time constants in an automation system.
A measurement time constant larger than the process time constant can be deceptive in that, for self-regulating processes, it enables a larger PID gain, and the amplitude of oscillations may look smaller due to the filtering action. However, the key to realize is that the actual process error or amplitude in engineering units is larger and the period of the oscillation is longer. All measurement and valve time constants should be less than 10 percent of the total loop dead time for the effect on loop performance to be negligible. This objective for a valve time constant is difficult to achieve in liquid flow, pressure control, and compressor surge control because the process dead times in these applications are so small. A valve time constant becomes large for large signal changes (e.g., > 40%) due to stroking time, particularly for large valves. A valve time constant becomes large for small signal changes (e.g., < 0.4%) due to backlash, stiction, and poor positioner and actuator sensitivity. For more on how to identify and fix valve response problems, see the article How to specify valves and positioners that don’t compromise control.
Dead time anywhere in the loop is detrimental by creating a delay in the recognition or correction of a change in the process variable. For a setpoint change, dead time in the manipulated variable (e.g., manipulated flow) or process causes a delay in the start of the change in the process, and dead time in the measurement or controller creates additional delays in the appearance as a process variable response to the setpoint change. The minimum possible peak error and integrated error for an unmeasured load disturbance is proportional to the total loop dead time and dead time squared, respectively. The total loop dead time is the sum of all dead times in the loop. The dead time from digital devices and algorithms is ½ the update rate (execution rate or scan time) plus the latency (time required to communicate change in digital output after a change in digital input). Most digital devices have negligible latency. Simulation tests that always have the disturbance arrive immediately before, instead of after, the PID execution do not show the full adverse effect of PID execution rate, which leads to misconceptions as to the adverse effect of execution rate. On the average, the disturbance arrives in the middle of the time interval of PID executions, which is consistent with the dead time being ½ the execution rate for negligible latency. The latency for complex modules with complex calculations may approach the update rate. The latency for most at-line analyzers is the analyzer cycle time since the analysis is not completed until the end of the cycle. The result is a dead time that is 1.5 times the cycle time. Most of a time constant much smaller than the process time constant or in an integrating process can be taken as equivalent dead time. Since dead time is nearly always underestimated, I simply sum up all of the small time constants as being equivalent dead time. The block diagram in Figure 2 shows many but not all the sources of dead time.
Figure 2: Automation System and Process Dynamics in a Control Loop
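The dead time contributions listed above can be tallied with a simple helper. This is an illustrative sketch with argument names of ours:

```python
def total_loop_dead_time_s(pure_delays_s=(), small_taus_s=(),
                           module_exec_s=0.0, latency_s=0.0,
                           analyzer_cycle_s=0.0):
    """Sum the loop dead time contributions described in the text:
    pure transport and valve delays; small time constants conservatively
    taken as equivalent dead time; 1/2 the digital execution (update)
    rate plus latency; and 1.5x an at-line analyzer cycle time, since
    the analysis is not complete until the end of the cycle."""
    return (sum(pure_delays_s) + sum(small_taus_s)
            + 0.5 * module_exec_s + latency_s
            + 1.5 * analyzer_cycle_s)

# Hypothetical loop: 2 s transport delay, 0.5 s thermowell lag and 0.3 s
# transmitter damping taken as dead time, 1 s PID execution:
print(round(total_loop_dead_time_s(pure_delays_s=[2.0],
                                   small_taus_s=[0.5, 0.3],
                                   module_exec_s=1.0), 2))  # -> 3.3
```

Since dead time is nearly always underestimated, lumping the small time constants in as equivalent dead time, as done here, errs on the safe side for tuning.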
The dead time from backlash and stiction is insidious in that it does not show up for step changes in signal. The dead time is the dead band or resolution limit divided by the signal rate of change.
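That relationship is a one-line calculation, and it makes clear why step tests hide the problem: a step has an effectively infinite rate of change, so the computed dead time collapses to zero. A minimal sketch with illustrative numbers:

```python
def backlash_dead_time_s(dead_band_pct, signal_rate_pct_per_s):
    """Dead time from backlash or stiction: the dead band (or resolution
    limit) in percent of signal divided by the signal rate of change in
    percent per second. For a step change the rate is effectively
    infinite, so this dead time does not show up in step tests."""
    return dead_band_pct / signal_rate_pct_per_s

# A 0.5% dead band with the PID output ramping at 0.25%/s adds 2 s of
# dead time before the valve actually moves:
print(backlash_dead_time_s(0.5, 0.25))  # -> 2.0
```

The slower the controller output moves, the larger this hidden dead time becomes, which is one reason sluggish tuning and valve backlash compound each other.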
Simulations typically do not have enough dead time because volumes are perfectly mixed and the dead time is missing from transportation delays particularly from dip tubes and piping to sensors or sample lines to analyzers, valve response time, backlash, stiction, sensor lags, thermowell lags, transmitter damping, wireless update times, and analyzer cycle times.
For pH applications with extremely large and nonlinear process gains due to strong acids and strong bases, there is a particularly great need to minimize the total loop dead time. This reduces the pH excursion on the titration curve, reducing the extent of the operating point nonlinearity seen. Poor mixing, piping design, valve response, and coated, dehydrated or old electrodes can introduce incredibly large dead times, killing a pH loop. My early specialty being pH control sensitized me to making sure the total system design including equipment, agitation, and piping would enable a pH loop to do its job by minimizing dead time. For much more on the implications for total system design from a very experience-oriented view, see the ISA book Advanced pH Measurement and Control.
The proportional mode provides a contribution to the PID output that is the error multiplied by the PID gain. Except for dead time dominant loops, humans tend not to use enough proportional action due to the perceived bad aspects in reasons listed below to decrease PID gain. For more on the missed opportunities see the Control Talk Blog Surprising Gains from PID Gain.
Reasons to increase PID gain:
Reasons to decrease PID gain:
The integral mode provides a contribution to the PID output that is the integral of the error multiplied by the PID gain and divided by the reset time. External-reset feedback (dynamic reset limit) suspends this action (further changes in output from the integral mode) when the manipulated variable stops changing. Except for dead time dominant loops, humans tend to use too much integral action due to the perceived good aspects in the reasons listed below to decrease reset time.
Reasons to increase PID reset time (decrease reset action):
Reasons to decrease PID reset time (increase reset action):
The derivative mode provides a contribution to the PID output that is the derivative of the error (PID on error structure) or derivative of the process variable (PI on error and D on PV structure) multiplied by the PID gain and the rate time. It provides an anticipatory action basically projecting a value of the PV one rate time into the future based on the rate of change multiplied by the rate time. Some plants have mistakenly decided not to use derivative action anywhere due to the perceived bad aspects in reasons listed below to decrease rate time. Good tuning software could have prevented this bad practice of only allowing PI control (rate time always zero).
Reasons to increase PID rate time:
Reasons to decrease PID rate time:
The post How to Measure pH in Ultra-Pure Water Applications first appeared on the ISA Interchange blog site.
Danny Parrott is an instrumentation and controls specialist at Spallation Neutron Source. Danny is a detail-oriented instrumentation and controls professional experienced in the areas of electrical, electronics and controls specification, installation, maintenance, and project planning. Danny’s question is important in dealing with the many challenges for reliable and accurate pH measurement in ultra-pure water and more generally in streams with exceptionally low conductivity.
What are some opinions, thoughts, or practical experience relating to pH measurements in ultra-pure water applications?
Ultra-pure water applications pose special problems because of the exceptionally low conductivity of the fluid from the absence of ions. The consequences are extreme sensitivity to fluid velocity and spurious ions, unstable reference junction potentials, sample contamination, and loss of electrical continuity between the reference and measurement electrodes. Ultra-pure water and process fluids with an exceptionally small, near zero fluid conductivity threaten the continuity of the electrical circuit between the reference and measurement electrode terminals at the transmitter through an extraordinarily large electrical resistance (R8 in Figure 1). The functional electrical circuit diagram in Figure 1, showing the resistances and potentials for a combination electrode, is a great way of recognizing nearly all of the potential sources of error in a pH measurement in general.
Figure 1: pH Electrode Functional Electrical Circuit Diagram
The solution for online measurements is to use a flowing junction reference electrode to provide a small fixed liquid junction potential in a low flow assembly for a combination electrode. The combination electrode assembly ensures a short fixed distance path of reference electrolyte to the measurement electrode and a small fixed fluid velocity. The assembly also provides mounting of an electrolyte reservoir that sustains a small fixed reference junction flow as shown in Figure 2. The flow of reference electrode electrolyte reduces the fluid velocity and electrical resistance (R8) in the fluid path and provides a much more constant liquid junction potential (E5) that does not jump or shift due to the appearance of spurious ions. The resistances and potentials in the diagram provide a wealth of information. The flow assembly also has a special cup holder for calibration with buffer solutions. A solution ground connection reduces the effect of ground potentials. Temperature compensation must be accurate and fast.
Figure 2: Low Flow Assembly With Flowing Reference Junction for Low Conductivity pH Applications
The pH measurement calibration needs to be checked and adjusted before installation and periodically thereafter by inserting the electrode(s) in buffer solutions. Making a pH measurement of a sample is very problematic because of contamination from glass beaker ions, absorption of carbon dioxide creating carbonic acid, and accumulation of electrolyte ions from the flowing junction. The sample volume needs to be large and the measurement made quickly to reduce the effect of accumulating ions. A closed plastic sample container is employed to minimize contamination. The same type of electrode(s) used in the online measurement should be utilized for the sample measurement so that reference junction potentials are consistent. Since these sample pH measurement requirements are rarely satisfied, buffers instead of process samples should be used for calibration checks.
In exceptionally low conductivity process fluids, there is often not enough water content to keep the glass measurement electrode hydrated. Also, the activity of the hydrogen ion is severely decreased by the lack of water, and the extremely different dissociation constant of a non-aqueous solvent can cause a pH range that is outside of the normal 0 to 14 pH range. For these applications, a flowing reference electrode is also needed, but an automatically retractable insertion assembly is useful to periodically retract, flush, soak and calibrate the electrodes, reducing process exposure time and hydrating/rejuvenating the measurement electrode’s glass surface. For more on the challenges of semi-aqueous pH measurements, see the Control Talk article The wild side of pH measurement. For a much more complete view of what is needed for pH applications, see the ISA book Advanced pH Measurement and Control.
For pH measurements used for process control, I recommend three pH assemblies and middle signal selection. Lower lifecycle costs from less frequent and more effective maintenance, along with better process performance, more than pay for the cost of the three measurements. Middle signal selection will inherently ignore a single measurement failure of any type and dramatically reduce the effect of spikes, noise, and the consequences of slow or insensitive glass electrodes. The middle selection also eliminates unnecessary calibration checks and provides much more intelligent knowledge of electrode performance, enabling optimum timing of calibration and replacement.
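The middle signal selection described above can be sketched in a few lines; the numeric pH readings are hypothetical examples, not from the source:

```python
# Minimal sketch of middle-signal (median) selection for three pH electrodes.
# The median of three readings is never the outlier, so a single failed,
# spiking, or slow electrode is inherently ignored.

def middle_signal(ph_a: float, ph_b: float, ph_c: float) -> float:
    """Return the middle (median) of three pH readings."""
    return sorted([ph_a, ph_b, ph_c])[1]

# Normal operation: all three electrodes agree closely.
print(middle_signal(7.02, 7.05, 6.98))   # -> 7.02

# One electrode spikes or fails high: the median still tracks the healthy pair.
print(middle_signal(7.02, 12.40, 6.98))  # -> 7.02
```

No voting logic or failure detection is needed; the sort does all the work, which is why middle selection is so robust in practice.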
To download a free PDF excerpt from Advanced pH Measurement and Control, click here.
The post How to Optimize PID Controller Settings and Options first appeared on the ISA Interchange blog site.
The following discussion is based on the ISA Mentor Program webinar recordings of the three-part series on PID options and solutions. The webinar is discussed in the Control Talk blog post PID Options and Solutions – Part 1 and the post PID Options and Solutions – Parts 2 and 3. Since the following questions from one of our most knowledgeable recent protégés Adrian Taylor refer to slide numbers, please open or download the presentation slide deck ISA Mentor Program WebEx PID Options and Solutions.
Figure 1: ISA Standard Form (see slide #42 from the presentation PDF)
Figure 1 depicts the only known time domain block diagram for the ISA Standard Form with eight different PID structures set by setpoint weight factors, and the positive feedback implementation of the integral mode that enables true external-reset feedback (ERF). Many capabilities, such as deadtime compensation, directional move suppression, and the elimination of oscillations from deadband, poor resolution, poor sensitivity, wireless update times, and analyzer sample times, are readily achieved by turning on ERF.
Adrian Taylor’s Question 1:
Slide 9 details a Y factor, which varies between 0.28 and 0.88, for converting the faster lags to apparent deadtime. You mentioned this Y factor can be looked up on charts given by Ziegler and Nichols (Z&N); are you able to provide me with a copy of these charts or point me to where I can get one?
Greg McMillan’s Answer 1:
The chart and equations to compute Y are on page 137 of my Tuning and Control Loop Performance Fourth Edition (Momentum Press 2015). The original source is Ziegler, J. G., and Nichols, N. B., “Process Lags in Automatic Control Circuits,” ASME Transactions, 1943.
Adrian Taylor’s Question 2:
On slide 17 you give recommendations for setting of Lambda, I’m presuming these recommendations are for integrating and near integrating systems only and wondered what your recommendations are for setting of Lambda when using the self-regulating rules?
Greg McMillan’s Answer 2:
I was focused on near-integrating and integrating processes since these are the more important ones in the type of plants I worked in, but the recommendations for Lambda apply to self-regulating processes as well. A Lambda that is a fraction of the deadtime, for extremely aggressive control, is of theoretical value only, to show how good PID can be if you are publishing a paper. I would never advocate a Lambda less than one deadtime unless you are absolutely confident that you know the dynamics exactly, that they never change, and that you can tolerate some oscillation. A Lambda that is a multiple of the deadtime, for robust control, is of practical value for dealing with changing or unknown dynamics and providing a smooth response.
Adrian Taylor’s Question 3:
On slide 17 where the recommended Lambda settings are given, it recommends a Lambda of 3 and 6 respectively for adverse changes in loop dynamics of less than 5 and 10. What do 5 and 10 refer to?
Greg McMillan’s Answer 3:
I should have said “factor of 5” and “factor of 10” instead of just “5” and “10”, respectively, in the statement on robustness. These factors are actually gain margins. I also should not have rounded up to a factor of 10 and instead said a factor of 9 for a Lambda of 6x deadtime. While this specifically indicates what increase in the self-regulating or integrating process gain, as a factor of the original, can occur without the loop going unstable, it can be extended to give an approximate idea of how much other adverse changes in loop dynamics can be tolerated if the process gain is constant. For example, the factor applies roughly to the increase in total loop deadtime for deadtime dominant self-regulating processes, and to the decrease in process time constant for lag dominant processes, that would cause instability. This extension assumes Lambda tuning where the Lambda in every case is a factor of deadtime, with the reset time being proportional to the process time constant for deadtime dominant processes and proportional to the deadtime for lag dominant processes. The reasoning can be seen in the equations for PID gain and reset time on slides 30 and 32 without my minimum limits on reset time.
Adrian Taylor’s Question 4:
On slide 17 there is a statement “Adverse changes are multiplicative…”. I didn’t quite understand the context of this statement if you are able to expand a little more? (Probably goes hand in hand with question 3 above).
Greg McMillan’s Answer 4:
An increase in process gain by a factor of 2, combined with other adverse changes such as a decrease in the process time constant by a factor of 4.5 for lag dominant self-regulating processes, or an increase in loop deadtime by a factor of 4.5 for deadtime dominant processes, results in a combined adverse factor of 9 (2 times 4.5).
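The multiplicative budget can be checked with simple arithmetic; the gain margin of 9 for a Lambda of 6x deadtime is taken from Answer 3 above:

```python
# Adverse changes multiply: their combined factor must stay within the gain margin.
gain_margin = 9.0            # from Lambda = 6x deadtime tuning (Answer 3)

process_gain_factor = 2.0    # process gain doubles
# Budget left for other adverse changes (e.g., deadtime increase for deadtime
# dominant loops or time constant decrease for lag dominant loops):
other_factor = gain_margin / process_gain_factor
print(other_factor)  # -> 4.5
```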
Adrian Taylor’s Question 5:
On slide 31 when calculating arrest time we use a value Δ% which is described as the maximum allowable level change (%). Just to be sure I understand the value to be used here… If I had a setpoint high limit of 80% and the tank overflow is at 100% then the value of Δ% would be equal to 100-80=20%?
Greg McMillan’s Answer 5:
Yes, if the high level alarm is above the high setpoint limit. Δ% is the maximum allowable deviation, which is often the difference between an operating point and the point where there is an alarm.
Adrian Taylor’s Question 6:
On slide 31 when calculating arrest time we use a value Δ% which is described as the maximum allowable PID output change. Is this simply the difference between the output high and low limits? So if the output high limit was 100% and the output low limit was 0%, then the value of Δ% would be equal to 100-0=100%?
Greg McMillan’s Answer 6:
Yes. This term in the equation is counterintuitive but results from the derivation of the equation in Tuning and Control Loop Performance Fourth Edition using the minimum integrating process gain.
Adrian Taylor’s Question 7:
I am going to purchase a copy of your ‘Tuning and Control Loop Performance’ book shown at the end of the presentation. I am curious if you think it is also worth purchasing the tuning rules pocket book, or if all the content of the pocket book is also contained in the larger book I am already purchasing?
Greg McMillan’s Answer 7:
The ‘Tuning and Control Loop Performance’ book is much more complete and explanatory but can be overwhelming. The pocket guide provides a more concise and focused way of knowing what to do.
Adrian Taylor’s Question 8:
At the end of the webinar a question was posed about tuning loops where it is not possible to put the loops in manual. I am seeking more specifics based on my notes on the procedure:
Greg McMillan’s Answer 8:
If you cannot put the loop in manual or do not have software to identify the loop dynamics, a closed loop procedure using your notes to give approximate tuning that keeps the loop in automatic is as follows:
Inverse response, negative resistance, positive feedback and discontinuities can cause processes to jump, accelerate and oscillate, confusing the control system and the operator. Not properly addressing these situations can result in equipment damage and plant shutdowns, besides the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive.
We can appreciate how positive feedback causes problems with sound systems. We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control.
The compressor characteristic curves from the compressor manufacturer are plots of compressor pressure rise versus suction flow. In the normal operating region, each curve (one per speed or suction vane position) shows a pressure rise that decreases as suction flow increases, with a slope magnitude that grows as the flow increases. The pressure rise consequently falls more as the flow increases, opposing further increases in compressor flow and creating a positive resistance to flow. Not commonly seen is that the slope of the characteristic curve to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve. As the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow. When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign again, creating negative resistance. The compressor flow jumps in about 0.03 seconds from the start of negative resistance to some point of positive resistance. The result is a jump in 0.03 seconds to negative flow across the negative resistance, a slower transition along positive resistance to zero flow, then a jump in 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance back to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds.
The following plot of a pilot plant compressor characteristic for a single speed shows the path 2 along the curve 1. When the operating point reaches point B, which is where the compressor characteristic curve slope is zero, the operating point jumps to point C due to the negative resistance. This jump corresponds to the precipitous drop in flow that signals the start of the surge cycle and the subsequent reversal of flow (negative acfm). After this jump to point C, the operating point follows the compressor curve from point C to point D as the plenum volume is emptied due to reverse flow. When the operating point reaches point D, which is where the compressor characteristic slope is zero again, the operating point jumps to point A due to negative resistance. If the surge valve is not opened, the operating point walks again from A to B starting the whole oscillation all over again.
Once a compressor gets into surge, the very rapid jumps and oscillations confuse the PID controller. Even a very fast PID and control valve are not fast enough. Consequently, the oscillation persists unless an open loop backup holds open the surge valves until the operating point is sustained well to the right of the surge curve for about 10 seconds, at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID plus an open loop backup that detects a zero slope indicating an approach to surge or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point.
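The open loop backup logic can be sketched as a small state machine. The class name, the trip setting on the rate of flow dip, and the scan handling below are my assumptions for illustration, not the source's implementation:

```python
# Hypothetical sketch of an open loop backup for compressor surge: a surge is
# declared on a precipitous dip in flow, the surge valve is latched open, and
# control returns to the PID only after the operating point has stayed well to
# the right of the surge curve for about 10 seconds.

class OpenLoopBackup:
    def __init__(self, dip_trip_pct_per_s=25.0, hold_s=10.0):
        self.dip_trip = dip_trip_pct_per_s   # assumed trip on rate of flow dip
        self.hold_s = hold_s                 # ~10 s right of surge before release
        self.latched = False
        self.time_right_of_surge = 0.0

    def update(self, flow_rate_of_change, right_of_surge, dt):
        """Return True if the surge valve must be forced open this scan."""
        if -flow_rate_of_change > self.dip_trip:      # rapid dip = actual surge
            self.latched = True
            self.time_right_of_surge = 0.0
        if self.latched:
            self.time_right_of_surge = (
                self.time_right_of_surge + dt if right_of_surge else 0.0)
            if self.time_right_of_surge >= self.hold_s:
                self.latched = False                  # bumpless return to PID
        return self.latched
```

A real backup would also trip on a zero slope of the characteristic curve (approach to surge); only the flow-dip detection is sketched here.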
The same shape but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users.
A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured.
Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat of reaction. However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential, where the time constants and deadtime associated with heat transfer in the cooling surface, the thermowell, and the loop response are much less than the positive feedback time constant. Derivative action must be maximized and integral action must be minimized. In some cases a proportional plus derivative controller is used.
Positive feedback can also occur when parallel heat exchangers with a common process fluid input each have an outlet temperature controller with a setpoint close to the boiling point, or to a temperature resulting in vaporization of a component in the process fluid. Each temperature controller manipulates a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating; bubbles form and expand back into the exchanger, increasing the back pressure and hence further decreasing the process flow. The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration, and slugs of water can damage any discontinuity in their path. When all of the water is pushed out of the hot exchanger, its temperature drops, drawing feed from the cold heat exchanger, causing it to overheat and repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in one flow do not affect another, and a lower temperature setpoint.
Acceleration in the response of a process variable is also seen as the pH approaches neutrality in a strong acid and strong base system, due to the increase in process gain by a factor of 10 for each pH unit closer to neutrality (e.g., 7 pH). The result is a limit cycle from the steep portion of the titration curve to the flat portions where the slope (process gain) is much smaller. The solution is to use signal characterization, excellent mixing, and very precise reagent valves.
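The factor-of-10 gain change follows from the charge balance for a strong acid and strong base. The sketch below assumes water at 25 degrees C with a dissociation constant Kw of 1e-14; the pH values are illustrative:

```python
# For a strong acid/strong base system, the excess acid concentration x (mol/L)
# relates to pH through the charge balance:
#   x = [H+] - [OH-] = 10**(-pH) - Kw / 10**(-pH)
# Each pH unit closer to neutrality (7 pH) shrinks the reagent change needed
# for the next pH unit by about 10x, i.e., the process gain grows ~10x per unit.

KW = 1e-14  # water dissociation constant at 25 degC (assumption)

def excess_acid(ph: float) -> float:
    h = 10.0 ** (-ph)
    return h - KW / h

for ph in (4.0, 5.0, 6.0):
    delta = excess_acid(ph) - excess_acid(ph + 1.0)  # reagent for one pH unit
    print(f"{ph} -> {ph + 1.0}: {delta:.2e} mol/L")
```

Running this shows each step toward 7 pH requiring roughly ten times less reagent, which is exactly why reagent valve precision and mixing become critical near neutrality.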
Inverse response from changes in phase, commonly seen in boiler level control, occurs when an increase in cold feedwater flow causes a collapse of bubbles in the downcomers, making the drum level shrink as liquid in the drum falls back down into the downcomers. In the opposite direction, a decrease in cold feedwater flow causes a formation of bubbles in the downcomers, making the drum level swell as liquid rises up from the downcomers into the drum. Preheating the feedwater can greatly reduce shrink and swell. The control solution is normally a feedforward of steam flow to the feedwater flow, with less feedback action, in a setup called three element drum level control. However, the shrink and swell can be very large for drums sized too small, whether from a misguided attempt to save capital costs or from pushing plant capacity beyond the original design. In these cases, the direction of the very start of the feedforward signal change is reversed and then decayed out to be in the right direction for the material balance. This counterintuitive action helps prevent a level shutdown but must be very carefully monitored. A warmer feedwater can make this the wrong action.
Limit cycles also develop from resolution limits, typically as a result of stiction in control valves, but they can also originate from the input cards that change speed for variable frequency drives (VFDs). Believe it or not, the standard VFD input card, perhaps even to this day, has a resolution of only about 0.4%. Resolution limits create limit cycles that cannot be stopped unless all integrating action is removed from the process and controllers. The limit cycle period can be reduced by increasing the controller gain, but the amplitude is set by the resolution limit and the process gain.
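As a rough numeric sketch: the limit cycle amplitude in the controlled variable is approximately the resolution limit times the open loop process gain. The 0.4% resolution is from the text; the process gain value is an assumption for illustration:

```python
vfd_resolution_pct = 0.4   # standard VFD input card resolution (from the text)
process_gain = 2.0         # assumed open loop gain, %PV per % speed

# Approximate limit cycle amplitude in the controlled variable: no amount of
# retuning removes it, since the output cannot move in steps smaller than the
# resolution limit.
amplitude_pct = vfd_resolution_pct * process_gain
print(amplitude_pct)  # -> 0.8
```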
If there are two or more sources of integrating action in a control loop, limit cycles also develop from deadband, typically as a result of backlash in control valves; deadband can also originate from the deadband settings in the setup of variable frequency drives (VFDs) or be configured in the split range point of controllers. The limit cycle period and amplitude can be reduced by increasing the controller gain.
Many processes have integrating action. Positioners may mistakenly have integral action. Most controllers have integrating action, but it can be suspended by an integral deadband or by the use of external-reset feedback (dynamic reset limit) where there is a fast readback of actual valve position or VFD speed. For a near-integrating, true integrating or runaway process response there is a PID gain window where oscillations result from too low as well as too high a PID gain. The low PID gain limit increases as integral action increases (reset time decreases).
To summarize: to eliminate oscillations, the best solution is a design that eliminates negative resistance, inverse response, positive feedback and discontinuities. When this cannot provide the total solution, operating points may need to be restricted, the controller gain increased, and integral action decreased or suspended. Not covered here are the oscillations due to resonance and interaction. In these situations, better pairing of controlled and manipulated variables is the first choice. If this is not possible, see if the faster loop can be made faster and the slower loop made slower so that the closed loop response times of the loops differ by a factor of five or more. The suspension of integral action, best done by external-reset feedback, can also help. The same rule and solution work for cascade control. If pairing and tuning do not solve the interaction problem, then decoupling via feedforward of one controller output to the other is needed, or moving on to model predictive control.
If this gives you a headache from concerns raised about your applications, suspend thinking about the problems and use creativity and better tuning when you can actually do something.
The post Webinar Recording: Feedforward and Ratio Control first appeared on the ISA Interchange blog site.
This educational ISA webinar on feedforward and ratio control was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Feedforward control and ratio control that preemptively correct for load disturbances or changes in leader flow are greatly underutilized due to the lack of understanding of how to configure the control and determine the parameters needed. This webinar provides key insights on how feedforward control often simplifies to ratio control. It also explains how to identify the parameters so that the feedforward or ratio control correction does not arrive too late or too soon and the correction has the right sign and value to cancel out the load disturbance or achieve the right ratio to the leader flow.
The post, Maximizing Synergy between Engineering and Operations, first appeared on ControlGlobal.com's Control Talk blog.
The operator is by far the most important person in the control room, having the most intimate knowledge and “hands on” experience with the process. Engineers who are most successful with process improvements realize they need to sit with operators and observe what they are doing to deal with a variety of situations. Process engineers tend to recognize this need more than automation engineers. Improvements in operator interfaces, alarms, measurements, valves, and control systems are best accomplished by a synergy of knowledge gained in meetings between research, design, support, maintenance and operations where each group talks about what it thinks are the problems and opportunities. ISA Standards and Virtual Plants can provide a mutual understanding in these discussions.
The most successful process control improvement (PCI) initiative at Monsanto and Solutia used such discussions with some preparatory work on what the process is actually doing and is capable of doing. An opportunity sizing detailed the gaps between current and potential performance, with the potential estimated by identifying the best performance found in cost sheets and from a design of experiments (DOE), most often done in a virtual plant due to the increasingly greater limitations on such experimentation in an actual plant. After completion of the opportunity sizing, a one or two day opportunity assessment was held, led by a process engineer, with input sought and given by operations, accounting and management, marketing, maintenance, field and lab analyzer specialists, instrument and electrical engineers, and process control engineers. Marketing provided the perspective and details on how the demand and value of different products were expected to change. This knowledge was crucial for effectively estimating the benefits from increases in process flexibility and capacity. Opportunities for procedure automation and plantwide ratio control, making the transition from one product to another faster and more efficient, were consequently identified. Agreement was sought and often reached on the percentage of each gap that could be eliminated by each potential PCI proposed and discussed during the meeting. A rough estimate of the cost and time required for each PCI implementation was also listed. The ones with the least cost and time requirements were noted as “Quick Hits”. To take advantage of the knowledge, enthusiasm and momentum, the “Quick Hits” were usually started immediately after the meeting or the following week.
Synergy can be maximized by exploring a wide spectrum of scenarios in a virtual plant that can run faster than real time, discussed in training sessions. Every engineer, scientist, technician, and operator should be involved. If necessary this can be done at luncheons. Any resulting webinar should be recorded, including the discussions. See the Control article “Virtual Plant Virtuosity” and the ISA Mentor Program Webinar Recordings for this and much more in terms of gaining and using operational, process and automation system knowledge.
Webinar recordings should focus on the level of understanding needed and achievable in the plant, not on what a supplier would like to promote. The ability of operators to learn the essential aspects and principles of process, equipment, and automation system performance should not be underestimated. We want to ensure the operator knows exactly and quickly what is happening, being able to get at the root cause of a problem and preemptively prevent poor process performance and SIS activation. Operators need to be aware of the severe adverse effect of deadtime. Fortunately, operators want to learn!
Finding the real causes of potential abnormal situations is critical for improving HMI, alarm systems, engineering, maintenance and operations. Ideally there should be a single alarm of elevated importance identifying the root cause (e.g., a state based alarm), and the operator should be able in the HMI to readily investigate conditions associated with the root cause. Maintenance should be able to know which mechanical or automation component to repair or replace. Engineering should design procedure automation (state based control) to automatically deal with the abnormal situation.
Often the very first abnormal measurement is an indication of the root cause. However, the abnormal condition should be upstream, and the measurement of the abnormal condition should be faster than the measurements of other problems that occur as a consequence or coincidence. This is a particular concern for temperature because thermowell lags can be 10 to 100 seconds depending upon fit and velocity. For pH, the electrode lags can range from 5 to 200 seconds depending upon glass age, dehydration, fouling and velocity. There is also the deadtime associated with any transportation delay to the sensor. Finally, an output correlated with an input is not necessarily a cause and effect relationship. I find process analysis, some form of fault tree diagram, and the investigation of relevant scenarios in a virtual plant most useful.
Sharing useful knowledge is the biggest obstacle to success. The biggest obstacle can become the biggest achievement.
The post Webinar Recording: Strange but True Process Control Stories first appeared on the ISA Interchange blog site.
Greg McMillan presents lessons learned the hard way during his 40-year career, through concise “War Stories” of mistakes made in the field. Many of these mistakes are still being made today with some posing a safety risk, as well as potentially reducing process efficiency or capacity.
The post Webinar Recording: Temperature Measurement and Control first appeared on the ISA Interchange blog site.
Temperature is the most important common measurement that is critical for process efficiency and capacity because it not only affects energy use but also production rate and quality. Temperature plays a critical role in the formation, separation, and purification of product. Here we see how to get the most accurate and responsive measurements and the best control for key unit operations.
The usual concern is whether an automation system is too slow. There are some applications where an automation system is disruptive by being too fast. Here we look at what determines whether a system should be faster or slower, what the limiting factors are, and thus the solution to meeting a speed of response objective. In the process, we will find there are a lot of misconceptions. The good news is that most of the corrections needed are within the realm of the automation engineer’s responsibility.
The more general case, with possible safety and process performance consequences, is when the final control element (e.g., control valve or variable frequency drive), transportation delay, sensor lag(s), transmitter damping, signal filtering, wireless update rate or PID execution rate is too slow. The question is what the criteria and priorities are in terms of increasing the speed of response.
The key to understanding the impact of slowness is to realize that the minimum peak error and minimum integrated absolute error are proportional to the deadtime and deadtime squared, respectively. The exception is deadtime dominant loops, which basically have a peak error equal to the open loop error (the error if the PID is in manual) and thus an integrated error that is proportional to deadtime. It is important to realize that this deadtime is not just the process deadtime but the total loop deadtime: the summation of all the pure delays and the equivalent deadtime from lags in the control loop, whether in the process, valve, measurement or controller.
These minimum errors are only achieved by the aggressive tuning seen in the literature but not used in practice because of the inevitable changes and unknowns concerning gains, deadtimes, and lags. There is always a tradeoff between minimization of errors and robustness. Less aggressive and more robust tuning, while necessary, results in a greater impact of deadtime in that the gain margin (ratio of ultimate gain to PID gain) and the phase margin (additional degrees of phase lag that can be tolerated before instability) are achieved by setting the tuning to be a greater factor of deadtime. For example, to achieve a gain margin of 6 and a phase margin of 76 degrees, lambda is set to 3 times the deadtime.
The actual errors get larger as the tuning becomes less aggressive. The actual peak error is inversely proportional to the PID gain. The actual integrated error is proportional to the ratio of the integral time (reset time) to the PID gain. Consider the use of lambda integrating process tuning rules for a near-integrating process where lambda is an arrest time. If you triple the deadtime used in setting the PID gain and reset time to maintain a gain margin of about six and a phase margin of 76 degrees, you decrease the PID gain by about a factor of six (two times the ratio of the new to original deadtime) and increase the reset time by about the same factor, increasing the actual integrated error by a factor of thirty six when the new deadtime is 3 times the original deadtime.
Consequently, how fast automation system components need to be depends on how much they increase the total loop deadtime. The components used to make the loop faster are first chosen based on ease, such as decreasing the PID and wireless execution rates, signal filtering and transmitter damping, assuming these are more than ten percent of the total loop deadtime. Next you need to decrease the largest source of deadtime, which may take more time and money, such as a better thermowell or electrode design, location and installation, or a more precise and faster valve. The deadtime from PID and wireless update rates is about ½ the time between updates. The deadtime from transmitter damping or sensor lags increases logarithmically from about 0.28 to 0.88 times the lag as the ratio of the lag to the largest open loop time constant decreases from 1 to 0.01. The deadtime from backlash, stiction and poor sensitivity is the deadband or resolution limit divided by the rate of change of the controller output. Fortunately, deadtime is generally easier and quicker to identify than the open loop time constant and open loop gain. See the Control Talk Blog “Deadtime, the Simple Easy Key to Better Control.”
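The contributions above can be totaled in a sketch like the following. The log-scale interpolation of the 0.28 to 0.88 conversion factor is my assumption for illustration (use the actual Ziegler and Nichols charts for real work), and the example numbers are hypothetical:

```python
import math

def lag_to_deadtime(lag: float, largest_tc: float) -> float:
    """Equivalent deadtime from a lag, using an assumed log interpolation of
    the conversion factor: 0.28 at a lag-to-time-constant ratio of 1, rising
    to 0.88 as the ratio falls to 0.01."""
    ratio = min(max(lag / largest_tc, 0.01), 1.0)
    y = 0.88 - (0.88 - 0.28) * (math.log10(ratio) + 2.0) / 2.0
    return y * lag

def total_loop_deadtime(process_dt, update_times=(), lags=(),
                        largest_tc=100.0, deadband_pct=0.0,
                        output_rate_pct_per_s=1.0):
    dt = process_dt
    dt += sum(t / 2.0 for t in update_times)        # PID/wireless: 1/2 update time
    dt += sum(lag_to_deadtime(lag, largest_tc)      # damping and sensor lags
              for lag in lags)
    if output_rate_pct_per_s > 0:
        dt += deadband_pct / output_rate_pct_per_s  # backlash/stiction
    return dt

# Example: 1 s process deadtime, 2 s PID and 1 s wireless updates, a 5 s
# sensor lag against a 50 s open loop time constant, and 0.5% deadband with
# the output moving at 0.25%/s.
print(round(total_loop_deadtime(1.0, update_times=(2.0, 1.0), lags=(5.0,),
                                largest_tc=50.0, deadband_pct=0.5,
                                output_rate_pct_per_s=0.25), 2))  # -> 7.4
```

A tally like this makes it obvious which component to attack first: here the deadband contributes 2 s, more than the update rates combined.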
For flow and pressure processes, the process deadtime is often less than one second, making the control system components by far the largest source of deadtime. For compressor, liquid pressure and furnace pressure control, the control valve is the largest source of deadtime even when a booster is added. Transmitter damping is generally the next largest source, followed by PID execution rate.
There is a common misconception that the wireless update time should be less than a fraction (e.g., 1/6) of the response time. For the more interesting processes such as temperature and pH, the time constant is much larger than the deadtime. A well-mixed vessel could have a process time constant that is more than 40 times the process deadtime. If you use the criterion of 1/6 the response time, assuming the best-case scenario of a 63 percent response time, the increase in deadtime from the wireless update rate can be as large as 3 times the original process deadtime. Fortunately, wireless update rates are never that slow. Another reason not to focus on response time is that in integrating processes, where there is no steady state, a response time is irrelevant.
The remaining question is: when is the automation system too fast? The example that most comes to mind is when the faster system causes greater resonance or interaction. You want the most important loops to see oscillations from less important loops whose period is at least four times the important loop’s ultimate period, to reduce resonance and interaction. Hopefully this is done by making the more important loop faster, but if necessary it is done by making the less important loops slower. A less recognized but very common case of needing to slow down an automation loop is when it creates a load disturbance to other loops (e.g., a feed rate change). While step changes are what are analyzed in the literature as disturbances, in real applications there are seldom any step changes, due to the tuning of the PID and the response of the valve. This effect can be approximated by applying a time constant to the load disturbance and realizing that the resulting errors are reduced, compared to the step disturbance, by a factor of one minus e raised to the power of negative lambda divided by the disturbance time constant (1 − e^(−λ/τd)).
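The reduction factor at the end of this paragraph can be written as a one-line helper (a sketch; lambda and the disturbance time constant must share the same time units):

```python
import math

def load_attenuation(lam, dist_tc):
    """Fraction of the step-disturbance error remaining when the load is
    filtered by a disturbance time constant dist_tc: 1 - exp(-lambda / dist_tc)."""
    return 1.0 - math.exp(-lam / dist_tc)

# A disturbance lag four times lambda cuts the error to about 22% of the step case.
print(load_attenuation(10.0, 40.0))
```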
Overshoot of a temperature or pH setpoint is extremely detrimental to bioreactor cell life and productivity. Making the loop response much slower by much less aggressive tuning settings and a PID structure of Integral on Error and Proportional-Derivative on Process Variable (I on E and PD on PV) is greatly needed and permitted because the load disturbances from cell growth rate or production rate are incredibly slow (effective process time constant in days). In fact, fast disturbances are the result of one loop affecting another (e.g., pH and dissolved oxygen control).
In dryer control, the difference between inlet and outlet temperatures that is used as the inferential measurement of dryer moisture is filtered by a large time constant that is greater than the moisture controller’s reset time. This is necessary to prevent a spiraling oscillation from positive feedback.
Filters on setpoints are used in loops whose setpoint is set by an operator or a valve position controller to change the process operating point or production rate. This filter can provide synchronization in ratio control of reactant flow maintaining the ability of each flow loop to be tuned to deal with supply pressure disturbances and positioner sensitivity limits. However, a filter on a secondary lower loop setpoint in cascade control is generally detrimental because it slows down the ability of the primary loop to react to disturbances.
Finally, more controversial but potentially useful is a filter on the pH at the outlet of a static mixer for a strong acid and base to control in the neutral region. Here the filter acts to average the inevitable, extremely large oscillations due to the nearly nonexistent back mixing and the steep titration curve. The result is a happier valve and operator. The averaged pH setpoint should be corrected by a downstream pH loop on a well-mixed vessel that sees a much smoother pH on a much narrower region of the titration curve. A better solution is signal characterization. The static mixer controlled variable becomes the abscissa of the titration curve (reagent demand) rather than the ordinate (pH). This linearization greatly reduces the oscillations from the steep portion of the titration curve and enables a larger PID gain to be used. The titration curve need not be very accurate, but it must include the effect of absorption of carbon dioxide from exposure to air and the change in dissociation constants, and consequently actual solution pH, with temperature, which is not addressed by a standard temperature compensator that simply addresses the temperature effect in the Nernst equation. You also need to be aware that the pH of process samples, and consequently the shape of the titration curve, can change due to changes in sample liquid phase composition from reaction, evaporation, absorption and dissolution. The longer the time between the sample being taken and titrated, the more problematic these changes are.
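A minimal sketch of the signal characterization described above, using a hypothetical titration curve (the points below are illustrative, not from a real sample): the measured pH (ordinate) is converted to reagent demand (abscissa) by piecewise-linear interpolation of the inverse curve.

```python
from bisect import bisect_left

# Hypothetical titration curve: (pH, reagent-to-feed ratio), monotonic in pH
CURVE = [(2.0, 0.0), (3.2, 0.8), (4.5, 0.95), (7.0, 1.0),
         (9.5, 1.05), (10.8, 1.2), (12.0, 2.0)]

def ph_to_reagent_demand(ph):
    """Piecewise-linear inverse of the titration curve: pH -> reagent demand.
    The characterized value is far more linear than pH on the steep portion."""
    phs = [p for p, _ in CURVE]
    i = bisect_left(phs, ph)
    if i <= 0:
        return CURVE[0][1]
    if i >= len(CURVE):
        return CURVE[-1][1]
    (p0, r0), (p1, r1) = CURVE[i - 1], CURVE[i]
    return r0 + (r1 - r0) * (ph - p0) / (p1 - p0)

# The controlled variable becomes reagent demand instead of pH:
print(ph_to_reagent_demand(7.0))  # 1.0 at the neutral setpoint
```

As noted above, the curve used in practice should also reflect carbon dioxide absorption and the temperature dependence of the dissociation constants.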
The post, Solutions to Prevent Harmful Feedforwards, originally appeared on the ControlGlobal.com Control Talk blog.
Here we look at applications where feedforward can do more harm than good and what to do to prevent this situation. This problem is more common than one might think. In the literature we mostly hear how beneficial feedforward can be for measured load disturbances. Statements are made that the limitation is the accuracy of the feedforward and that consequently an error of 2% can still result in a 50:1 improvement in control. This optimistic view does not take into account process, load and valve dynamics. The feedforward correction needs to arrive in the process at the same point and the same time as the load disturbance. This is traditionally achieved by passing the feedforward (FF) through a deadtime block and a lead-lag block. The FF deadtime is set equal to the load path deadtime minus the correction path deadtime. The FF lead time is set equal to the correction path lag time. The FF lag time is set equal to the load path lag time. If the FF arrives too soon, we create inverse response; if the FF arrives too late, we create a second disturbance. Setting up tuning software to identify and compute the FF dynamics can be challenging. Even more problematic are the following feedforward applications that do more harm than good despite dynamic compensation.
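The traditional dynamic compensation described above can be sketched as a deadtime block followed by a discrete lead-lag (a simplified illustration; an industrial implementation would use the DCS’s standard deadtime and lead-lag function blocks):

```python
from collections import deque

class FeedforwardCompensator:
    """Deadtime block plus lead-lag for feedforward dynamic compensation.
    Per the rules above: deadtime = load path deadtime - correction path
    deadtime, lead = correction path lag, lag = load path lag."""

    def __init__(self, deadtime, lead, lag, dt):
        self.buf = deque([0.0] * round(deadtime / dt))  # deadtime as whole samples
        self.lead, self.lag, self.dt = lead, lag, dt
        self.prev_u = 0.0
        self.state = 0.0

    def step(self, ff):
        self.buf.append(ff)
        u = self.buf.popleft()                     # deadtime-delayed FF
        du = (u - self.prev_u) / self.dt
        self.prev_u = u
        x = u + self.lead * du                     # lead action
        self.state += (self.dt / (self.lag + self.dt)) * (x - self.state)
        return self.state                          # first-order lag output

# Pure 3 s delay when lead and lag are zero:
comp = FeedforwardCompensator(deadtime=3.0, lead=0.0, lag=0.0, dt=1.0)
print([comp.step(1.0) for _ in range(4)])  # [0.0, 0.0, 0.0, 1.0]
```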
(1) Inverse response from the manipulated flow causes excessive reaction in the opposite direction of the load. The inverse response from a feedwater change can be so large as to cause a boiler drum high or low level trip, a situation that particularly occurs for undersized drums and missing feedwater heaters due to misguided attempts to save on capital costs. The solution here is to use a traditional three-element drum level control, but added to the traditional feedforward is an unconventional feedforward with the opposite sign that is decayed out over the period of the inverse response. In other words, for a step increase in steam flow, there would initially be a step decrease in the boiler feedwater feedforward added to the three-element drum level controller output that is trying to increase feedwater flow. This prevents shrink and a low level trip from bubbles collapsing in the downcomers from an increase in cold feedwater. For a step decrease in steam flow, there would be a step increase in the boiler feedwater feedforward added to the three-element drum level controller output that is trying to decrease feedwater flow. This prevents swell and a high level trip from bubbles forming in the downcomers from a decrease in cold feedwater. A severe problem of inverse response can occur in furnace pressure control when the scale is a few inches of water column and the incoming manipulated air is not sufficiently heated. The inverse response from the ideal gas law can cause a pressure trip. An increase in cold air flow causes a decrease in gas temperature and consequently a relatively large decrease in gas pressure at the furnace pressure sensor. A decrease in cold air flow causes an increase in gas temperature and consequently a relatively large increase in gas pressure at the furnace pressure sensor.
(2) Deadtime in the correction path is greater than the deadtime in the load path. The result is a feedforward that arrives too late, creating a second disturbance and worse control than if there was no feedforward. This occurs whenever the correction path is longer than the load path. An example is distillation column control where the feed load upset stream is closer to the temperature control tray than the corrective change in reflux flow. The solution is to generate the feedforward signal for ratio control based on a setpoint change that is then delayed before being used by the feed flow controller. The delay is equal to the correction path deadtime minus the load path deadtime. The same problem can occur for a reagent injection delay that often occurs due to conventionally sized dip tubes and small reagent flows. The same solution applies in terms of using an influent flow controller setpoint for feedforward ratio control of reagent and delaying the setpoint used by the influent flow controller.
(3) Feedforward correction makes the response to an unmeasured disturbance worse. This occurs in unit operations such as distillation columns and neutralizers where the unmeasured disturbance from a feed composition change is made worse by a feedforward correction based on feed flow. Often feed composition is not measured and varies widely due to parallel unit operations and a combination of flows that become the feed flow. For pH, the nonlinearity of the titration curve increases the sensitivity to feed composition. Even if the influent pH is measured, the pH electrode error or the uncertainty of the titration curve makes a feedforward correction for feed pH do more harm than good for setpoints on the steep part of the curve. If the feed composition change requires a decrease in manipulated flow and there is a coincidental increase in feed flow that corresponds to an increase in manipulated flow, or vice versa, the feedforward does more harm than good. The solution is to compute the required rate of change of manipulated flow from the unmeasured disturbance and combine it with the computed rate of change for the feedforward correction, paying attention to the signs of the rates of change. If the rate of change of manipulated flow required by the unmeasured disturbance is in the opposite direction and exceeds the computed feedforward correction rate of change, the feedforward rate of change is clamped at zero to prevent making control worse. If the rates of change for the manipulated flow are in the same direction, the magnitude of the feedforward rate of change is correspondingly increased.
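One way to sketch this sign logic (an illustration under a simple sign convention, where both inputs are the signed rate of change demanded of the manipulated flow):

```python
def net_feedforward_rate(ff_rate, unmeasured_rate):
    """Combine the manipulated-flow rate of change demanded by the measured
    feedforward (ff_rate) with the rate demanded by the unmeasured
    disturbance (unmeasured_rate). An opposing demand that meets or exceeds
    the feedforward clamps it at zero; a same-direction demand increases it."""
    net = ff_rate + unmeasured_rate
    if ff_rate * net <= 0:  # opposing demand met or exceeded the feedforward
        return 0.0
    return net

print(net_feedforward_rate(2.0, -3.0))  # 0.0  (clamped: opposing and larger)
print(net_feedforward_rate(2.0, 1.0))   # 3.0  (same direction: increased)
```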
I am trying to see how all this applies in my responses to known and unknown upsets to my spouse.
I have seen engineers and technologists thrown into the world of process instrumentation and control (PIC) with little or no knowledge of this engineering specialty—and they were expected to perform immediately. At best, they may have taken a course in control theory, which is very rarely (if ever) used in a plant environment.
PIC typically represents a substantial cost to an average industrial project. It’s a high-tech discipline critical to the success and survival of a plant, and yet it is typically learned “on the job.” Many people working in the discipline lack the proper training needed to make appropriate decisions. An error could result in a very expensive or hazardous situation.
Many of them don't know the basics. Over the years, PIC personnel have come to me with questions such as, “How does an orifice plate work? With a square root output? Why?” and “How can I describe all this logic? In a logic diagram? What’s that?”
Worse, I have seen so-called experienced PIC personnel facing a ground loop problem because both the transmitter and receiver had their signals grounded. The solution they took? They went back to the vendor of the receiver and asked to have the equipment isolated from ground. In other words, a modification to an electrically-approved off-the-shelf product. The cost of modifying the circuit boards on these fancy receivers and obtaining the required approvals—and there were 20 of them—was astronomical. The experienced personnel and their supervisor had never heard of a loop isolator. Unbelievable, but true.
My examples could go on, filling a few pages. However, I will stop here as the topic of this article is not about listing my complaints. But you can understand what is typically encountered due to lack of knowledge, which is due to the lack of good training.
This is not the fault of the people doing the work. They were never properly trained. The result of this lack of training is poor performance and longer times to correctly implement control systems in a competitive environment squeezed by tight budgets and stiff competition.
There are two main problems facing the need for training: time and money. Time is a problem because organizations operate with a skeleton staff and therefore, it is very hard for a manager to let an employee take time off for training. Money is another problem as budgets are tight, and global competition does not leave much room for “extra” spending on training. In addition to course fees, there is also the cost of traveling and accommodation expenses to a location where face-to-face training is provided.
Besides the time and money issues, many engineering associations have now implemented a requirement for continuing professional development (CPD) for their members. Under such a requirement, members must provide a declaration of competence combined with a report of how they are maintaining competence in their discipline. So, adding to the time and money issues, we now have CPD requirements. What can be done?
Training is available in different formats, each with its advantages and disadvantages. It can be provided in the classical form of face-to-face in regular classrooms. However, face-to-face teaching is relatively expensive due to the student having to travel to a remote location (where the class is conducted). In addition, the employee is absent from his/her workplace.
A multitude of face-to-face courses are available from equipment vendors and manufacturers, training companies, universities and technical colleges. The majority of them are not in a sequential format to allow a person to start with the basics and move on to more complex topics. In addition, and quite often, these courses are either too theoretical, or are geared for someone who already has a reasonably good basic knowledge of PIC.
Self-teach programs are another training format, available either as self-teach books or as software loaded onto personal computers, some of which is interactive. This solution is probably the lowest in cost. However, without an instructor available to answer questions, it is up to the student to understand the information at hand and, more important, to have the self-discipline to proceed and complete the learning process independently. In the end, self-teach programs do not typically provide proof of successful completion and understanding by the student.
How about those who want to learn about PIC in an organized fashion, in a condensed time frame, from a practical point of view, with limited training funds and without the (almost impossible) absence from work? The solution is instructor-led quality online learning. This approach provides training without the student having to travel, keeping the personnel on site and costs reduced to a relatively affordable minimum.
Online education allows a student to progress at a relatively convenient pace. With good instructional material fit to the course, students learn and complete quizzes and exams to confirm their acquired knowledge. This approach, with an instructor to answer questions, provides an incentive to finish the study program. It is followed by a certificate obtained on successful completion of quizzes and exams, and is relatively low in cost while keeping the student available at work since the online sessions are typically held in the evening.
I have been teaching university-based online PIC courses for about eight years. I have learned through trial and error as well as through students’ comments and suggestions that the most effective approach for a quality PIC online course is to present it in three modules spread over a year. Such a course would cover the different facets of PIC from a non-mathematical and practical point of view. The spread over one year allows the students to gradually apply and practice some of the information learned. It also avoids students’ information overload.
Including theory such as Laplace Transform, Bode Plots and the like in a PIC practical course has little value in day-to-day plant operation. And speaking from personal experience, this type of theoretical information would be forgotten shortly after the course is completed.
To the best of my knowledge, such online, instructor-led, university-based PIC training programs are presently taught in North America by three institutions. All three use the same award-winning reference book published by the ISA, titled “The Condensed Handbook of Measurement and Control.”
In the United States, the course is offered by: Penn State University - Berks (phone# 610-396-6221) and University of Kansas Continuing Education (phone# 1-877-404-5823 or 785-864-5823).
In Canada, the course is offered by: Dalhousie University Continuing Education (phone # 800-565-1179 or 902-494-6079).
These three organizations offer a university certificate that is awarded after the successful completion of the three modules, including all quizzes and final exams. The three modules of these certificate programs amount to approximately 150 classroom hours. The universities recommend that participants attend Modules 1, 2, and 3 in sequential order; however, some students, due to their prior knowledge of PIC, take the modules in a different order and have successfully passed all quizzes and exams.
I’ve successfully instructed face-to-face PIC courses for more than 10 years in many industrial plants, at ISA functions, and at several North American universities. Then, due to a substantial drop in student enrollment following the financial problems of 2008-2009, I started online training at two universities. At the beginning, I was hesitant about the potential effectiveness and success of online training. I have now changed my mind. In addition to avoiding the costs and time lost away from the workplace, online training has proven to be effective and practical for the students. A five-fold increase in the number of students occurred with the implementation of the online course compared to the face-to-face course it replaced, proving its success and benefits.
Online courses have their limitations. They can replace many face-to-face courses, but not all. For example, online learning can’t provide hands-on training such as control equipment maintenance. Dedicated training facilities provide such training, often at a vendor’s facility.
The main benefit of an online, university-based, and instructor-led certificate program is this: when accompanied by a good reference book, quality course notes, quizzes, and exams, online PIC training provides students with the knowledge and confidence needed to grasp this field of technology.
As a final note, if you think education is expensive, try ignorance. You'll find it more expensive.
N.E. (Bill) Battikha, PE, president, Bergotech / firstname.lastname@example.org
About the Author
N.E. (Bill) Battikha, P.E., has more than 40 years of experience in PIC, working mainly in the USA and Canada. He holds a Bachelor of Science in Engineering and is a member of the Delaware Association of Professional Engineers. Throughout his career, Bill has gained a lot of experience in management, engineering and training. Bill has generated and conducted training courses for many universities in the USA and Canada, including Penn State University, the University of Wisconsin, Kansas State University, the University of Toronto and Dalhousie University. He co-authored a patent and a commercial software package. He also wrote four books on PIC, all published by the ISA, with the third one (The Condensed Handbook of Measurement and Control) twice awarded the Raymond D. Molloy Award as an ISA best-seller. Bill is the president of Bergotech Inc., a firm specializing in teaching online engineering courses in a variety of disciplines as well as implementing university-based online programs. For more info, or to contact the author, please visit www.bergotech.com
The post What are New Technologies and Approaches for Batch and Continuous Process Control? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
What is the technical basis and ability of technologies other than PID and model predictive control (MPC)? These technologies seem fascinating and I would like to know more, particularly as I study for the ISA Certified Automation Professional (CAP) exam.
Michel Ruel has achieved considerable success in the use of fuzzy logic control (FLC) in mineral processing as documented in “Ruel’s Rules for Use of PID, MPC and FLC.” The process interrelationships and dynamics in the processing of ores are not defined due to the predominance of missing measurements and unknown effects. Mineral processing PID loops are often in manual, not only for the usual reasons of valve and measurement problems, but also because process dynamics between a controlled and manipulated variable radically change, including even the sign of the process action (reverse or direct) based on complex multivariable effects that can’t be quantified.
If the FLC configuration and interface are set up properly for visibility, understandability and adjustability of the rules, the plant can change the rules as needed, enabling sustainable benefits. In the application cited by Michel Ruel, every week metallurgists validate the rules and work with control engineers to make slight adjustments. A production record was achieved in the first week. The average use of energy per ton decreased by 8 percent, and the tonnage per day increased by 14 percent.
There have been successful applications of PID and MPC in the mining industry as detailed in the Control Talk columns “Process control challenges and solutions in mineral processing” and “Smart measurement and control in mineral processing.”
I have successfully used FLC on a waste treatment pH system to prevent RCRA violations at a Pensacola, Fla. plant because of my initial excitement about the technology. It did very well for decades but the plant was afraid to touch it. The 2007 Control magazine article “Virtual Control of Real pH” with Mark Sowell showed how you could replace the FLC with an MPC and PID strategy that could be better maintained, tuned and optimized.
We used FLC integrated into the software of a major supplier of expert systems in the 1980s and 1990s, but there were no real success stories for FLC. There was one successful application of an expert system for a smart level alarm, but it did not use FLC, and a simple material balance could have done as well. There were several applications of smart alarms that were turned off. After nearly 100 man-years, we have little to show for these expert systems. You could add a lot of rules for FLC and logic based on the expertise of the application’s developer, but how these rules played together and how you could tell which rule needed to be changed were major problems. When the developer left the production unit, operators and process engineers were not able to make the changes inevitably needed.
The standalone field FLC advertised for better temperature setpoint response cannot do better than a well-tuned PID if you use all of the PID options summarized in the Control magazine article “The greatest source of process control knowledge,” including a PID structure such as 2 Degrees of Freedom (2DOF) or a setpoint lead-lag. You can also use gain scheduling in the PID if necessary. The problem with FLC is how you tune it and update it for changing process conditions. I wrote the original section on FLC in A Guide to the Automation Body of Knowledge, but the next edition omits it by mutual agreement between me and ISA, since making more room to help get the most out of your PID was judged more generally useful.
FLC has been used in pulp and paper. I remember instances of FLC for kiln control but since then we have developed much better PID and MPC strategies that eliminate interaction and tuning problems.
As far as artificial neural networks (ANN) are concerned, I have seen some successful applications in batch end point detection and prediction and in inferential dryer moisture control. The insertion of time delays on the inputs to make them coincide with the measured output is required for continuous operations. For plug flow operations like dryers, this can be readily done since the deadtime is simply the volume divided by the flow rate. For continuous vessels and columns, the insertion of very large lag times and possibly a small lead time is needed besides the deadtime. No dynamic compensation is needed for batch operation end point prediction.
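For the plug-flow case, the time alignment is straightforward to sketch (illustrative helper functions, assuming uniform sampling):

```python
def plug_flow_deadtime(volume, flow):
    """Transport delay through a plug-flow unit such as a dryer: volume / flow."""
    return volume / flow

def align_input(series, deadtime, dt):
    """Shift an input series forward by its deadtime (in samples) so each
    input value lines up with the output it actually affected.
    Leading samples with no aligned input are marked None."""
    n = round(deadtime / dt)
    return [None] * n + series[:len(series) - n]

# A 10 m3 dryer at 2 m3/min has a 5 min transport delay:
print(plug_flow_deadtime(10.0, 2.0))        # 5.0
print(align_input([1, 2, 3, 4], 2.0, 1.0))  # [None, None, 1, 2]
```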
You have to be very careful not to go outside of the test data range because of bizarre nonlinear predictions. You can also get local reversals of the process gain sign, causing buzzing if the predicted variable is used for closed loop control. Finally, you need to eliminate correlations between inputs. I prefer multivariate statistical process control (MSPC), which eliminates cross correlation of inputs by virtue of principal component analysis and does not exhibit process gain sign reversals or bizarre nonlinearity upon extrapolation outside of the test data range. Also, MSPC can provide a piecewise linear fit to nonlinear batch profiles, a technique we commonly implement with signal characterizers for any nonlinearity. I think there is an opportunity for MSPC to provide more intelligent and linear variables for an MPC, as we do with signal characterizers.
For any type of analysis or prediction, whether using ANN or MSPC, you need to have inputs that show the variability in the process. If a process variable is tightly controlled, the PID or MPC has transferred the variability to the manipulated variable. Ideally, flow measurements should be used, but if only a position or speed is available and the installed flow characteristic is nonlinear, signal characterization should be used to convert the position or speed to a flow.
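A sketch of such a signal characterizer, using a hypothetical installed flow characteristic (the position-flow points below are illustrative):

```python
# Hypothetical installed characteristic: % valve position -> % of maximum flow
POS  = [0, 10, 20, 30, 40, 50, 60, 80, 100]
FLOW = [0, 4, 12, 26, 46, 65, 80, 94, 100]

def position_to_flow(pos):
    """Piecewise-linear signal characterizer converting a valve position
    (or speed) to the flow implied by the installed characteristic."""
    pos = max(POS[0], min(POS[-1], pos))
    for (p0, f0), (p1, f1) in zip(zip(POS, FLOW), zip(POS[1:], FLOW[1:])):
        if p0 <= pos <= p1:
            return f0 + (f1 - f0) * (pos - p0) / (p1 - p0)

print(position_to_flow(45.0))  # 55.5% flow at 45% position
```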
I implemented a neural network some years ago on a distillation column level control. The column was notoriously difficult to control. The level would swing all over and anything would set it off, such as weather or feed changes. The operators had to run it in manual because automatic was a hopeless waste of time.
At the time (and this information might be dated) the neural network was created by bringing a stack of parameters into the calculation and “training” it on the data. Theoretically the calculation would strengthen the parameters that mattered, weaken the parameters that didn’t, and eventually configure itself to learn the system.
The process taught me much. Here are my main learning points:
1) Choose the training data wisely. If you give it straight line data then it learns straight lines. You need to teach it using upset data so it learns what to do when things go wrong. (Then use new upset data to test it.)
2) Choose the input parameters wisely. I started by giving it everything. Over time I came to realize that the data it needed wasn’t the obvious. In this case it needed:
3) Ultimately the system worked very well – but honestly by the time I had gone through four iterations of training and building the system I KNEW the physics behind it. The calculation for controlling the level was fairly simple when all was said and done. I probably could have just fed it into a feedforward PID and accomplished the same thing.
The experience was interesting and fun, and I actually got an award from ISA for the work. However, when all was said and done, I realized it wasn’t nearly as impressive a tool as all the marketing brochures suggested. (At the time it was all the rage – companies were selling neural network controller packages and magazine articles were predicting it would replace PID in a matter of years.)
Thank you, this is a lot more practical insight than I have been able to glean from the books.
I imagine the batch data analytics program offered by a major supplier of control systems is an example of the MSPC you mentioned. I think I have some papers on it stashed somewhere, since we have considered using it for some of our batch systems. What is batch data analytics and what can it do?
Yes, batch data analytics uses MSPC technology with some additional features, such as dynamic time warping. The supplier of the control system software worked with Lubrizol’s technology manager Robert Wojewodka to develop and improve the product for batch processes, as highlighted in the InTech magazine article “Data Analytics in Batch Operations.” Data analytics eliminates relationships between process inputs (cross correlations) and reduces the number of process inputs by the use of principal components constructed to be orthogonal and thus independent of each other in a plot of a process output versus principal components. For two principal components, this is readily seen as an X, Y and Z plot with each axis at a 90-degree angle to the other axes. The X and Y axes cover the ranges of values of the principal components, and the Z axis is the process output. The user can drill down into each principal component to see the contribution of each process input. The use of graphics to show this can greatly increase operator understanding. Data analytics excels at identifying unsuspected relationships. For process conditions outside of the data range used in developing the empirical models, linear extrapolation helps prevent bizarre extraneous predictions. Also, the use of a piecewise linear fit means there are no humps or bumps that cause a local reversal of process gain and buzzing.
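The construction of orthogonal principal components can be sketched for the two-input case with a plain covariance rotation (an illustration of the idea, not the MSPC product itself; real tools also scale the inputs and handle many variables):

```python
import math

def pca_2d(xs, ys):
    """Rotate two correlated inputs into principal component scores.
    The rotation diagonalizes the 2x2 covariance matrix, so the PC1 and
    PC2 scores are uncorrelated (orthogonal), as described above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    xc = [x - mx for x in xs]
    yc = [y - my for y in ys]
    sxx = sum(v * v for v in xc)
    syy = sum(v * v for v in yc)
    sxy = sum(a * b for a, b in zip(xc, yc))
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # angle of the PC1 axis
    c, s = math.cos(theta), math.sin(theta)
    pc1 = [c * a + s * b for a, b in zip(xc, yc)]
    pc2 = [-s * a + c * b for a, b in zip(xc, yc)]
    return pc1, pc2

pc1, pc2 = pca_2d([1.0, 2.0, 3.0, 4.0, 5.0], [1.2, 1.9, 3.1, 4.2, 4.8])
print(sum(a * b for a, b in zip(pc1, pc2)))  # ~0: the scores are uncorrelated
```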
Batch data analytics (MSPC) does not need to identify the process dynamics because all of the process inputs are focused on a process output at a particular part of the batch cycle (e.g., endpoint). This is incredibly liberating. The piecewise linear fit to the batch profile enables batch data analytics to deal with the nonlinearity of the batch response. The results can be used to make mid-batch corrections.
There is an opportunity for ANN to be used with MSPC to deal with some of the nonlinearities of inputs, but the proponents of MSPC and ANN often think their technology is the total solution and don’t work together. Some even think their favorite technology can replace all types of controllers.
Getting laboratory information on a consistent basis is a challenge. I think that for training the model you could enter the batch results manually. When choosing batches, you want to include a variety of batches, but all with normal operation (no outliers from failures of devices or equipment or improper operations). The applications noted in the Wojewodka article emphasize that what you want as a model is the average batch and not the best batch (not the “golden batch”). I think this is right for starting to detect abnormal batches, but process control seeks to find the best and reduce the variability from the best, so eventually you want a model that is representative of the best batches.
I like MSPC "worm plots" because they tell me, from tail to head, the past and future of batches, with tightness of coil adding insight. The worm plot is a series of batch end points expressed as a key process variable (PV1n) that is plotted as scores of principal component 1 (PC1) and principal component 2 (PC2).
If you want to do some automated correction of the prediction by taking a fraction of the difference between the predicted result and lab result, you would need to get the lab result into your DCS probably via OPC or some lab entry system interfaced to your DCS. Again the timing of the correction is not important for batch operations. Whenever the bias correction comes in, the prediction is improved for the next batch. The bias correction is similar to what is done in MPC and the trend of the bias is useful as a history of how the accuracy is changing and whether there is possibly noise in the lab result or model prediction.
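The bias correction described above can be sketched in a few lines; the 0.3 correction fraction and the predicted/lab values are assumptions for illustration only:

```python
# Sketch: shift the model prediction by a fraction of the lab-minus-prediction
# error whenever a lab result arrives, as done for MPC bias correction.
bias = 0.0
fraction = 0.3   # assumed filter fraction; tune for lab noise vs model drift
for predicted, lab in [(95.0, 97.0), (96.0, 97.5)]:
    corrected = predicted + bias          # prediction used for this batch
    bias += fraction * (lab - corrected)  # improves the next batch's prediction
    print(round(corrected, 2), round(bias, 2))
```

Trending `bias` over many batches gives the history of accuracy changes mentioned above; a noisy bias trend points at lab or model noise.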
The really big name in MSPC is John F. MacGregor at McMaster University in Ontario, Canada. McMaster University has expanded beyond MSPC to offer a process control degree. Another big name there is Tom Marlin, who I think came originally from the Monsanto Solutia Pensacola Nylon Intermediates plant. Tom gives his view in the InTech magazine article “Educating the engineer,” Part 2 of a two-part series. Part 1 of the series, “Student to engineer,” focused on engineering curriculum in universities.
For more on my view of why some technologies have been much more successful than others, see my Control Talk blog “Keys to Successful Control Technologies.”
The post, Common Mistakes not Commonly Understood - Finale, first appeared on the Controlglobal.com Control Talk blog.
Here is our finale just in time to serve as a momentous process control gift for the Holidays. Just don’t try to re-gift this to anyone unless they are into the automation profession or you big time.
(21) Misuse and Missing Use of Setpoint Filter. The use of a setpoint filter on a secondary loop setpoint will slow down the ability of the primary loop to make a correction for changes in load to the primary loop or in the setpoint of the primary controller in cascade control. For this reason, there has been a general rule of thumb that a setpoint filter should not be used on secondary controllers. However, as with most rules of thumb, there are important exceptions derived from a deeper understanding. The setpoint filter on the secondary loop does not interfere with the ability of the secondary loop to reject disturbances and to deal with nonlinearities within the secondary loop, which is often its most frequent and important role. If the setpoint filter is judiciously applied so that it is less than 10% of the primary loop dead time, the effect on the ability of the primary loop to reject disturbances originating in the primary loop is negligible. A judicious setpoint filter can ensure there are no temporary unbalances from changes of multiple flows under ratio control by enabling all the flows to move in concert. This is critical for reactant flows and the inline mixing of any flows. Often this unbalance was prevented by tuning the secondary flow loops to have the same closed loop time constant. Unfortunately, this forces the tuning of loops to be as slow as the slowest or most nonlinear flow loop. This detuning reduces the ability of the other loops to deal with their pressure disturbances and the nonlinearities of their installed flow characteristics. Also, the use of setpoint rate limits that are different up and down gives directional move suppression to provide a fast approach to a better operating condition and a fast getaway from an undesirable operating point in the process variable (PV) or manipulated valve position.
This is important to provide a fast opening and slow closing surge valve for compressor control, to optimize a user valve position to improve process efficiency or maximize production by valve position control, and to prevent oscillations across a split range point. For primary loops, a setpoint filter time equal to the reset time is the same as a PID structure of Proportional and Derivative on PV and Integral on Error (PD on PV, I on Error), so that the setpoint and load responses are the same. The addition of a lead time to the setpoint that is about 25% of the setpoint filter, where the filter is the lag time of a lead-lag block, enables a faster setpoint response.
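The setpoint filter and the asymmetric up/down rate limits described in this mistake can be sketched as two small functions; the time constant, rates and flow values are illustrative assumptions:

```python
# Sketch: a first-order setpoint filter keeps ratioed flows moving in
# concert, and different up/down rate limits give directional move
# suppression (fast approach, slow getaway or vice versa).
def filter_sp(sp_filtered, sp_target, dt, tau):
    """One discrete step of a first-order setpoint filter, time constant tau."""
    return sp_filtered + (dt / (tau + dt)) * (sp_target - sp_filtered)

def rate_limit_sp(sp_old, sp_target, dt, rate_up, rate_down):
    """Clamp the setpoint move to different up and down rates (units/s)."""
    delta = sp_target - sp_old
    delta = min(delta, rate_up * dt)
    delta = max(delta, -rate_down * dt)
    return sp_old + delta

sp = 50.0
sp = rate_limit_sp(sp, 80.0, dt=1.0, rate_up=10.0, rate_down=2.0)  # fast up
print(sp)   # 60.0
sp = rate_limit_sp(sp, 0.0, dt=1.0, rate_up=10.0, rate_down=2.0)   # slow down
print(sp)   # 58.0

sp_f = 0.0
for _ in range(3):
    sp_f = filter_sp(sp_f, 100.0, dt=1.0, tau=4.0)
print(round(sp_f, 1))   # 48.8 -- flows ramp together instead of stepping
```

For a fast-opening, slow-closing surge valve, `rate_up` would be large and `rate_down` small, matching the compressor control example above.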
(22) Choosing and Achieving Level Control Objective. We are increasingly becoming aware that level loops are tuned too aggressively, causing rapid changes in the manipulated flow that upset downstream unit operations. The solution for level loops that need loose control is not to simply reduce the level controller gain, because this can cause slow rolling oscillations. The tuning objective is to minimize the transfer of variability from level to the manipulated flow, more often stated as the maximization of the absorption of variability. The solution is to first increase the reset time dramatically, like one or two orders of magnitude, and then decrease the PID gain so that the product of the PID gain and reset time is greater than twice the inverse of the integrating process gain, whose units are %/sec/% (1/sec), as discussed in Mistake 2 in Part 1. For surge tank level control, the objective is obviously maximization of the absorption of variability. This objective has gained such popularity that the cases where the level controller must be tuned tightly are not recognized and addressed. In fact, some may say there are no such cases and that feedforward control can take care of providing tighter level control when needed. There are exceptions. The biggest one that comes to mind is the distillate receiver level controller that manipulates reflux flow. Tight level feedback control achieves internal reflux control, where changes in column top temperature, particularly from blue northers, cause a change in overhead distillate flow and hence distillate receiver level, resulting in a correction of manipulated reflux flow in the direction that minimizes the disturbance. Another case is where tight residence time control in continuous reactors provides enough time to complete a reaction but not so much time as to cause side reactions or polymer buildup.
A change in production rate must result in a change in level setpoint that must be reached quickly by tight level control so that residence time (level/flow) is as constant as possible. A similar concern may exist for continuous crystallizers. For multiple effect evaporators, changes in discharge flow from the last stage to control product solids concentration must be translated to changes in feed coming into each stage by its level controller to affect product concentration. There are similar requirements whenever an upstream flow is manipulated to control a level, such as a raw material makeup flow to deal with changes in a recovered recycle flow. While feedforward flow and ratio control can help, good level control deals with the inevitable errors that cause unbalances in stoichiometry.
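The loose-level tuning constraint from Mistake 22 (PID gain times reset time greater than twice the inverse of the integrating process gain) can be sketched as a quick check; the gain values below are illustrative assumptions, not recommendations:

```python
# Sketch of the tuning check: for maximum absorption of variability,
# first raise the reset time dramatically, then lower the PID gain while
# keeping Kc * Ti > 2 / Ki to avoid slow rolling oscillations.
def min_reset_time(pid_gain, integrating_gain):
    """Smallest reset time (s) satisfying Kc * Ti > 2 / Ki."""
    return 2.0 / (pid_gain * integrating_gain)

Ki = 0.0001   # assumed integrating process gain, %/sec per % of output
Kc = 0.5      # assumed low PID gain for absorbing variability
print(round(min_reset_time(Kc, Ki)))   # 40000 s -- one to two orders of
                                       # magnitude larger than usual
```

This illustrates why the reset time must go up before the gain comes down: halving `Kc` doubles the minimum reset time needed to stay out of the slow-oscillation region.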
(23) Misunderstanding of Load Disturbances. There is a huge disconnect between the literature and what really happens in a plant in terms of the supposed location of a disturbance. The literature, and consequently many tuning methods and new algorithms supposedly better than PID, are based on the disturbance being on the process output, downstream of time constants and dead times in the process, and in most cases even ignoring any time constants or dead times in the measurement. This view is convenient for thinking that model predictive control and internal model control are best for disturbance rejection and that tuning for setpoint changes is sufficient, since a disturbance on the process output is as quick as a change in setpoint. The reality is that nearly all disturbances occur as a process input and are delayed by process dead times or slowed by process time constants. For lag dominant processes, this recognition is particularly important and is the basis of switching from self-regulating tuning rules, where lambda is a closed loop time constant for a setpoint change, to integrating process tuning rules, where lambda is an arrest time for rejection of a load disturbance on the process input. Of course there are a few exceptions where the disturbance is on a process output that would benefit from a larger reset time, but this can be identified by tuning the controller for a setpoint response. Also, when in doubt, a larger reset time is always a good thing to try since integral action is destabilizing. More proportional action can be stabilizing, as discussed in Mistakes 2 and 3 in Part 1.
(24) Missing Automated Startup. Often loops cannot simply be put in automatic for startup. The controller approach to setpoint is often not as smooth and consistent as other controllers' approach to setpoint. Often the operator manually positions valves to get the process to a reasonable operating state before going to automatic control. The best practices of the best operators can be automated and implemented with much better timing and repeatability, enabling continuous improvement by better recognition of what is left to be addressed. If operators say the situation is too complex or conditional on their expertise to be automated, it is an even greater opportunity and motivation for automation. For much more on how procedural automation can be used for startup and dealing with abnormal situations, see the Sept 2016 Control Talk column "Continuous improvement for continuous processes."
(25) Missing Ratio Control. Nearly all process inputs are flows that have a specific ratio to each other for a unit operation as seen on a Process Flow Diagram. The simple use of ratio control is inherently powerful where a “leader” flow is chosen that is often a major feed flow and the other flow controller setpoints designated as “followers” are ratioed to the “leader” flow controller setpoint. If the flows need to work in concert with each other, a filter is applied to each flow setpoint including the “leader” flow for reasons noted in Mistake 21. The actual ratio must be displayed for the operator based on measured flows and the operator must be given the ability to change the ratio for startup and abnormal operating conditions via a ratio controller for each “follower” flow. The ratio often has a feedback correction by a primary temperature or composition controller output. For plug flow volumes, conveyors and sheet lines, the feedback correction changes the ratio setpoint. For back mixed volumes, the feedback correction biases the ratio controller output. For more on ratio control, see the 1/31/2017 Control Talk Blog “Uncommon Knowledge for Achieving Best Feedforward and Ratio Control”
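The leader/follower ratio station described in Mistake 25, with the two styles of feedback trim, can be sketched as a small function; the names, flows and corrections are illustrative assumptions:

```python
# Sketch of a ratio station: follower flow setpoints track the leader
# flow setpoint through a ratio, trimmed by a primary temperature or
# composition controller output.
def follower_sp(leader_sp, ratio, correction=0.0, bias_mode=True):
    """Ratio the follower to the leader; trim by biasing the output
    (back-mixed volumes) or by changing the ratio itself (plug flow)."""
    if bias_mode:
        return leader_sp * ratio + correction
    return leader_sp * (ratio + correction)

leader = 100.0   # assumed leader (major feed) flow setpoint
print(follower_sp(leader, 0.5))                           # 50.0
print(follower_sp(leader, 0.5, correction=2.0))           # 52.0 biased
print(round(follower_sp(leader, 0.5, 0.02, False), 2))    # 52.0 ratio-trimmed
```

The actual ratio shown to the operator would be computed from measured flows, not from these setpoints, so that offsets between setpoint and measurement are visible.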
(26) Misleading response time statements. The term response time has no value unless a percentage of the final response is noted. For linear systems, the 63% response time is the dead time plus one time constant. The 86% response time often used for valve response is the dead time plus two time constants. The 95% and 98% response times are the dead time plus three and four time constants, respectively. Waiting for the 98% response takes a lot of time making the test vulnerable to changing conditions and disturbances. For large distillation columns, it could take days to see a 98% response.
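For a first-order-plus-dead-time response, the percentages quoted above follow directly from the exponential response; a short sketch (the dead time and time constant values are assumed for illustration):

```python
import math

# The n-percent response time is dead time plus k time constants, where
# 63.2/86.5/95.0/98.2 percent correspond to k = 1, 2, 3, 4.
def response_time(dead_time, time_constant, fraction):
    """Time to reach a given fraction of the final first-order response."""
    return dead_time + time_constant * -math.log(1.0 - fraction)

theta, tau = 5.0, 20.0   # assumed dead time and time constant, seconds
for k in (1, 2, 3, 4):
    frac = 1.0 - math.exp(-k)   # 63.2%, 86.5%, 95.0%, 98.2%
    print(round(response_time(theta, tau, frac), 6))   # 25, 45, 65, 85 s
```

The spread from 25 s to 85 s in this small example shows why waiting for the 98% response makes a test so vulnerable to drifting conditions.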
(27) Not knowing the dead time to time constant ratio. The tuning and performance of a control loop for self-regulating processes depends heavily upon the dead time to time constant ratio. Most studies in the literature that are smart enough to include dead time are for loops with a dead time to time constant ratio between 2 and 0.5, termed "balanced" self-regulating processes. When the dead time to time constant ratio is much larger than 1, the process is termed "dead time dominant"; the reset time can be significantly decreased and the PID gain should be decreased to reduce the reaction to noise and the abrupt response due to the lack of a significant time constant. For more on these processes see the 12/1/2016 Control Talk Blog "Deadtime Dominance - Sources, Consequences and Solutions."
When the dead time to time constant ratio is less than ¼, the process is termed “lag dominant” and “near-integrating”. Integrating process tuning rules should be used that increase the reset time and PID gain to account for the reduced degree of negative feedback in the process. In the time frame of a major PID reaction (4 dead times), the process ramps and appears to be similar to an integrating process. The integrated absolute error (IAE) for all processes is proportional to the ratio of the reset time to controller gain. The peak error for “dead time dominant” processes approaches the open loop error (error if controller was in manual). The peak error for “lag dominant” processes is inversely proportional to the controller gain since the PID gain can be quite high dominating the initial response. For more on how the dead time to time constant ratio affects performance see the 7/17/2017 Control Talk Blog “Insights to Process and Loop Performance”
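The classification in this mistake reduces to a simple ratio test; a sketch using the thresholds stated in the text (the example dead times and time constants are assumptions):

```python
# Sketch of the classification above: ratio > 1 is dead time dominant,
# ratio < 0.25 is lag dominant (near-integrating), in between is a
# balanced self-regulating process.
def classify(dead_time, time_constant):
    r = dead_time / time_constant
    if r > 1.0:
        return "dead time dominant"
    if r < 0.25:
        return "lag dominant (near-integrating)"
    return "balanced self-regulating"

print(classify(10.0, 2.0))    # dead time dominant
print(classify(2.0, 100.0))   # lag dominant (near-integrating)
print(classify(5.0, 10.0))    # balanced self-regulating
```

The lag dominant branch is where the text says to switch to integrating process tuning rules with a larger PID gain and reset time.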
(28) Unnecessary crossings of split range point. The valve stiction and nonlinearity and the process discontinuity are greatest when switching from the manipulation of one valve and stream to another. Once a controller output crosses a split range point, the tendency is to oscillate back and forth unless there is a predominant need for one stream versus the other. Putting a dead band into the split range point will cause oscillations if there are two integrators, whether due to integrating action in the process or the integral mode in a cascade loop or positioner. The best way to prevent an unnecessary crossing of the split range point is to put up and down rate limits on a valve or flow controller setpoint where the rate limit is slow in the direction of going back to the split range point if there is no safety issue. For cooling and heating, the movement toward heating across the split range point may be slowed down for temperature control. For venting and gas inlet flows, the movement toward more inlet flow across the split range point may be slowed down for pressure control. For mammalian bioreactor pH control, the movement toward adding a base across the split range point, such as sodium bicarbonate, is slowed down to reduce sodium ion accumulation that increases cell osmotic pressure and cell lysis. External reset feedback of valve position or flow to the primary PID should be used so that the primary PID (temperature, pressure or pH) does not try to change a valve or flow faster than it can respond. This is a great feature in general for cascade control and for providing directional move suppression for valve position control and surge control. For more on external reset feedback see the 4/26/2012 Control Talk Blog "What is the Key PID Feature for Basic and Advanced Control." A corrected and improved block diagram in the time domain for a PID with the positive feedback implementation of integral action for the ISA Standard Form is shown in the accompanying figure.
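A conventional split-range mapping can be sketched as follows; the 50% split point and the cooling/heating pairing are common conventions assumed here for illustration:

```python
# Sketch: map 0-100% controller output to two valves around a 50% split
# point. Directional rate limits (Mistake 21 style) would then slow the
# controller output's approach back toward the split range point.
def split_range(out):
    """Return (cooling valve %, heating valve %) for a 0-100% output."""
    if out <= 50.0:
        return (100.0 - 2.0 * out, 0.0)    # cooling wide open at 0% output
    return (0.0, 2.0 * (out - 50.0))       # heating wide open at 100% output

print(split_range(0.0))     # (100.0, 0.0)
print(split_range(50.0))    # (0.0, 0.0) -- both closed at the split point
print(split_range(75.0))    # (0.0, 50.0)
```

The discontinuity at `out = 50.0`, where both valves are closed, is exactly the point where stiction and process discontinuity make back-and-forth crossings so disruptive.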
(29) Minimizing instrumentation cost. We get hung up on saving a few thousand dollars while often putting millions of dollars at stake in terms of poor process performance and inadvertent shutdowns. A big mistake is allowing a packaged equipment supplier to choose the instrumentation, since the supplier seeks to win the low bid contest. Similarly, allowing purchasing to decide which instruments to buy is fundamentally bad. We need to take our knowledge about instrumentation performance and insist on the best even when the justification is not clear and pressure is put on to lower system cost. My cohort Stan Weiner would purposely increase the initial estimates for projects to give him the freedom to choose the best instrumentation. He favored inline meters such as magnetic flow meters and Coriolis meters over differential head meters because of their better rangeability, accuracy and maintainability and their insensitivity to the piping system. Similarly, for temperatures less than 400 degrees C, I use RTDs with "sensor matching" instead of thermocouples to reduce drift and improve accuracy and sensitivity by orders of magnitude despite a few hundred dollars more in cost.
(30) Lack of middle signal selection. The best way to avoid unnecessary shutdowns, eliminate the reaction to any possible type of single failure including a measurement stuck at setpoint, reduce the reaction to noise, spikes and drift, and provide intelligence as to what is wrong with a measurement is middle signal selection of three independent measurements. For pH this is almost essential. To me it is bizarre how multimillion-dollar bioreactor batches are put at risk by using two instead of three pH electrodes, resulting in anybody's guess as to which electrode is right. Some batches can be ruined by a pH that is off by just a few tenths, yet engineers are reluctant to spend a couple of thousand dollars upfront, not realizing that even if you disregard the cost of a potentially spoiled batch, the reduction in unnecessary maintenance more than pays for the extra electrode. For one large intermediate continuous process, the use of middle signal selection on all of the measurements used by the Safety Instrumented System (SIS) reduced the number of shutdowns from two per year to less than one every two years, saving tens of millions of dollars each year. The risk of a disastrous operator mistake was also greatly reduced because startup is the most difficult and hazardous mode for operations.
I could keep on talking but I think this is enough to start your “New Year Resolutions”. Hopefully, you will be better at keeping them than me.
The post What Are Best Practices and Standards for Control Narratives? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
At the place I work we are typically good at documenting how we configure our controls in the form of DDS documents but not always as good at documenting why they have been configured that way in the form of rigorous control narratives.
We now have an initiative to start retrospectively producing detailed control narratives for all our existing controls and I am looking for best practice, standards and examples of what good looks like for control narratives.
I wondered if you had any good resources in this regard or you could point me in any direction. (I did look at ANSI/ISA-5.06.01-2007 but this seems more concerned with URS/DDS/FDS documents rather than narratives).
We are mainly DeltaV now.
We do a lot of DeltaV systems and we use three different ways to "document" the control system. As a system integrator, "document" for me may mean something different than for you, so let me explain that these documents are my way to tell my programmers exactly how I want the system to be configured. These documents fully define the system's logic so they can program it and I can test against it.
As I said there are three parts:
Obviously batch flowsheets do not apply if your system isn’t batch but the same flow sheets can be used to define an involved sequence.
The tag list is simply a large Excel spreadsheet that includes all of the key parameters – module name, IO name, tuning constants, alarm constants, etc. It also includes a "comment" cell that can include relatively simple logic like "Man only on/off FC valve with open/close limits and 30 sec stroke", "analog input", or "Rev acting PID with man/auto modes and FO valve". Most of the modules can be defined on this spreadsheet.
The logic notes are usually a couple of paragraphs each and explain logic that is more complicated. Maybe we have an involved set of interlocks or ratio or cascade logic. If I have a logic note I’ll reference it in the tag list so the programmer knows to look for it.
The flow sheets are the last part. I usually have a flow sheet for every phase which defines the phase parameters, logic paths, failures, etc. (See Figure 1 for an example of an agitate phase.) Then I create a flow chart for every recipe which defines what phases I am using and what parameters are being passed. (See Figure 2 for an example of a partial recipe.)
Figure 1: Control Narrative Best Practices Agitator Phase
Figure 2: Control Narrative Best Practices Recipe Sample
Hiten Dalal’s Pipeline Feed System Example
I find the American Petroleum Institute Standard API RP 554 Part 1 (R2016) "Process Control Systems: Part 1-Process Control Systems Functions and Functional Specification Development" and the ISA Standard ANSI/ISA 5.06.01-2007 Functional Requirements Documentation for Control Software Applications to be very useful. ANSI/ISA95 also offers guidance on "Enterprise-Control System Integration." These types of documents, in my opinion, help include the input of all stakeholders in the logic without the stakeholders having to be familiar with flow charting, logic diagrams or specific control system engineering terminology. The functional specification, in my opinion, is a progressive elaboration of a simple process description done by the process engineer. Once finalized, the functional specification can be developed into a SCADA/DCS operations manual by listing the normal sequence of operation along with an analysis of applicable responsibility such as operator action/responsibility, logic solver responsibility, and HMI display. You may download my example of a pipeline control system functional specification: Condensate Feed Pump & Alignment Motor Operated Valves (MOVs).
The post When and How to Use Derivative Action in a PID Controller first appeared on the ISA Interchange blog site.
Derivative action is the least frequently used mode in the PID controller. Some plants do not like to use derivative action at all because they see abrupt changes in PID output and lack an understanding of the benefits and guidance on how to set the tuning parameter (rate time). Here we have a question from one of the original protégés of the ISA Mentor Program and answers by a key resource on control, Michel Ruel, concluding with my view.
Is there a guideline in terms of when to enable the derivative term in a PID?
Derivative is more useful when the dead time is not pure dead time but instead a series of small time constants; using derivative "eliminates" one of those small time constants.
You should use a derivative time equal to the largest of those small time constants. Since we usually do not know the details, a good rule of thumb is adjusting the derivative time to half the dead time.
Adding derivative (D) will increase robustness (higher gain and phase margins) since D will reduce the apparent dead time of the closed loop.
A good example is the thermowell in a temperature loop: if the thermowell represents a time constant of 10 s, using a D of 10 seconds will eliminate the lag of the thermowell.
Hence, the apparent dead time of the closed loop is reduced and you can use more proportional action and a shorter integral time; the settling time will be shorter and stability better.
When you look at formulas to reject a disturbance, you observe that in presence of D, proportional and integral can be stronger.
We recommend using derivative only if the derivative function contains a built-in filter to remove high frequency noise. Most DCSs and PLCs have this function but some do not or there is a switch to activate the derivative filter.
Why does having a higher phase margin increase the robustness?
Robustness means that the control loop will remain stable even if the model changes. Phase and gain margins represent the amplitude of the change before the loop becomes unstable, i.e., before reaching -180 degrees or a loop gain above one.
To analyze, we use the open loop frequency response, the product of the controller model and the process model. On a Bode plot, gains are multiplied (or added if plotted in dB) and the total phase is the sum of the process phase and the controller phase.
Phase margin is the number of degrees required to reach -180 degrees when the open loop gain is 1 (0 dB). If this number is large (high phase margin), the system is robust meaning that the apparent dead time can increase without reaching instability. If the phase margin is small, a slight change in apparent dead time will bring the control loop to instability.
Adding derivative adds a positive phase and hence increases the phase margin (compared to adding a dead time or a time constant, which reduces the phase margin).
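The phase contributions described above can be put in numbers: dead time contributes a phase of minus omega times the dead time, a lag contributes minus arctan of omega times its time constant, and derivative contributes plus arctan of omega times the rate time. A small sketch with assumed process values and an assumed crossover frequency:

```python
import math

# Assumed values for illustration only
theta, tau, Td = 2.0, 10.0, 1.0   # dead time, lag, rate time (s)
w = 0.5                           # assumed crossover frequency, rad/s

phase_dead_time = -math.degrees(w * theta)        # dead time phase lag
phase_lag = -math.degrees(math.atan(w * tau))     # time constant phase lag
phase_derivative = math.degrees(math.atan(w * Td))  # derivative phase lead

print(round(phase_dead_time, 1))    # -57.3 degrees
print(round(phase_lag, 1))          # -78.7 degrees
print(round(phase_derivative, 1))   # 26.6 degrees of added phase margin
```

The positive derivative term offsets part of the lags, which is the frequency-domain view of derivative "cancelling" a secondary time constant.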
The use of derivative is more important in lag dominant (near-integrating), true integrating, and runaway processes (highly exothermic reactions). The derivative action benefit declines as the primary time constant (largest lag) approaches the dead time because the process changes become too abrupt due to lack of a significant filtering action by a process time constant.
Temperature loops have a large secondary time constant courtesy of heat transfer lags in the thermowell or the process heat transfer areas. Setting the derivative time equal to the largest of the secondary lags can cancel out almost 90 percent of the lag assuming the derivative filter is about 1/8 to 1/10 the rate time setting. Highly exothermic reactors can have positive feedback that causes acceleration of the temperature. Some of these temperature loops have only proportional and derivative action because integral action is viewed as unsafe.
If a PID Series Form is used, increasing the rate time reduces the integral mode action (increases the effective reset time), reduces the proportional mode action (decreases the effective PID gain or increases the effective PID proportional band) and moderates the increase in derivative action. The interaction factor moderates all of the modes, preventing the resulting effective rate time from being greater than one-quarter of the effective reset time. This helps prevent instability if the rate time setting approaches the reset time setting. There is no such inherent protection in the ISA Standard Form. It is critical that the user prevent the rate time from being larger than one-quarter of the reset time in the ISA Standard Form. While in general it is best to identify multiple time constants, a general rule of thumb I use is that the rate time should be the largest of an identified secondary time constant or one-half the dead time, and never larger than one-quarter of the reset time.
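The rate time rule of thumb just stated can be written as a one-line selection; the dead times, reset times and secondary time constants below are illustrative assumptions:

```python
# Sketch of the ISA Standard Form rate time rule: the larger of an
# identified secondary time constant or half the dead time, but never
# more than one-quarter of the reset time.
def rate_time(dead_time, reset_time, secondary_tau=None):
    candidate = max(secondary_tau or 0.0, 0.5 * dead_time)
    return min(candidate, 0.25 * reset_time)

print(rate_time(dead_time=8.0, reset_time=40.0))                      # 4.0
print(rate_time(dead_time=8.0, reset_time=40.0, secondary_tau=12.0))  # 10.0
print(rate_time(dead_time=8.0, reset_time=24.0, secondary_tau=12.0))  # 6.0
```

The third call shows the one-quarter reset time cap taking over, the protection the Series Form provides inherently but the ISA Standard Form does not.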
It is critical to convert tuning based on setting units and PID form used as you go from one vintage or supplier to another. It is best to verify the conversion with the supplier of the new system. The general rules for converting from different PID forms are given in the ISA Mentor Program Q&A blog post How Do You Convert Tuning Settings of an Independent PID with the last series of equations K1 thru K3 showing how to convert from a series PID form to the ISA Standard Form.
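As a sketch of such a conversion, the widely used Series to ISA Standard Form relations apply the interaction factor F = 1 + Td/Ti to all three settings (the numeric settings below are illustrative; as noted above, always verify the exact equations against the supplier's documentation):

```python
# Series Form settings converted to ISA Standard Form settings using the
# interaction factor F = 1 + Td/Ti.
def series_to_standard(Kc, Ti, Td):
    F = 1.0 + Td / Ti
    return Kc * F, Ti * F, Td / F   # standard-form gain, reset, rate

Kc, Ti, Td = 2.0, 40.0, 10.0        # assumed series settings
print(series_to_standard(Kc, Ti, Td))   # (2.5, 50.0, 8.0)
```

Note that the effective standard-form rate time (8.0 s) comes out less than one-quarter of the effective reset time (50.0 s), illustrating the inherent protection of the Series Form discussed above.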
In general, PID structures should have derivative action on the process variable and not error unless the resulting kick in the PID output upon a setpoint change is useful to get to setpoint faster particularly if there is a significant control valve or VFD deadband or resolution limit.
A small setpoint filter in the analog output or secondary loop setpoint along with external reset feedback of the manipulated variable can make the kick a bump. A setpoint lead-lag on the primary loop where the lag time is the reset time and the lead is one-quarter of the lag or a two degrees of freedom structure with the beta set equal to 0.5 and the gamma set equal to about 0.25 can provide a compromise where the kick is moderated while getting to the primary setpoint faster.
Image Credit: Wikipedia
The post, Common Mistakes not Commonly Understood - Part 2, appeared first on the ControlGlobal.com Control Talk blog.
Here we continue on in our exploration of what we should know but don’t know and how it hurts us.
(11) Ignoring actuator and positioner sensitivity. Piston actuators are attractive due to smaller size and lower cost but have a sensitivity that can be 10 times worse than diaphragm actuators. Many positioners look fine in conventional tests but increase the response time by almost 100 times for step changes in signal less than 0.2%. The result is extremely confusing, erratic, spikey oscillations that only get worse as you decrease the PID gain. My ISA 2017 Process Control and Safety Symposium presentation ISA-PCS-2017-Presentation-Solutions-to-Stop-Most-Oscillations.pdf shows the bizarre oscillations from poor positioner sensitivity. Slides 21 through 23 show the situation, the confusion and a simple tuning fix. While the fix helps stop the oscillations, the best solution is a better valve positioner to provide precise control (critical for pH systems with a strong acid or strong base due to amplification of oscillations from the sensitivity limit).
(12) Ignoring drift in thermocouples. The drift in thermocouples (TCs) can be several degrees per year. Thus, even if there is tight control, the temperature loop setpoint is wrong resulting in the wrong operating point. Since temperature loops often determine product composition and quality, the effect on process performance is considerable with the culprit largely unrecognized leading to some creative opinions. Some operators may home in on a setpoint to get them closer to the best operating point, but the next shift operator may put the setpoint back at what is defined in the operating procedures. Replacement of the thermocouple sensor means the setpoint becomes wrong. The solution is a Resistance Temperature Detector (RTD) that inherently has 2 orders of magnitude less drift and better sensitivity for temperatures less than 400 degrees C. The slightly slower response of an RTD sensor is negligible compared to the thermowell thermal lags. The only reason not to use an RTD is a huge amount of vibration or a high temperature. Please don’t say you use TCs because they are cheaper. You would be surprised at the installed cost and lifecycle cost of a TC versus an RTD. See the ISA Mentor Program Webinar “Temperature Measurement and Control” for a startling table on slide 4 comparing TCs and RTDs and a disclosure of real versus perceived reasons to use TCs on slide 7.
(13) Not realizing the effect of flow ratio on process gain. The process gain of essentially all composition, pH and temperature loops is the slope of the process variable plotted versus the ratio of the manipulated flow to the main feed flow. This means the process gain is inversely proportional to feed flow besides being proportional to the slope of the plot. In order to convert the slope, which is the change in process variable (PV) divided by the change in ratio, to the required units of change in PV per change in manipulated flow, you have to divide by feed flow. The plot of temperature or composition versus ratio is not commonly seen or even realized as necessary. The same sort of relationship holds true where the manipulated variable is an additive or reactant flow for composition or a cooling or heating stream flow for temperature. For temperature control, the slope of the curve is often also steeper at low flow, creating a double whammy as to the increase in process gain at low flow. Also, for jackets, coils and heat exchangers, the coolant flow may be lower, creating more dead time for a sensor on the outlet. Fortunately for pH, we have titration curves where pH is plotted versus the ratio of reagent volume added to sample volume, although often just the reagent volume added is used on the X axis. In this case, you need to find out the sample volume so you can put the proper abscissa on the laboratory curve. In the process application, the titration curve abscissa that is the ratio of reagent volume to sample volume is simply the ratio of volumetric reagent flow to volumetric feed flow if the reagent concentrations are the same. You can then use this plot with the application abscissa in terms of flow ratios to determine the process gain and the valve capacity, rangeability, backlash (deadband) and stiction (resolution) requirements.
An intelligent analysis of how the slope amplifies the limit cycle amplitude from deadband and resolution limitations determines the number of stages and the size of the reagent valves needed. Often for strong acids or bases, two or three stages of neutralization are needed, with the largest reagent valve on the first stage and the smallest reagent valve on the last stage, due to valve rangeability or precision limitations. For more details, check out the 12/12/2015 Control Talk blog “Hidden Factor in Our Most Important Control Loops”.
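The slope-to-gain conversion above can be sketched in a few lines of Python. All the numbers here are hypothetical, chosen only to show how the same slope produces a much larger process gain at turndown:

```python
# Sketch: converting the slope of a PV-versus-flow-ratio plot into a
# process gain in PV units per unit of manipulated flow.
# With ratio R = Fm/Ff, gain = dPV/dFm = (dPV/dR) * (dR/dFm) = slope / Ff.
def process_gain(slope_pv_per_ratio, feed_flow):
    """Process gain in PV units per unit of manipulated flow."""
    return slope_pv_per_ratio / feed_flow

# Hypothetical example: slope of 2 pH per unit ratio from a titration curve
gain_full = process_gain(2.0, 100.0)     # 0.02 pH per gpm at full feed flow
gain_turndown = process_gain(2.0, 25.0)  # 0.08 pH per gpm at 4:1 turndown
```

The four-fold increase in gain at 4:1 turndown is why the same tuning can be sluggish at high feed rates yet oscillatory at low ones.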
(14) Ignoring the effect of temperature on actual solution pH. We are accustomed to using the built-in temperature compensator that has been in pH transmitters for 60 or more years to account for the effect of temperature seen in the glass electrode Nernst equation. What we don’t tend to do is quantify and take advantage of the solution pH compensation in smart transmitters. The dissociation constants for water, acids and bases are a function of temperature. If you express the water dissociation constant as a pKw and the acid or base dissociation constant as a pKa, whenever the pH is within 4 pH units of the pKw or pKa, there is a significant effect of temperature on the actual pH. Physical property tables can detail the pKw and pKa as a function of temperature, but the best bet is to vary the temperature of a lab sample and note the change in pH after correction for the Nernst equation (some lab meters don’t even do Nernst temperature compensation).
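To make the distinction concrete, here is a minimal sketch of the standard (Nernst) compensation mentioned above. Note what it does and does not do: it corrects only the electrode millivolt-per-pH slope for temperature, not the change in actual solution pH from the temperature dependence of the dissociation constants. The 7 pH / 0 mV isopotential point is a typical assumption, not a universal one:

```python
import math

R_GAS = 8.314    # J/(mol*K), universal gas constant
FARADAY = 96485.0  # C/mol, Faraday constant

def nernst_slope_mv_per_ph(temp_c):
    """Theoretical glass electrode slope in mV per pH at a given temperature."""
    return 1000.0 * R_GAS * (temp_c + 273.15) * math.log(10.0) / FARADAY

def ph_from_mv(mv, temp_c, iso_mv=0.0, iso_ph=7.0):
    """Electrode mV to pH using the temperature-corrected slope.
    Assumes a typical isopotential point of 7 pH at 0 mV."""
    return iso_ph - (mv - iso_mv) / nernst_slope_mv_per_ph(temp_c)

# Slope is about 59.16 mV/pH at 25 C and about 66.10 mV/pH at 60 C;
# solution pH compensation (pKw/pKa shift) must be layered on top of this.
```

A reading of 59.16 mV at 25 degrees C works out to about 6 pH with this compensation, but the actual solution pH at another temperature can still differ because of the pKw/pKa shift the item describes.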
(15) Replacing a positioner with a booster. Extensive guidelines dating back to Nyquist plot studies in the 1960s concluded that fast loops should use a booster instead of a positioner. I still hear this rule cited today. It is downright dangerous: positive feedback from the high outlet port sensitivity of the booster and flexure of the diaphragm actuator can cause the valve to slam shut. The volume booster should instead be placed on the output of the positioner with its bypass valve slightly open to stop any high frequency oscillations, as seen in the ISA Mentor Program webinar “How to Get the Most out of Control Valves”. You can fast forward to slides 18 and 19 to see the setup.
(16) Putting VFD speed control in the DCS. We like putting controls and logic in the control room as much as possible for ease of adjustment and maintenance. While this is normally a good idea, a speed loop in the VFD instead of the DCS is orders of magnitude faster, enabling much tighter control. In fact, if the speed loop is put in the DCS for flow and pressure control, you will violate the cascade rule that the secondary loop (speed) must be five times faster than the primary loop.
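The cascade rule is easy to check with rough response-time numbers. The figures below are illustrative assumptions, not vendor data, but they show why a speed loop executed in the drive passes the rule while the same loop routed through a DCS scan typically fails it:

```python
# Sketch of the cascade rule check: the secondary loop should respond at
# least ~5 times faster than the primary loop that sets its setpoint.
def cascade_ok(primary_response_s, secondary_response_s, ratio=5.0):
    """True if the secondary loop is at least `ratio` times faster."""
    return primary_response_s >= ratio * secondary_response_s

# Speed loop in the VFD: ~0.1 s response under a ~2 s flow loop
print(cascade_ok(2.0, 0.1))  # True - rule satisfied
# Speed loop in a DCS with ~1 s effective response under the same flow loop
print(cascade_ok(2.0, 1.0))  # False - rule violated
```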
(17) Putting a deadband into the split range block for integrating processes or cascade control, or integral action in the positioner. The deadband creates a limit cycle, just like the deadband from backlash in a control valve, whenever there are two or more integrators in the loop, whether in the process, the PID or the positioner.
(18) Not taking into account the temperature and pH cross sectional profile in pipelines. The temperature and pH vary extensively across a pipeline, especially for high viscosity feeds or reagents. The sensor tip should be near the centerline. For small pipelines, this may require installing the sensor in an elbow, preferably facing into the flow unless the stream is too abrasive. The pH sensor tip must, of course, be pointed down, preferably at about a 45 degree angle, so that the bubble in the internal fill of the electrode does not reside in the tip. The angle prevents the bubble from residing at the internal electrode (relatively low probability but possible).
(19) Not preventing measurement noise from phases and mixing. Thinking we need the sensor to see a process change as fast as possible, we fail to realize that a few seconds of transportation delay is better than a poor signal-to-noise ratio. To prevent a sensor from seeing bubbles or undissolved solids in a liquid, or droplets or condensate in a gas or steam, you need to locate the sensor sufficiently downstream of a static mixer, exchanger or desuperheater outlet, or wherever streams come together. You need to keep the sensor away from a sparger and avoid the top or bottom of a vessel or horizontal line. For temperature control of a jacketed vessel with split ranged manipulation of cooling water and steam, use the jacket outlet instead of the inlet temperature measurement to allow time for water to vaporize and for steam to condense. An even better solution is to use a steam injector to heat up the cooling water, eliminating the transitions back and forth between steam and cooling water phases in the jacket. The injector provides rapid and smooth transitions from cooling to heating over quite a temperature range, going from cold to hot water.
(20) Tuning for a smooth approach of the PID output to the final resting value in near and true integrating chemical processes. The main task of composition, temperature and pH loops in chemical processes is to effectively reject load disturbances at the process input. This requires maximizing the controller gain and having the controller output significantly overshoot its final resting value to balance the load. Many tuning experts who worked mostly on self-regulating processes don’t realize this requirement and may even say you should never tune the controller output to overshoot the final resting value, failing to realize that near-integrating processes will take an incredibly long time to recover and true integrating processes will never recover from the load disturbance. To understand the necessity of overshoot in the PID output, think of a level loop where the level has increased because the flow into the vessel has increased. To bring the level back down to setpoint, the outlet flow manipulated by the level controller must be greater than the inlet flow for a while before the outlet flow settles out to match the inlet flow (the final resting value of the PID output).
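The level loop argument can be demonstrated with a toy simulation. Everything here is a hypothetical sketch, not a recommended tuning: a pure integrating level process under a simple PI controller, hit by an inlet flow step. The point is that the outlet flow must peak above the new inlet flow (the final resting value) before settling at it, or the level would never return to setpoint:

```python
# Toy integrating (level) process under PI control, Euler integration.
# Illustrative numbers only: level in %, flows in arbitrary flow units.
def simulate_level(kc=4.0, ti=100.0, dt=1.0, steps=600):
    level, sp = 50.0, 50.0           # level and setpoint, %
    inlet, outlet = 40.0, 40.0       # starts balanced at the old load
    integral = 0.0
    max_outlet = 0.0
    for k in range(steps):
        if k == 10:
            inlet = 50.0             # load disturbance: inlet flow steps up
        level += (inlet - outlet) * dt * 0.05   # integrating process
        error = level - sp           # high level -> open outlet more
        integral += error * dt / ti
        outlet = 40.0 + kc * (error + integral)  # PI with bias at old load
        max_outlet = max(max_outlet, outlet)
    return level, outlet, max_outlet

final_level, final_outlet, peak_outlet = simulate_level()
# peak_outlet exceeds the new inlet flow of 50 before outlet settles at 50,
# while the level returns to the 50% setpoint.
```

If the tuning is slowed so the outlet only creeps up to 50 without ever exceeding it, the level settles above setpoint (near-integrating) or ramps away (true integrating), which is exactly the failure mode described above.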
Always remember to …………………………………………………………..………….. Oh shoot, I forget ... senior moment.