*Posts on this page are from the Control Talk blog, which is one of the ControlGlobal.com blogs for process automation and instrumentation professionals and Greg McMillan’s contributions to the ISA Interchange blog.

Tips for New Process Automation Folks
    • 21 Nov 2018

    How to Setup and Identify Process Models for Model Predictive Control

    The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important for improving evaporator concentration control and minimizing steam consumption.

    Luis Navas’ Introduction

    The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions regarding process testing in order to correctly generate process models for an MPC. I know that MPC process inputs must be perturbed to allow identification and modeling of each process input and output relationship.

    Figure 1: Variables for model predictive control of a concentrator


    Luis Navas’ First Question

    Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or simply let it be as usual in the process since this disturbance is always present?

    Mark Darby’s Answer

    If it is not difficult, you can try to suppress the disturbance. That can help the model identification for the feed and steam. To get a model for the disturbance, you will want movement of the disturbance outside the noise level (best is four to five times). This may require making changes upstream if possible (for example, to the LIC.SP or FIC.SP).
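
    As a rough, hypothetical illustration of that move-to-noise guideline (the helper below and its assumptions are mine, not part of the original discussion), one way to size the disturbance movement is to estimate the noise band from a steady-state trend and multiply by the desired ratio:

        import numpy as np

        def required_disturbance_move(trend, snr=5.0):
            # Estimate the noise standard deviation from first differences of a
            # steady-state trend (differencing removes slow drift; differences of
            # white noise have sqrt(2) times the noise standard deviation).
            noise = np.std(np.diff(trend)) / np.sqrt(2.0)
            # Move the disturbance at least snr times the noise (four to five
            # times per the guideline above).
            return snr * noise

        # Example with a synthetic noisy feed record:
        rng = np.random.default_rng(0)
        feed = 100.0 + 0.4 * rng.standard_normal(500)
        print(f"Move the disturbance at least {required_disturbance_move(feed):.2f} units")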

    Luis Navas’ Second Question

    What about the steam flow? Should it be maintained at a fixed flow (FIC in manual with a fixed percent-open FCV) while perturbing the feed flow, and likewise should the feed flow be fixed while perturbing the steam flow? I know some MPC software packages excite their outputs with a PRBS (pseudo random binary sequence) practically at the same time while the process testing is being executed, and through mathematics capture the input and output relationships, finally generating the model.

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about how you can join the ISA Mentor Program.

    Mark Darby’s Answer

    Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it will take more setup effort to get the magnitudes and the average switching interval right. An option is to start with a manual test and switch to PRBS after you’ve got a feel for the process and the right step sizes. Note: a pretest should have already been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in my Section 9.3 of the McGraw-Hill Process/Industrial Instruments and Controls Handbook, Sixth Edition.
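
    The setup effort mentioned above is mainly choosing the step magnitude and the average switching interval. As a minimal sketch of the idea only (not any vendor's PRBS generator), a random-switching binary sequence can be produced like this, with the switching probability set so the average interval between switches comes out as desired:

        import numpy as np

        def prbs(n_samples, magnitude, avg_switch_interval, seed=0):
            # Binary test signal that holds +/- magnitude and switches sign with
            # probability 1/avg_switch_interval each sample, so switches occur
            # on average every avg_switch_interval samples.
            rng = np.random.default_rng(seed)
            signal = np.empty(n_samples)
            level = magnitude
            for k in range(n_samples):
                if rng.random() < 1.0 / avg_switch_interval:
                    level = -level
                signal[k] = level
            return signal

        # Example: +/- 2 units around the steam flow setpoint, switching on
        # average every 20 samples (interval chosen relative to the process
        # time constant).
        moves = prbs(1000, magnitude=2.0, avg_switch_interval=20)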

    Luis Navas’ Last Questions

    What are the pros & cons for process testing if the manipulated variables are perturbed through FIC setpoints (closed loop) or through FIC outputs (open loop)? Or simply: should it be done according to the MPC design? What are the pros & cons if in the final design the FCVs are directly manipulated by the MPC block or through FICs as the MPC’s downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them.

    Mark Darby’s Answers

    Correct – test according to the MPC design. Note that sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea as justifying a cascade – to have the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn’t have to). The faster executing inner loop also helps with linearization (for example, valve position to flow).

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 12 Nov 2018

    Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality

    The post Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Understanding the difficulties of batch processing and the new technologies and techniques offered can lead to better automation and control solutions that offer much greater increases in efficiency and capacity than usually obtained for continuous processes. Industry veteran and author Greg McMillan discusses analyzing batch data, elevating the role of the operator, tuning key control loops, and setting up simple control strategies to optimize batch operations. The presentation concludes with an extensive list of best practices.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 29 Oct 2018

    What Types of Process Control Models are Best?

    The post What Types of Process Control Models are Best? first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Daniel Rodrigues.

    Daniel Rodrigues is one of our newest protégés in the ISA Mentor Program. Daniel has been working in research & development for Norsk Hydro Brazil since 2016 specializing in:

    • Development of greener, safer, more accurate, and cheaper analytical methods
    • Identification of cost reduction and efficiency enhancement opportunities
    • Process modelling and advanced control logic development and assessment
    • Research methodology development, execution, and planning
    • Statistical analysis of process variables and test results

    Daniel Rodrigues’ Question

    What is your take on process control based on phenomenological models (using first-principle models to guide the predictive part of controllers)? I am aware of the exponential growth of complexity in these, but I’d also like to have an experienced opinion regarding the reward/effort of these.

    Greg McMillan’s Answer

    I prefer first principle models to gain a deeper understanding of cause and effect, process relationships, process gains, and the response to abnormal situations. Most of my control system improvements start with first principle models. The incorporation of the actual control system (digital twin) to form a virtual plant has made these models a more powerful tool. However, most first principle models use perfectly mixed volumes, neglecting mixing delays, and are missing transportation delays and automation system dynamics. For pH systems, including all of the non-ideal dynamics from piping and vessel design, control valves or variable speed pumps, and electrodes is particularly essential. I have consequently partitioned the total vessel volume into a series of plug flow and perfectly back mixed volumes to model the mixing dead times that originate from the agitation pattern and the relative location of input and output streams. I add a transportation delay for reagent piping and dip tubes due to gravity flow or blending. For extremely low reagent flows (e.g., gph), I also add an equilibration time in the dip tube after closure of a reagent valve, associated with migration of the reagent into the process followed by migration of process fluid back up into the dip tube. I add a transportation delay for electrodes in piping. I use a variable dead time block and time constant blocks in series to show the effect of velocity, coating, age, buffering and direction of pH change on electrode response. I use a backlash-stiction block and a variable dead time block to show the resolution and response time of control valves. The important goal is to get the total loop dead time and secondary lag right.
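
    To make the volume partitioning concrete, here is a minimal simulation sketch (my own illustration, with an assumed split between plug flow delay and back mixed volume) of a vessel modeled as a transport delay in series with several perfectly back mixed volumes:

        import numpy as np

        def simulate_vessel(inlet, dt, plug_flow_delay, mixed_tau, n_tanks=3):
            # Plug flow portion: pure transport delay of the inlet profile.
            d = int(round(plug_flow_delay / dt))
            out = np.concatenate([np.full(d, inlet[0]), inlet])[: len(inlet)]
            # Back mixed portion: n_tanks equal first-order volumes in series
            # (tanks-in-series), each with time constant mixed_tau / n_tanks.
            tau = mixed_tau / n_tanks
            a = dt / (tau + dt)
            for _ in range(n_tanks):
                y = np.empty_like(out)
                y[0] = out[0]
                for k in range(1, len(out)):
                    y[k] = y[k - 1] + a * (out[k] - y[k - 1])
                out = y
            return out

        # Example: step in inlet concentration through a vessel with a 30 sec
        # transport delay and a 300 sec total mixed residence time.
        inlet = np.concatenate([np.zeros(60), np.ones(1140)])
        response = simulate_vessel(inlet, dt=5.0, plug_flow_delay=30.0, mixed_tau=300.0)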

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    By having the much more complete model in a virtual plant, the true dynamic behavior of the system can be investigated and the best control system performance achieved by exploring, discovering, prototyping, testing, tuning, justifying, deploying, commissioning, maintaining and continuously improving, as described in the Control magazine feature article Virtual Plant Virtuosity.

    Figure 1: Virtual Plant that includes Automation System Dynamics and Digital Twin Controller

    Model predictive control is much better at ensuring you have the actual total dynamics including dead time, lags and lead times at a particular operating point. However, the models do not include the effect of backlash-stiction or actuator and positioner design on valve response time and consequently on total loop dead time, because by design the steps are made several times larger than the deadband and resolution or sensitivity limits of the control valve. Also, the models identified are for a particular operating point and normal operation. To cover different modes of operation and production rates, multiple models must be used, requiring logic for a smooth transition or recently developed adaptive capabilities. I see an opportunity to use the results from the identification software used by MPC to provide a more accurate dead time, lag time and lead time by inserting these in blocks on the measurement of the process variable in first principle models. The identification software would be run for different operating points and operating conditions, enabling the addition of supplemental dynamics in the first principle models. This addresses the fundamental deficiency of dead times, lag times and lead times being too small in first principle models.
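
    One hypothetical way to carry out that transfer, sketched below under the assumption that a first-order-plus-dead-time response is adequate, is to fit gain, lag and dead time to logged step response data and then insert those values into the dead time and time constant blocks of the first principle model:

        import numpy as np
        from scipy.optimize import least_squares

        def fopdt(t, gain, tau, theta, step=1.0):
            # First-order-plus-dead-time response to a step of size `step` at t = 0.
            return np.where(
                t < theta, 0.0,
                gain * step * (1.0 - np.exp(-(t - theta) / max(tau, 1e-6))))

        def fit_fopdt(t, y, step=1.0):
            # Least-squares fit of gain, time constant (tau) and dead time (theta)
            # to a measured step response y(t) that starts at zero.
            t, y = np.asarray(t, float), np.asarray(y, float)
            guess = [y[-1] / step, (t[-1] - t[0]) / 5.0, t[1] - t[0]]
            result = least_squares(
                lambda p: fopdt(t, p[0], p[1], p[2], step=step) - y,
                guess,
                bounds=([-np.inf, 1e-6, 0.0], [np.inf, np.inf, t[-1]]))
            return dict(zip(["gain", "tau", "theta"], result.x))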

    Statistical models are great at identifying unsuspected relationships, disturbances and variability in the process and measurements. However, these are correlations and not necessarily cause and effect. Also, continuous processes require dynamic compensation of each process input so that it matches the dynamic response timewise of each process output being studied. This is often not stated in the literature and is a formidable task. Some methods propose using a dead time on the input, but for large time constants the dynamic response of the predicted output is in error during a transient. These models are designed more for steady state operation, but this is often an ideal situation not realized because of disturbances originating from the control system itself through interactions, resonance, tuning, and limit cycles from stiction, as discussed in the Control Talk blog The most disturbing disturbances are self-inflicted. Batch processes do not require dynamic compensation of inputs, making data analytics much more useful in predicting batch end points.
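
    A minimal sketch of the dynamic compensation described above is a dead time plus a first-order lag applied to each input (a lead term, omitted here for brevity, can be added the same way); the helper name and defaults are illustrative only:

        import numpy as np

        def compensate_input(u, dt, dead_time, lag):
            # Time-align a process input with the studied output by applying the
            # dead time and first-order lag of the input-to-output response.
            u = np.asarray(u, float)
            d = int(round(dead_time / dt))
            shifted = np.concatenate([np.full(d, u[0]), u])[: len(u)]  # dead time
            out = np.empty_like(shifted)
            out[0] = shifted[0]
            a = dt / (lag + dt)  # first-order lag filter coefficient
            for k in range(1, len(shifted)):
                out[k] = out[k - 1] + a * (shifted[k] - out[k - 1])
            return out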

    I think there is a synergy to be gained by using MPC to find missing dynamics and statistical process control to help track down missing disturbances and relationships that are subsequently added to the first principle models. Recent advances in MPC capability (e.g., Aspen DMC3) to automatically identify changes in process gain, dead time and time constant, including the ability to compute and update them online based on first principles, have opened the door to increased benefits from using MPC to improve first principle models and vice versa. Multivariable control and optimization where there are significant interactions and multiple controlled, manipulated and constraint variables are best handled by MPC. The exception is very fast systems where the PID controller is directly manipulating control valves or variable frequency drives for pressure control. Batch end point prediction might also be better implemented by data analytics. However, in all cases the first principle model should be accordingly improved and used to test the actual configuration and implementation of the MPC and analytics, and to provide training of operators extended to all engineers and technicians supporting plant operation.

    I would think for research and development, the ability to gain a deeper and wider understanding of different process relationships for different operating conditions would be extremely important. This knowledge can lead to process improvements and to better equipment and control system design. For pH and biological control systems, this capability is essential.

    For a greater perspective on the capability of various modeling and control methodologies, see the ISA Mentor Program post with questions by protégé Danaca Jordan and answers by Hunter Vegas and me: What are the New Technologies and Approaches for Batch and Continuous Control?

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 23 Oct 2018

    Many Objectives, Many Worlds of Process Control

    The post Many Objectives, Many Worlds of Process Control first appeared on ControlGlobal.com's Control Talk blog.

    In many publications on process control, the common metric you see is integrated absolute error for a step disturbance on the process output. In many tests for tuning, setpoint changes are made and the most important criterion becomes overshoot of the setpoint. Increasingly, oscillations of any type are looked at as inherently bad. What is really important varies because of the different loops and types of processes. Here we seek to open minds and develop a better understanding of what is important.
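
    For reference, integrated absolute error (IAE) is simply the integral over time of the absolute difference between setpoint and process variable; computed from logged trend data it is no more than the following (helper name and example numbers are illustrative):

        import numpy as np

        def iae(sp, pv, dt):
            # Integrated absolute error: sum of |SP - PV| times the sample time.
            return float(np.sum(np.abs(np.asarray(sp) - np.asarray(pv))) * dt)

        # Example: IAE of a load response logged every 0.5 seconds.
        print(iae(sp=[50.0] * 8, pv=[50, 52, 53, 52, 51, 50.5, 50.2, 50.0], dt=0.5))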

    Many Objectives

    • Minimum PV peak error in load response to prevent:

    –        Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health

    • Minimum PV integrated error in load or setpoint response to minimize:

    –        Total amount of off-spec product to enable closer operation to optimum setpoint

    • Minimum PV overshoot of SP in setpoint response to prevent:

    –        Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health

    • Minimum Out overshoot of FRV* in setpoint response to prevent:

    –        Interaction with heat integration and recycle loops in hydrocarbon gas unit operations

    • Minimum PV time to reach SP in setpoint response to minimize:

    –        Batch cycle time, startup time, transition time to new products and operating rates

    • Minimum split range point crossings to prevent:

    –        Wasted energy-reactants-reagents, poor cell health (high osmotic pressure)

    • Maximum absorption of variability in level control (e.g. surge tank) to prevent:

    –        Passing of changes in input flows to output flows upsetting downstream unit ops

    • Optimum transfer of variability from controlled variable to manipulated variable to prevent:

    –        Resonance, interaction and propagation of disturbances to other loops

    * FRV is the Final Resting Value of PID output. Overshoot of FRV is necessary for setpoint and load response in integrating and runaway processes. However, for self-regulating processes not involving highly mixed vessels (e.g., heat exchangers and plug flow reactors), aggressive action in terms of PID output can upset other loops and unit operations that are affected by the flow manipulated by the PID. Not recognized in the literature is that external-reset feedback of the manipulated flow enables setpoint rate limits to smooth out changes in manipulated flows without affecting the PID tuning.

    Many Worlds

    • Hydrocarbon processes and other gas unit operations with plug flow, heat integration & recycle streams (e.g. crackers, furnaces, reformers)

    –        Fast self-regulating responses, interactions and complex secondary responses with sensitivity to SP and FRV overshoot, split range crossings and utility interactions.

    • Chemical batch and continuous processes with vessels and columns

    –        Important loops tend to have slow near or true integrating and runaway responses with minimizing peak and integrated errors and rise time as key objectives.

    • Utility systems (e.g., boilers, steam headers, chillers, compressors)

    –        Important loops tend to have fast near or true integrating responses with minimizing peak and integrated errors and interactions as key objectives.

    • Pulp, paper, food and polymer inline, extrusion and sheet processes

    –        Fast self-regulating responses and interactions with propagation of variability into product (little to no attenuation of oscillations by back mixed volumes) with extreme sensitivity to variability and resonance. Loops (particularly for sheets) can be dead time dominant due to transportation delays unless there are heat transfer lags.

    • Biological vessels (e.g., fermenters and bioreactors)

    –        Most important loops tend to have slow near or true integrating responses with extreme sensitivity to SP and FRV overshoot, split range crossings and utility interactions. Load disturbances originating from cells are incredibly slow and therefore not an issue.

    A critical insight is that most disturbances are on the process input, not the process output, and are not step changes. The fastest disturbances are generally flow or liquid pressure, but even these have an 86% response time of at least several seconds because of the 86% response time of valves and the tuning of PID controllers. The fastest and most disruptive disturbances are often manual actions by an operator or setpoint changes by a batch sequence. Setpoint rate limits and a 2 Degrees of Freedom (2DOF) PID structure with Beta and Gamma approaching zero can eliminate much of the disruption from setpoint changes by slowing down changes in the PID output from proportional and derivative action. A disturbance to a loop can be considered fast if it has an 86% response time less than the loop dead time.
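
    A minimal textbook-style sketch of that 2DOF structure (positional form; this illustrates the Beta and Gamma setpoint weighting only and is not any supplier's PID implementation) shows why weights near zero soften the output reaction to setpoint changes:

        class TwoDOFPID:
            # Positional 2DOF PID: beta and gamma weight the setpoint in the
            # proportional and derivative terms. With beta = gamma = 0, setpoint
            # changes act only through the integral (reset) term, softening
            # output moves on a setpoint change without changing load tuning.
            def __init__(self, kc, ti, td, dt, beta=0.0, gamma=0.0):
                self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
                self.beta, self.gamma = beta, gamma
                self.integral = 0.0
                self.prev_d_err = None

            def update(self, sp, pv):
                p_err = self.beta * sp - pv    # setpoint-weighted proportional error
                d_err = self.gamma * sp - pv   # setpoint-weighted derivative error
                self.integral += self.kc * (sp - pv) * self.dt / self.ti
                if self.prev_d_err is None:
                    self.prev_d_err = d_err
                d_term = self.kc * self.td * (d_err - self.prev_d_err) / self.dt
                self.prev_d_err = d_err
                return self.kc * p_err + self.integral + d_term

        # Example: beta = gamma = 0, so a setpoint change acts only through reset.
        pid = TwoDOFPID(kc=2.0, ti=120.0, td=10.0, dt=1.0)
        output = pid.update(sp=55.0, pv=50.0)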

    If you would like to hear more on this, check out the ISA Mentor Program Webinar Recording: PID Options and Solutions Part 1.

    If you want to be able to explain this to young engineers, check out the dictionary for translation of slang terms in the Control Talk Column “Hands-on Labs build real skills.”

    • 15 Oct 2018

    How to Get Rid of Level Oscillations in Industrial Processes

    The post How to Get Rid of Level Oscillations in Industrial Processes first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Luis Navas.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on effectively reducing feed tank level oscillations from an upstream batch operation, so that the level controller can see the true level trajectory, represent a widespread concern in chemical plants where the front end for conversion has batch operations and the back end for separation has continuous operations.

    Luis Navas’ Questions

    For an MPC application I need to build a smoothed moving mean of a batch level to use as a controlled variable for my MPC, so a simple moving average is done as depicted below. However, I still need to smooth the signal further because some signal ripple remains. I tried a low-pass filter, achieving some improvement as seen in Figure 1. But perhaps you know a better way to do it, or I simply need to increase the filter time.

    Figure 1: Old Level Oscillations (blue: actual level and green: level with simple moving mean followed by simple moving mean + first order filter)

    Greg McMillan’s Initial Answer

    I use rate limiting when a ripple is significantly faster than a true change in the process variable. The velocity limit would be the maximum possible rate of change of the level. The velocity limit should be turned off when maintenance is being done and possibly during startup or shutdown. The standard velocity limit block should offer this option. A properly set velocity limit introduces no measurement lag. A level system (any integrator) is very sensitive to a lag anywhere.
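
    A minimal sketch of such a velocity limit (illustrative only; the standard velocity limit block in your control system is the real implementation) clamps each new reading to the maximum possible rate of change of the level:

        def velocity_limit(raw, dt, max_rate, enabled=True):
            # Clamp the rate of change of a level signal to max_rate (units/sec).
            # A true ramp slower than max_rate passes unchanged (no lag), while
            # faster ripple is clipped. Disable during maintenance, startup or
            # shutdown when step changes in the reading are legitimate.
            if not enabled or not raw:
                return list(raw)
            out = [raw[0]]
            for x in raw[1:]:
                step = max(-max_rate * dt, min(max_rate * dt, x - out[-1]))
                out.append(out[-1] + step)
            return out

        # Example: the level can physically move at most 0.2% per second.
        raw_level = [50.0, 50.9, 49.2, 51.1, 50.0, 50.1, 50.3]
        filtered = velocity_limit(raw_level, dt=1.0, max_rate=0.2)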

    If the oscillation stops when the controller is in manual, the oscillation could be from backlash or stiction. In your case, the controller appears to be in auto with a slow rolling oscillation possibly due to a PID reset time being too small.

    I did a Control Talk blog post that discusses good signal filtering tips from various experts besides my intelligent velocity limit.

    Mark Darby’s Initial Answer

    In many cases, I’ve seen signals overly filtered. Often, if the filtered signal looks good to your eye, it’s too much filtering. As Michel Ruel states: if the period is known, a moving average (sum of the most recent N values divided by N) will nearly completely remove a uniform periodic cycle. So the issue is how much lag is introduced. Depending on the MPC, one may be able to specify variable CV weights as a function of the magnitude of the error, which will decrease the amount of MV movement when the CV weight is low; or the level signal could be brought in as a CV twice with different tuning or filtering applied to each.
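
    To make the period-matched moving average concrete, here is a minimal sketch (assuming the ripple period is known in samples; the function name is illustrative):

        import numpy as np

        def period_matched_moving_average(signal, period_samples):
            # Averaging over a window of exactly one ripple period cancels a
            # uniform periodic cycle; the cost is a lag of roughly half the window.
            kernel = np.ones(period_samples) / period_samples
            return np.convolve(signal, kernel, mode="valid")

        # Example: a 60-sample ripple riding on a slow level trend is nearly removed.
        t = np.arange(2000)
        level = 50.0 + 0.002 * t + 1.5 * np.sin(2.0 * np.pi * t / 60.0)
        smoothed = period_matched_moving_average(level, period_samples=60)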

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    Greg McMillan’s Follow-Up Answer

    Since the oscillation is uniform in period and amplitude, the moving average as described by Michel Ruel is the best starting point. Any subsequent noise from non-uniformity can be removed by an additional filter, but nearly all of this filter time becomes equivalent dead time in near and true integrating processes. You need to be careful that the reset time is not too small as you decrease the controller gain, whether due to filtering or to absorb variability. The product of PID gain and reset time should be greater than twice the inverse of the integrating process gain (1/sec) to prevent the slow rolling oscillations that decay gradually. Slide 29 of the ISA webinar on PID options and solutions gives the equations for the window of allowable PID gains. Slide 15 shows how to estimate the attenuation of an oscillation by a filter. The webinar presentation and discussion are in the ISA Mentor Program post How to optimize PID controller settings.
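
    That gain and reset window guideline can be coded as a quick sanity check (helper name and example numbers are illustrative):

        def min_reset_time(kc, ki):
            # Guideline above: PID gain times reset time should exceed twice the
            # inverse of the integrating process gain, i.e. kc * ti > 2 / ki,
            # so the reset time ti must exceed 2 / (ki * kc).
            return 2.0 / (ki * kc)

        # Example: ki = 0.0005 1/sec and kc = 2 requires a reset time over 2000 sec.
        print(f"Reset time must exceed {min_reset_time(2.0, 0.0005):.0f} seconds")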

    If you need to minimize dead time introduced by filtering, you could develop a smarter statistical filter such as cumulative sum of measured values (CUSUM). For an excellent review of how to remove unwanted data signal components, see the InTech magazine article Data filtering in process automation systems.
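
    As a minimal sketch of one CUSUM-style possibility (my own illustration, with illustrative thresholds, not taken from the referenced article): hold the output until the accumulated deviation crosses a threshold, so small uniform ripple never moves it while a sustained level change passes with little added dead time.

        def cusum_filter(signal, threshold, slack=0.0):
            # Hold the output at a reference value until the cumulative sum of
            # deviations (less a per-sample slack allowance) crosses the
            # threshold, then re-center on the current measurement. Uniform
            # ripple never accumulates enough to move the output; a sustained
            # change passes with little added dead time.
            out, ref, pos, neg = [], signal[0], 0.0, 0.0
            for x in signal:
                pos = max(0.0, pos + (x - ref) - slack)
                neg = max(0.0, neg - (x - ref) - slack)
                if pos > threshold or neg > threshold:
                    ref, pos, neg = x, 0.0, 0.0
                out.append(ref)
            return out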

    Mark Darby’s Follow-Up Answer

    My experience is that most times a cycle in a disturbance flow is already causing cycling in other variables (due to the multivariable nature of the process).  And advanced control, including MPC, will not significantly improve the situation and may make it worse.  So it is best to fix the cycle before proceeding with advanced control.  Making a measured cyclic disturbance a feedforward to MPC likely won’t help much.  MPC normally assumes the current value of the feedforward variables stays constant over the prediction horizon. What you’d want is to have the future prediction include the cycle.  Unfortunately this is not easily done with the MPC packages today.

    Often, levels are controlled by a PID loop, not in the MPC.  The exception can be if there are multiple MVs that must be used to control the level (e.g., multiple outlet flows), or the manipulated flow is useful for alleviating a constraint (see the handbook).  Another exception is if there is significant dead time between the flow and the level.

    Luis Navas’ Follow-up Response

    Thank you for the support. I think the ISA Mentor Program resources are a truly elite support team. By the way, I have already read the blog posts about signal filtering.

    My comments and clarifications:

    1. The signal corresponds to a tank level in a batch process, which is why it has an oscillating behavior (without noise).
    2. The downstream process is continuous (an evaporator), and the idea is to control the feed tank level with MPC (using the moving average) through the evaporator flow input. The feed tank level is critical for the evaporator to work well.
    3. I have applied Michel Ruel’s statement: if the period is known, a moving average (sum of the most recent N values divided by N) will nearly completely remove a periodic cycle. Now the moving average is better as seen in Figure 2.


    Figure 2: New Level Oscillations (blue: actual level and green: level with Ruel moving average)

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 1 Oct 2018

    Webinar Recording: Loop Tuning and Optimization

    The post Webinar Recording: Loop Tuning and Optimization first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    In this ISA Mentor Program presentation, Michel Ruel, a process control expert and consultant, provides insight and guidance as to the importance of optimization and how to achieve it through better PID control.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 26 Sep 2018

    How to Optimize Industrial Evaporators

    The post How to Optimize Industrial Evaporators first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Luis Navas.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important to improve evaporator concentration control and minimize steam consumption.

    Luis Navas’ Questions

    Which criteria should I follow to define the final control strategy with model predictive control (MPC) in an existing PID strategy? Only one MPC for all existing PIDs? Or maybe 1 MPC + 1 PID, or 1 MPC + 2 PIDs? What are the criteria to make the correct decision? What is the step-by-step procedure to deploy the advanced control in the real process in the safest way? What are your hints, tips, advice and experiences regarding MPC implementations?

    Greg McMillan’s Initial Answer

    In general, you try to include all of the controlled variables (CV), manipulated variables (MV), disturbance variables (DV), and constraint variables (QC) in the same MPC unless the equipment is not related, there is a great difference in time horizons, or there is a cascade control opportunity, as seen in kiln MPC control where a slower MPC with the more important controlled variables sends setpoints to a secondary MPC for faster controlled variables. For your evaporator control, this does not appear to be the case.

    We first discuss advanced PID control and its common limitations before moving to MPC.

    For optimization, a PID valve position controller could maximize production rate by pushing the steam valve to its furthest effective throttle position. As for increasing efficiency by minimizing steam use, this would generally be achieved by tight concentration control that allows you to operate closer to the minimum concentration spec. The level and concentration responses would be true and near integrating, respectively. In both cases, PID integrating process tuning rules should be used. Do not decrease the PID gain computed by these rules without proportionally increasing the PID reset time. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain to prevent slow rolling oscillations, a very common problem. Often the reset time is two or more orders of magnitude too small because the user decreased the PID gain due to noise or thinking oscillations were caused by too high a PID gain.

    I don’t see constraint control for a simple evaporator, but if there were constraints, an override controller would be set up for each. However, only one constraint would effectively govern operation at a given time via signal selection. Also, the proper tuning of override controllers and valve position controllers is not well known. Furthermore, the identification of dynamics for feedback and particularly feedforward control typically requires the expertise of a specialist. Often, comparisons showing how much better model predictive control is than PID control are made without good identification and tuning of the PID feedback and feedforward control parameters.

    While optimization limitations and typical errors in identification and tuning push your case toward the use of MPC, here are the best practices for PID control of evaporators.

    1. Measure product concentration by a Coriolis meter on the evaporator system discharge.
    2. Control product concentration by manipulation of the heat input to product flow ratio.
    3. Use evaporator level measurements with excellent sensitivity and signal-to-noise ratio.
    4. When possible, use radar instead of capillary systems to reduce level noise, drift, and lag.
    5. Control product concentration by changing the heat input to feed rate ratio. If production rate is set by discharge flow, use PID to manipulate heat input. If production rate is set by heat input, use PID to manipulate product flow rate.
    6. Use near-integrator rules maximizing rate action in PID concentration controller tuning.
    7. Use flow feedforward of product flow rate to set feed rate to minimize the response time for production rate or product concentration control.
    8. For feed concentration disturbances, use feedforward to correct the heat input based on the feed solids concentration computed from density measured by a feed Coriolis meter (see the sketch after this list).
    9. The actual heat to feed ratio must be displayed, and manual adjustment of the desired ratio provided to operations for startup and abnormal operation.
    10. To provide faster concentration control for small disturbances, use a PD controller to manipulate a small bypass flow whose bias is about 50% of maximum bypass flow.
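
    Here is a minimal sketch of practices 5, 8 and 9 together; the density-to-solids correlation and all numbers are illustrative assumptions, not from an actual evaporator:

    ```python
    # Sketch: steam (heat input) is set as a ratio to feed, corrected by a
    # feedforward for feed solids concentration inferred from a Coriolis
    # density reading. The correlation and values are illustrative only.

    def solids_fraction(density_kg_m3):
        # hypothetical linear density-to-solids correlation for this product
        return (density_kg_m3 - 1000.0) / 400.0

    def steam_setpoint(feed_flow, feed_density, desired_ratio, design_solids):
        ff_correction = solids_fraction(feed_density) / design_solids
        return desired_ratio * feed_flow * ff_correction

    desired_ratio = 0.35   # actual heat-to-feed ratio displayed to operators
    sp = steam_setpoint(feed_flow=120.0, feed_density=1080.0,
                        desired_ratio=desired_ratio, design_solids=0.20)
    print(f"steam flow SP = {sp:.1f} (ratio displayed: {desired_ratio})")
    ```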

    Model predictive control software often does a good job of identifying the dynamics and automatically incorporating them into the controller. It can also simultaneously handle multiple constraints, with predictive capability as to violation of constraints. Furthermore, a linear program or other optimizer built into the MPC can find and achieve the optimum intersection of the minimum and maximum values of controlled, constraint, and manipulated variables plotted on a common axis of the manipulated variables.
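
    To make the optimizer idea concrete, here is a minimal sketch of the kind of steady-state linear program an MPC solves; the gains, costs, and limits are illustrative assumptions, not from this evaporator:

    ```python
    # Sketch of an MPC steady-state LP: find MV targets that maximize
    # economics subject to CV and MV limits through steady-state gains.
    from scipy.optimize import linprog

    # MVs (deviation from current operation): x = [feed_flow, steam_flow].
    # Maximize production and penalize steam: minimize -1*feed + 0.2*steam.
    c = [-1.0, 0.2]

    # CV limit: dilution from extra feed must be offset by steam,
    # 0.5*feed - 0.8*steam <= 2.0 (illustrative gains).
    A_ub = [[0.5, -0.8]]
    b_ub = [2.0]

    bounds = [(0, 10.0), (0, 8.0)]   # MV move limits from current position

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x)   # optimum sits at the intersection of the active limits
    ```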

    I have asked for more detailed advice on MPC from Mark Darby, a great new resource, who wrote the MPC sections for the McGraw-Hill handbook Hunter Vegas and I just finished.

    Mark Darby’s Initial Answer

    It is normally best to keep PID controls in place for basic regulatory control if they perform well, which may require re-tuning or reconfiguration of the strategy. Your case is getting into advanced control and optimization, where the advantage shifts to MPC. Multiple interactions and measured disturbances are handled better by MPC than by PID decoupling and feedforward control. First-principles models should be used to compute smarter disturbance variables, such as solids feed flow rather than separate feed flow and feed concentration disturbance variables. Override control and valve position control schemes are better handled by MPC. More general optimization is also better done with an MPC. Remember to include PID outputs to valves as constraint variables if they can saturate in normal operation. If a valve is operated close to a limit (e.g., 5% or 95%), it may be better to have the MPC manipulate the valve signal directly, using signal characterization based on the installed flow characteristic as needed to linearize the response.
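
    A minimal sketch of such a signal characterizer follows; the installed characteristic points are assumed for illustration:

    ```python
    # Sketch of signal characterization: invert the installed flow
    # characteristic so a flow demand maps to a valve signal, linearizing
    # the response seen by the controller. Points are illustrative.
    import numpy as np

    # installed characteristic: valve % lift -> % of maximum flow (assumed)
    lift = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
    flow = np.array([0, 2, 5, 10, 18, 30, 45, 62, 80, 92, 100])

    def lift_for_flow(flow_demand_pct):
        """Interpolate the inverse characteristic (flow -> lift)."""
        return np.interp(flow_demand_pct, flow, lift)

    print(lift_for_flow(50.0))   # ~63% lift for 50% flow on this valve
    ```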

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    Here are some MPC best practices from Process/Industrial Instruments and Controls Handbook Sixth Edition, by Gregory K. McMillan and Hunter Vegas (co-editors), and scheduled to be published in early 2019. This sixth edition is revolutionary in having nearly 50 industry experts provide a focus on the steps needed for all aspects to achieve a successful automation project to maximize the return on investment.

     

    MPC Project Best Practices

    1. Project team members should include not only control engineers, but also process engineers and operations personnel.
    2. First level support of MPC requires staff with knowledge of both the MPC and the process.  Site staff needs to have sufficient understanding to troubleshoot and answer the questions of operations.  Larger companies often have central teams for second level support and to participate in projects.
    3. Even in companies with experienced teams, it is not unusual to use outside MPC consultants. The right level of usage of outside consultants is rarely 0% or 100%.
    4. It may be tempting to avoid the benefit estimation and/or post audit, especially when a company has previous successful history with MPC. But doing so carries a risk.  New management may not have experience or understand the value of MPC, leading to the inevitable question: “What is MPC doing for me today?”
    5. The other temptation is to forgo needed instrumentation or hardware repairs and proceed directly with an MPC project, arguing that MPC can compensate for such deficiencies.  This carries the risk of not meeting expectations and MPC getting a bad reputation, which will be difficult to erase.
    6. Regular reporting of relevant KPIs and benefits is seen as the best way of keeping the organization in the know and motivating additional MPC applications.

    MPC Design Best Practices

    1. Develop a functional design with input from operations, process engineering, economics staff, and instrument techs.  Update the design as the project progresses, and after the project is completed to reflect the as built MPC.
    2. Not all MPC variables must be determined up front in the project.  Most important is identifying the MVs.  The final selection of CVs and DVs can be made after plant testing, assuming data for these variables was collected.
    3. The use of a dynamic simulation can be useful for testing a new regulatory control strategy.  It can also be used to test and demonstrate an MPC, which can be quite illustrative and educational, particularly if MPC is being applied for the first time in a facility.
    4. If filtering of a CV or DV for MPC is required, it needs to be done at the DCS or PLC level.  The faster scan times allow effective filtering (usually on the order of seconds) without significantly affecting the underlying dynamics of the signal.  In addition, filters associated with the PVs of PID loops should be reviewed to ensure excessive filtering is not being used to mask other problems.
    5. The use of a steady-state or dynamic simulation can be useful for determining thermo-physical equation parameters for PID calculated control variables (e.g., duty or PCT) and MPC CVs, estimating process gains, and evaluating possible inferential predictors.
    6. With most MPC products, adding MVs, CVs, and DVs is a straightforward task once models are identified.  This allows starting with a smaller MPC on one part of the unit, and later increasing the scope as experience and confidence is gained.
    7. Inferential models can be developed ahead of the plant test, which allows the model to be evaluated and adjustments made.  For data-driven, regression-based inferentials, one needs to have at least confirmed that measurements exist that correlate with the analyzed value.  Final determination of model inputs can be made during the modeling phase.
    8. A challenge with lab-based inferentials is accurately knowing when a lab sample is collected.  A technique for automating this is to install a thermocouple directly in the line of the sample point. A spike in the temperature measurement is used to detect a sample collection (see the sketch after this list).
    9. When implementing a steady-state inferential model online, it is often useful to filter inputs to the calculation to remove phantom effects such as overshoot or inverse response.
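
    A minimal sketch of the sample-detection idea in practice 8; the jump threshold and data are illustrative assumptions:

    ```python
    # Sketch: flag a lab-sample collection when the sample-line thermocouple
    # jumps by more than a threshold between scans. Values are illustrative.

    def detect_sample_times(temps, times, jump_threshold=5.0):
        events = []
        for prev, cur, t in zip(temps, temps[1:], times[1:]):
            if cur - prev > jump_threshold:   # hot process fluid hits the line
                events.append(t)
        return events

    temps = [25.1, 25.3, 25.2, 60.4, 58.9, 30.2, 25.5]
    times = ["08:00", "08:01", "08:02", "08:03", "08:04", "08:05", "08:06"]
    print(detect_sample_times(temps, times))   # ['08:03']
    ```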

    MPC Model Development Best Practices

    Plant Testing

    1. A test plan should be developed with operations and process engineering. It will need to be flexible to accommodate the needs of the modeling as well as operational issues that may arise.
    2. Data collected for model identification should not use data compression. A separate data collection is recommended to minimize the likelihood of latency effects such as PVs exhibiting changes before SPs.
    3. The data collection should include all pertinent tags for the units being tested. This allows integrity checks to be made, and models to be identified for new CVs and DVs that may be added later to the MPC.
    4. Model identification runs should be done frequently, typically at least once per day. This allows the testing to be modified to emphasize MV-CV models that are insufficiently identified.
    5. The plant test is an opportunity to answer operational or process questions on which there are differing opinions, such as the effect of a recycle on recovery or on a constraint. This can help to develop consensus on a new strategy.
    6. MPC products that include automatic closed-loop testing should provide the necessary logic to change step sizes, bring MVs in and out of test mode, and displays to follow the testing.
    7. Lab sample collection for an inferential model: include multiple samples collected at the same time to assess sample container differences (reproducibility) and lab repeatability. When multiple sample containers are used, record the container used for each sample.  Coordinate lab samples with collection personnel and record the times samples are collected from the process.

    Model Identification

    1. The MPC identification package should automatically handle the required scaling of MVs and CVs and differencing and/or de-trending.
    2. The ability to slice or section out bad data is a necessary feature. Note that each section of data that is excluded requires a re-initialization of the identification algorithm for the next section of good data.
    3. A useful technique for deciding on which MVs and DVs are significant in a model, and should therefore be included, is to compare their contribution to the CV response based on the average move sizes made during the plant test.
    4. Model assessment tools to guide model quality assessments are desirable. These may be error bounds on the step responses or other techniques to grade the model.  A common technique for assessing model errors is Bode error plots, which express errors as a function of frequency.  They can be useful for modifying the test to improve certain aspects of the model (e.g., reducing errors at low or high frequencies).
    5. Features to assist in the development of nonlinear transformations are desirable. Ideally, the necessary pre- and post-controller calculations to support transformations are a standard option in the MPC.
    6. Features that help document the various model runs and the construction of the final MPC model are desirable.
    7. Even if an MPC includes online options for removing weak degrees of freedom, it is recommended that known consistency relationships be imposed as part of model identification.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 12 Sep 2018

    How to Calibrate a Thermocouple

    The post How to Calibrate a Thermocouple first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Daniel Brewer.

    Daniel Brewer, one of our newest protégés, has over six years of industry experience as an I&E technician. He attended the University of Kansas process instrumentation and control online courses. Daniel’s questions focus on aspects affecting thermocouple accuracy.

    Daniel Brewer’s Question

    How do you calibrate a thermocouple transmitter? How do you simulate a thermocouple? When do you use zero degree reference junction? What if your measuring junction temperature varies?

    Hunter Vegas’ Answer

    Most people use a thermocouple simulator to calibrate temperature transmitters. You can usually set them to generate a wide selection of thermocouple types. Just make sure the thermocouple lead you use to connect the simulator to the transmitter is the right kind of wire.

    “Calibrating” the thermocouple is another matter – because realistically it works or it doesn’t. You can pull it and put it in a bath though very few people actually do that. However if it is critical most will take the time to either put the thermocouple in a bath or dry block or at least cross check the reading against another thermocouple or some other means to check it.

    The zero degree junction is a bit more complicated. Basically any time two dissimilar metals are connected a slight millivolt signal is generated. That is what a thermocouple is – two dissimilar metals welded together which generate varying voltages depending on the temperature at the junction. When you run a thermocouple circuit you try to use the same metals as the thermocouple for the whole circuit – that is you run thermocouple wire that matches the thermocouple and you use special thermocouple terminal blocks that are the same kind. This eliminates any extra junctions – the same metal is always connected to itself.   However at some point you have to hook up to some kind of device that has copper terminal blocks – (transmitter, indicator, etc.) Unfortunately this creates another thermocouple junction where the copper touches the wires.  That junction will impact the reading and will also fluctuate with temperature so the error will be variable.

    To fix this, most devices have a cold junction compensation circuit built in that automatically senses the temperature of the terminal block and subtracts the effect from the reading. Nearly every transmitter and readout device has it built in as a standard feature now – only older equipment would lack it.
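
    To illustrate what that compensation is doing, here is a simplified sketch: real devices use the standard thermocouple polynomials, while this uses an approximate linear type K sensitivity for clarity:

    ```python
    # Sketch of cold junction compensation with a linear approximation for a
    # type K thermocouple (~0.041 mV/degC). Real devices use the standard
    # polynomial tables; the linear slope is an illustrative simplification.

    SEEBECK_MV_PER_C = 0.041   # approximate type K sensitivity

    def process_temp_c(measured_mv, terminal_block_temp_c):
        # add back the emf "lost" at the copper terminal junction, then convert
        cjc_mv = SEEBECK_MV_PER_C * terminal_block_temp_c
        return (measured_mv + cjc_mv) / SEEBECK_MV_PER_C

    # 8.0 mV measured with terminals at 25 degC reads about 220 degC
    print(round(process_temp_c(8.0, 25.0)))
    ```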

    Greg McMillan’s Answer

    The error from a properly calibrated smart temperature transmitter with the correct span is generally negligible compared to the noise and errors from the sensor and signal wiring and connections. The use of Class 1 special grade instead of Class 2 standard grade thermocouples and extension lead wires enables an accuracy that is 50% better. The use of thermocouple input cards instead of smart transmitters introduces large errors due to the large spans and the inability to individualize the calibrations.

    Thermocouple (TC) drift can vary from 1 to 20 degrees Fahrenheit per year and the repeatability can vary from 1 to 8 degrees Fahrenheit depending upon the TC type and application conditions. For critical operations demanding high accuracy, the frequency of sensor calibrations needed is problematic. While a dry block calibrator is faster than a wet bath and can cover a higher temperature range, the removal of the sensor from the process is disruptive to operations, and the time required compared to a simple transmitter calibration is still considerable. The best bet is a single-point temperature check to compensate for the offset due to drift and manufacturing tolerances.
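
    A minimal sketch of that single-point compensation; the values are illustrative:

    ```python
    # Sketch: capture the offset between the installed TC and a calibrated
    # reference at one operating temperature, then apply it as a bias.

    reference_temp = 302.0      # degF, from a calibrated reference probe
    installed_reading = 297.6   # degF, from the installed TC/transmitter

    offset = reference_temp - installed_reading   # +4.4 degF bias

    def corrected(reading_degF):
        return reading_degF + offset

    print(corrected(305.0))     # 309.4 degF after compensation
    ```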

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    In a distillation column application, operations were perplexed and more than annoyed at the terrible column performance when the thermocouple was calibrated or replaced. It turns out operations had homed in on a temperature setpoint that had effectively compensated for the offset in the thermocouple measurement. Even after realizing the need for a new setpoint due to a more accurate thermocouple, it would take months to years to find the best setpoint.

    Temperature is critical for column control because it is an inference of composition. It is also critical for reactor control because the reaction rate that determines process capacity and the selectivity that sets process efficiency and product quality are greatly affected by temperature. In these applications where the operating temperature is below 400 degrees Fahrenheit, a resistance temperature detector (RTD) is a much better choice. Table 1 compares the performance of a thermocouple and an RTD.

    Table 1: Temperature Sensor Precision, Accuracy, Signal, Size and Linearity

     

    Stepped thermowells should be specified with an insertion length greater than five times the tip diameter (L/D > 5) to minimize the error from heat conducted from the thermowell tip to the pipe or equipment connection, and an insertion length less than 20 times the tip diameter (L/D < 20) to minimize vibration from wake frequencies. Supplier calculations on length should be done to confirm that heat conduction error and vibration damage are not a problem. Stepped thermowells reduce the error and damage and provide a faster response. Spring-loaded grounded thermocouples, as seen in Figure 1, with minimum annular clearance between the sheath and thermowell interior walls provide the fastest response, minimizing errors introduced by the sensor tip temperature lagging the actual process temperature.
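
    A minimal sketch of the L/D screening rule; the dimensions are illustrative, and this does not replace the supplier's wake frequency calculation (e.g., per ASME PTC 19.3 TW):

    ```python
    # Sketch: screen a stepped thermowell insertion length against the
    # 5 < L/D < 20 window described above. Values are illustrative.

    def ld_check(insertion_len_mm, tip_dia_mm):
        ratio = insertion_len_mm / tip_dia_mm
        if ratio <= 5:
            return f"L/D={ratio:.1f}: too short, conduction error risk"
        if ratio >= 20:
            return f"L/D={ratio:.1f}: too long, vibration/wake risk"
        return f"L/D={ratio:.1f}: within the recommended window"

    print(ld_check(250.0, 16.0))   # L/D=15.6: within the recommended window
    ```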

     

    Figure 1: Spring loaded compression fitting for sheathed TC or RTD

     

    Thermowell material must provide corrosion resistance and, if possible, a thermal conductivity that minimizes conduction error or response time, whichever is most important.  The tapered tip of the thermowell must be close to the center line of the pipe, and the tapered portion of the thermowell completely past the equipment wall, including any baffles. For columns, the location showing the largest and most symmetrical change in temperature for an increase and decrease in manipulated flow should be used. Simulations can help find this, but it is wise to have several connections to confirm the best location by field tests. The tip of the thermowell must see the liquid, which may require a longer extension length or mounting on the opposite side of the downcomer to avoid the tip being in the vapor phase due to the drop in level at the downcomer.

    For TCs above 600 degrees Celsius, ensure the sheath material is compatible with the TC type. For TCs above the temperature limit of sheaths, use the ceramic material with the best thermal conductivity and a design that minimizes measurement lag time. For TCs above the temperature limit of sheaths with gaseous contaminants or reducing conditions, use primary (outer) and secondary (inner) protection tubes, possibly purged, to prevent contamination of the TC element and provide a faster response.

    The best location for a thermowell in small diameter pipelines (e.g., less than 12 inches) is in a pipe elbow facing upstream to maximize the insertion length in the center of the pipe. If abrasion from solids is an issue, the thermowell can be installed in the elbow facing downstream, but a greater length is needed to reduce noise from swirling.  If a pipe is half filled, the installation should ensure the narrowed diameter of the stepped thermowell is in the liquid and not the vapor.

    The location of a thermowell must be sufficiently downstream of a joining of streams or a heat exchanger tube side outlet to enable remixing of the streams. The location must not be too far downstream due to the increase in transportation delay, which for plug flow is the residence time: the pipe volume between the outlet or junction and the sensor location divided by the pipe flow (volume/flow). For a length that is 25 times the pipe diameter (L/D = 25), the increase in loop dead time of a few seconds is not as detrimental as a poor signal-to-noise ratio from poor uniformity. For desuperheaters, to prevent water droplets from creating noise, the thermowell must provide a residence time greater than 0.3 seconds, which for high gas velocities can require a much greater distance than for liquid heat exchangers.
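
    A quick worked example of that transportation delay estimate; the pipe size and flow are assumed for illustration:

    ```python
    # Sketch: plug-flow transportation delay = pipe volume between the
    # junction and the sensor divided by the flow. Values are illustrative.
    import math

    pipe_id_m = 0.10          # roughly a 4-inch line
    distance_m = 2.5          # L/D = 25 for a 0.1 m pipe
    flow_m3_per_s = 0.004     # about 14.4 m3/h

    volume = math.pi * (pipe_id_m / 2) ** 2 * distance_m
    dead_time_s = volume / flow_m3_per_s
    print(f"added dead time = {dead_time_s:.1f} s")   # ~4.9 s
    ```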

    For greater reliability and better diagnostics, dual isolated sensing elements can be used, but the more effective solution is redundant installations of thermowells and transmitters. The middle signal selection of three completely redundant measurements offers the best reliability and the least effect of drift, noise, repeatability, and slow response. The measurement from middle signal selection will remain valid for any type of failure of one measurement. There is also considerable knowledge to be gained, heading off problems, from comparison of each measurement to the middle.
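
    A minimal sketch of middle signal selection and the middle-deviation diagnostic; the readings are illustrative:

    ```python
    # Sketch: the middle (median) of three redundant measurements rides
    # through any single failure, spike, or drifting sensor.

    def middle_select(a, b, c):
        return sorted([a, b, c])[1]

    t1, t2, t3 = 151.2, 150.8, 187.3   # t3 has failed high (illustrative)
    pv = middle_select(t1, t2, t3)
    print(pv)                           # 151.2 - unaffected by the bad sensor

    # deviation of each sensor from the middle flags developing problems
    for name, t in (("T1", t1), ("T2", t2), ("T3", t3)):
        print(name, round(t - pv, 1))
    ```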

    Drift in the sensor shows up as a different average controller output at the same production rate assuming there is no fouling or change in raw materials. Poor repeatability in the sensor shows up as excessive variability in temperature controller output. For very tight control where the controller gain is high, sensor variability is most apparent in the controller output assuming the controller is tuned properly and the valve has a smooth consistent response.

    For much more on calibration and temperature measurement see the Beamex e-book Calibration Essentials and Rosemount’s The Engineer’s Guide to Industrial Temperature Measurement.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 12 Sep 2018

    How to Calibrate a Thermocouple

    The post How to Calibrate a Thermocouple first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the
    ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

     

    Daniel Brewer, one of our newest protégés, has over six years of industry experience as an I&E technician. He attended the University of Kansas process instrumentation and control online courses. Daniel’s questions focus on aspects affecting thermocouple accuracy.

     

     

    Daniel Brewer’s Question

    How do you calibrate a thermocouple transmitter? How do you simulate a thermocouple? When do you use a zero degree reference junction? What if your measuring junction temperature varies?

    Hunter Vegas’ Answer

    Most people use a thermocouple simulator to calibrate temperature transmitters. You can usually set them to generate a wide selection of thermocouple types. Just make sure the thermocouple lead you use to connect the simulator to the transmitter is the right kind of wire.

    “Calibrating” the thermocouple itself is another matter – realistically it either works or it doesn’t. You can pull it and check it in a bath, though very few people actually do that. However, if the measurement is critical, most will take the time to put the thermocouple in a bath or dry block, or at least cross check the reading against another thermocouple or some other means.

    The zero degree junction is a bit more complicated. Basically, any time two dissimilar metals are connected, a slight millivolt signal is generated. That is what a thermocouple is – two dissimilar metals welded together that generate varying voltages depending on the temperature at the junction. When you run a thermocouple circuit you try to use the same metals as the thermocouple for the whole circuit – that is, you run thermocouple wire that matches the thermocouple and you use special thermocouple terminal blocks of the same kind. This eliminates any extra junctions – the same metal is always connected to itself. However, at some point you have to hook up to some kind of device that has copper terminal blocks (transmitter, indicator, etc.). Unfortunately this creates another thermocouple junction where the copper touches the wires. That junction will impact the reading and will also fluctuate with temperature, so the error will be variable.

    To fix this, most devices have a cold junction compensation circuit built in that automatically senses the temperature of the terminal block and subtracts the effect from the reading. Nearly every transmitter and readout device has it built in as a standard feature now – only older equipment would lack it.

     

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

     

    Greg McMillan’s Answer

    The error from a properly calibrated smart temperature transmitter with the correct span is generally negligible compared to the noise and errors from the sensor, signal wiring and connections. The use of Class 1 special grade instead of Class 2 standard grade thermocouples and extension lead wires enables an accuracy that is 50 percent better. The use of thermocouple input cards instead of smart transmitters introduces large errors due to the large spans and the inability to individualize the calibrations.

    Thermocouple (TC) drift can vary from 1 to 20 degrees Fahrenheit per year and the repeatability can vary from 1 to 8 degrees Fahrenheit depending upon the TC type and application conditions. For critical operations demanding high accuracy, the frequency of sensor calibrations needed is problematic. While a dry block calibrator is faster than a wet bath and can cover a higher temperature range, the removal of the sensor from the process is disruptive to operations and the time required compared to a simple transmitter calibration is still considerable. The best bet is a single point temperature check to compensate for the offset due to drift and manufacturing tolerances.

    In a distillation column application, operations were perplexed and more than annoyed at the terrible column performance when the thermocouple was calibrated or replaced. It turns out operations had homed in on a temperature setpoint that had effectively compensated for the offset in the thermocouple measurement. Even after realizing the need for a new setpoint due to a more accurate thermocouple, it would take months to years to find the best setpoint.

    Temperature is critical for column control because it is an inference of composition. It is also critical for reactor control because the reaction rate that determines process capacity, and the selectivity that sets process efficiency and product quality, are greatly affected by temperature. In these applications where the operating temperature is below 400 degrees Fahrenheit, a resistance temperature detector (RTD) is a much better choice. Table 1 compares the performance of a thermocouple and an RTD.

     

    Table 1: Temperature Sensor Precision, Accuracy, Signal, Size and Linearity

     

     

    Stepped thermowells should be specified with an insertion length greater than five times the tip diameter (L/D > 5) to minimize the error from heat conducted from the thermowell tip to the pipe or equipment connection, and an insertion length less than 20 times the tip diameter (L/D < 20) to minimize vibration from wake frequencies. Supplier calculations on length should be done to confirm that heat conduction error and vibration damage are not a problem. Stepped thermowells reduce the error and damage and provide a faster response. Spring loaded grounded thermocouples, as seen in Figure 1, with minimum annular clearance between the sheath and thermowell interior walls provide the fastest response, minimizing errors introduced by the sensor tip temperature lagging the actual process temperature.
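    The L/D guidance above lends itself to a quick screening calculation. Below is a minimal sketch, assuming illustrative numbers rather than a real installation; it is not a substitute for the supplier's wake frequency and conduction calculations.

        def check_thermowell(insertion_length_in, tip_diameter_in):
            """Screen a stepped thermowell against the 5 < L/D < 20 guideline."""
            ld = insertion_length_in / tip_diameter_in
            if ld <= 5:
                return ld, "L/D <= 5: heat conduction to the connection may bias the reading"
            if ld >= 20:
                return ld, "L/D >= 20: vibration damage from wake frequencies is a concern"
            return ld, "L/D within the 5 to 20 guideline"

        print(check_thermowell(insertion_length_in=4.0, tip_diameter_in=0.5))  # (8.0, within guideline)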

     

    Figure 1: Spring loaded compression fitting for sheathed TC or RTD

     

    Thermowell material must provide corrosion resistance and, if possible, a thermal conductivity that minimizes conduction error or response time, whichever is most important. The tapered tip of the thermowell must be close to the centerline of the pipe, with the tapered portion of the thermowell completely past the equipment wall including any baffles. For columns, the location showing the largest and most symmetrical change in temperature for an increase and decrease in manipulated flow should be used. Simulations can help find this, but it is wise to have several connections to confirm the best location by field tests. The tip of the thermowell must see the liquid, which may require a longer extension length or mounting on the opposite side of the downcomer to avoid the tip being in the vapor phase due to the drop in level at the downcomer.

    For TCs above 600 degrees Celsius, ensure the sheath material is compatible with the TC type. For TCs above the temperature limit of sheaths, use the ceramic material with the best thermal conductivity and a design that minimizes measurement lag time. For TCs above the temperature limit of sheaths with gaseous contaminants or reducing conditions, use primary (outer) and secondary (inner) protection tubes, possibly purged, to prevent contamination of the TC element and provide a faster response.

    The best location for a thermowell in small diameter pipelines (e.g., less than 12 inches) is in a pipe elbow facing upstream to maximize the insertion length in the center of the pipe. If abrasion from solids is an issue, the thermowell can be installed in the elbow facing downstream, but a greater length is needed to reduce noise from swirling. If a pipe is half filled, the installation should ensure the narrowed diameter of the stepped thermowell is in liquid and not vapor.

    The location of a thermowell must be sufficiently downstream of a joining of streams or a heat exchanger tube side outlet to enable remixing of the streams. The location must not be too far downstream due to the increase in transportation delay, which is the residence time for plug flow: the pipe volume between the outlet or junction and the sensor location divided by the pipe flow (volume/flow). For a length that is 25 times the pipe diameter (L/D = 25), the increase in loop dead time of a few seconds is not as detrimental as a poor signal to noise ratio from poor uniformity. For desuperheaters, to prevent water droplets from creating noise, the thermowell must provide a residence time greater than 0.3 seconds, which for high gas velocities can require a much greater distance than for liquid heat exchangers.
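    The transportation delay described above is simple arithmetic worth doing early. Here is a minimal sketch; the pipe size, sensor distance, and flow are illustrative assumptions.

        import math

        def transport_delay_s(pipe_id_m, length_m, flow_m3_per_s):
            """Dead time added by plug flow: pipe volume / volumetric flow."""
            volume_m3 = math.pi * (pipe_id_m / 2.0) ** 2 * length_m
            return volume_m3 / flow_m3_per_s

        # 0.3 m ID pipe, sensor 25 pipe diameters (7.5 m) downstream, 0.05 m3/s flow
        print(transport_delay_s(0.3, 25 * 0.3, 0.05))  # about 10.6 s of added dead time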

    For greater reliability and better diagnostics, dual isolated sensing elements can be used, but the more effective solution is redundant installations of thermowells and transmitters. The middle signal selection of three completely redundant measurements offers the best reliability and the least effect of drift, noise, repeatability and slow response. The measurement from middle signal selection will be valid for any type of failure of a single measurement. There is also considerable knowledge gained to head off problems from comparison of each measurement to the middle value.
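    Middle signal selection is easy to express in code. The sketch below shows the median pick and the deviation-from-middle values that can be trended to head off problems; the temperatures are made-up values with one failed-high sensor.

        def middle_select(a, b, c):
            """Median of three redundant measurements; a single failure of any type is ignored."""
            return sorted((a, b, c))[1]

        def deviations_from_middle(a, b, c):
            middle = middle_select(a, b, c)
            return [x - middle for x in (a, b, c)]   # trend these to spot drift early

        print(middle_select(151.2, 150.8, 203.7))         # 151.2: the failed-high sensor is ignored
        print(deviations_from_middle(151.2, 150.8, 203.7))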

    Drift in the sensor shows up as a different average controller output at the same production rate assuming there is no fouling or change in raw materials. Poor repeatability in the sensor shows up as excessive variability in temperature controller output. For very tight control where the controller gain is high, sensor variability is most apparent in the controller output assuming the controller is tuned properly and the valve has a smooth consistent response.

    For much more on calibration and temperature measurement see the Beamex e-book Calibration Essentials and Rosemount’s The Engineer’s Guide to Industrial Temperature Measurement.

     

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

     

     

    About the Author
     Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg:
    LinkedIn

     

    • 2 Sep 2018

    When is Reducing Variability Wrong?

    The post, When is Reducing Variability Wrong?, first appeared on the ControlGlobal.com Control Talk blog.

    Having the blind wholesale goal of reducing variability can lead to doing the wrong thing, reducing plant safety and performance. Here we look at some common mistakes that users may not realize they are making until they have a better concept of what is really going on. We seek to provide some insightful knowledge here to keep you out of trouble.

    Is a smoother data historian plot or a statistical analysis showing less short term variability good or bad? The answer is bad in the following situations, which mislead users and data analytics.

    First of all, the most obvious case is surge tank level control. Here we want to maximize the variation in level to minimize the variation in manipulated flow, typically to downstream users. This objective has the positive name of absorption of variability. What this is really indicative of is the principle that control loops do not make variability disappear but transfer variability from a controlled variable to a manipulated variable. Process engineers often have a problem with this concept because they think of setting flows per a Process Flow Diagram (PFD) and are reluctant to let a controller freely move them per some algorithm they do not fully understand. This is seen in predetermined sequential additions of feeds or heating and cooling in a batch operation rather than allowing a concentration or temperature controller to do what is needed via fed-batch control. No matter how smart a process engineer is, not all of the situations, unknowns and disturbances can be accounted for continuously. This is why fed-batch control is called semi-continuous. I have seen where process engineers, believe it or not, sequence air flows and reagent flows to a batch bioreactor rather than going to dissolved oxygen or pH control. We need to teach chemical and biochemical engineers process control fundamentals including the transfer of variability.

    The variability of a controlled variable is minimized by maximizing the transfer of variability to the manipulated variable. Unnecessary sharp movements of the manipulated variable can be prevented by a setpoint rate of change limit on analog output blocks for valve positioners or VFDs, or directly on other secondary controllers (e.g., flow or coolant temperature), and the use of external-reset feedback (e.g., dynamic reset limit) with fast feedback of the actual manipulated variable (e.g., position, speed, flow, or coolant temperature). There is no need to retune the primary process variable controller when external-reset feedback is used.

    Data analytics programs need to use manipulated variables in addition to controlled variables to indicate what is happening. For tight control and infrequent setpoint changes to a process controller, what is really happening is seen in the manipulated variable (e.g., analog output).

    A frequent problem is data compression in a data historian that conceals what is really going on. Hopefully, this is only affecting the trend displays and not the actual variables being used by a controller.

    The next most common problem has been extensively discussed by me, so at this point you may want to move on to more pressing needs. This problem, the excessive use of signal filters, may be even more insidious because the controller does not see a developing problem as quickly. A signal filter that is less than the largest time constant in the loop (hopefully in the process) creates dead time. If the signal filter becomes the largest time constant in the loop, the previously largest time constant creates dead time. Since controller tuning based on the largest time constant has no idea where that time constant resides, the controller gain can be increased, which combined with the smoother trends can lead one to believe the large filter was beneficial. The key here is a noticeable increase in the oscillation period, particularly if the reset time was not increased. Signal filters become increasingly detrimental as the process loses self-regulation. Integrating processes such as level, gas pressure and batch temperature are particularly sensitive. Extremely dangerous is the use of a large filter on the temperature measurement for a highly exothermic reaction. If the PID gain window (ratio of maximum to minimum PID gain) is reduced by measurement lag to the point of not being able to withstand nonlinearities (e.g., ratio less than 6), there is a significant safety risk.
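    A rough simulation, under simplified assumptions, can make the filter effect concrete: pass a unit step through a process lag and then a filter lag, and time how long the filtered signal takes to move a detectable amount.

        dt = 0.1
        tau_process, tau_filter = 50.0, 10.0    # seconds, illustrative
        pv = f = 0.0
        t = 0.0
        t_seen = None
        while t < 200.0 and t_seen is None:
            pv += dt / tau_process * (1.0 - pv)   # process response to the step
            f += dt / tau_filter * (pv - f)       # filtered signal the PID sees
            t += dt
            if f >= 0.02:                         # crude 2% detection threshold
                t_seen = t
        print(f"filtered signal crosses 2% after about {t_seen:.1f} s")
        # With no filter the crossing is near 1 s; most of the difference acts
        # like added dead time hidden by the smoother trend.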

    A slow thermowell response, often due to a sensor that is loose or not touching the bottom of the thermowell, causes the same problem as a signal filter. An electrode that is old or coated can have a time constant that is orders of magnitude larger (e.g., 300 sec) than that of a clean new pH electrode. If the velocity is slightly low (e.g., less than 5 fps), pH electrodes become more likely to foul, and if the velocity is very low (e.g., less than 0.5 fps), the electrode time constant can increase by an order of magnitude (e.g., 30 sec) compared to an electrode seeing the recommended velocity. If the thermowell or electrode is hidden by a baffle, the response is smoother but not representative of what is actually going on.

    For gas pressure control, any measurement filter, including that due to transmitter damping, generally needs to be less than 0.2 sec, particularly if volume boosters on the valve positioner output(s) or a variable frequency drive are needed for a faster response.

    Practitioners experienced in doing Model Predictive Control (MPC) want data compression and signal filters to be completely removed so that the noise can be seen and a better identification of process dynamics, especially dead time, is possible.

    Virtual plants can show how fast the actual process variables should be changing, revealing poor analyzer or sensor resolution and response time and excessive filtering. In general, you want measurement lags to total less than 10% of the total loop dead time or less than 5% of the reset time. However, you cannot get a good idea of the loop dead time unless you remove the filter and look for the time it takes to see a change in the right direction beyond noise after a controller setpoint or output change.

    For more on the deception caused by a measurement time constant, see the Control Talk Blog “Measurement Attenuation and Deception.”

    • 29 Aug 2018

    Key Insights to Control System Dynamics

    The post Key Insights to Control System Dynamics first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

     

    Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks some questions about the dynamics that play such a big role in improving control systems. The questions are basic but have enormous practical implications, as seen in the answers.

    Caroline Cisneros’ Question

    Is an increase/decrease in process gain, time constant, dead time, controller gain, reset, and rate good or bad in terms of effects on loop performance?

    Greg McMillan’s Answers

    This is an excellent question with widespread and significant implications. I offer here some key insights that can lead to better career and system performance. The first obstacle is terminology that over the years has resulted in considerable misconceptions and a missed recognition of the source and nature of problems and the solutions needed. To overcome what is preventing a more common and better understanding, see the Control Talk Blog Understanding Terminology to Advance Yourself and the Automation Profession. Also, for much more on how all of these dynamic terms affect what you do with your PID and the consequences for loop performance, see the ISA Mentor post How to Optimize PID Settings and Options.

    Process Gain

    Increases in process gain can be helpful but challenging.

    In distillation control, the tray that shows the largest temperature change for a change in reflux to feed ratio (largest process gain) in both directions has the best temperature to be used as the controlled variable. This location offers much better control because of the increased sensitivity of temperature that is an inferential measurement of column composition. Tests are done in simulations and in plants to find the best locations for temperature sensors.

     

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

     

    In pH control, a titration curve (a plot of pH versus the ratio of reagent added to sample volume) with a slope that goes from flat to incredibly steep due to strong acids and strong bases can create an incredibly large and variable process gain. The X axis (abscissa) is converted to a ratio of reagent flow to influent flow, taking into account engineering units. The shape stays the same, and if volumetric units are used and concentrations are the same in the lab and plant, the X axis has the same numeric values. The slope of the curve is the process gain. The slope, and thus the process gain, can theoretically change by a factor of 10 for every pH unit deviation from neutrality for a strong acid and strong base. The straight, nearly vertical line at 7 pH seen in a plot of a laboratory titration curve is actually another curve if you zoom in on the neutral region, as seen in Figure 1. If only a few data points are provided between 8 and 10 pH (a common problem), you will not see the curve. The lab needs to be instructed to dramatically reduce the size of the reagent addition as the titrated pH gets closer to 7 pH.
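    A minimal sketch of such a titration curve for a strong acid and strong base is shown below, neglecting activity coefficients and temperature effects; the 0.1 normality concentrations are illustrative assumptions.

        import numpy as np

        Kw = 1e-14                 # water dissociation constant near 25 degC
        Ca = Cb = 0.1              # acid influent and base reagent normality (assumed)
        ratio = np.linspace(0.0, 2.0, 2001)   # reagent-to-influent flow ratio (X axis)

        # Net strong acid after mixing, diluted by the combined flow
        net = (Ca - ratio * Cb) / (1.0 + ratio)
        h = (net + np.sqrt(net ** 2 + 4.0 * Kw)) / 2.0   # charge balance solution for [H+]
        pH = -np.log10(h)

        # The slope of the curve (the process gain) near 7 pH is enormous:
        i = int(np.argmin(np.abs(pH - 7.0)))
        slope = (pH[i + 1] - pH[i - 1]) / (ratio[i + 1] - ratio[i - 1])
        print(pH[i], slope)   # thousands of pH units per unit of flow ratio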

     

    Figure 1: Titration Curve for Strong Acid and Strong Base

     

    The steep slope provides incredible sensitivity to changes in hydrogen ion concentration, but less than ideal mixing will create enormous noise, and any stiction in the control valve will create enormous oscillations. The amplitude from stiction can be larger than 2 pH even for the best control valve. Even if we could have perfect mixing and a perfect control valve, we would not appreciate the orders of magnitude improvement in hydrogen ion control because we are only looking at what we measure, which is pH. Thus, for pH control we seek to have weak acids and weak bases and conjugate salts to moderate the slope of the titration curve. There is also a flow ratio gain that occurs for all composition and temperature control loops, as detailed in the Control Talk Blog Hidden Factor in our Most Important Control Loops.

    Often the term “process gain” includes the effect of more than the process. The better term is “open loop gain,” which is the product of the manipulated variable gain (e.g., valve gain), process gain, and measurement gain (e.g., 100%/span). The valve gain (the slope of the installed flow characteristic, that is, the flow change in engineering units per signal change in percent) must not be too small (e.g., large disk or ball valve rotations where the installed characteristic is flat) or too large (e.g., a quick opening characteristic) because the stiction or backlash expressed as a percent of signal translates to a larger amount of errant flow. Oversized valves cause an even greater problem because of operation near the closed position where stiction is greatest from seal and seat friction. Small measurement spans causing a high measurement gain may be beneficial when the accuracy of the measurement is a percent of span. The use of thermocouple and RTD input cards, rather than transmitters with spans narrowed to the range of interest, introduces too much error. In conclusion, automation system gains must not be too small or too large. Too small a valve gain or measurement gain is problematic because less sensitivity and greater error reduce the ability to accurately see and completely correct a process change. Too high a valve gain is also bad from the standpoint of an increase in the size of the flow change associated with backlash and stiction. An increase in this flow change accordingly reduces the precision of a correction for a process change and increases the amplitude of oscillations (e.g., limit cycle).
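    As a worked example of the open loop gain product, consider the numbers below (a 2 gpm per percent installed characteristic slope, a 0.5 degF per gpm process gain, and a 50 degF span), all assumptions for illustration.

        valve_gain = 2.0                  # (gpm)/%, slope of the installed flow characteristic
        process_gain = 0.5                # degF/gpm
        measurement_gain = 100.0 / 50.0   # %/degF for a 50 degF span

        open_loop_gain = valve_gain * process_gain * measurement_gain   # dimensionless %/%
        print(open_loop_gain)             # 2.0

        # A 0.5% stick-slip then corresponds to a 1 gpm errant flow; a wider span
        # (smaller measurement gain) makes the same process change harder to see.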

    Process Time Constant

    An increase in the largest (primary) time constant in a self-regulating process (a process that reaches steady state in manual for no continuing upsets) is beneficial because it enables a large PID gain. The process time constant also slows down process input disturbances, giving the PID more time to catch up. While this proportionally decreases peak and integrated errors, a large time constant is perceived by some as bad. The tuning is more challenging, requiring greater patience and time commitment for open loop tests that seek to identify the primary time constant. The time for identification of the dynamics needed to tune the loop can be reduced by 80% or more for some well mixed vessel temperature loops by identifying the dead time and initial ramp rate (treating the process like an integrating process). It has been verified by extensive test results that a loop with a process time constant larger than 4 times the dead time should be classified as near-integrating. Integrating process tuning rules are consequently used to enable more immediate feedback correction to potentially stop a process excursion within 4 dead times. The tuning parameter changes from a closed loop time constant for self-regulating process tuning rules to an arrest time for integrating process tuning rules in order to take advantage of the ability to increase the proportional and integral action to reject load disturbances.
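    The classification rule and the near-integrating shortcut can be sketched as below; the gain, time constant, and dead time are illustrative.

        def classify(tau_s, deadtime_s):
            """Near-integrating when the process time constant exceeds 4x the dead time."""
            return "near-integrating" if tau_s > 4.0 * deadtime_s else "self-regulating"

        def pseudo_integrating_gain(Kp, tau_s):
            """Initial ramp rate per % of output: the integrating process gain used for tuning."""
            return Kp / tau_s   # %/s per % of PID output

        print(classify(tau_s=600.0, deadtime_s=30.0))        # near-integrating
        print(pseudo_integrating_gain(Kp=1.5, tau_s=600.0))  # 0.0025 %/s per %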

    While the largest time constant is beneficial if it is in the process, the second largest process time constant effectively creates dead time and is detrimental. It can be largely cancelled by a rate time setting. Going from a single loop to a cascade loop, where a secondary loop encloses a process time constant smaller than the largest time constant, converts a term with a bad effect (the secondary time constant increasing dead time in the original single loop) into a term with a good effect (a primary time constant slowing down disturbances in the secondary loop). The reduction in the dead time also decreases the ultimate period of the primary loop.

    For true integrating and runaway processes, any time constant is detrimental. It becomes more important to cancel the time constant by a rate time equal to or larger than the time constant.

    Any time constant in the automation system is detrimental. A measurement time constant and a control valve time constant slow down the recognition and correction, respectively, of a disturbance. An automation system time constant also effectively creates dead time. Signal filters and transmitter damping settings add time constants. See Figure 2 to help recognize the many time constants in an automation system.

    A measurement time constant larger than the process time constant can be deceptive in that, for self-regulating processes, it enables a larger PID gain, and the amplitude of oscillations may look smaller due to the filtering action. However, the key realization is that the actual process error or amplitude in engineering units is larger and the period of the oscillation is longer. All measurement and valve time constants should be less than 10% of the total loop dead time for the effect on loop performance to be negligible. This objective for a valve time constant is difficult to achieve in liquid flow, pressure control, and compressor surge control because the process dead times in these applications are so small. A valve time constant becomes large for large signal changes (e.g., > 40%) due to stroking time, particularly for large valves. A valve time constant becomes large for small signal changes (e.g., < 0.4%) due to backlash, stiction, and poor positioner and actuator sensitivity. For more on how to identify and fix valve response problems, see the article How to specify valves and positioners that don’t compromise control.

    Dead Time

    Dead time anywhere in the loop is detrimental, creating a delay in the recognition or correction of a change in the process variable. For a setpoint change, dead time in the manipulated variable (e.g., manipulated flow) or process causes a delay in the start of the change in the process, and dead time in the measurement or controller creates additional delays in the appearance of the process variable response to the setpoint change. The minimum possible peak error and integrated error for an unmeasured load disturbance are proportional to the total loop dead time and the dead time squared, respectively. The total loop dead time is the sum of all dead times in the loop. The dead time from digital devices and algorithms is ½ the update rate (execution rate or scan time) plus the latency (the time required to communicate a change in digital output after a change in digital input). Most digital devices have negligible latency. Simulation tests that always have the disturbance arrive immediately before, instead of after, the PID execution do not show the full adverse effect of PID execution rate, which leads to misconceptions as to the adverse effect of execution rate. On average, the disturbance arrives in the middle of the time interval between PID executions, which is consistent with the dead time being ½ the execution rate for negligible latency. The latency for complex modules with complex calculations may approach the update rate. The latency for most at-line analyzers is the analyzer cycle time, since the analysis is not completed until the end of the cycle. The result is a dead time that is 1.5 times the cycle time. Most of a time constant much smaller than the process time constant, or in an integrating process, can be taken as equivalent dead time. Since dead time is nearly always underestimated, I simply sum up all of the small time constants as equivalent dead time. The block diagram in Figure 2 shows many but not all of the sources of dead time.
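    The dead time bookkeeping above can be captured in a simple budget; every number below is an illustrative assumption, with small time constants simply summed in as equivalent dead time per the text.

        pid_execution_s = 0.5 * 1.0       # half of a 1 s PID execution rate
        analyzer_s = 1.5 * 300.0          # 1.5 x a 300 s at-line analyzer cycle time
        transport_s = 10.0                # pipe volume / flow to the sample point
        small_lags_s = 2.0 + 1.5 + 0.5    # thermowell lag + damping + valve lag as dead time

        total_deadtime_s = pid_execution_s + analyzer_s + transport_s + small_lags_s
        print(total_deadtime_s)           # 464.5 s, dominated by the analyzer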

     

    Figure 2: Automation System and Process Dynamics in a Control Loop

     

    The dead time from backlash and stiction is insidious in that it does not show up for step changes in signal. The dead time is the dead band or resolution limit divided by the signal rate of change.
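    A worked example of that relationship, with illustrative numbers:

        deadband_pct = 0.5          # valve dead band, % of signal
        signal_rate_pct_s = 0.1     # PID output ramp rate, %/s

        print(deadband_pct / signal_rate_pct_s)   # 5.0 s of added dead time
        # A step test jumps through the dead band all at once, which is why
        # this dead time does not show up for step changes in signal.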

    Simulations typically do not have enough dead time because volumes are perfectly mixed and the dead time is missing from transportation delays (particularly from dip tubes and piping to sensors or sample lines to analyzers), valve response time, backlash, stiction, sensor lags, thermowell lags, transmitter damping, wireless update times, and analyzer cycle times.

    For pH applications with extremely large and nonlinear process gains due to strong acids and strong bases, there is a particularly great need to minimize the total loop dead time. This reduces the pH excursion on the titration curve, reducing the extent of the operating point nonlinearity seen. Poor mixing, piping design, valve response, and coated, dehydrated or old electrodes can introduce incredibly large dead times, killing a pH loop. My early specialty being pH control sensitized me to making sure the total system design, including equipment, agitation, and piping, would enable a pH loop to do its job by minimizing dead time. For much more on the implications for total system design from a very experience-oriented view, see the ISA book Advanced pH Measurement and Control.

    PID Gain

    The proportional mode provides a contribution to the PID output that is the error multiplied by the PID gain. Except for dead time dominant loops, humans tend not to use enough proportional action due to the perceived bad aspects given in the reasons listed below to decrease PID gain. For more on the missed opportunities see the Control Talk Blog Surprising Gains from PID Gain.

    Reasons to increase PID gain:

    1. Reduce peak and integrated errors from load disturbances.
    2. Add negative feedback action missing in the process (e.g., near and true integrating and runaway processes) and provide the needed overshoot of the final resting value of the PID output.
    3. Provide sense of direction since a decrease in error reverses direction of PID output.
    4. Reduce the dead time from dead band (e.g., backlash) and resolution (e.g., stiction).
    5. Reduce limit cycle amplitude from dead band in loops with two integrators.
    6. Eliminate oscillation from poor actuator and positioner sensitivity.
    7. Make setpoint response faster for batch operations potentially reducing cycle time.
    8. Make secondary loop faster in rejecting disturbances and meeting primary loop demands.
    9. Stop slow oscillations in near and true integrating and runaway processes (product of gain and reset time must be greater than twice the inverse of integrating process gain).
    10. Get the right valve open as the PID output approaches the split range point.

    Reasons to decrease PID gain:

    1. Reduce abrupt responses in dead time dominant loops, and to setpoint changes in all loops, that upset operators and other loops. Setpoint rate limits on the analog output or secondary loop setpoint, and external-reset feedback (dynamic reset limit) with the manipulated variable as BKCAL_IN, can smooth out these changes without needing to retune the PID.
    2. Reduce resonance and interaction caused by high gain. Making the faster loops faster, and eliminating oscillations by better tuning and better valves, may alleviate this concern.
    3. Increases in process gain, valve gain, or dead time, or decreases in the primary time constant, necessitate a lower PID gain. In general, a gain margin of 6 or more is advised and is achieved by a closed loop time constant or arrest time of 3 or more times the dead time.
    4. Eliminate overshoot of PID output final resting value in balanced and dead time dominant processes. While this is useful here, overshoot is needed in other processes.
    5. Reduce amplification of noise. Better solution is reducing source of noise and using a judicious filter that is less than 10% of dead time. Note that fluctuations in PID output smaller than resolution or sensitivity limit do not affect the process.
    6. Reduce faltering as the process variable approaches setpoint. Too much proportional action will momentarily halt the approach until integral action takes over and resumes the approach.

    PID Reset Time

    The integral mode provides a contribution to the PID output that is the integral of the error multiplied by the PID gain and divided by the reset time. External-reset feedback (dynamic reset limit) suspends this action (further changes in output from the integral mode) when the manipulated variable stops changing. Except for dead time dominant loops, humans tend to use too much integral action due to the perceived good aspects given in the reasons listed below to decrease reset time.

    Reasons to increase PID reset time (decrease reset action):

    1. Reduce the lack of a sense of direction that causes continual change in output for the same error sign.
    2. Reduce continual movement since reset is never satisfied (the error is never exactly zero).
    3. Reduce overshoot of setpoint.
    4. Prevent SIS and relief activation from high pressure or high temperature.
    5. Stop slow oscillations in near and true integrating and runaway processes (product of gain and reset time must be greater than twice the inverse of integrating process gain).
    6. Get the right valve open as the PID output approaches the split range point.

    Reasons to decrease PID reset time (increase reset action):

    1. Reduce integrated errors from load disturbances.
    2. Eliminate offset from setpoint.
    3. Keep a valve from opening till the setpoint is reached. This is sometimes stated as an objective for surge control, but it requires a larger margin between the PID setpoint and the actual surge curve, resulting in less efficient operation till user flows are increased, which closes the surge valve. A better solution is a smaller margin and the use of PID gain action to preemptively open the surge valve.
    4. Provide a gradual response with less reaction to noise.
    5. You have a dead time dominant process.
    6. You love Internal Model Control.

    PID Rate Time

    The derivative mode provides a contribution to the PID output that is the derivative of the error (PID on error structure) or the derivative of the process variable (PI on error and D on PV structure) multiplied by the PID gain and the rate time. It provides an anticipatory action, basically projecting the value of the PV one rate time into the future based on its rate of change. Some plants have mistakenly decided not to use derivative action anywhere due to the perceived bad aspects given in the reasons listed below to decrease rate time. Good tuning software could have prevented this bad practice of only allowing PI control (rate time always zero).
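    The anticipatory action can be expressed as a one-line projection, sketched below with made-up batch temperature numbers:

        def projected_pv(pv, dpv_dt, rate_time_s):
            """PV projected one rate time into the future, the basis of derivative action."""
            return pv + rate_time_s * dpv_dt

        # Batch temperature rising at 0.05 degC/s with a 60 s rate time:
        print(projected_pv(pv=88.0, dpv_dt=0.05, rate_time_s=60.0))   # acts as if PV were 91.0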

    Reasons to increase PID rate time:

    1. Provide anticipation of approach to setpoint reducing overshoot.
    2. Cancel out effect of secondary time constant.
    3. Reduce the dead time from backlash and stiction.
    4. Prevent runaway reactions.

    Reasons to decrease PID rate time:

    1. Reduce abrupt responses in dead time dominant loops, and to setpoint changes in all loops, that upset operators and other loops. Setpoint rate limits on the analog output or secondary loop setpoint, and external-reset feedback (dynamic reset limit) with the manipulated variable as BKCAL_IN, can smooth out these changes without needing to retune the PID.
    2. Prevent oscillations from rate time exceeding reset time for ISA Standard Form.
    3. Reduce amplification of noise. Better solution is reducing source of noise and using a judicious filter that is less than 10% of dead time. Note that fluctuations in PID output smaller than resolution or sensitivity limit do not affect the process.
    4. Reduce kick on setpoint change. A better solution is to use PID structure to eliminate derivative action on setpoint change (e.g., PI on error and D on PV).

     

     

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

     

     

    About the Author
     Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg:
    LinkedIn

     

    • 15 Aug 2018

    How to Measure pH in Ultra-Pure Water Applications

    The post How to Measure pH in Ultra-Pure Water Applications first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

     

    Danny Parrott is an instrumentation and controls specialist at Spallation Neutron Source. Danny is a detail-oriented instrumentation and controls professional experienced in the areas of electrical, electronics and controls specification, installation, maintenance, and project planning. Danny’s question is important in dealing with the many challenges for reliable and accurate pH measurement in ultra pure water and more generally in streams with exceptionally low conductivity.

    Danny Parrott’s Question

    What are some opinions, thoughts, or practical experience relating to pH measurements in ultra-pure water applications?

    Greg McMillan’s Answer

    Ultra-pure water applications pose special problems because of the exceptionally low conductivity of the fluid from the absence of ions. The consequences are extreme sensitivity to fluid velocity and spurious ions, unstable reference junction potentials, sample contamination, and loss of electrical continuity between the reference and measurement electrodes. The functional electrical diagram showing resistances and potentials provides an insightful view of nearly all of the sources of problems with pH measurements in general. Ultra-pure water and process fluids with an exceptionally small, near zero conductivity threaten the continuity of the electrical circuit between the reference and measurement electrode terminals at the transmitter through an extraordinarily large electrical resistance (R8 in Figure 1, a pH electrode functional electrical circuit diagram for a combination electrode that is a great way of recognizing the many potential sources of error in a pH measurement).

     

    Figure 1: pH Electrode Functional Electrical Circuit Diagram

     

    The solution for online measurements is to use a flowing junction reference electrode to provide a small fixed liquid junction potential in a low flow assembly for a combination electrode. The combination electrode assembly ensures a short fixed distance path of reference electrolyte to the measurement electrode and a small fixed fluid velocity. The assembly also provides mounting of an electrolyte reservoir that sustains a small fixed reference junction flow as shown in Figure 2. The flow of reference electrode electrolyte reduces the fluid velocity and electrical resistance (R8) in the fluid path and provides a much more constant liquid junction potential (E5) that does not jump or shift due to the appearance of spurious ions. The resistances and potentials in the diagram provide a wealth of information. The flow assembly also has a special cup holder for calibration with buffer solutions. A solution ground connection reduces the effect of ground potentials. Temperature compensation must be accurate and fast.

     

    Figure 2: Low Flow Assembly With Flowing Reference Junction for Low Conductivity pH Applications

     

    The pH measurement calibration needs to be checked and adjusted before installation and periodically thereafter by inserting the electrode(s) in buffer solutions. Making a pH measurement of a sample is very problematic because of contamination from glass beaker ions, absorption of carbon dioxide creating carbonic acid, and accumulation of electrolyte ions from the flowing junction. The sample volume needs to be large and the measurement made quickly to reduce the effect of accumulating ions. A closed plastic sample container is employed to minimize contamination. The same type of electrode(s) as in the online measurement should be used for the sample measurement so that reference junction potentials are consistent. Since these sample pH measurement requirements are rarely satisfied, buffers instead of process samples should be used for calibration checks.

     

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

     

    In exceptionally low conductivity process fluids, there is often not enough water content to keep the glass measurement electrode hydrated. Also, the activity of the hydrogen ion is severely decreased by the lack of water, and the extremely different dissociation constant of a non-aqueous solvent can cause a pH range that is outside of the normal 0 to 14 pH range. For these applications, a flowing reference electrode is also needed, but an automatically retractable insertion assembly is useful to periodically retract, flush, soak and calibrate the electrodes, reducing process exposure time and hydrating/rejuvenating the measurement electrode’s glass surface. For more on the challenges of semi-aqueous pH measurements see the Control Talk article The wild side of pH measurement. For a much more complete view of what is needed for pH applications, see the ISA book Advanced pH Measurement and Control.

    For pH measurements used for process control, I recommend three pH assemblies and middle signal selection. Lower lifecycle costs from less frequent and more effective maintenance, plus better process performance, more than pay for the cost of the three measurements. Middle signal selection will inherently ignore a single measurement failure of any type and dramatically reduce the effect of spikes, noise, and the consequences of slow or insensitive glass electrodes. The middle selection also eliminates unnecessary calibration checks and provides much more intelligent knowledge of electrode performance, enabling the optimum time for calibration and replacement.

     

    To download a free PDF excerpt from Advanced pH Measurement and Control, click here.

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Process Management), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

     

    About the Author
     Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg:
    LinkedIn

     

    • 16 Jul 2018

    How to Optimize PID Controller Settings and Options

    The post How to Optimize PID Controller Settings and Options first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

     

    The following discussion is based on the ISA Mentor Program webinar recordings of the three-part series on PID options and solutions. The webinar is discussed in the Control Talk blog post PID Options and Solutions – Part 1 and the post PID Options and Solutions – Parts 2 and 3. Since the following questions from one of our most knowledgeable recent protégés Adrian Taylor refer to slide numbers, please open or download the presentation slide deck ISA Mentor Program WebEx PID Options and Solutions.

     

    Figure 1: ISA Standard Form (see slide #42 from the presentation PDF)

     

    Figure 1 depicts the only known time domain block diagram for the ISA Standard Form with eight different PID structures set by setpoint weight factors, and the positive feedback implementation of the integral mode that enables true external-reset feedback (ERF). Many capabilities, such as deadtime compensation, directional move suppression, and the elimination of oscillations from deadband, poor resolution, poor sensitivity, wireless update times, and analyzer sample times, are readily achieved by turning on ERF.
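    A conceptual sketch of the positive feedback form of the integral mode is shown below. This is not any vendor's implementation: the integral contribution is a first-order filter, with a time constant equal to the reset time, acting on the external-reset signal (ideally the actual valve position or secondary PV). If the valve stops moving, the filter output stops changing and integral action is suspended.

        def pid_erf_step(error, pv_rate, filt_prev, ext_reset, Kc, reset_s, rate_s, dt):
            """One execution of an ISA-form-like PID with positive feedback integral."""
            pd = Kc * (error - rate_s * pv_rate)                 # P on error, D on PV
            alpha = dt / (reset_s + dt)
            filt = filt_prev + alpha * (ext_reset - filt_prev)   # positive feedback filter
            return pd + filt, filt

        # Wiring ext_reset back to the PID's own output gives ordinary PI(D);
        # wiring it to BKCAL_IN from the valve or secondary loop gives true ERF.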

    Adrian Taylor’s Question 1:

    Slide 9 details a Y factor, which varies between 0.28 and 0.88, for converting the faster lags to apparent dead time. You mentioned this Y factor can be looked up on charts given by Ziegler and Nichols (Z&N); are you able to provide me with a copy of these charts or point me to where I can get a copy?

    Greg McMillan’s Answer 1:

    The chart and equations to compute Y are on page 137 of my Tuning and Control Loop Performance Fourth Edition (Momentum Press 2015). The original source is Ziegler, J. G., and Nichols, N. B., “Process Lags in Automatic Control Circuits,” ASME Transactions, 1943.

    Adrian Taylor’s Question 2:

    On slide 17 you give recommendations for setting of Lambda, I’m presuming these recommendations are for integrating and near integrating systems only and wondered what your recommendations are for setting of Lambda when using the self-regulating rules?

    Greg McMillan’s Answer 2:

    I was focused on near-integrating and integrating processes since these are the more important ones in the type of plants I worked in, but the recommendations for Lambda apply to self-regulating processes as well. Lambda being a fraction of the deadtime for extremely aggressive control is of theoretical value only, to show how good PID can be if you are publishing a paper. I would never advocate a Lambda less than one deadtime unless you are absolutely confident that you exactly know the dynamics, that they never change, and that you can tolerate some oscillation. Lambda being a multiple of the deadtime for robust control is of practical value for dealing with changing or unknown dynamics and providing a smooth response.

    Adrian Taylor’s Question 3:

    On slide 17 where the recommended Lambda settings are given, it recommends a Lambda of 3 and 6 respectively for adverse changes in loop dynamics of less than 5 and 10. What do 5 and 10 refer to?

    Greg McMillan’s Answer 3:

    I should have said “factor of 5” and “factor of 10” instead of just “5” and “10,” respectively, in the statement on robustness. These factors are actually gain margins. I also should not have rounded up to a factor of 10 and instead said a factor of 9 for a Lambda of 6x deadtime. While this specifically indicates what increase in the self-regulating or integrating process gain, as a factor of the original, can occur without the loop going unstable, it can be extended to give an approximate idea of how much other adverse changes in loop dynamics can be tolerated if the process gain is constant. For example, the factor applies roughly to the increase in total loop deadtime for deadtime dominant self-regulating processes, and to the decrease in process time constant for lag dominant processes, that would cause instability. This extension assumes Lambda tuning where the Lambda in every case is a factor of deadtime, with the reset time being proportional to the process time constant for deadtime dominant processes and proportional to the deadtime for lag dominant processes. The reasoning can be seen in the equations for PID gain and reset time on slides 30 and 32 without my minimum limits on reset time.
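    To make the Lambda-to-deadtime tradeoff concrete, here is a sketch using commonly published Lambda tuning rules (not necessarily the exact slide equations); the open loop gain, time constant, and dead time are illustrative assumptions.

        def lambda_self_regulating(Kp, tau_s, theta_s, lam_s):
            Kc = tau_s / (Kp * (lam_s + theta_s))   # controller gain
            Ti = tau_s                              # reset time
            return Kc, Ti

        def lambda_integrating(Ki, theta_s, lam_s):
            Ti = 2.0 * lam_s + theta_s
            Kc = Ti / (Ki * (lam_s + theta_s) ** 2)
            return Kc, Ti

        for mult in (1, 3, 6):
            lam = mult * 30.0   # Lambda as a multiple of a 30 s dead time
            print(mult, lambda_self_regulating(Kp=1.0, tau_s=300.0, theta_s=30.0, lam_s=lam))
        # The controller gain drops as Lambda grows, buying gain margin for
        # changing or unknown dynamics.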

    Adrian Taylor’s Question 4:

    On slide 17 there is a statement “Adverse changes are multiplicative…”. I didn’t quite understand the context of this statement if you are able to expand a little more? (Probably goes hand in hand with question 3 above).

    Greg McMillan’s Answer 4:

    An increase in process gain by a factor of 2 will result in a combined factor of 9 when combined with other adverse changes, such as a decrease in the process time constant by a factor of 4.5 for lag dominant self-regulating processes, or an increase in loop dead time by a factor of 4.5 for deadtime dominant processes.

     

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

     

    Adrian Taylor’s Question 5:

    On slide 31 when calculating arrest time we use a value Δ% which is described as the maximum allowable level change (%). Just to be sure I understand the value to be used here… If I had a setpoint high limit of 80% and the tank overflow is at 100% then the value of Δ% would be equal to 100-80=20%?

    Greg McMillan’s Answer 5:

    Yes, if the high level alarm is above the high setpoint limit. Δ% is the maximum allowable deviation, which is often the difference between an operating point and the point where there is an alarm.

    Adrian Taylor’s Question 6:

    On slide 31 when calculating arrest time we also use a value Δ% described as the maximum allowable PID output change. Is this just simply the difference between the output high and low limits? So if the output high limit was 100% and the output low limit was 0%, then the value of Δ% would be equal to 100-0=100%?

    Greg McMillan’s Answer 6:

    Yes. This term in the equation is counterintuitive but results from the derivation of the equation in Tuning and Control Loop Performance Fourth Edition using the minimum integrating process gain.
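    Since slide 31 itself is not reproduced here, the sketch below only shows one plausible form of the arrest-time constraint consistent with these two answers; the function name and the exact formula are assumptions, not the book’s derivation.

    ```python
    def max_arrest_time(delta_pv_max, delta_co_max, Ki_min):
        """Plausible form of a slide-31 style constraint (an assumption,
        not the book's exact equation): the largest Lambda (arrest time)
        that keeps the level inside the allowable deviation when the
        disturbance is as large as the allowable PID output change.
        delta_pv_max: max allowable level change (%), e.g. 100 - 80 = 20
        delta_co_max: max allowable PID output change (%), e.g. 100 - 0 = 100
        Ki_min: minimum integrating process gain (%/sec per % of output)"""
        return delta_pv_max / (Ki_min * delta_co_max)
    ```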

    Adrian Taylor’s Question 7:

    I am going to purchase a copy of your ‘Tuning and Control Loop Performance’ book shown at the end of the presentation. I am curious if you think it is also worth purchasing the tuning rules pocket book, or if all the content of the pocket book is also contained in the larger book I am already purchasing?

    Greg McMillan’s Answer 7:

    The ‘Tuning and Control Loop Performance’ book is much more complete and explanatory but can be overwhelming. The pocket guide provides a more concise and focused way of knowing what to do.

    Adrian Taylor’s Question 8:

    At the end of the webinar a question was posed about tuning loops where it is not possible to put the loop in manual. I am seeking more specifics, based on my notes, on the procedure:

    Greg McMillan’s Answer 8:

    If you cannot put the loop in manual or do not have software to identify the loop dynamics, a closed loop procedure using your notes to give approximate tuning that keeps the loop in automatic is as follows (the key steps are restated as a code sketch after the list):

    1. Increase the reset time by a factor of 100 or more to effectively disable integral action.
    2. For the worst case, where the deadtime and process gain are largest (often at the lowest production rate), increase the PID gain in increments. At each increment, change the setpoint to get things moving, and find the value of PID gain that starts to give about 1/4 amplitude oscillations (similar to the Ziegler-Nichols ultimate oscillation method of equal amplitude oscillations, but with less risk and upset).
    3. Note the value of PID gain that just starts to give oscillations (e.g., 1/4 amplitude decay can be the goal, realizing more decay gives more robustness). Use this value for aggressive control, or 1/2 and 1/4 of this value to give a smooth response and a gain margin of about 5 and 9, respectively.
    4. Note the period of the damped oscillations and use this value to determine the reset time.
    5. This roughly corresponds to Ziegler-Nichols tuning but with a higher reset time to give more phase margin, and the option to cut the PID gain by 1/2 or 1/4 to provide more gain margin.
    6. If you add derivative action (e.g., a rate time of 10% of the period), you can halve the reset time, making sure the reset time is greater than 4 times the rate time for the ISA Standard Form.
    7. Test the tuning by momentarily putting the PID in manual and making a step change in the PID output.
    8. To prevent overshoot in the setpoint response, use a PID structure of PD on PV and I on error. If you need a faster setpoint response, use a 2 Degrees of Freedom (2DOF) structure starting with a Beta of about 0.5 and a Gamma of about 0.2. Increase these values for faster response.
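    As promised above, steps 3 through 6 can be restated as a small helper. This is my paraphrase of the procedure, not a published rule; the robustness factors come from step 3 and the names are assumptions.

    ```python
    def closed_loop_tuning(Kc_osc, damped_period, aggressiveness="smooth",
                           use_derivative=False):
        """Turn the closed-loop test results into PID settings (ISA
        Standard Form), following the procedure above.
        Kc_osc: PID gain that just starts ~1/4 amplitude damped oscillations
        damped_period: period of the damped oscillations (sec)"""
        # Step 3: back off the gain for robustness
        # (gain margins of roughly 1.8, 5 and 9, respectively).
        factors = {"aggressive": 1.0, "smooth": 0.5, "very_smooth": 0.25}
        Kc = Kc_osc * factors[aggressiveness]
        # Step 4: base the reset time on the damped oscillation period
        # (longer than Ziegler-Nichols, for more phase margin).
        Ti = damped_period
        Td = 0.0
        if use_derivative:
            # Step 6: rate time ~10% of the period allows halving the
            # reset time, keeping Ti >= 4*Td for the ISA Standard Form.
            Td = 0.1 * damped_period
            Ti = max(0.5 * Ti, 4.0 * Td)
        return Kc, Ti, Td
    ```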

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article “Enabling new automation engineers” for candid comments from some of the original program participants. See the Control Talk column “How to effectively get engineering knowledge” with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column “How to succeed at career and project migration” with protégé Bill Thomas on how to make the most out of yourself and your project.

    Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Process Management), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (Process Control Leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (Senior Instrumentation / Electrical Specialist at D.M.W Instrumentation Consulting Services, Ltd.).
    • 14 Jul 2018

    How to Prevent Acceleration, Jumping and Oscillation of Unstable Processes

    Inverse response, negative resistance, positive feedback and discontinuities can cause processes to jump, accelerate and oscillate, confusing the control system and the operator. Not properly addressing these situations can result in equipment damage and plant shutdowns, besides the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive.

    We can appreciate how positive feedback causes problems with sound systems. We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control.

    The compressor characteristic curves from the compressor manufacturer, which plot compressor pressure rise versus suction flow, show for each speed or suction vane position a curve of decreasing pressure rise whose slope magnitude increases as the suction flow increases in the normal operating region. The pressure rise consequently decreases more as the flow increases, opposing additional increases in compressor flow and creating a positive resistance to flow. Not commonly seen is that the characteristic curve slope to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve; as the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow. When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign again, creating negative resistance. The result is a jump in about 0.03 seconds to negative flow across the region of negative resistance, a slower transition along the positive resistance to zero flow, then a jump in about 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance back to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds.

    The following plot of a pilot plant compressor characteristic for a single speed shows the path 2 along the curve 1. When the operating point reaches point B, where the compressor characteristic curve slope is zero, the operating point jumps to point C due to the negative resistance. This jump corresponds to the precipitous drop in flow that signals the start of the surge cycle and the subsequent reversal of flow (negative acfm). After this jump to point C, the operating point follows the compressor curve from point C to point D as the plenum volume is emptied by the reverse flow. When the operating point reaches point D, where the compressor characteristic slope is again zero, the operating point jumps to point A due to negative resistance. If the surge valve is not opened, the operating point walks again from A to B, starting the whole oscillation all over again.

    Compressor Surge Path and Characteristic Curve

    Once a compressor gets into surge, the very rapid jumps and oscillations confuse the PID controller. Even a very fast PID and control valve are not fast enough. Consequently, the oscillation persists until an open loop backup holds open the surge valves until the operating point is sustained well to the right of the surge curve for about 10 seconds, at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID plus an open loop backup that detects a zero slope indicating an approach to surge, or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point.
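    To make the open loop backup logic concrete, here is a crude sketch. The callables, thresholds and sample period are all assumptions, and a real anti-surge package is far more involved.

    ```python
    import time

    def open_loop_backup(read_flow, read_slope, set_surge_valve,
                         dip_rate_limit=-50.0, hold_time=10.0, period=0.05):
        """Crude sketch of an open loop backup for surge protection.
        read_flow / read_slope: callables returning suction flow (%) and a
        locally estimated slope of the compressor characteristic curve;
        set_surge_valve: callable taking a 0-100% valve opening.
        All thresholds are illustrative."""
        last_flow = read_flow()
        while True:
            flow = read_flow()
            dip_rate = (flow - last_flow) / period   # %/sec
            # Trip on a precipitous drop in flow (actual surge) or on the
            # characteristic curve slope reaching zero (incipient surge).
            if dip_rate < dip_rate_limit or read_slope() <= 0.0:
                set_surge_valve(100.0)  # hold the surge valve wide open
                time.sleep(hold_time)   # until well right of the surge curve
                # ...then bumplessly hand back to the PID (not shown).
            last_flow = flow
            time.sleep(period)
    ```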

    The same shape, but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users.

    A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured. 

    Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat of reaction. However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential, where the time constants and deadtime associated with heat transfer in the cooling surface, the thermowell and the loop response are much less than the positive feedback time constant. Derivative action must be maximized and integral action minimized. In some cases a proportional plus derivative controller is used.

    Positive feedback can also occur when parallel heat exchangers have a common process fluid input, each with an outlet temperature controller whose setpoint is close to the boiling point or to a temperature resulting in vaporization of a component in the process fluid. Each temperature controller manipulates a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating; bubbles form and expand back into the exchanger, increasing the back pressure and hence further decreasing the process flow. The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration, and possibly damage at any discontinuity in its path from slugs of water. When all of the water is pushed out of the hot exchanger, its temperature drops, drawing feed away from the cold heat exchanger and causing it to overheat, repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in one flow do not affect another, and a lower temperature setpoint.

    Acceleration in the response of a process variable is also seen as the pH approaches neutrality in a strong acid and strong base system, due to the increase in process gain by a factor of 10 for each pH unit closer to neutrality (e.g., 7 pH). The result is a limit cycle between the steep portion of the titration curve and the flat portions where the slope, and hence the process gain, is much smaller. The solution is to use signal characterization, excellent mixing, and very precise reagent valves.

    Inverse response from changes in phase, commonly seen in boiler drum level control, occurs when an increase in cold feedwater flow causes a collapse of bubbles in the downcomers, making the drum level shrink as liquid in the drum falls back down into the downcomers. In the opposite direction, a decrease in cold feedwater flow causes a formation of bubbles in the downcomers, making the drum level swell as liquid rises up from the downcomers into the drum. Preheating the feedwater can greatly reduce shrink and swell. The control solution is normally a feedforward of steam flow to the feedwater flow, with less feedback action, in a setup called three element drum level control. However, the shrink and swell can be very large for drums made too small by a misguided attempt to save capital costs, or by pushing plant capacity beyond the original design. In these cases, the sign of the very start of the feedforward signal change is reversed and then decayed out so the correction ends up in the right direction for the material balance. This counterintuitive action helps prevent a level shutdown but must be very carefully monitored. A warmer feedwater can make this the wrong action.

    Limit cycles also develop from resolution limits, typically as a result of stiction in control valves, but they can also originate from the input cards that change the speed of variable frequency drives (VFDs). Believe it or not, the standard VFD input card, used maybe even to this day, has a resolution of only about 0.4%. Resolution limits create limit cycles that cannot be stopped unless all integrating action is removed from the process and controllers. The limit cycle period can be reduced by increasing the controller gain, but the amplitude is set by the resolution limit and the process gain.

    If there are two or more sources of integrating action in a control loop, limit cycles also develop from deadband, typically as a result of backlash in control valves, but deadband can also originate from the deadband settings in the setup of variable frequency drives (VFDs) or be configured in the split range point of controllers. The limit cycle period and amplitude can be reduced by increasing the controller gain.

    Many processes have integrating action. Positioners may mistakenly have integral action. Most controllers have integral action, but it can be suspended by an integral deadband or by the use of external-reset feedback (dynamic reset limit) where there is a fast readback of the actual valve position or VFD speed. For a near-integrating, true integrating or runaway process response there is a PID gain window, where oscillations result from too low as well as too high a PID gain. The low PID gain limit increases as integral action increases (reset time decreases).

    To summarize, the best way to eliminate oscillations is a design that eliminates negative resistance, inverse response, positive feedback and discontinuities. When this cannot provide the total solution, operating points may need to be restricted, the controller gain increased, and integral action decreased or suspended. Not covered here are the oscillations due to resonance and interaction. In these situations, better pairing of controlled and manipulated variables is the first choice. If this is not possible, see if the faster loop can be made faster and the slower loop slower, so that the closed loop response times of the loops differ by a factor of five or more. The suspension of integral action, best done by external-reset feedback, can also help. The same rule and solution work for cascade control. If pairing and tuning do not solve the interaction problem, then decoupling via feedforward of one controller output to the other is needed, or moving on to model predictive control.

    If this gives you a headache from concerns raised about your applications, suspend thinking about the problems and use creativity and better tuning when you can actually do something.

    • 9 Jul 2018

    Webinar Recording: Feedforward and Ratio Control

    The post Webinar Recording: Feedforward and Ratio Control first appeared on the ISA Interchange blog site.

    This educational ISA webinar on feedforward and ratio control was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Feedforward control and ratio control that preemptively correct for load disturbances or changes in a leader flow are greatly underutilized, due to a lack of understanding of how to configure the control and determine the parameters needed. This webinar provides key insights on how feedforward control often simplifies to ratio control. It also explains how to identify the parameters so that the feedforward or ratio control correction does not arrive too late or too soon, and so the correction has the right sign and value to cancel out the load disturbance or achieve the right ratio to the leader flow.

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

    • 12 Jun 2018

    Maximizing Synergy between Engineering and Operations

    The post, Maximizing Synergy between Engineering and Operations, first appeared on ControlGlobal.com's Control Talk blog.

    The operator is by far the most important person in the control room, having the most intimate knowledge of and “hands on” experience with the process. Engineers who are most successful with process improvements realize they need to sit with operators and observe what they are doing to deal with a variety of situations. Process engineers tend to recognize this need more than automation engineers. Improvements in operator interfaces, alarms, measurements, valves, and control systems are best accomplished by a synergy of knowledge gained in meetings between research, design, support, maintenance and operations, where each talks about what they think are the problems and opportunities. ISA Standards and virtual plants can provide a mutual understanding in these discussions.

    The most successful process control improvement (PCI) initiative at Monsanto and Solutia used such discussions, with some preparatory work on what the process is actually doing and is capable of doing. An opportunity sizing detailed the gaps between current and potential performance, with potential performance estimated by identifying the best performance found from cost sheets and from a design of experiments (DOE), most often done in a virtual plant due to increasingly greater limitations on such experimentation in an actual plant. After completion of the opportunity sizing, a one or two day opportunity assessment was held, led by a process engineer, with input sought and given by operations, accounting and management, marketing, maintenance, field and lab analyzer specialists, instrument and electrical engineers, and process control engineers. Marketing provided the perspective and details on how the demand and value of different products were expected to change. This knowledge was crucial for effectively estimating the benefits from increases in process flexibility and capacity. Opportunities for procedure automation and plantwide ratio control, making the transition from one product to another faster and more efficient, were consequently identified. Agreement was sought and often reached on the percentage of each gap that could be eliminated by each potential PCI proposed and discussed during the meeting. A rough estimate of the cost and time required for each PCI implementation was also listed. The ones with the least cost and time requirements were noted as “Quick Hits”. To take advantage of the knowledge, enthusiasm and momentum, the “Quick Hits” were usually started immediately after the meeting or the following week.

    Synergy can be maximized by exploring a wide spectrum of scenarios in a virtual plant that can run faster than real time, discussed in training sessions. Every engineer, scientist, technician, and operator should be involved. If necessary this can be done at luncheons. Any resulting webinar should be recorded, including the discussions. See the Control article “Virtual Plant Virtuosity” and the ISA Mentor Program Webinar Recordings for this and much more in terms of gaining and using operational, process and automation system knowledge.

    Webinar recordings should focus on the level of understanding needed and achievable in the plant, not on what a supplier would like to promote. The ability of operators to learn the essential aspects and principles of process, equipment, and automation system performance should not be underestimated. We want to ensure the operator knows exactly and quickly what is happening, and is able to get at the root cause of a problem, preemptively preventing poor process performance and SIS activation. Operators need to be aware of the severe adverse effect of deadtime. Fortunately, operators want to learn!

    Finding the real causes of potential abnormal situations is critical for improving the HMI, alarm systems, engineering, maintenance and operations. Ideally there should be a single alarm of elevated importance identifying the root cause (e.g., a state based alarm), and the operator should be able to readily investigate the conditions associated with the root cause in the HMI. Maintenance should be able to know what mechanical or automation component to repair or replace. Engineering should design procedure automation (state based control) to automatically deal with the abnormal situation.

    Often the very first abnormal measurement is an indication of the root cause. However, the abnormal condition should be upstream, and the measurement of the abnormal condition should be faster than the measurements of other problems that occur as a consequence or coincidence. This is a particular concern for temperature, because thermowell lags can be 10 to 100 seconds depending upon fit and velocity. For pH, the electrode lags can range from 5 to 200 seconds depending upon glass age, dehydration, fouling and velocity. There is also the deadtime associated with any transportation delay to the sensor. Finally, an output correlated with an input is not necessarily a cause and effect relationship. I find process analysis, some form of fault tree diagram, and investigating relevant scenarios in a virtual plant to be most useful.

    Getting useful knowledge shared is the biggest obstacle to success. The biggest obstacle can become the biggest achievement.

    • 6 Jun 2018

    Webinar Recording: Strange but True Process Control Stories

    The post Webinar Recording: Strange but True Process Control Stories first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Greg McMillan presents lessons learned the hard way during his 40-year career, through concise “War Stories” of mistakes made in the field. Many of these mistakes are still being made today with some posing a safety risk, as well as potentially reducing process efficiency or capacity.

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

    • 23 Apr 2018

    Webinar Recording: Temperature Measurement and Control

    The post Webinar Recording: Temperature Measurement and Control first appeared on the ISA Interchange blog site.

    This educational ISA webinar on temperature measurement and control was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Temperature is the most important common measurement, critical for process efficiency and capacity, because it affects not only energy use but also production rate and quality. Temperature plays a critical role in the formation, separation, and purification of product. Here we see how to get the most accurate and responsive measurements and the best control for key unit operations.

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

    • 11 Apr 2018

    When is an Automation System too Slow and too Fast?

    The usual concern is whether an automation system is too slow, yet there are some applications where an automation system is disruptive by being too fast. Here we look at what determines whether a system should be faster or slower, what the limiting factors are, and thus the solution to meeting a speed of response objective. In the process, we will find there are a lot of misconceptions. The good news is that most of the corrections needed are within the realm of the automation engineer’s responsibility.

    The more general case, with possible safety and process performance consequences, is when the final control element (e.g., control valve or variable frequency drive), transportation delay, sensor lag(s), transmitter damping, signal filtering, wireless update rate or PID execution rate is too slow. The question is what the criteria and priorities are in terms of increasing the speed of response.

    The key to understanding the impact of slowness is to realize that the minimum peak error and minimum integrated absolute error are proportional to the deadtime and to the deadtime squared, respectively. The exception is deadtime dominant loops, which basically have a peak error equal to the open loop error (the error if the PID were in manual) and thus an integrated error that is proportional to the deadtime. It is important to realize that this deadtime is not just the process deadtime but the total loop deadtime: the summation of all the pure delays and the equivalent deadtime from lags in the control loop, whether in the process, valve, measurement or controller.

    These minimum errors are only achieved by the aggressive tuning seen in the literature but not used in practice, because of the inevitable changes and unknowns concerning gains, deadtime, and lags. There is always a tradeoff between minimization of errors and robustness. Less aggressive and more robust tuning, while necessary, results in a greater impact of deadtime, in that the gain margin (ratio of ultimate gain to PID gain) and the phase margin (the additional phase lag, in degrees, that can be tolerated before instability) are achieved by setting the tuning to be a greater factor of the deadtime. For example, to achieve a gain margin of 6 and a phase margin of 76 degrees, Lambda is set at 3 times the deadtime.

    The actual errors get larger as the tuning becomes less aggressive. The actual peak error is inversely proportional to the PID gain. The actual integrated error is proportional to the ratio of the integral (reset) time to the PID gain. Consider the use of the Lambda integrating process tuning rules for a near-integrating process, where Lambda is an arrest time. If you triple the deadtime used in setting the PID gain and reset time, to maintain a gain margin of about six and a phase margin of 76 degrees, you decrease the PID gain and increase the reset time each by about a factor of six (two times the factor of three increase in deadtime), increasing the actual integrated error by a factor of thirty-six.

    Consequently, how fast automation system components need to be depends on how much they increase the total loop deadtime. The components to make faster are first chosen based on ease, such as decreasing the PID execution and wireless update intervals, signal filtering and transmitter damping, assuming these are more than ten percent of the total loop deadtime. Next you need to decrease the largest source of deadtime, which may take more time and money, such as a better thermowell or electrode design, location and installation, or a more precise and faster valve. The deadtime from PID and wireless update rates is about 1/2 the time between updates. The deadtime from transmitter damping or sensor lags increases logarithmically from about 0.28 to 0.88 times the lag as the ratio of the lag to the largest open loop time constant decreases from 1 to 0.01. The deadtime from backlash, stiction and poor sensitivity is the deadband or resolution limit divided by the rate of change of the controller output. Fortunately, deadtime is generally easier and quicker to identify than the open loop time constant and open loop gain; a simple deadtime budget is sketched below. See the Control Talk blog “Deadtime, the Simple Easy Key to Better Control.”
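    The rules of thumb in the preceding paragraph can be collected into a simple deadtime budget, sketched below; the function and parameter names are assumptions.

    ```python
    import math

    def lag_to_deadtime(lag, largest_time_constant):
        """Equivalent deadtime of a small lag: rises from ~0.28x to ~0.88x
        the lag as the lag/time-constant ratio falls from 1 to 0.01
        (logarithmic interpolation, per the rule of thumb above)."""
        ratio = max(min(lag / largest_time_constant, 1.0), 0.01)
        return (0.28 + 0.30 * math.log10(1.0 / ratio)) * lag

    def total_loop_deadtime(pure_delays, update_intervals, small_lags,
                            largest_time_constant,
                            deadband_pct=0.0, output_rate_pct_per_sec=None):
        """Rough total loop deadtime from the rules of thumb above.
        pure_delays: transportation and other pure delays (sec)
        update_intervals: PID execution and wireless update intervals (sec)
        small_lags: filter, damping and sensor lags (sec)
        deadband_pct / output_rate_pct_per_sec: deadband or resolution
            limit divided by the rate of change of the controller output."""
        dt = sum(pure_delays)                          # pure delays
        dt += sum(0.5 * t for t in update_intervals)   # ~1/2 the interval
        dt += sum(lag_to_deadtime(g, largest_time_constant)
                  for g in small_lags)                 # equivalent deadtime
        if output_rate_pct_per_sec:                    # backlash/stiction
            dt += deadband_pct / output_rate_pct_per_sec
        return dt
    ```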

    For flow and pressure processes, the process deadtime is often less than one second, making the control system components by far the largest source of deadtime. For compressor, liquid pressure and furnace pressure control, the control valve is the largest source of deadtime even when a booster is added. Transmitter damping is generally the next largest source, followed by the PID execution rate.

    There is a common misconception that the wireless update time should be less than a fraction (e.g., 1/6) of the response time. For the more interesting processes such as temperature and pH, the time constant is much larger than the deadtime; a well-mixed vessel could have a process time constant that is more than 40 times the process deadtime. If you use the criterion of 1/6 of the response time, assuming the best case scenario of a 63% response time, the increase in deadtime from the wireless update rate can be as large as 3 times the original deadtime. Fortunately, wireless update rates are never that slow. Another reason not to focus on response time is that in integrating processes, where there is no steady state, a response time is irrelevant.

    The remaining question is when is the automation system too fast? The example that most comes to mind is when the faster system causes greater resonance or interaction. You want the period of oscillations from less important loops, as seen by the most important loops, to be at least four times the important loop’s ultimate period, to reduce resonance and interaction. Hopefully this is done by making the more important loop faster, but if necessary it is done by making the less important loops slower. A less recognized but very common case of needing to slow down a loop is when it creates a load disturbance to other loops (e.g., a feed rate change). While step changes are what are analyzed in the literature as disturbances, in real applications there are seldom any step changes, due to the tuning of the PID and the response of the valve. This effect can be approximated by applying a time constant to the load disturbance and realizing that the resulting errors are reduced, compared to the step disturbance, by a factor of 1 − e^(−λ/τd), where λ is the Lambda of the affected loop and τd is the disturbance time constant.
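    Written out as code, the attenuation factor at the end of that paragraph is as follows; a minimal sketch, assuming λ is the Lambda of the affected loop and τd the time constant applied to the load disturbance.

    ```python
    import math

    def disturbance_attenuation(lam, tau_d):
        """Factor by which errors shrink relative to a step disturbance
        when the disturbance is filtered by time constant tau_d:
        1 - e^(-lambda/tau_d)."""
        return 1.0 - math.exp(-lam / tau_d)

    # e.g., a disturbance time constant three times Lambda cuts the errors
    # to about 28% of the step-disturbance case:
    # disturbance_attenuation(lam=1.0, tau_d=3.0) ~= 0.28
    ```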

    Overshoot of a temperature or pH setpoint is extremely detrimental to bioreactor cell life and productivity. Making the loop response much slower, by much less aggressive tuning settings and a PID structure of Integral on Error and Proportional-Derivative on Process Variable (I on E and PD on PV), is greatly needed and permitted, because the load disturbances from cell growth rate or production rate are incredibly slow (effective process time constant in days). In fact, fast disturbances are the result of one loop affecting another (e.g., pH and dissolved oxygen control).

    In dryer control, the difference between inlet and outlet temperatures that is used as the inferential measurement of dryer moisture is filtered by a large time constant that is greater than the moisture controller’s reset time. This is necessary to prevent a spiraling oscillation from positive feedback.

    Filters on setpoints are used in loops whose setpoint is set by an operator or a valve position controller to change the process operating point or production rate. Such a filter can provide synchronization in ratio control of reactant flows, maintaining the ability of each flow loop to be tuned to deal with supply pressure disturbances and positioner sensitivity limits. However, a filter on a secondary (lower) loop setpoint in cascade control is generally detrimental, because it slows down the ability of the primary loop to react to disturbances.

    Finally, more controversial but potentially useful is a filter on the pH at the outlet of a static mixer for a strong acid and strong base system controlled in the neutral region. Here the filter acts to average the inevitable, extremely large oscillations due to nearly nonexistent back mixing and the steep titration curve. The result is a happier valve and operator. The averaged pH setpoint should be corrected by a downstream pH loop on a well-mixed vessel that sees a much smoother pH on a much narrower region of the titration curve. A better solution is signal characterization, where the static mixer controlled variable becomes the abscissa of the titration curve (reagent demand) rather than the ordinate (pH). This linearization greatly reduces the oscillations from the steep portion of the titration curve and enables a larger PID gain to be used (see the sketch below). The titration curve need not be very accurate, but it must include the effect of the absorption of carbon dioxide from exposure to air, and the change in dissociation constants, and consequently actual solution pH, with temperature, which is not addressed by a standard temperature compensator that simply addresses the temperature effect in the Nernst equation. You also need to be aware that the pH of process samples, and consequently the shape of the titration curve, can change due to changes in sample liquid phase composition from reaction, evaporation, absorption and dissolution. The longer the time between the sample being taken and titrated, the more problematic these changes are.
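    A minimal sketch of such a signal characterizer follows, assuming the lab titration curve is available as (reagent demand, pH) points; everything here, including the example curve, is illustrative.

    ```python
    import bisect

    def make_ph_characterizer(titration_curve):
        """titration_curve: list of (reagent_demand, pH) points with pH
        ascending. Returns a function mapping measured pH to reagent
        demand (the abscissa of the curve), linearizing the loop."""
        demands, phs = zip(*titration_curve)
        def characterize(ph):
            ph = min(max(ph, phs[0]), phs[-1])   # clamp to the curve range
            i = bisect.bisect_left(phs, ph)
            if i == 0:
                return demands[0]
            # linear interpolation between the bracketing points
            f = (ph - phs[i - 1]) / (phs[i] - phs[i - 1])
            return demands[i - 1] + f * (demands[i] - demands[i - 1])
        return characterize

    # Example with a hypothetical strong acid-strong base curve:
    # cv = make_ph_characterizer([(0.0, 2.0), (0.45, 4.0), (0.5, 7.0),
    #                             (0.55, 10.0), (1.0, 12.0)])
    # cv(6.0) -> reagent demand of about 0.48
    ```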

    • 9 Mar 2018

    Solutions to Prevent Harmful Feedforwards

    The post, Solutions to Prevent Harmful Feedforwards, originally appeared on the ControlGlobal.com Control Talk blog.

    Here we look at applications where feedforward can do more harm than good, and what to do to prevent this situation. This problem is more common than one might think. In the literature we mostly hear how beneficial feedforward can be for measured load disturbances. Statements are made that the limitation is the accuracy of the feedforward, and that consequently an error of 2% can still result in a 50:1 improvement in control. This optimistic view does not take into account process, load and valve dynamics. The feedforward correction needs to arrive in the process at the same point and the same time as the load disturbance. This is traditionally achieved by passing the feedforward (FF) signal through a deadtime block and a lead-lag block (a sketch follows). The FF deadtime is set equal to the load path deadtime minus the correction path deadtime. The FF lead time is set equal to the correction path lag time. The FF lag time is set equal to the load path lag time. If the FF arrives too soon, we create inverse response; if the FF arrives too late, we create a second disturbance. Setting up tuning software to identify and compute the FF dynamics can be challenging. Even more problematic are the following feedforward applications that do more harm than good despite dynamic compensation.
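    A minimal discrete-time sketch of this dynamic compensation, a deadtime block followed by a lead-lag, is shown below; the class and parameter names are assumptions.

    ```python
    from collections import deque

    class FeedforwardCompensator:
        """Deadtime plus lead-lag compensation of a feedforward signal.
        ff_deadtime = load path deadtime - correction path deadtime,
        lead = correction path lag, lag = load path lag (as described
        above); dt is the execution interval in seconds."""
        def __init__(self, ff_deadtime, lead, lag, dt):
            self.dt = dt
            # deadtime block as a fixed-length sample queue
            self.delay = deque([0.0] * max(1, int(round(ff_deadtime / dt))))
            self.lead, self.lag = lead, lag
            self.x_prev = 0.0   # previous delayed input
            self.y_prev = 0.0   # previous filter output

        def update(self, ff_in):
            self.delay.append(ff_in)
            x = self.delay.popleft()   # output of the deadtime block
            # backward-Euler lead-lag: y + lag*dy/dt = x + lead*dx/dt
            a = self.dt / (self.lag + self.dt)
            y = (self.y_prev + a * (x - self.y_prev)
                 + (self.lead / (self.lag + self.dt)) * (x - self.x_prev))
            self.x_prev, self.y_prev = x, y
            return y
    ```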

    (1) Inverse response from the manipulated flow causes an excessive reaction in the opposite direction of the load. The inverse response from a feedwater change can be so large as to cause a boiler drum high or low level trip, a situation that particularly occurs for undersized drums and missing feedwater heaters due to misguided attempts to save on capital costs. The solution here is to use traditional three element drum level control, but added to the traditional feedforward is an unconventional feedforward with the opposite sign that is decayed out over the period of the inverse response. In other words, for a step increase in steam flow, there would initially be a step decrease in the boiler feedwater feedforward, added to the three element drum level controller output that is trying to increase feedwater flow. This prevents shrink and a low level trip from bubbles collapsing in the downcomers due to the increase in cold feedwater. For a step decrease in steam flow, there would be a step increase in the boiler feedwater feedforward, added to the three element drum level controller output that is trying to decrease feedwater flow. This prevents swell and a high level trip from bubbles forming in the downcomers due to the decrease in cold feedwater. A severe problem of inverse response can occur in furnace pressure control when the scale is a few inches of water column and the incoming air being manipulated is not sufficiently heated; the inverse response from the ideal gas law can cause a pressure trip. An increase in cold air flow causes a decrease in gas temperature and consequently a relatively large decrease in gas pressure at the furnace pressure sensor, while a decrease in cold air flow causes an increase in gas temperature and consequently a relatively large increase in gas pressure at the furnace pressure sensor.

    (2) The deadtime in the correction path is greater than the deadtime in the load path. The result is a feedforward that arrives too late, creating a second disturbance and worse control than if there were no feedforward. This occurs whenever the correction path is longer than the load path. An example is distillation column control where the feed load upset stream is closer to the temperature control tray than the corrective change in reflux flow. The solution is to generate the feedforward signal for ratio control based on a setpoint change that is then delayed before being used by the feed flow controller. The delay is equal to the correction path deadtime minus the load path deadtime. The same problem can occur for a reagent injection delay, which often arises with conventionally sized dip tubes and small reagent flows. The same solution applies, in terms of using an influent flow controller setpoint for feedforward ratio control of reagent and delaying the setpoint used by the influent flow controller.

    (3) The feedforward correction makes the response to an unmeasured disturbance worse. This occurs in unit operations such as distillation columns and neutralizers, where the unmeasured disturbance from a feed composition change is made worse by a feedforward correction based on feed flow. Often the feed composition is not measured, and the composition change is large due to parallel unit operations and a combination of flows that become the feed flow. For pH, the nonlinearity of the titration curve increases the sensitivity to feed composition. Even if the influent pH is measured, the pH electrode error or the uncertainty of the titration curve makes a feedforward correction for feed pH do more harm than good for setpoints on the steep part of the curve. If the feed composition change requires a decrease in the manipulated flow and there is a coincidental increase in feed flow that corresponds to an increase in manipulated flow, or vice versa, the feedforward does more harm than good. The solution is to compute the rate of change of manipulated flow required by the unmeasured disturbance and subtract it from the rate of change computed for the feedforward correction, paying attention to the signs of the rates of change. If the rate of change from the unmeasured disturbance opposes and exceeds the computed feedforward rate of change in the manipulated flow, the feedforward rate of change is clamped at zero to prevent making control worse. If the rates of change for the manipulated flow are in the same direction, the magnitude of the feedforward rate of change is correspondingly increased.

    I am trying to see how all this applies in my responses to known and unknown upsets to my spouse.