Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.

Tips for New Process Automation Folks
    • 11 Feb 2019

    Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations

    The post Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Part 3 (the final part) describes simple tuning methods and the PID features that can be used to prevent the oscillations that plague our most important loops and to achieve the desired degree of tightness or looseness in level control. A general procedure is offered, and a block diagram of the most effective PID structure (not shown anywhere else) is given, followed by questions and answers.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 18 Jan 2019

    Missed Opportunities in Process Control - Part 1

    The post, Missed Opportunities in Process Control - Part 1, first appeared on the ControlGlobal.com Control Talk blog.

    While giving guest lectures and labs on PID control to chemical engineering students, I had an awakening as to the much greater than realized disconnect between what is said in the literature and courses and what we need to know as practitioners. The disparity between theory and practice keeps growing because leaders in process control are leaving the stage, and users today are not given the time to explore and innovate or the freedom to publish. Much of what is out there is a distraction at best. I decided to make a decisive pitch, not holding back for the sake of diplomacy. Here is the start of a point-blank, comprehensive list in a six-part series.

    Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.

    1. Recognizing and addressing actual load disturbance location. Most of the literature unfortunately shows disturbances entering at the process output, when in reality disturbances mostly enter as process inputs (e.g., feed flow, composition and temperature changes) passing through the primary process time constant. Thinking of disturbances on the process output leads to many wrong conclusions and mistakes, such as: large primary time constants are bad, tuning can be done primarily for setpoint changes, feedforward and ratio control are not important, and algorithms like Internal Model Control are good alternatives to PID control.
    2. Tuning and tests to first achieve good load disturbance rejection and then good setpoint response. While most of the literature focuses on setpoint response tuning and testing, the first objective should be good load disturbance rejection, particularly in chemical processes. Such tuning generally requires more aggressive proportional action. Testing is simply done by momentarily putting the PID in manual, changing the PID output, and putting the PID back in auto. Tuning should minimize peak and integrated error from load disturbances, taking into account the need to minimize resonance. To prevent overshoot in the setpoint response, a setpoint lead-lag can be used with lag time equal to reset time, or a PID structure of proportional and derivative action on PV and integral action on error (PD on PV and I on E) can be used. If a faster setpoint response is needed, the setpoint lead can be increased to ¼ of the lag time, or a 2 Degrees of Freedom (2DOF) PID structure used with setpoint weight factors for the proportional and derivative modes equal to 0.5 and 0.25, respectively. Rapid changes in signals to valves or secondary loops that upset other loops from a higher PID gain setting can be smoothed by setpoint rate limits on analog output blocks and secondary PIDs and by turning on external-reset feedback (ERF). We will note the many other advantages of ERF and its facilitation of directional move suppression to intelligently slow down changes of manipulated flows in a disruptive direction in subsequent months (hope you can wait). In Model Predictive Control, move suppression plays a key role. Here we can enable it with the additional intelligence of direction without retuning the PID.
    3. Minimum possible peak error is proportional to dead time and actual peak error is inversely proportional to PID gain. Peak error is important to prevent relief, alarm and SIS activation and environmental violation. The ultimate limit to what you can achieve in minimizing peak error is proportional to the total loop dead time. The practical limit as to what you actually achieve is inversely proportional to the product of the PID gain and open loop process gain. The maximum PID gain is inversely proportional to the total loop dead time. These relationships hold best for near-integrating, true integrating and runaway processes.
    4. Minimum possible integrated error is proportional to dead time squared and actual integrated error is proportional to reset time and inversely proportional to PID gain. The integrated absolute error is the most common criterion cited in the literature. It does provide a measure of the amount of process material that is off-spec. The ultimate limit to what you can achieve in minimizing integrated error is proportional to the total loop dead time squared. The practical limit as to what you actually achieve is proportional to the reset time and inversely proportional to the product of the PID gain and open loop process gain. The minimum reset time is proportional, and the maximum PID gain is inversely proportional, to the total loop dead time. These relationships hold best for near-integrating, true integrating and runaway processes.
    5. Detuning a PID can be evaluated as an increase in implied dead time. The relationships cited in items 3 and 4 above can be understood by realizing that a larger than actual total loop dead time is the effect on loop performance of a smaller PID gain and larger reset time setting than needed to prevent oscillations. This implied dead time is basically ½ and ¼ of the sum of Lambda and the actual dead time for self-regulating and integrating processes, respectively.
    6. The effect of analyzer cycle time and wireless update rate depends on implied dead time and consequently tuning. You can prove almost any point you want to make about whether the effect of a discontinuous update is important or not by how you tune the PID. The dead time from an analyzer cycle time is 1½ times the cycle time. The dead time from a wireless device update or PID execution rate or sample rate is ½ the time interval between updates, assuming no latency. How important this additional dead time is can be seen in how big it is relative to the implied dead time. The conventional rule of thumb is that the dead time from discontinuous updates should be less than 10% of the total loop dead time (wireless update rates and PID execution rates less than 20% of dead time). This is only really true if you are pursuing aggressive control where the implied dead time is near the actual dead time. A better recommendation would be a wireless update rate or PID execution rate less than 20% of the “original” implied dead time. I use the word “original” to remind us not to spiral into slowing down update and execution rates by increasing implied dead time and then further slowing down update and execution rates.
    7. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain. Violation of this rule causes very large and very slow oscillations that are slightly damped, taking hours to days to die out for vessels and columns, respectively. This is a common problem because in control theory courses we learned that high controller gain causes oscillations, while the actual PID gain permitted for near-integrating, true integrating and runaway processes is quite large (e.g., > 100). Most don’t think such a high PID gain is possible and don’t like sudden large movements in valves. Furthermore, integral action provides the gradual action that will always be in a direction consistent with the error sign and will seek to exactly match up PV and SP, meeting common expectations. The result is a reset time frequently set orders of magnitude too small, making the product of PID gain and reset time less than the inverse of the integrating process gain and causing confusing slow oscillations.
    8. The effective rate time should be less than ¼ the effective reset time. While PID controllers with a Series Form effectively prevented this due to interaction factors in the time domain, this is not the case for the other PID Forms. Not enforcing this limit is a common problem in migration projects since older controllers had the Series Form and most modern controllers use the ISA Standard Form. The result is erratic fast oscillations.
    9. Automation system dynamics affect the performance of most loops. This should be good news for us, since these dynamics are much more under the control of the automation engineer and easier and cheaper to fix than process or equipment dynamics. Flow, pressure, inline temperature and composition (e.g., static mixer), and fluidized bed reactor loops are affected by sensor response time and final control element (e.g., valve and VFD) response time. Pressure and surge control loops are also affected by PID execution rate.
    10. Reserve feedforward multiplier and ratio controller ratio correction for sheet lines and plug flow systems. The conventional rule that on a plot of manipulated variable versus feedforward variable, a change in slope demands a feedforward multiplier and a change in intercept demands a feedforward summer is not really relevant. A feedforward multiplier introduces a change in controller gain that counteracts the change in process gain. However, this is only useful for sheet lines and plug flow (e.g., static mixers and extruders) because for vessels and columns, the effect of back mixing from agitation and reflux or recirculation creates a process time constant that is proportional to the residence time. For decreases in feed flow, the increase in process time constant from an increase in residence time negates the increase in process gain. Also, the most important error is often a bias error in the measurements. Span errors are diminished by a large span, showing up mostly as a change in process gain that is much smaller than the other sources of changes in process gain. Also, the scaling and filtering of a feedforward summer signal and its correction is much easier.
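    Several of the rules in the list above reduce to simple arithmetic. The sketch below encodes the relationships exactly as stated in items 5, 6 and 7; the function names and parameter names are illustrative, not from any vendor tuning package:

```python
def implied_dead_time(lambda_cl, dead_time, integrating=False):
    """Item 5: implied dead time is 1/2 (self-regulating) or 1/4
    (integrating) of the sum of Lambda and the actual dead time."""
    factor = 0.25 if integrating else 0.5
    return factor * (lambda_cl + dead_time)

def analyzer_dead_time(cycle_time):
    """Item 6: dead time from an analyzer cycle time is 1.5x the cycle time."""
    return 1.5 * cycle_time

def wireless_dead_time(update_interval):
    """Item 6: dead time from a wireless update (no latency) is half
    the time interval between updates."""
    return 0.5 * update_interval

def reset_rule_ok(pid_gain, reset_time, integrating_gain):
    """Item 7: the product of PID gain and reset time must exceed the
    inverse of the integrating process gain to avoid slow oscillations."""
    return pid_gain * reset_time > 1.0 / integrating_gain
```

    For example, with a Lambda of 10 minutes and an actual dead time of 2 minutes, the implied dead time is 6 minutes for a self-regulating process and 3 minutes for an integrating one, so item 6 would size discontinuous updates against those numbers, not against the 2 minutes alone.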
    • 14 Jan 2019

    How to Get Started with Effective Use of OPC

    The post How to Get Started with Effective Use of OPC first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Encouraged to ask general questions that would help share knowledge, Nikki Escamillas provided several questions on OPC. Initially, the OPC standard was restricted to the Windows operating system, with the acronym originally designating OLE (object linking and embedding) for process control. Today, OPC stands for Open Platform Communications; it is much more widely used and plays a key role in automation systems. We are fortunate to have answers to Nikki’s questions from a knowledgeable expert in higher level automation system communications, Tom Freiberger, product manager for industrial Ethernet in R&D engineering for Emerson Automation Solutions.

    Nikki Escamillas is a recently added protégé in the ISA Mentor Program. Nikki is an Automation Process Engineer for Republic Cement and Building Materials – Batangas Plant. Nikki specializes in process optimization and automation control, committed to minimizing cost and improving product quality through effective time management and efficient use of resources and data analytics. Nikki has excellent knowledge and experience of advanced process control principles and their application to different plant processes, most specifically cement and building materials manufacturing.

    Nikki Escamillas’ First Question

    How does OPC work?

    Tom Freiberger’s Answer

    OPC is a client/server protocol. The server has a list of data points (normally in a tree structure) that it provides. A client can connect to a server and pick a set of data points it wishes to use. The client can then read or write to those data points.  OPC is meant to be a common language for integrating products from multiple vendors. The OPC Foundation has a good introduction of OPC DA and UA at their website.
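    The client/server data-point model is easy to picture with a toy sketch. The class below is a conceptual illustration of a server exposing a tree of named points that a client browses, reads, and writes; it is not a real OPC stack (a real application would use an OPC library and proper security configuration), and all names are invented:

```python
class OpcLikeServer:
    """Toy illustration of the OPC data-point model: a server holds
    named points, a client browses them, then reads and writes values.
    Conceptual sketch only -- not an actual OPC DA/UA implementation."""
    def __init__(self):
        self._nodes = {}  # flat dict; real servers expose a tree structure

    def register(self, path, value):
        self._nodes[path] = value

    def browse(self, prefix=""):
        # a client picks the set of points it wishes to use
        return sorted(p for p in self._nodes if p.startswith(prefix))

    def read(self, path):
        return self._nodes[path]

    def write(self, path, value):
        if path not in self._nodes:
            raise KeyError(path)
        self._nodes[path] = value

# "server side": register some data points (hypothetical tag names)
server = OpcLikeServer()
server.register("Plant/Reactor1/PV", 72.5)
server.register("Plant/Reactor1/SP", 75.0)

# "client side": browse for points of interest, then read/write them
points = server.browse("Plant/Reactor1")
server.write("Plant/Reactor1/SP", 80.0)
```

    The point of the sketch is only the division of labor: the server owns the data points, and any vendor's client that speaks the same protocol can discover and use them.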

    Nikki Escamillas’ Second Question

    Does the configuration of OPC DA differ from OPC UA?

    Tom Freiberger’s Answer

    Yes and no. The core concept of client/server and working with a set of data points remains consistent between the two, but the details of how to configure them differ. The security configuration is the primary difference. OPC DA is based on Microsoft’s DCOM technology, which means the security settings in the operating system are used. OPC UA runs on many operating systems, and therefore the security settings are embedded into the configuration of the OPC application. OPC UA applications should use common terminology in their configuration to ease integration between multiple vendors.

    Nikki Escamillas’ Third Question

    Do we have any guidelines to follow when installing and configuring one OPC based upon its type?

    Tom Freiberger’s Answer

    Installation and configuration guidelines are going to be specific to the products being used. Some products are going to be limited on the number of data points that can be exchanged by a license or other application limitation. Some products may have performance limits. All of these details should be supplied in the documentation of the product.


    Nikki Escamillas’ Fourth Question

    Could I directly make one computer OPC capable?

    Tom Freiberger’s Answer

    An OPC server or client by itself is just a means to transfer data. OPC is not very interesting without another application behind it to supply information. The computer you are attempting to add OPC to would need some other application to provide data. The vendor of that application would need to build OPC into their product. If the application with the data supports some other protocol to exchange data (like Modbus TCP, Ethernet/IP, or PROFINET) an OPC protocol converter could be used to interface with other OPC applications. If the application with the data has no means of extracting the information, there is nothing an OPC server or client can do.

    Nikki Escamillas’ Fifth Question

    Is it also possible to create server-to-server communication between two OPC applications?

    Tom Freiberger’s Answer

    I believe there are options for this in the OPC protocol specification, but the details would be specific to the product being used. If it allows server to server connections, it should be listed in its documentation.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).


    • 19 Dec 2018

    Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications

    The post Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    This is Part 1 of a series on the benefits of knowing your process and PID capability. Part 1 focuses on process behavior, the many loop objectives and different worlds of industrial applications, and the loop component’s contribution to the dynamic response.



    • 17 Dec 2018

    How to Improve Loop Performance for Dead Time Dominant Systems

    The post How to Improve Loop Performance for Dead Time Dominant Systems first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Dead time is the source of the ultimate limit to control loop performance. The peak error is proportional to the dead time, and the integrated error is proportional to the dead time squared, for load disturbances. If there were no dead time and no noise or interaction, perfect control would be theoretically possible. When the total loop dead time is larger than the open loop time constant, the loop is said to be dead time dominant and solutions are sought to deal with the problem.

    Anuj Narang is an advanced process control engineer at Spartan Controls Limited. He has more than 11 years of experience in academia and industry, with a PhD in process control. He has designed and implemented large scale industrial control and optimization solutions to achieve sustainable and profitable process and control performance improvements for customers in the oil and gas, oil sands, power and mining industries. He is a registered Professional Engineer with the Association of Professional Engineers and Geoscientists of Alberta, Canada.

    Anuj’s Question

    Is there any other control algorithm available to improve loop performance for dead time dominant systems, other than a Smith predictor or model predictive control (MPC), both of which require identification of a process model?

    Greg McMillan’s Answer

    The solution cited for deadtime-dominant loops is often a Smith predictor deadtime compensator (DTC) or model predictive control. There are many counterintuitive aspects to these solutions. It is not widely realized that the improvement from a DTC or MPC is smaller for deadtime-dominant systems than for lag-dominant systems. Much more problematic is that both DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for a decrease as well as an increase in the deadtime. Surprisingly, the consequences for the DTC and MPC are much greater for a decrease in plant dead time. For a conventional PID, a decrease in the deadtime just results in more robustness and slower control. For a DTC or MPC, a decrease in plant deadtime by as little as 25 percent can cause a big increase in integrated error and an erratic response.

    Of course, the best solution is to decrease the many sources of dead time in the process and automation system (e.g., reduce transportation and mixing delays, and use online analyzers with probes in the process rather than at-line analyzers with a sample transportation delay and an analysis delay that is 1.5 times the cycle time). An algorithmic mitigation of the consequences of dead time, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external-reset feedback path (BKCAL), with the deadtime updated to always be slightly less than the actual total loop deadtime. Turning external-reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require identification of the open loop gain and open loop time constant, as is required for a DTC or MPC. Please note that the external-reset feedback should be the result of a positive feedback implementation of integral action, as described in the ISA Mentor Program webinar PID Options and Solutions – Part 3.
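    The PID+TD structure can be sketched in a few lines: integral action is implemented as a positive-feedback filter on the external-reset signal, and the deadtime block simply delays that signal. This is an illustrative sketch of the structure described above, not a vendor implementation; the class and parameter names are invented:

```python
from collections import deque

class ExternalResetPI:
    """PI controller with positive-feedback integral action and a
    dead-time block in the external-reset feedback (ERF) path (PID+TD).
    Illustrative sketch only; names and details are not from any DCS."""
    def __init__(self, kc, reset_time, dt, erf_dead_time=0.0):
        self.kc = kc                  # PID gain
        self.ti = reset_time          # reset time (filter time constant)
        self.dt = dt                  # execution interval
        self.filt = 0.0               # positive-feedback filter state
        n = max(1, int(round(erf_dead_time / dt)))
        self.delay = deque([0.0] * n, maxlen=n)  # dead-time block in ERF path

    def step(self, error):
        erf = self.delay[0]           # delayed copy of the controller output
        # first-order filter of ERF with time constant equal to reset time;
        # this positive-feedback arrangement is the integral action
        self.filt += (self.dt / self.ti) * (erf - self.filt)
        out = self.kc * error + self.filt
        self.delay.append(out)        # feed output back through the delay
        return out
```

    With the ERF dead time near zero the structure behaves essentially as an ordinary PI controller; with it set slightly below the total loop deadtime, the integral action waits to see the effect of past moves before repeating them, which is the mitigation described above.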


    There will be no improvement from deadtime compensation, whether by a DTC or by a deadtime block in external-reset feedback (PID+TD), if the PID tuning settings are left the same as they were before. In fact, the performance can be slightly worse even for an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in deadtime. The PID gain should also be increased. The equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it predicts an error less than the ultimate limit, which is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making the deadtime disappear.

    If the deadtime is due to an analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, the effect of a new error upon an update results in a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of the step disturbance in a single execution after an update of the PID process variable.

    The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the PID execution time interval), and the positive feedback implementation of integral action must be used with external-reset feedback enabled. The enhanced PID greatly simplifies tuning besides putting the integrated error close to its ultimate limit. Note that you do not see the true error, which could have started at any time between updates, but only the error measured after the update.
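    The behavior described above can be sketched as a PI controller whose integral step is sized by the elapsed time since the last measurement update, with the output held in between. This is an illustrative reading of the enhanced PID description, not the vendor's PIDPlus algorithm; the class name is invented:

```python
import math

class EnhancedPI:
    """Sketch of an enhanced PID (PIDPlus-style) PI controller for slow
    or irregular measurement updates: the output is held between updates
    and the positive-feedback integral filter advances by the actual
    elapsed time since the last update. Illustrative only."""
    def __init__(self, kc, reset_time):
        self.kc = kc
        self.ti = reset_time
        self.filt = 0.0      # positive-feedback filter (integral action)
        self.out = 0.0       # held controller output
        self.t_last = None

    def update(self, t, error):
        if self.t_last is not None:
            dt = t - self.t_last
            # exact discrete step of a first-order filter over elapsed dt
            self.filt += (1.0 - math.exp(-dt / self.ti)) * (self.out - self.filt)
        self.t_last = t
        self.out = self.kc * error + self.filt
        return self.out
```

    With the PID gain set to the inverse of the open loop gain, the first execution after an update makes essentially the full correction for a step disturbance, and a long quiet interval merely lets the filter settle to the held output rather than winding up.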

    For more on the sensitivity to both increases and decreases in the total loop deadtime and open loop time constant, see the ISA books Models Unleashed: A Virtual Plant and Predictive Control Applications (pages 56-70 for MPC) and Good Tuning: A Pocket Guide, 4th Edition (pages 118-122 for DTC). For more on the enhanced PID, see the ISA blog post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements and the Control Talk blog post Batch and Continuous Control with At-Line and Offline Analyzers Tips.

    The following figures from Models Unleashed shows how a MPC with two controlled variables (CV1 and CV2)  and two manipulated variables for a matrix with condition number three (CN = 3) responds to a doubling and a halving of the plant dead time (delay) when the total loop dead time is greater than the open loop time constant.

    Figure 1: Dead Time Dominant MPC Test for Doubled Plant Delay

     

    Figure 2: Dead Time Dominant MPC Test for Halved Plant Delay

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 17 Dec 2018

    How to Improve Loop Performance for Dead Time Dominant Systems

    The post How to Improve Loop Performance for Dead Time Dominant Systems first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Dead time is the source of the ultimate limit to control loop performance. The peak error is proportional to the dead time, and the integrated error is proportional to the dead time squared for load disturbances. If there were no dead time and no noise or interaction, perfect control would be theoretically possible. When the total loop dead time is larger than the open loop time constant, the loop is said to be dead time dominant, and solutions are sought to deal with the problem.
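    As a hedged back-of-the-envelope sketch of these scaling claims (a common approximation, not an exact result): for a step load upset on a self-regulating loop, the best-possible peak error is roughly the open-loop excursion reached within one dead time, and the best-possible integrated error is roughly that peak error multiplied by the dead time, which gives the dead-time-squared dependence. The function names and numbers are illustrative only.

```python
import math

def best_peak_error(load_step, open_loop_gain, deadtime, time_constant):
    # open-loop excursion reached before feedback can act (one deadtime)
    return load_step * open_loop_gain * (1.0 - math.exp(-deadtime / time_constant))

def best_integrated_error(load_step, open_loop_gain, deadtime, time_constant):
    # integrated error floor: peak error times deadtime
    return best_peak_error(load_step, open_loop_gain, deadtime, time_constant) * deadtime

# With the deadtime small relative to the time constant, doubling the
# deadtime roughly quadruples the best-possible integrated error:
ie1 = best_integrated_error(1.0, 2.0, deadtime=2.0, time_constant=50.0)
ie2 = best_integrated_error(1.0, 2.0, deadtime=4.0, time_constant=50.0)
```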

    Anuj Narang is an advanced process control engineer at Spartan Controls Limited. He has more than 11 years of experience in academia and industry with a PhD in process control. He has designed and implemented large scale industrial control and optimization solutions to achieve sustainable and profitable process and control performance improvements for customers in the oil and gas, oil sands, power and mining industries. He is a registered Professional Engineer with the Association of Professional Engineers and Geoscientists of Alberta, Canada.

    Anuj’s Question

    Is there any other control algorithm available to improve loop performance for dead time dominant systems, other than a Smith predictor or model predictive control (MPC), both of which require identification of a process model?

    Greg McMillan’s Answer

    The solution cited for deadtime dominant loops is often a Smith predictor deadtime compensator (DTC) or model predictive control (MPC). There are many counter-intuitive aspects to these solutions. What is often not realized is that the improvement from a DTC or MPC is smaller for deadtime dominant systems than for lag dominant systems. Much more problematic is that both the DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for a decrease as well as an increase in the deadtime. Surprisingly, the consequences for the DTC and MPC are much greater for a decrease in plant deadtime. For a conventional PID, a decrease in the deadtime just results in more robustness and slower control. For a DTC or MPC, a decrease in plant deadtime by as little as 25 percent can cause a large increase in integrated error and an erratic response.

    Of course, the best solution is to decrease the many sources of dead time in the process and automation system (e.g., reduce transportation and mixing delays, and use online analyzers with probes in the process rather than at-line analyzers with a sample transportation delay and an analysis delay that is 1.5 times the cycle time). An algorithmic mitigation of the consequences of dead time, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external-reset feedback path (BKCAL), with the deadtime updated to always be slightly less than the actual total loop deadtime. Turning external-reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require identification of the open loop gain and open loop time constant, as a DTC or MPC does. Please note that the external-reset feedback should be the result of a positive feedback implementation of integral action, as described in the ISA Mentor Program webinar PID Options and Solutions – Part 3.
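    As a rough illustration of the PID+TD idea, here is a minimal discrete-time sketch (not any vendor's implementation) of a PI controller with the positive feedback form of integral action and an optional deadtime block in the external-reset feedback path; the class name and tuning values are hypothetical.

```python
from collections import deque

class PIExternalReset:
    """PI with positive-feedback integral action: out = Kc*error + reset,
    where reset is a first-order filter (time constant Ti) on the
    external-reset feedback signal.  A deadtime block in that path delays
    the integral repeat by the loop deadtime (the PID+TD scheme)."""

    def __init__(self, kc, ti, dt, deadtime=0.0):
        self.kc, self.ti, self.dt = kc, ti, dt
        n = int(round(deadtime / dt))
        self.pipe = deque([0.0] * n) if n > 0 else None  # deadtime block
        self.reset = 0.0  # filtered external-reset signal (integral term)

    def update(self, sp, pv, ext_reset=None):
        error = sp - pv
        out = self.kc * error + self.reset
        fb = out if ext_reset is None else ext_reset  # BKCAL signal
        if self.pipe is not None:  # delay the integral repeat by the deadtime
            delayed = self.pipe.popleft()
            self.pipe.append(fb)
            fb = delayed
        # first-order filter with time constant Ti = positive feedback integral
        self.reset += (self.dt / self.ti) * (fb - self.reset)
        return out
```

With deadtime set to zero this reduces to a conventional PI; with the deadtime block active, integral action pauses until the delayed feedback arrives, which is what permits the much smaller integral time discussed below.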

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about how you can join the ISA Mentor Program.

    There will be no improvement from a deadtime compensator (DTC), or from a deadtime block in external-reset feedback (PID+TD), if the PID tuning settings are left the same as they were before. In fact, the performance can be slightly worse even for an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in the deadtime. The PID gain should also be increased. The equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it predicts an error smaller than the ultimate limit, which is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making the deadtime disappear.
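    A hedged numeric sketch of that ultimate limit: the familiar estimate of integrated error from the tuning settings (IE about equal to the reset time divided by the PID gain, times the change in controller output) has to be floored at the peak error multiplied by the deadtime. The function name and numbers below are illustrative, not from any specific application.

```python
def integrated_error_estimate(ti, kc, delta_out, peak_error, deadtime):
    predicted = (ti / kc) * delta_out   # tuning-based prediction
    ultimate = peak_error * deadtime    # deadtime sets the floor
    return max(predicted, ultimate)

# Aggressive settings can "predict" an error below the deadtime limit,
# but the floor still applies:
ie = integrated_error_estimate(ti=2.0, kc=5.0, delta_out=1.0,
                               peak_error=0.8, deadtime=10.0)
```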

    If the deadtime is due to an analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, a new error seen at an update results in a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of the step disturbance in a single execution after an update of the PID process variable.

    The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the PID execution time interval), and the positive feedback implementation of integral action must be used with external-reset feedback enabled. The enhanced PID greatly simplifies tuning besides keeping the integrated error close to its ultimate limit. Note that you do not see the true error, which could have started at any time in between updates; you only see the error measured after the update.
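    To make this behavior concrete, here is a minimal sketch of the enhanced PID logic (loosely following published PIDPlus descriptions, not vendor code): the output is only recomputed when a new measurement arrives, and the integral (external-reset) filter uses the elapsed time since the last update instead of the module execution time. Detecting a new measurement via a changed value is a simplification for illustration.

```python
import math

class EnhancedPI:
    """Sketch of an enhanced PI for slow analyzer or wireless updates."""

    def __init__(self, kc, ti):
        self.kc, self.ti = kc, ti
        self.reset = 0.0          # positive feedback integral filter
        self.last_pv = None
        self.last_time = None
        self.out = 0.0

    def execute(self, t, sp, pv):
        if pv == self.last_pv:    # no new measurement: hold the output
            return self.out
        if self.last_time is not None:
            elapsed = t - self.last_time
            # integral filter advanced by the full interval between updates
            self.reset += (1.0 - math.exp(-elapsed / self.ti)) * (self.out - self.reset)
        self.last_pv, self.last_time = pv, t
        self.out = self.kc * (sp - pv) + self.reset
        return self.out
```

With Kc set to the inverse of the open loop gain of a deadtime dominant self-regulating loop, the first execution after an update makes essentially the full correction, as described above.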

    For more on the sensitivity to both increases and decreases in the total loop deadtime and open loop time constant, see the ISA books Models Unleashed: A Virtual Plant and Predictive Control Applications (pages 56-70 for MPC) and Good Tuning: A Pocket Guide, Fourth Edition (pages 118-122 for DTC). For more on the enhanced PID, see the ISA blog post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements and the Control Talk blog post Batch and Continuous Control with At-Line and Offline Analyzers Tips.

    The following figures from Models Unleashed show how an MPC with two controlled variables (CV1 and CV2) and two manipulated variables, for a matrix with a condition number of three (CN = 3), responds to a doubling and a halving of the plant dead time (delay) when the total loop dead time is greater than the open loop time constant.

    Figure 1: Dead Time Dominant MPC Test for Doubled Plant Delay

     

    Figure 2: Dead Time Dominant MPC Test for Halved Plant Delay

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Besides Greg McMillan and program co-founder Hunter Vegas (project engineering manager at Wunderlich-Malec), resources providing discussion and answers include Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 21 Nov 2018

    How to Setup and Identify Process Models for Model Predictive Control

    The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important for improving evaporator concentration control and minimizing steam consumption.

    Luis Navas’ Introduction

    The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions regarding process testing to generate process models for an MPC in the correct way. I know that the MPC process inputs must be perturbed to allow identification and modeling of each process input and output relationship.

    Figure 1: Variables for model predictive control of a concentrator

     

    Luis Navas’ First Question

    Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or should it simply be left as usual, since this disturbance is always present in the process?

    Mark Darby’s Answer

    If it is not difficult, you can try to suppress the disturbance. That can help the model identification for the feed and steam. To get a model of the disturbance, you will want movement of the disturbance outside the noise level (four to five times the noise is best). If possible, this may mean making changes upstream (for example, to a LIC.SP or FIC.SP).

    Luis Navas’ Second Question

    What about the steam flow? Should it be maintained at a fixed flow (FIC in MAN with a fixed percent-open FCV) while perturbing the feed flow, and likewise should the feed flow be fixed while perturbing the steam flow? I know some MPC software packages excite their outputs with a PRBS (pseudo random binary sequence) practically simultaneously while the process test is being executed, and mathematically extract the input and output relationships, finally generating the model.

    Mark Darby’s Answer

    Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it takes more setup effort to get the magnitudes and the average switching interval right. An option is to start with a manual test and switch to PRBS after you have a feel for the process and the right step sizes. Note: a pretest should already have been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in my Section 9.3 of the McGraw-Hill Process/Industrial Instruments and Controls Handbook, Sixth Edition.
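    As a sketch of the excitation being discussed, a simple random-switching approximation of a PRBS with the two design knobs mentioned (step magnitude and average switching interval) could be generated like this; the parameter values are placeholders, not recommendations for any particular loop.

```python
import random

def prbs(n_samples, magnitude, avg_switch_interval, seed=0):
    """Two-level excitation: switch sign with probability 1/interval so the
    mean run length approximates the desired average switching interval."""
    rng = random.Random(seed)
    level, signal = magnitude, []
    for _ in range(n_samples):
        if rng.random() < 1.0 / avg_switch_interval:
            level = -level
        signal.append(level)
    return signal

# e.g., 600 samples of +/-2.0 moves switching every ~20 samples on average
moves = prbs(n_samples=600, magnitude=2.0, avg_switch_interval=20)
```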

    Luis Navas’s Last Questions

    What are the pros and cons for process testing if the manipulated variables are perturbed through the FIC setpoints (closed loop) or through the FIC outputs (open loop)? Or simply: should it be done in accordance with the MPC design? What are the pros and cons if, in the final design, the FCVs are directly manipulated by the MPC block or through FICs as the MPC’s downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them.

    Mark Darby’s Answers

    Correct – do it according to the MPC design. Note that sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea used to justify a cascade – having the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn’t have to). The faster executing inner loop also helps with linearization (for example, valve position to flow).


    • 12 Nov 2018

    Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality

    The post Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Understanding the difficulties of batch processing, and the new technologies and techniques available, can lead to automation and control solutions that offer much greater increases in efficiency and capacity than are usually obtained for continuous processes. Industry veteran and author Greg McMillan discusses analyzing batch data, elevating the role of the operator, tuning key control loops, and setting up simple control strategies to optimize batch operations. The presentation concludes with an extensive list of best practices.


    • 29 Oct 2018

    What Types of Process Control Models are Best?

    The post What Types of Process Control Models are Best? first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Daniel Rodrigues.

    Daniel Rodrigues is one of our newest protégés in the ISA Mentor Program. Daniel has been working in research & development for Norsk Hydro Brazil since 2016 specializing in:

    • Development of a greener, safer, more accurate, and cheaper analytical method
    • Identification of cost reduction and efficiency enhancement opportunities
    • Process modelling and advanced control logic development and assessment
    • Research methodology development, execution, and planning
    • Statistical analysis of process variables and test results

    Daniel Rodrigues’ Question

    What is your take on process control based on phenomenological models (using first-principle models to guide the predictive part of controllers)? I am aware of the exponential growth of complexity in these, but I’d also like to have an experienced opinion regarding the reward/effort of these.

    Greg McMillan’s Answer

    I prefer first principle models to gain a deeper understanding of cause and effect, process relationships, process gains, and the response to abnormal situations. Most of my control system improvements start with first principle models. The incorporation of the actual control system (digital twin) to form a virtual plant has made these models a more powerful tool. However, most first principle models use perfectly mixed volumes neglecting mixing delays and are missing transportation delays and automation system dynamics. For pH systems, including all of the non-ideal dynamics from piping and vessel design, control valves or variable speed pumps, and electrodes is particularly essential. I have consequently partitioned the total vessel volume into a series of plug flow and perfectly back mixed volumes to model the mixing dead times that originate from the agitation pattern and the relative location of input and output streams. I add a transportation delay for reagent piping and dip tubes due to gravity flow or blending. For extremely low reagent flows (e.g., gph), I also add an equilibration time in the dip tube after closure of a reagent valve associated with migration of the reagent into the process followed by migration of process fluid back up into the dip tube. I add a transportation delay to electrodes in piping. I use a variable dead time block and time constant blocks in series to show the effect of velocity, coating, age, buffering and direction of pH change on electrode response. I use a backlash-stiction and a variable dead time block to show the resolution and response time of control valves. The important goal is to get the total loop dead time and secondary lag right.
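    The partitioning described above can be illustrated with a minimal discrete-time sketch: a plug-flow section modeled as a pure transportation delay feeding a series of perfectly back-mixed volumes modeled as first-order lags. The function name and parameters here are illustrative choices, not from any particular simulation package.

```python
from collections import deque

def simulate_partitioned_vessel(inlet, dt, plug_delay, mix_taus):
    """Pass an inlet concentration series through a plug-flow delay
    followed by a series of perfectly back-mixed volumes (first-order
    lags). Returns the outlet concentration series."""
    n_delay = max(1, int(round(plug_delay / dt)))
    pipe = deque([inlet[0]] * n_delay)   # plug-flow transportation delay
    states = [inlet[0]] * len(mix_taus)  # one state per back-mixed volume
    out = []
    for u in inlet:
        x = pipe.popleft()               # element delayed by n_delay samples
        pipe.append(u)
        for i, tau in enumerate(mix_taus):
            # Explicit Euler update of a first-order lag (assumes dt << tau)
            states[i] += dt / tau * (x - states[i])
            x = states[i]
        out.append(x)
    return out
```

    A step change at the inlet then shows no response at all for the plug-flow delay, followed by a lagged rise, which is exactly the mixing dead time plus secondary lag behavior the text describes.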

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    By having the much more complete model in a virtual plant, the true dynamic behavior of the system can be investigated and the best control system performance achieved by exploring, discovering, prototyping, testing, tuning, justifying, deploying, commissioning, maintaining and continuously improving, as described in the Control magazine feature article Virtual Plant Virtuosity.

     

    Figure 1: Virtual Plant that includes Automation System Dynamics and Digital Twin Controller

     

    Model predictive control is much better at ensuring you have the actual total dynamics including dead time, lags and lead times at a particular operating point. However, the models do not include the effect of backlash-stiction or actuator and positioner design on valve response time and consequently on total loop dead time because by design the steps are made several times larger than the deadband and resolution or sensitivity limits of the control valve. Also, the models identified are for a particular operating point and normal operation. To cover different modes of operation and production rates, multiple models must be used requiring logic for a smooth transition or recently developed adaptive capabilities. I see an opportunity to use the results from the identification software used by MPC to provide a more accurate dead time, lag time and lead time by inserting these in blocks on the measurement of the process variable in first principle models. The identification software would be run for different operating points and operating conditions enabling the addition of supplemental dynamics in the first principle models. This addresses the fundamental deficiency of dead times, lag times and lead times being too small in first principle models.
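    The idea of inserting identified dynamics on the measurement of a first-principle model can be sketched as a dead time plus a discretized lead-lag applied to the modeled PV series. This is a minimal illustration with a hypothetical function name; real systems would use the dead time and lead-lag blocks of the control system.

```python
def add_identified_dynamics(pv, dt, dead_time, lag, lead=0.0):
    """Overlay an identified dead time, lag and lead on a modeled PV
    series to supplement too-fast first-principle dynamics."""
    n = int(round(dead_time / dt))
    # Shift the series by the identified dead time (pad with initial value)
    delayed = [pv[0]] * n + list(pv[:len(pv) - n]) if n else list(pv)
    y = delayed[0]
    prev = delayed[0]
    out = []
    for u in delayed:
        du = (u - prev) / dt
        # Euler-discretized lead-lag: (lead*s + 1)/(lag*s + 1)
        y += dt / lag * (u + lead * du - y)
        prev = u
        out.append(y)
    return out
```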

    Statistical models are great at identifying unsuspected relationships, disturbances and variability in the process and measurements. However, these are correlations and not necessarily cause and effect. Also, continuous processes require dynamic compensation of each process input so that it matches the dynamic response timewise of each process output being studied. This is often not stated in the literature and is a formidable task. Some methods propose using a dead time on the input but for large time constants, the dynamic response of the predicted output is in error during a transient. These models are more designed for steady state operation but this is often an ideal situation not realized due to disturbances originating from the control system due to interactions, resonance, tuning, and limit cycles from stiction as discussed in the Control Talk Blog The most disturbing disturbances are self-inflicted. Batch processes do not require dynamic compensation of inputs making data analytics much more useful in predicting batch end points.
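    The need for dynamic compensation of each process input can be demonstrated with a small correlation exercise: when an output is simply a dead-time-shifted version of an input, the correlation is near zero until the input is shifted by its dead time. The helper names below are illustrative only.

```python
import random

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def correlation_after_shift(u, y, shift):
    """Correlate input u, shifted ahead by its dead time in samples,
    with output y, so the two series align timewise."""
    return pearson(u[:len(u) - shift], y[shift:]) if shift else pearson(u, y)
```

    A first-order lag on the input would likewise be needed when the output response is dominated by a large time constant, which is the formidable part of the task noted above.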

    I think there is a synergy to be gained by using MPC to find missing dynamics and statistical process control to help track down missing disturbances and relationships that are subsequently added to the first principle models. Recent advances in MPC capability (e.g., Aspen DMC3) to automatically identify changes in process gain, dead time and time constant including the ability to compute and update them online based on first principles has opened the door to increased benefits from using MPC to improve first principle models and vice versa. Multivariable control and optimization where there are significant interactions and multiple controlled, manipulated and constraint variables are best handled by MPC. The exception is very fast systems where the PID controller is directly manipulating control valves or variable frequency drives for pressure control. Batch end point prediction might also be better implemented by data analytics. However, in all cases the first principle model should be accordingly improved and used to test the actual configuration and implementation of the MPC and analytics and to provide training of operators extended to all engineers and technicians supporting plant operation.

    I would think for research and development, the ability to gain a deeper and wider understanding of different process relationships for different operating conditions would be extremely important. This knowledge can lead to process improvements and to better equipment and control system design. For pH and biological control systems, this capability is essential.

    For a greater perspective on the capability of various modeling and control methodologies, see the ISA Mentor Program post with questions by protégé Danaca Jordan and answers by Hunter Vegas and me: What are the New Technologies and Approaches for Batch and Continuous Control?

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 23 Oct 2018

    Many Objectives, Many Worlds of Process Control

    The post, Many Objectives, Many Worlds of Process Control first appeared on ControlGlobal.com's Control Talk blog.

    In many publications on process control, the common metric you see is integrated absolute error for a step disturbance on the process output. In many tests for tuning, setpoint changes are made and the most important criterion becomes overshoot of the setpoint. Increasingly, oscillations of any type are looked at as inherently bad. What is really important varies because of the different loops and types of processes. Here we seek to open minds and develop a better understanding of what is important.

    Many Objectives

    • Minimum PV peak error in load response to prevent:

    –        Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health

    • Minimum PV integrated error in load or setpoint response to minimize:

    –        total amount of off-spec product to enable closer operation to optimum setpoint

    • Minimum PV overshoot of SP in setpoint response to prevent:

    –        Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health

    • Minimum Out overshoot of FRV* in setpoint response to prevent:

    –        Interaction with heat integration and recycle loops in hydrocarbon gas unit operations

    • Minimum PV time to reach SP in setpoint response to minimize:

    –        Batch cycle time, startup time, transition time to new products and operating rates

    • Minimum split range point crossings to prevent:

    –        Wasted energy-reactants-reagents, poor cell health (high osmotic pressure)

    • Maximum absorption of variability in level control (e.g. surge tank) to prevent:

    –        Passing of changes in input flows to output flows upsetting downstream unit ops

    • Optimum transfer of variability from controlled variable to manipulated variable to prevent:

    –        Resonance, interaction and propagation of disturbances to other loops

    * FRV is the Final Resting Value of PID output. Overshoot of FRV is necessary for setpoint and load response for integrating and runaway processes. However, for self-regulating processes not involving highly mixed vessels (e.g., heat exchangers and plug flow reactors), aggressive action in terms of PID output can upset other loops and unit operations that are affected by the flow manipulated by the PID. Not recognized in the literature is that external-reset feedback of the manipulated flow enables setpoint rate limits to smooth out changes in manipulated flows without affecting the PID tuning.

    Many Worlds

    • Hydrocarbon processes and other gas unit operations with plug flow, heat integration & recycle streams (e.g. crackers, furnaces, reformers)

    –        Fast self-regulating responses, interactions and complex secondary responses with sensitivity to SP and FRV overshoot, split range crossings and utility interactions.

    • Chemical batch and continuous processes with vessels and columns

    –        Important loops tend to have slow near or true integrating and runaway responses with minimizing peak and integrated errors and rise time as key objectives.

    • Utility systems (e.g., boilers, steam headers, chillers, compressors)

    –        Important loops tend to have fast near or true integrating responses with minimizing peak and integrated errors and interactions as key objectives.

    • Pulp, paper, food and polymer inline, extrusion and sheet processes

    –        Fast self-regulating responses and interactions with propagation of variability into product (little to no attenuation of oscillations by back mixed volumes) with extreme sensitivity to variability and resonance. Loops (particularly for sheets) can be dead time dominant due to transportation delays unless there are heat transfer lags.

    • Biological vessels (e.g., fermenters and bioreactors)

    –        Most important loops tend to have slow near or true integrating responses with extreme sensitivity to SP and FRV overshoot, split range crossings and utility interactions. Load disturbances originating from cells are incredibly slow and therefore not an issue.

    A critical insight is that most disturbances are on the process input not the process output and are not step changes. The fastest disturbances are generally flow or liquid pressure, but even these have an 86% response time of at least several seconds because of the 86% response time of valves and the tuning of PID controllers. The fastest and most disruptive disturbances are often manual actions by an operator or setpoint changes by a batch sequence. Setpoint rate limits and a 2 Degrees of Freedom (2DOF) PID structure with Beta and Gamma approaching zero can eliminate much of the disruption from setpoint changes by slowing down changes in the PID output from proportional and derivative action. A disturbance to a loop can be considered fast if it has an 86% response time less than the loop deadtime.
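    The effect of Beta and Gamma can be seen in a minimal sketch of an ISA-style 2DOF PID, where Beta weights the setpoint in the proportional term and Gamma in the derivative term while the integral mode always acts on the full error. This is an illustrative class, not the algorithm of any particular control system.

```python
class TwoDOFPID:
    """2DOF PID sketch: beta weights SP in the proportional term and
    gamma in the derivative term; integral acts on the full error."""
    def __init__(self, kc, ti, td, dt, beta=0.0, gamma=0.0):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.beta, self.gamma = beta, gamma
        self.integral = 0.0
        self.prev_d_err = None

    def update(self, sp, pv):
        p = self.kc * (self.beta * sp - pv)
        self.integral += self.kc * self.dt / self.ti * (sp - pv)
        d_err = self.gamma * sp - pv
        if self.prev_d_err is None:
            self.prev_d_err = d_err    # no derivative kick on first scan
        d = self.kc * self.td * (d_err - self.prev_d_err) / self.dt
        self.prev_d_err = d_err
        return p + self.integral + d
```

    With beta and gamma at zero, a setpoint step produces no proportional or derivative kick; the output moves only through gradual integral action, which is exactly how the disruption from setpoint changes is eliminated.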

    If you would like to hear more on this, check out the ISA Mentor Program Webinar Recording: PID Options and Solutions Part 1

    If you want to be able to explain this to young engineers, check out the dictionary for translation of slang terms in the Control Talk Column “Hands-on Labs build real skills.”

    • 15 Oct 2018

    How to Get Rid of Level Oscillations in Industrial Processes

    The post How to Get Rid of Level Oscillations in Industrial Processes first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Luis Navas.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important to improve evaporator concentration control and minimize steam consumption.

    Luis Navas’ Questions

    For an MPC application I need to build a smoothed moving mean from a batch level to use as a controlled variable for my MPC, so the simple moving average is done as depicted below. However, I need to smooth the signal further because there is still some signal ripple. I tried with a low-pass filter achieving some improvement as seen in Figure 1. But perhaps you know a better way to do it, or I simply need to increase the filter time.

    Figure 1: Old Level Oscillations (blue: actual level and green: level with simple moving mean followed by simple moving mean + first order filter)

    Greg McMillan’s Initial Answer

    I use rate limiting when a ripple is significantly faster than a true change in the process variable. The velocity limit would be the maximum possible rate of change of the level. The velocity limit should be turned off when maintenance is being done and possibly during startup or shutdown. The standard velocity limit block should offer this option. A properly set velocity limit introduces no measurement lag. A level system (any integrator) is very sensitive to a lag anywhere.
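    A velocity limit of this kind can be sketched in a few lines: each sample-to-sample change is clamped to the maximum physically possible rate of change of the level, so fast ripple is attenuated while any signal moving at or below the limit passes through untouched, introducing no measurement lag. The function name is an illustrative choice.

```python
def velocity_limit(signal, dt, max_rate):
    """Clamp sample-to-sample changes to the maximum possible rate of
    change of the true process variable (units per second)."""
    out = [signal[0]]
    step = max_rate * dt   # largest allowed change per sample
    for x in signal[1:]:
        delta = max(-step, min(step, x - out[-1]))
        out.append(out[-1] + delta)
    return out
```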

    If the oscillation stops when the controller is in manual, the oscillation could be from backlash or stiction. In your case, the controller appears to be in auto with a slow rolling oscillation possibly due to a PID reset time being too small.

    I did a Control Talk Blog that discusses good signal filtering tips from various experts besides my intelligent velocity limit.

    Mark Darby’s Initial Answer

    In many cases, I’ve seen signals overly filtered. Often, if the filtered signal looks good to your eye, it’s too much filtering. As Michel Ruel states: If period is known, moving average (sum of most recent N values divided by N) will nearly completely remove a uniform periodic cycle. So the issue is how much lag is introduced. Depending on the MPC, one may be able to specify variable CV weights as a function of the magnitude error, which will decrease the amount of MV movement when the CV weight is low; or the level signal could be brought in as a CV twice with different tuning or filtering applied to each.
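    Michel Ruel's rule can be sketched directly: with the window length N equal to one full cycle period, a uniform periodic ripple sums to zero over the window and the moving average removes it almost completely, leaving only the lag of the window itself.

```python
import math

def moving_average(signal, n):
    """Mean of the most recent n samples; with n equal to one full
    cycle period, a uniform periodic ripple averages out."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```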

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    Greg McMillan’s Follow-Up Answer

    Since the oscillation is uniform in period and amplitude, the moving average as described by Michel Ruel is best as a starting point. Any subsequent noise from non-uniformity can be removed by an additional filter but nearly all of this filter time becomes equivalent dead time in near and true integrating processes. You need to be careful that the reset time is not too small as you decrease the controller gain either due to filtering or to absorb variability. The product of PID gain and reset time should be greater than twice the inverse of the integrating process gain (1/sec) to prevent the slow rolling oscillations that decay gradually. Slide 29 of the ISA webinar on PID options and solutions gives the equations for the window of allowable PID gains. Slide 15 shows how to estimate the attenuation of an oscillation by a filter. The webinar presentation and discussion are in the ISA Mentor Program post How to optimize PID controller settings.
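    The gain-reset window criterion can be expressed as a one-line check: for an integrating process gain Ki (1/sec), the slow rolling oscillations are avoided when Kc * Ti > 2 / Ki. The helper name below is an illustrative choice.

```python
def min_reset_time(kc, ki):
    """Smallest reset time (sec) satisfying Kc * Ti > 2 / Ki for a
    near- or true-integrating process, given controller gain kc and
    integrating process gain ki (1/sec)."""
    return 2.0 / (kc * ki)
```

    For example, halving the controller gain to absorb variability doubles the minimum reset time needed to stay out of the slow-rolling-oscillation region.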

    If you need to minimize dead time introduced by filtering, you could develop a smarter statistical filter such as cumulative sum of measured values (CUSUM). For an excellent review of how to remove unwanted data signal components, see the InTech magazine article Data filtering in process automation systems.

    Mark Darby’s Follow-Up Answer

    My experience is that most times a cycle in a disturbance flow is already causing cycling in other variables (due to the multivariable nature of the process).  And advanced control, including MPC, will not significantly improve the situation and may make it worse.  So it is best to fix the cycle before proceeding with advanced control.  Making a measured cyclic disturbance a feedforward to MPC likely won’t help much.  MPC normally assumes the current value of the feedforward variables stays constant over the prediction horizon. What you’d want is to have the future prediction include the cycle.  Unfortunately this is not easily done with the MPC packages today.

    Often, levels are controlled by a PID loop, not in the MPC.  The exception can be if there are multiple MVs that must be used to control the level (e.g., multiple outlet flows), or the manipulated flow is useful for alleviating a constraint (see the handbook).  Another exception is if there is significant dead time between the flow and the level.

    Luis Navas’ Follow-up Response

    Thank you for the support. I think the ISA Mentor Program resources are a truly elite support team, by the way, I have already read the blogs about signal filtering.

    My comments and clarifications:

    1. The signal corresponds to a tank level in a batch process, due that it has an oscillating behavior (without noise).
    2. The downstream process is continuous, (evaporator) and the idea is control the Feed tank level with MPC (using the moving average), through evaporator flow input. The feed tank level is critical for the evaporator works fine.
    3. I have applied the Michel Ruel statement: If period is known, moving average (sum of most recent N values divided by N) will nearly completely remove a periodic cycle. Now the moving average is better as seen in Figure 2.

     

    Figure 2: New Level Oscillations (blue: actual level and green: level with Ruel moving average)

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 15 Oct 2018

    How to Get Rid of Level Oscillations in Industrial Processes

    The post How to Get Rid of Level Oscillations in Industrial Processes first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on effectively reducing evaporator level oscillations from an upstream batch operation, so that the level controller can see the true level trajectory, represent a widespread concern in chemical plants where the front end for conversion has batch operations and the back end for separation has continuous operations.

     Luis Navas’ Questions

    For the MPC application I need to build a smoothed moving mean from a batch level to use as a controlled variable for my MPC, so the simple moving average is done as depicted below. However, I need to smooth the signal further because there is still some ripple. I tried a low-pass filter and achieved some improvement, as seen in Figure 1. But perhaps you know a better way to do it, or I simply need to increase the filter time.

     

    Figure 1: Old Level Oscillations (blue: actual level and green: level with simple moving mean followed by simple moving mean + first order filter)

     

    Greg McMillan’s Initial Answer

    I use rate limiting when a ripple is significantly faster than a true change in the process variable. The velocity limit would be the maximum possible rate of change of the level. The velocity limit should be turned off when maintenance is being done and possibly during startup or shutdown. The standard velocity limit block should offer this option. A properly set velocity limit introduces no measurement lag. A level system (any integrator) is very sensitive to a lag anywhere.
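Greg's rate-limiting idea can be sketched as a simple velocity limit filter. This is an illustrative sketch with assumed function and parameter names; a real DCS velocity limit block would also offer the maintenance and startup/shutdown bypass mentioned above.

```python
def velocity_limit(signal, dt, max_rate):
    """Clamp sample-to-sample changes to the maximum physically
    possible rate of change of the process variable.

    signal: list of raw measurements, dt: scan interval (sec),
    max_rate: maximum true rate of change (units/sec).
    """
    out = [signal[0]]
    for pv in signal[1:]:
        max_step = max_rate * dt
        step = max(-max_step, min(max_step, pv - out[-1]))
        out.append(out[-1] + step)
    return out

# A ripple spike faster than the level can physically move is clipped,
# while changes at or below the limit pass through with no measurement lag.
filtered = velocity_limit([0.0, 10.0, 0.0, 0.0], dt=1.0, max_rate=1.0)
```

Because a change slower than `max_rate` is passed through unchanged, a properly set limit introduces no lag, which is why it suits integrating level loops.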

    If the oscillation stops when the controller is in manual, the oscillation could be from backlash or stiction. In your case, the controller appears to be in auto with a slow rolling oscillation possibly due to a PID reset time being too small.

    I did a Control Talk blog post, What are good signal filtering tips, that discusses advice from various experts besides my intelligent velocity limit.

    Mark Darby’s Initial Answer

    In many cases, I’ve seen signals overly filtered.  Often, if the filtered signal looks good to your eye, it’s too much filtering. As Michel Ruel states: If period is known, moving average (sum of most recent N values divided by N) will nearly completely remove a uniform periodic cycle. So the issue is how much lag is introduced. Depending on the MPC, one may be able to specify variable CV weights as a function of the magnitude error, which will decrease the amount of MV movement when the CV weight is low; or the level signal could be brought in as a CV twice with different tuning or filtering applied to each.
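Michel Ruel's point can be illustrated with a short sketch (numbers and names are illustrative, not from the post): when N equals the number of samples in one cycle period, the N-point moving average nearly cancels a uniform periodic cycle once the window is full.

```python
import math

def moving_average(signal, n):
    """Sum of the most recent n values divided by n (window is
    shorter during startup until n samples are available)."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

period = 20   # samples per cycle (the period must be known, per Ruel)
level = [50.0 + 2.0 * math.sin(2.0 * math.pi * i / period) for i in range(100)]
smoothed = moving_average(level, period)
# once the window spans one full cycle, the ripple averages out to ~50.0
```

The trade-off Mark names above shows up directly: the average of the last N samples lags the true level by roughly half the window, which acts like added dead time.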

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about how you can join the ISA Mentor Program.

    Greg McMillan’s Follow-up Answer

    Since the oscillation is uniform in period and amplitude, the moving average as described by Michel Ruel is the best starting point. Any subsequent noise from nonuniformity can be removed by an additional filter, but nearly all of this filter time becomes equivalent dead time in near and true integrating processes. You need to be careful that the reset time is not too small as you decrease the controller gain, either due to filtering or to absorb variability. The product of PID gain and reset time should be greater than twice the inverse of the integrating process gain (1/sec) to prevent the slow rolling oscillations that decay gradually. Slide 29 of the ISA WebEx on PID Options and Solutions gives the equations for the window of allowable PID gains. Slide 15 shows how to estimate the attenuation of an oscillation by a filter. The WebEx presentation and discussion are in the ISA Mentor Program post How to optimize PID controller settings.
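The gain-reset product rule above can be turned into a quick numeric check. This is a hedged sketch; the symbol names and the example numbers are mine, not from the webinar slides.

```python
def min_reset_time(pid_gain, integrating_gain):
    """Smallest reset time (sec) satisfying the stated rule:
    PID gain * reset time > 2 / integrating process gain (1/sec)."""
    return 2.0 / (integrating_gain * pid_gain)

ki = 0.0005                      # integrating process gain, 1/sec (assumed)
kc = 2.0                         # PID controller gain (assumed)
ti_min = min_reset_time(kc, ki)  # reset times below this risk slow rolling oscillations
```

Note the consequence called out in the text: halving the controller gain (to absorb noise or variability) doubles the minimum allowable reset time, so gain and reset must move together.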

    If you need to minimize dead time introduced by filtering, you could develop a smarter statistical filter such as cumulative sum of measured values (CUSUM). For an excellent review of how to remove unwanted data signal components, see the InTech magazine article Data filtering in process automation systems.
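One way a CUSUM-style filter could work is sketched below. This is my own assumption about the approach, not an implementation from the article: instead of lagging every sample, it holds the last accepted value and steps only when the cumulative sum of deviations signals a real shift, so little equivalent dead time is added for genuine changes.

```python
def cusum_filter(signal, slack, limit):
    """Hold the last accepted value; accept a new one only when the
    one-sided cumulative sums of deviations exceed the decision limit.

    slack: deviations smaller than this are ignored (noise band),
    limit: cumulative deviation needed to declare a real shift.
    """
    ref = signal[0]
    out = [ref]
    pos = neg = 0.0
    for x in signal[1:]:
        pos = max(0.0, pos + (x - ref) - slack)
        neg = max(0.0, neg - (x - ref) - slack)
        if pos > limit or neg > limit:
            ref = x          # accept the shift, restart the sums
            pos = neg = 0.0
        out.append(ref)
    return out
```

With these illustrative settings, a genuine unit step is accepted after a couple of samples, while small symmetric ripple never accumulates past the limit.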

    Mark Darby’s Follow-up Answer

    My experience is that most times a cycle in a disturbance flow is already causing cycling in other variables (due to the multivariable nature of the process).  And advanced control, including MPC, will not significantly improve the situation and may make it worse.  So it is best to fix the cycle before proceeding with advanced control.  Making a measured cyclic disturbance a feedforward to MPC likely won’t help much.  MPC normally assumes the current value of the feedforward variables stays constant over the prediction horizon. What you’d want is to have the future prediction include the cycle.  Unfortunately this is not easily done with the MPC packages today.

    Often, levels are controlled by a PID loop, not in the MPC.  The exception can be if there are multiple MVs that must be used to control the level (e.g., multiple outlet flows), or the manipulated flow is useful for alleviating a constraint (see the handbook).  Another exception is if there is significant dead time between the flow and the level.

    Luis Navas’ Follow-up Response

    Thank you for the support. I think the ISA Mentor Program resources are a truly elite support team. By the way, I have already read the blogs about signal filtering.

    My comments and clarifications:

    1. The signal corresponds to a tank level in a batch process; it has an oscillating behavior (without noise).
    2. The downstream process is continuous (an evaporator), and the idea is to control the feed tank level with MPC (using the moving average) through the evaporator input flow. The feed tank level is critical for the evaporator to work well.
    3. I have applied Michel Ruel's statement: if the period is known, a moving average (sum of the most recent N values divided by N) will nearly completely remove a periodic cycle. Now the moving average is better, as seen in Figure 2.

     

    Figure 2: New Level Oscillations (blue: actual level and green: level with Ruel moving average)

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 1 Oct 2018

    Webinar Recording: Loop Tuning and Optimization

    The post Webinar Recording: Loop Tuning and Optimization first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    In this ISA Mentor Program presentation, Michel Ruel, a process control expert and consultant, provides insight and guidance as to the importance of optimization and how to achieve it through better PID control.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn


    • 26 Sep 2018

    How to Optimize Industrial Evaporators

    The post How to Optimize Industrial Evaporators first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Luis Navas.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important to improve evaporator concentration control and minimize steam consumption.

    Luis Navas’ Questions

    Which criteria should I follow to define the final control strategy with model predictive control (MPC) in an existing PID strategy? Only one MPC for all existing PIDs? Or maybe 1 MPC + 1 PID, or 1 MPC + 2 PIDs? What are the criteria to make the correct decision? What is the step-by-step procedure to deploy the advanced control in the real process in the safest way? What are your hints, tips, advice and experiences regarding MPC implementations?

    Greg McMillan’s Initial Answer

    In general, you try to include all of the controlled variables (CVs), manipulated variables (MVs), disturbance variables (DVs), and constraint variables in the same MPC unless the equipment is not related, there is a great difference in time horizons, or there is a cascade control opportunity like we see with kiln MPC control, where a slower MPC with more important controlled variables sends setpoints to a secondary MPC for faster controlled variables. For your evaporator control, this does not appear to be the case.

    We first discuss advanced PID control and its common limitations before moving to MPC.

    For optimization, a PID valve position controller could maximize production rate by pushing the steam valve to its furthest effective throttle position. As for increasing efficiency in terms of minimizing steam use, this would generally be achieved by tight concentration control that allows you to operate closer to the minimum concentration spec. The level and concentration responses would be true and near integrating. In both cases, PID integrating process tuning rules should be used. Do not decrease the PID gain computed by these rules without proportionally increasing the PID reset time. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain to prevent slow rolling oscillations, a very common problem. Often the reset time is two or more orders of magnitude too small because the user decreased the PID gain due to noise or thinking the oscillations were caused by too high a PID gain.

    I don’t see constraint control for a simple evaporator, but if there were constraints, an override controller would be set up for each. However, only one constraint would effectively govern operation at a given time via signal selection. Also, the proper tuning of override controllers and valve position controllers is not well known. Furthermore, the identification of dynamics for feedback and particularly feedforward control typically requires the expertise of a specialist. Often comparisons are done showing how much better model predictive control is than PID control without good identification and tuning of feedback and feedforward control parameters.

    While optimization limitations and typical errors in identification and tuning push your case toward the use of MPC, here are the best practices for PID control of evaporators.

    1. Measure product concentration by a Coriolis meter on the evaporator system discharge.
    2. Control product concentration by manipulation of the heat input to product flow ratio.
    3. Use evaporator level measurements with excellent sensitivity and signal-to-noise ratio.
    4. When possible, use radar instead of capillary systems to reduce level noise, drift, and lag.
    5. Control product concentration by changing the heat input to feed rate ratio. If production rate is set by discharge flow, use PID to manipulate heat input. If production rate is set by heat input, use PID to manipulate product flow rate.
    6. Use near-integrator rules maximizing rate action in PID concentration controller tuning.
    7. Use a flow feedforward of product flow rate to set feed rate to minimize the response time for production rate or product concentration control.
    8. For feed concentration disturbances, use feedforward to correct the heat input based on feed solids concentration computed from density measured by a feed Coriolis meter.
    9. The actual heat to feed ratio must be displayed, and manual adjustment of the desired ratio should be provided to operations for startup and abnormal operation.
    10. To provide faster concentration control for small disturbances, use a PD controller to manipulate a small bypass flow whose bias is about 50% of maximum bypass flow.
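Practices 5, 8, and 9 amount to a ratio station with a feedforward multiplier. The following is a minimal sketch; all function names, variable names, and numbers are illustrative assumptions, not from the post.

```python
def heat_input_setpoint(feed_rate, desired_ratio, ff_factor=1.0):
    """Heat input SP = feed rate * operator-adjustable ratio,
    scaled by a feedforward correction for feed solids concentration."""
    return feed_rate * desired_ratio * ff_factor

def actual_heat_to_feed_ratio(heat_input, feed_rate):
    """Displayed to operations (practice 9) so the desired ratio can
    be adjusted manually during startup and abnormal operation."""
    return heat_input / feed_rate if feed_rate else float("nan")
```

Displaying the actual ratio alongside the desired ratio lets operators see and correct the ratio directly, rather than guessing at it from separate flow and heat trends.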

    The use of model predictive control software often does a good job of identifying the dynamics and automatically incorporating them into the controller. Also, it can simultaneously handle multiple constraints with predictive capability as to violation of constraints. Furthermore, a linear program or other optimizer built into MPC can find and achieve the optimum intersection of the minimum and maximum values of controlled, constraint, and manipulated variables plotted on a common axis of the manipulated variables.

    I have asked for more detailed advice on MPC from Mark Darby, a great new resource, who wrote the MPC sections for the McGraw-Hill handbook Hunter and I just finished.

    Mark Darby’s Initial Answer

    It is normally best to keep PID controls in place for basic regulatory control if they perform well, which may require re-tuning or reconfiguration of the strategy. Your case is getting into advanced control and optimization, where the advantage shifts to MPC. Multiple interactions and measured disturbances are best handled by MPC compared to PID decoupling and feedforward control. First principle models should be used to compute smarter disturbance variables, such as solids feed flow rather than separate feed flow and feed concentration disturbance variables. Override control and valve position control schemes are better handled by MPC. More general optimization is also better done with an MPC. Remember to include PID outputs to valves as constraint variables if they can saturate in normal operation. If a valve is operated close to a limit (e.g., 5% or 95%), it may be better to have the MPC manipulate the valve signal directly, using signal characterization as needed based on the installed flow characteristic to linearize the response.
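Signal characterization here means mapping the valve signal through the installed flow characteristic so the MPC sees a roughly linear flow response. A sketch with assumed breakpoints (the table values are invented for illustration):

```python
def characterize(x, xs, ys):
    """Piecewise-linear interpolation through (xs, ys) breakpoints,
    with xs strictly ascending; clamps outside the table."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + frac * (ys[i] - ys[i - 1])

# assumed installed characteristic of an equal-percentage valve:
signal_pct = [0.0, 25.0, 50.0, 75.0, 100.0]   # valve signal, %
flow_pct = [0.0, 8.0, 22.0, 50.0, 100.0]      # resulting flow, %
```

The MPC would then manipulate percent flow, and the characterizer's inverse would convert that back to a valve signal, keeping the process gain roughly constant across the operating range.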


    Here are some MPC best practices from the Process/Industrial Instruments and Controls Handbook, Sixth Edition, by Gregory K. McMillan and Hunter Vegas (co-editors), scheduled to be published in early 2019. This sixth edition is revolutionary in having nearly 50 industry experts provide a focus on the steps needed in all aspects of achieving a successful automation project to maximize the return on investment.

     

    MPC Project Best Practices

    1. Project team members should include not only control engineers, but also process engineers and operations personnel.
    2. First level support of MPC requires staff with knowledge of both the MPC and the process.  Site staff needs to have sufficient understanding to troubleshoot and answer the questions of operations.  Larger companies often have central teams for second level support and to participate in projects.
    3. Even in companies with experienced teams, it is not unusual to use outside MPC consultants. The right level of usage of outside consultants is rarely 0% or 100%.
    4. It may be tempting to avoid the benefit estimation and/or post audit, especially when a company has previous successful history with MPC. But doing so carries a risk.  New management may not have experience or understand the value of MPC, leading to the inevitable question: “What is MPC doing for me today?”
    5. The other temptation is to forgo needed instrumentation or hardware repairs and proceed directly with an MPC project, arguing that MPC can compensate for such deficiencies.  This carries the risk of not meeting expectations and MPC getting a bad reputation, which will be difficult to erase.
    6. Regular reporting of relevant KPIs and benefits is seen as the best way of keeping the organization in the know and motivating additional MPC applications.

    MPC Design Best Practices

    1. Develop a functional design with input from operations, process engineering, economics staff, and instrument techs.  Update the design as the project progresses, and after the project is completed to reflect the as built MPC.
    2. Not all MPC variables must be determined up front in the project.  Most important is identifying the MVs.  The final selection of CVs and DVs can be made after plant testing, assuming data for these variables was collected.
    3. The use of a dynamic simulation can be useful for testing a new regulatory control strategy.  It can also be used to test and demonstrate an MPC, which can be quite illustrative and educational, particularly if MPC is being applied for the first time in a facility.
    4. If filtering of a CV or DV for MPC is required, it needs to be done at the DCS or PLC level.  The faster scan times allow effective filtering (usually on the order of seconds) without significantly affecting the underlying dynamics of the signal.  In addition, filters associated with the PVs of PID loops should be reviewed to ensure excessive filtering is not being used to mask other problems.
    5. The use of a steady-state or dynamic simulation can be useful for determining thermo-physical equation parameters for PID calculated control variables (e.g., duty or PCT) and MPC CVs, estimating process gains, and evaluating possible inferential predictors.
    6. With most MPC products, adding MVs, CVs, and DVs is a straightforward task once models are identified.  This allows starting with a smaller MPC on one part of the unit, and later increasing the scope as experience and confidence is gained.
    7. Inferential models can be developed ahead of the plant test, which allows the model to be evaluated and adjustments made. For data-driven, regression-based inferentials, one needs to have at least confirmed that measurements exist that correlate with the analyzed value. Final determination of model inputs can be made during the modeling phase.
    8. A challenge with lab-based inferentials is accurately knowing when a lab sample is collected.  A technique for automating this is to install a thermocouple directly in the line of the sample point. A spike in the temperature measurement is used to detect a sample collection.
    9. When implementing a steady-state inferential model online, it is often useful to filter inputs to the calculation to remove phantom effects such as overshoot or inverse response.
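Design practice 4's point about scan time can be sketched with the standard discrete first-order (exponential) filter; this is an illustrative sketch, and the parameter names are mine.

```python
import math

def first_order_filter(signal, dt, tau):
    """Exponential (first-order) filter executed at scan interval dt
    with filter time constant tau, both in seconds."""
    if tau <= 0.0:
        return list(signal)
    a = math.exp(-dt / tau)      # discrete filter coefficient
    out = [signal[0]]
    for x in signal[1:]:
        out.append(a * out[-1] + (1.0 - a) * x)
    return out
```

Run at a fast DCS scan (fractions of a second), a filter time of a few seconds attenuates noise while the step response still reaches about 63 percent of a change after one time constant, which is the lag the downstream MPC model has to absorb.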

    MPC Model Development Best Practices

    Plant Testing

    1. A test plan should be developed with operations and process engineering. It will need to be flexible to accommodate the needs of the modeling as well as operational issues that may arise.
    2. Data collected for model identification should not use data compression. A separate data collection is recommended to minimize the likelihood of latency effects such as PVs exhibiting changes before SPs.
    3. The data collection should include all pertinent tags for the units being tested. This allows integrity checks to be made, and models to be identified for new CVs and DVs that may be added later to the MPC.
    4. Model identification runs should be done frequently, typically at least once per day. This allows the testing to be modified to emphasize MV-CV models that are insufficiently identified.
    5. The plant test is an opportunity to answer operational or process questions of which there are differing opinions, such as the effect of a recycle on recovery or on a constraint. This can help to develop consensus on a new strategy.
    6. MPC products that include automatic closed-loop should provide the necessary logic to change step sizes, bring MVs in and out of test mode, and displays to follow the testing.
    7. Lab sample collection for inferential model: include multiple samples collected at same time to assess sample container differences (reproducibility) and lab repeatability. When multiple sample containers are used, record the container used for each sample.  Coordinate lab sample with collection personnel and record time samples are collected from the process.

    Model Identification

    1. The MPC identification package should automatically handle the required scaling of MVs and CVs and differencing and/or de-trending.
    2. The ability to slice or section out bad data is a necessary feature. Note that each section of data that is excluded requires a re-initialization of the identification algorithm for the next section of good data.
    3. A useful technique for deciding on which MVs and DVs are significant in a model, and should therefore be included, is to compare their contribution to the CV response based on the average move sizes made during the plant test.
    4. Model assessment tools to guide model quality assessments are desirable. These may be error bounds on the step responses or other techniques to grade the model.  A common technique for assessing model errors are bode error plots which express errors as a function of frequency.  They can be useful for modifying the test to improve certain aspects of the model (e.g., reducing errors at low or high frequencies.
    5. Features to assist in the development of nonlinear transformation are desirable. Ideally, the necessary pre- and post-controller calculations to support transformations are a standard option in the MPC.
    6. Features that help document the various model runs and the construction of the final MPC model is a desirable feature.
    7. Even if an MPC includes online options for removing weak degrees of freedom, it is recommended that known consistency relationships be imposed as part of model identification.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).


    • 26 Sep 2018

    How to Optimize Industrial Evaporators

    The post How to Optimize Industrial Evaporators first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

     

    Luis Navas

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions address how to improve evaporator concentration control and minimize steam consumption.

     

     

    Luis Navas’ Questions

    Which criteria should I follow to define the final control strategy with model predictive control (MPC) in an existing PID strategy? Only one MPC for all existing PIDs? Or maybe 1 MPC + 1 PID, or 1 MPC + 2 PIDs? What are the criteria to make the correct decision? What is the step-by-step procedure to deploy the advanced control in the real process in the safest way? What are your hints, tips, advice and experiences regarding MPC implementations?

     

    PID control of a double-effect evaporator

     

    Greg McMillan’s Initial Answer

    In general, you try to include all of the controlled variables (CVs), manipulated variables (MVs), disturbance variables (DVs), and constraint variables in the same MPC unless the equipment is unrelated, there is a great difference in time horizons, or there is a cascade control opportunity, as seen in kiln MPC control where a slower MPC with the more important controlled variables sends setpoints to a secondary MPC for faster controlled variables. For your evaporator control, this does not appear to be the case.

    We first discuss advanced PID control and its common limitations before moving on to MPC.

    For optimization, a PID valve position controller could maximize production rate by pushing the steam valve to its furthest effective throttle position. As for increasing efficiency by minimizing steam use, this is generally achieved by tight concentration control that allows operation closer to the minimum concentration spec. The level response is true integrating and the concentration response near-integrating. In both cases, PID integrating process tuning rules should be used. Do not decrease the PID gain computed by these rules without proportionally increasing the PID reset time. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain to prevent slow rolling oscillations, a very common problem. Often the reset time is two or more orders of magnitude too small because the user decreased the PID gain due to noise or in the belief that the oscillations were caused by too high a PID gain.
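    The tuning constraint above reduces to simple arithmetic. A minimal sketch, with illustrative gain values that are not from any particular loop:

```python
def min_reset_time(pid_gain, integrating_process_gain):
    """Smallest reset time (seconds) satisfying the constraint stated in
    the text, Kc * Ti > 1 / Ki, which guards against slow rolling
    oscillations on integrating and near-integrating loops."""
    return 1.0 / (pid_gain * integrating_process_gain)

# Example: Kc = 2.0 (dimensionless), Ki = 0.005 %/sec per % (illustrative)
ti_min = min_reset_time(2.0, 0.005)  # 100 seconds
```

    Cutting the gain in half without touching the reset time would double the required reset time, which is exactly the mistake described above.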

    I don’t see constraint control for a simple evaporator, but if there were constraints, an override controller would be set up for each. However, only one constraint would effectively govern operation at a given time via signal selection. Also, the proper tuning of override controllers and valve position controllers is not well known. Furthermore, the identification of dynamics for feedback and particularly feedforward control typically requires the expertise of a specialist. Comparisons often show how much better model predictive control is than PID control simply because the PID feedback and feedforward parameters were not well identified and tuned.

    While optimization limitations and typical errors in identification and tuning push your case toward the use of MPC, here are the best practices for PID control of evaporators.

    1. Measure product concentration by a Coriolis meter on evaporator system discharge.
    2. Control product concentration by manipulation of the heat input to product flow ratio.
    3. Use evaporator level measurements with an excellent sensitivity and signal noise ratio.
    4. When possible, use radar instead of capillary systems to reduce level noise, drift, and lag.
    5. Control product concentration by changing heat input to feed rate ratio. If production rate is set by discharge flow, use PID to manipulate heat input. If production rate is set by heat input, use PID to manipulate product flow rate.
    6. Use near integrator rules maximizing rate action in PID concentration controller tuning.
    7. Use a flow feedforward of product flow rate to set feed rate to minimize the response time for production rate or product concentration control.
    8. For feed concentration disturbances, use feedforward to correct the heat input based on feed solids concentration computed from density measured by a feed Coriolis meter.
    9. The actual heat to feed ratio must be displayed, and manual adjustment of the desired ratio must be provided to operations for startup and abnormal operation.
    10. To provide faster concentration control for small disturbances, use a PD controller to manipulate a small bypass flow whose bias is about 50% of maximum bypass flow.
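    Best practices 2, 5, and 9 above amount to a ratio station with feedback trim. A minimal sketch of the calculation, with hypothetical signal names and numbers:

```python
def heat_input_setpoint(feed_rate, desired_ratio, pid_trim=0.0):
    """Heat-to-feed ratio control: feedforward heat demand computed from
    the measured feed rate, trimmed by the concentration PID output.
    The actual ratio (heat input / feed rate) should be displayed and
    the desired ratio left adjustable by operations (practice 9)."""
    return desired_ratio * feed_rate + pid_trim

# Hypothetical units: feed 50 klb/hr, ratio 1.8 MBtu/klb, trim 2.5 MBtu/hr
sp = heat_input_setpoint(feed_rate=50.0, desired_ratio=1.8, pid_trim=2.5)
```

    The PID trim only has to correct for ratio error and unmeasured disturbances, so its moves stay small and concentration control stays tight.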

    Model predictive control software often does a good job of identifying the dynamics and automatically incorporating them into the controller. It can also simultaneously handle multiple constraints, with predictive capability as to violation of constraints. Furthermore, a linear program or other optimizer built into the MPC can find and achieve the optimum intersection of the minimum and maximum values of controlled, constraint, and manipulated variables plotted on a common axis of the manipulated variables.

    I have asked for more detailed advice on MPC from Mark Darby, a great new resource, who wrote the MPC sections for the McGraw-Hill handbook Hunter and I just finished.

    Mark Darby’s Initial Answer

    It is normally best to keep PID controls in place for basic regulatory control if they perform well, which may require re-tuning or reconfiguration of the strategy. Your case is getting into advanced control and optimization, where the advantage shifts to MPC. Multiple interactions and measured disturbances are better handled by MPC than by PID decoupling and feedforward control. First-principle models should be used to compute smarter disturbance variables, such as solids feed flow, rather than separate feed flow and feed concentration disturbance variables. Override control and valve position control schemes are better handled by MPC, and more general optimization is also better done with an MPC. Remember to include PID outputs to valves as constraint variables if they can saturate in normal operation. If a valve is operated close to a limit (e.g., 5% or 95%), it may be better to have the MPC manipulate the valve signal directly, using signal characterization based on the installed flow characteristic to linearize the response as needed.
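    The linearization Mark mentions is typically a piecewise-linear characterizer between the controller output and the valve signal. A sketch using a made-up installed characteristic; real characteristic points come from the valve's installed flow data:

```python
# Hypothetical installed characteristic: (% flow achieved, % valve signal)
CHARACTERISTIC = [(0.0, 0.0), (35.0, 20.0), (60.0, 40.0),
                  (78.0, 60.0), (91.0, 80.0), (100.0, 100.0)]

def characterize(desired_flow_pct):
    """Piecewise-linear inverse of the installed flow characteristic, so
    the MPC effectively manipulates a linear flow demand instead of a
    nonlinear raw valve signal."""
    pts = CHARACTERISTIC
    if desired_flow_pct <= pts[0][0]:
        return pts[0][1]
    for (f0, s0), (f1, s1) in zip(pts, pts[1:]):
        if desired_flow_pct <= f1:
            return s0 + (s1 - s0) * (desired_flow_pct - f0) / (f1 - f0)
    return pts[-1][1]

characterize(60.0)  # valve signal (%) that delivers 60% flow
```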

     

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

     

    Here are some MPC best practices from the Process/Industrial Instruments and Controls Handbook, Sixth Edition, by Gregory K. McMillan and Hunter Vegas (co-editors), scheduled to be published in early 2019. This sixth edition is revolutionary in having nearly 50 industry experts focus on the steps needed, across all aspects of an automation project, to achieve success and maximize the return on investment.

     MPC Project Best Practices

    1. Project team members should include not only control engineers, but also process engineers and operations personnel.
    2. First level support of MPC requires staff with knowledge of both the MPC and the process.  Site staff needs to have sufficient understanding to troubleshoot and answer the questions of operations.  Larger companies often have central teams for second level support and to participate in projects.
    3. Even in companies with experienced teams, it is not unusual to use outside MPC consultants. The right level of usage of outside consultants is rarely 0% or 100%.
    4. It may be tempting to avoid the benefit estimation and/or post audit, especially when a company has previous successful history with MPC. But doing so carries a risk.  New management may not have experience or understand the value of MPC, leading to the inevitable question: “What is MPC doing for me today?”
    5. The other temptation is to forgo needed instrumentation or hardware repairs and proceed directly with an MPC project, arguing that MPC can compensate for such deficiencies.  This carries the risk of not meeting expectations and MPC getting a bad reputation, which will be difficult to erase.
    6. Regular reporting of relevant KPIs and benefits is seen as the best way of keeping the organization in the know and motivating additional MPC applications.

    MPC Design Best Practices

    1. Develop a functional design with input from operations, process engineering, economics staff, and instrument techs.  Update the design as the project progresses, and after the project is completed to reflect the as built MPC.
    2. Not all MPC variables must be determined up front in the project.  Most important is identifying the MVs.  The final selection of CVs and DVs can be made after plant testing, assuming data for these variables was collected.
    3. The use of a dynamic simulation can be useful for testing a new regulatory control strategy.  It can also be used to test and demonstrate an MPC, which can be quite illustrative and educational, particularly if MPC is being applied for the first time in a facility.
    4. If filtering of a CV or DV for MPC is required, it needs to be done at the DCS or PLC level.  The faster scan times allow effective filtering (usually on the order of seconds) without significantly affecting the underlying dynamics of the signal.  In addition, filters associated with the PVs of PID loops should be reviewed to ensure excessive filtering is not being used to mask other problems.
    5. The use of a steady-state or dynamic simulation can be useful for determining thermo-physical equation parameters for PID calculated control variables (e.g., duty or PCT) and MPC CVs, estimating process gains, and evaluating possible inferential predictors.
    6. With most MPC products, adding MVs, CVs, and DVs is a straightforward task once models are identified.  This allows starting with a smaller MPC on one part of the unit, and later increasing the scope as experience and confidence is gained.
    7. Inferential models can be developed ahead of the plant test, which allows the model to be evaluated and adjustments made.  For data-driven, regression-based inferentials, one needs to have at least confirmed that measurements exist that correlate with the analyzed value.  Final determination of model inputs can be made during the modeling phase.
    8. A challenge with lab-based inferentials is accurately knowing when a lab sample is collected.  A technique for automating this is to install a thermocouple directly in the line of the sample point. A spike in the temperature measurement is used to detect a sample collection.
    9. When implementing a steady-state inferential model online, it is often useful to filter inputs to the calculation to remove phantom effects such as overshoot or inverse response.
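    Design practices 4 and 9 both call for light filtering. A sketch of the usual discrete first-order (exponential) filter; the scan time and time constant shown are assumed values in seconds:

```python
def first_order_filter(raw_samples, scan_time=1.0, tau=5.0):
    """Discrete first-order filter: y[k] = y[k-1] + a*(x[k] - y[k-1]),
    with a = scan_time/(tau + scan_time). Running this at the fast
    DCS/PLC scan keeps tau small (seconds) so the underlying process
    dynamics are not masked."""
    a = scan_time / (tau + scan_time)
    y, out = raw_samples[0], []
    for x in raw_samples:
        y += a * (x - y)
        out.append(y)
    return out
```

    A steady input passes through unchanged, while spikes and inverse-response transients are attenuated before they reach the MPC or steady-state inferential.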

    MPC Model Development Best Practices

    Plant Testing

    1. A test plan should be developed with operations and process engineering. It will need to be flexible to accommodate the needs of the modeling as well as operational issues that may arise.
    2. Data collected for model identification should not use data compression. A separate data collection is recommended to minimize the likelihood of latency effects such as PVs exhibiting changes before SPs.
    3. The data collection should include all pertinent tags for the units being tested. This allows integrity checks to be made, and models to be identified for new CVs and DVs that may be added later to the MPC.
    4. Model identification runs should be done frequently, typically at least once per day. This allows the testing to be modified to emphasize MV-CV models that are insufficiently identified.
    5. The plant test is an opportunity to answer operational or process questions of which there are differing opinions, such as the effect of a recycle on recovery or on a constraint. This can help to develop consensus on a new strategy.
    6. MPC products that include automatic closed-loop testing should provide the necessary logic to change step sizes, bring MVs in and out of test mode, and displays to follow the testing.
    7. For lab sample collection for an inferential model, include multiple samples collected at the same time to assess sample container differences (reproducibility) and lab repeatability. When multiple sample containers are used, record the container used for each sample.  Coordinate lab sampling with collection personnel and record the times samples are collected from the process.
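    The step-size and MV-switching logic in plant-testing item 6 is often driven by a pseudo-random switching signal. A generalized binary noise (GBN) sketch with hypothetical parameters:

```python
import random

def gbn_sequence(n_samples, p_switch=0.05, amplitude=1.0, seed=42):
    """Generalized binary noise: at each sample the test signal switches
    sign with probability p_switch, so the average switch interval
    (about 1/p_switch samples) can be tuned to the process settling
    time. The seed makes the test sequence reproducible."""
    rng = random.Random(seed)
    level, seq = amplitude, []
    for _ in range(n_samples):
        if rng.random() < p_switch:
            level = -level
        seq.append(level)
    return seq
```

    The amplitude would be set per MV from the step sizes agreed in the test plan, and the sequence superimposed on the MV's nominal value.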

    Model Identification

    1. The MPC identification package should automatically handle the required scaling of MVs and CVs and differencing and/or de-trending.
    2. The ability to slice or section out bad data is a necessary feature. Note that each section of data that is excluded requires a re-initialization of the identification algorithm for the next section of good data.
    3. A useful technique for deciding on which MVs and DVs are significant in a model, and should therefore be included, is to compare their contribution to the CV response based on the average move sizes made during the plant test.
    4. Model assessment tools to guide model quality evaluation are desirable. These may be error bounds on the step responses or other techniques to grade the model.  A common technique for assessing model errors is Bode error plots, which express errors as a function of frequency.  They can be useful for modifying the test to improve certain aspects of the model (e.g., reducing errors at low or high frequencies).
    5. Features to assist in the development of nonlinear transformations are desirable. Ideally, the necessary pre- and post-controller calculations to support transformations are a standard option in the MPC.
    6. Features that help document the various model runs and the construction of the final MPC model are desirable.
    7. Even if an MPC includes online options for removing weak degrees of freedom, it is recommended that known consistency relationships be imposed as part of model identification.
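    Identification item 3 can be made concrete by ranking each input's contribution as the absolute steady-state gain times its average move size during the test. A sketch with made-up gains and move sizes:

```python
def rank_inputs(gains, avg_moves):
    """Contribution of each MV/DV to a CV, scored as |gain| * average
    move size during the plant test. Inputs whose contribution is small
    relative to the largest are candidates to drop from the model."""
    contrib = {name: abs(g) * avg_moves[name] for name, g in gains.items()}
    return sorted(contrib.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical gains (CV units per input unit) and average test moves
ranked = rank_inputs({"steam": 0.8, "feed": -0.3, "reflux": 0.05},
                     {"steam": 2.0, "feed": 4.0, "reflux": 1.0})
```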

     


     

    • 12 Sep 2018

    How to Calibrate a Thermocouple

    The post How to Calibrate a Thermocouple first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Daniel Brewer.

    Daniel Brewer, one of our newest protégés, has over six years of industry experience as an I&E technician. He attended the University of Kansas process instrumentation and control online courses. Daniel’s questions focus on aspects affecting thermocouple accuracy.

    Daniel Brewer’s Question

    How do you calibrate a thermocouple transmitter? How do you simulate a thermocouple? When do you use zero degree reference junction? What if your measuring junction temperature varies?

    Hunter Vegas’ Answer

    Most people use a thermocouple simulator to calibrate temperature transmitters. You can usually set them to generate a wide selection of thermocouple types. Just make sure the thermocouple lead you use to connect the simulator to the transmitter is the right kind of wire.

    “Calibrating” the thermocouple is another matter, because realistically it either works or it doesn’t. You can pull it and check it in a bath, though very few people actually do that. However, if the measurement is critical, most will take the time to put the thermocouple in a bath or dry block, or at least cross-check the reading against another thermocouple or some other reference.

    The zero degree junction is a bit more complicated. Any time two dissimilar metals are connected, a slight millivolt signal is generated. That is what a thermocouple is: two dissimilar metals welded together that generate a varying voltage depending on the temperature at the junction. When you run a thermocouple circuit, you try to use the same metals as the thermocouple for the whole circuit; that is, you run thermocouple wire that matches the thermocouple and use special thermocouple terminal blocks of the same kind. This eliminates any extra junctions, since the same metal is always connected to itself. However, at some point you have to hook up to some kind of device that has copper terminal blocks (transmitter, indicator, etc.). Unfortunately this creates another thermocouple junction where the copper touches the wires. That junction will affect the reading and will also fluctuate with temperature, so the error will be variable.

    To fix this, most devices have a cold junction compensation circuit built in that automatically senses the temperature of the terminal block and subtracts its effect from the reading. Nearly every transmitter and readout device now has it built in as a standard feature; only older equipment would lack it.
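    The compensation can be illustrated with a rough linear model: near ambient, a type K couple produces roughly 41 microvolts per degree Celsius. That sensitivity is an approximation used purely for illustration; real devices use the standard polynomial reference tables rather than a constant:

```python
SEEBECK_K = 41e-6  # V/degC, rough type K sensitivity near ambient (approximation)

def cjc_corrected_temp(v_measured, t_terminal):
    """The measured voltage reflects (T_process - T_terminal); the cold
    junction compensation circuit senses the terminal block temperature
    and adds it back to recover the process temperature."""
    return v_measured / SEEBECK_K + t_terminal

# 11.275 mV at a 25 degC terminal block -> about 300 degC process temperature
cjc_corrected_temp(11.275e-3, 25.0)
```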

    Greg McMillan’s Answer

    The error from a properly calibrated smart temperature transmitter with the correct span is generally negligible compared to the noise and errors from the sensor, signal wiring, and connections. The use of Class 1 special grade instead of Class 2 standard grade thermocouples and extension lead wires enables an accuracy that is 50 percent better. The use of thermocouple input cards instead of smart transmitters introduces large errors due to the large spans and the inability to individualize the calibrations.

    Thermocouple (TC) drift can vary from 1 to 20 degrees Fahrenheit per year and the repeatability can vary from 1 to 8 degrees Fahrenheit depending upon the TC type and application conditions. For critical operations demanding high accuracy, the frequency of sensor calibrations needed is problematic. While a dry block calibrator is faster than a wet bath and can cover a higher temperature range, the removal of the sensor from the process is disruptive to operations, and the time required compared to a simple transmitter calibration is still considerable. The best bet is a single point temperature check to compensate for the offset due to drift and manufacturing tolerances.
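    The drift figures above imply how often a single point check is needed. A back-of-envelope sketch; the error budget is an assumed value:

```python
def max_check_interval(drift_per_year, allowed_error):
    """Longest interval (years) between single-point checks before
    worst-case drift alone consumes the allowed measurement error.
    Both arguments must use the same temperature units."""
    return allowed_error / drift_per_year

# 5 degF/year drift against a 2 degF error budget (assumed numbers)
max_check_interval(5.0, 2.0)  # 0.4 years, i.e. check roughly every 5 months
```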

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    In a distillation column application, operations were perplexed and more than annoyed at the terrible column performance when the thermocouple was calibrated or replaced. It turns out operations had homed in on a temperature setpoint that had effectively compensated for the offset in the thermocouple measurement. Even after realizing the need for a new setpoint due to a more accurate thermocouple, it would take months to years to find the best setpoint.

    Temperature is critical for column control because it is an inference of composition. It is also critical for reactor control because the reaction rate that determines process capacity, and the selectivity that sets process efficiency and product quality, are greatly affected by temperature. In these applications where the operating temperature is below 400 degrees Fahrenheit, a resistance temperature detector (RTD) is a much better choice. Table 1 compares the performance of a thermocouple and RTD.

    Table 1: Temperature Sensor Precision, Accuracy, Signal, Size and Linearity

     

    Stepped thermowells should be specified with an insertion length greater than five times the tip diameter (L/D > 5) to minimize the error from heat conducted from the thermowell tip to the pipe or equipment connection, and an insertion length less than 20 times the tip diameter (L/D < 20) to minimize vibration from wake frequencies. Calculations on length should be done by the supplier to confirm that heat conduction error and vibration damage are not problems. Stepped thermowells reduce the error and damage and provide a faster response. Spring-loaded grounded thermocouples, as seen in Figure 1, with minimum annular clearance between the sheath and the thermowell interior walls provide the fastest response, minimizing errors introduced by the sensor tip temperature lagging the actual process temperature.
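    The L/D guidance above is easy to encode as a sanity check. The bounds are taken from the text; supplier wake frequency and stress calculations are still required:

```python
def thermowell_ld_ok(insertion_length, tip_diameter):
    """True if 5 < L/D < 20: long enough to limit conduction error from
    the tip to the connection, short enough to limit wake-induced
    vibration. Lengths must use the same units."""
    ld = insertion_length / tip_diameter
    return 5.0 < ld < 20.0

thermowell_ld_ok(250.0, 25.0)  # L/D = 10, within both bounds
```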

     

    Figure 1: Spring loaded compression fitting for sheathed TC or RTD

     

    Thermowell material must provide corrosion resistance and, if possible, a thermal conductivity that minimizes conduction error or response time, whichever is most important. The tapered tip of the thermowell must be close to the centerline of the pipe, with the tapered portion of the thermowell completely past the equipment wall, including any baffles. For columns, use the location showing the largest and most symmetrical change in temperature for an increase and decrease in manipulated flow. Simulations can help find this, but it is wise to have several connections to confirm the best location by field tests. The tip of the thermowell must see the liquid, which may require a longer extension length or mounting on the opposite side of the downcomer to avoid the tip being in the vapor phase due to the drop in level at the downcomer.

    For TCs above 600 degrees Celsius, ensure the sheath material is compatible with the TC type. For TCs above the temperature limit of sheaths, use the ceramic material with the best thermal conductivity and a design that minimizes measurement lag time. For TCs above the temperature limit of sheaths with gaseous contaminants or reducing conditions, use primary (outer) and secondary (inner) protection tubes, purged if possible, to prevent contamination of the TC element and provide a faster response.

    The best location for a thermowell in small diameter pipelines (e.g., less than 12 inches) is in a pipe elbow facing upstream to maximize insertion length at the center of the pipe. If abrasion from solids is an issue, the thermowell can be installed in the elbow facing downstream, but a greater length is needed to reduce noise from swirling.  If a pipe is half filled, the installation should ensure the narrowed diameter of the stepped thermowell is in liquid and not vapor.

    The location of a thermowell must be sufficiently downstream of a junction of streams or a heat exchanger tube-side outlet to enable remixing of the streams. The location must not be too far downstream due to the increase in transportation delay, which for plug flow is the residence time: the pipe volume between the outlet or junction and the sensor location divided by the pipe flow (volume/flow). For a length that is 25 times the pipe diameter (L/D = 25), the increase in loop deadtime of a few seconds is not as detrimental as a poor signal-to-noise ratio from poor uniformity. For desuperheaters, to prevent water droplets from creating noise, the thermowell must provide a residence time greater than 0.3 seconds, which for high gas velocities can require a much greater distance than for liquid heat exchangers.
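    The transportation delay described above is just the pipe volume between the junction and the sensor divided by the volumetric flow. A sketch with hypothetical dimensions:

```python
import math

def transport_delay(pipe_diameter, distance, volumetric_flow):
    """Plug-flow deadtime (seconds) = pipe volume / volumetric flow.
    Units must be consistent (here meters and cubic meters per second)."""
    volume = math.pi * (pipe_diameter / 2.0) ** 2 * distance
    return volume / volumetric_flow

# 0.2 m pipe, sensor 25 diameters (5 m) downstream, 0.05 m3/s of flow
transport_delay(0.2, 5.0, 0.05)  # about 3.1 s of added loop deadtime
```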

    For greater reliability and better diagnostics dual isolated sensing elements can be used but the more effective solution is redundant installations of thermowells and transmitters. The middle signal selection of three completely redundant measurements offers best reliability and least effect of drift, noise, repeatability and slow response. The measurement from middle signal selection will be valid for any type of failure of one measurement. There is also considerable knowledge gained to head off problems from comparison of each measurement to middle.

    Drift in the sensor shows up as a different average controller output at the same production rate assuming there is no fouling or change in raw materials. Poor repeatability in the sensor shows up as excessive variability in temperature controller output. For very tight control where the controller gain is high, sensor variability is most apparent in the controller output assuming the controller is tuned properly and the valve has a smooth consistent response.

    For much more on calibration and temperature measurement see the Beamex e-book Calibration Essentials and Rosemount’s The Engineer’s Guide to Industrial Temperature Measurement.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 12 Sep 2018

    How to Calibrate a Thermocouple

    The post How to Calibrate a Thermocouple first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the
    ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.


    Daniel Brewer, one of our newest protégés, has over six years of industry experience as an I&E technician. He attended the University of Kansas process instrumentation and control online courses. Daniel’s questions focus on aspects affecting thermocouple accuracy.


    Daniel Brewer’s Question

    How do you calibrate a thermocouple transmitter? How do you simulate a thermocouple? When do you use a zero degree reference junction? What if your measuring junction temperature varies?

    Hunter Vegas’ Answer

    Most people use a thermocouple simulator to calibrate temperature transmitters. You can usually set them to generate a wide selection of thermocouple types. Just make sure the thermocouple lead you use to connect the simulator to the transmitter is the right kind of wire.

    “Calibrating” the thermocouple is another matter – realistically it either works or it doesn’t. You can pull it and put it in a calibration bath, though very few people actually do that. However, if the measurement is critical, most will take the time to put the thermocouple in a bath or dry block, or at least cross-check the reading against another thermocouple or some other means.

    The zero degree junction is a bit more complicated. Basically, any time two dissimilar metals are connected, a slight millivolt signal is generated. That is what a thermocouple is – two dissimilar metals welded together that generate varying voltages depending on the temperature at the junction. When you run a thermocouple circuit, you try to use the same metals as the thermocouple for the whole circuit – that is, you run thermocouple wire that matches the thermocouple and you use special thermocouple terminal blocks of the same kind. This eliminates any extra junctions – the same metal is always connected to itself. However, at some point you have to hook up to some kind of device that has copper terminal blocks (transmitter, indicator, etc.). Unfortunately, this creates another thermocouple junction where the copper touches the wires. That junction will impact the reading and will also fluctuate with temperature, so the error will be variable.

    To fix this, most devices have a cold junction compensation circuit built in that automatically senses the temperature of the terminal block and subtracts the effect from the reading. Nearly every transmitter and readout device has it built in as a standard feature now – only older equipment would lack it.
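    As a rough illustration of the compensation arithmetic, the sketch below uses a single linearized Seebeck coefficient for a type K thermocouple near ambient. Real devices use the standard per-type polynomial tables, so the constant and helper function here are simplifying assumptions, not a field-ready conversion.

```python
# Sketch of cold junction compensation (CJC) arithmetic.
# ASSUMPTION: a single linear Seebeck coefficient (~0.041 mV/degC for
# type K near ambient); real instruments use per-type polynomial tables.

SEEBECK_K_MV_PER_C = 0.041  # approximate type K sensitivity, mV/degC

def compensated_temperature(measured_mv, terminal_block_temp_c):
    """Add back the emf of the parasitic copper junction, then convert
    the 0 degC-referenced millivolts to temperature (linearized)."""
    cjc_mv = SEEBECK_K_MV_PER_C * terminal_block_temp_c
    total_mv = measured_mv + cjc_mv
    return total_mv / SEEBECK_K_MV_PER_C
```

    For example, 4.10 mV measured at a 25 degC terminal block works out to roughly 125 degC under this linear approximation.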


    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.


    Greg McMillan’s Answer

    The error from a properly calibrated smart temperature transmitter with the correct span is generally negligible compared to the noise and errors from the sensor, signal wiring, and connections. The use of Class 1 special grade instead of Class 2 standard grade thermocouples and extension lead wires enables an accuracy that is 50 percent better. The use of thermocouple input cards instead of smart transmitters introduces large errors due to the large spans and the inability to individualize the calibrations.

    Thermocouple (TC) drift can vary from 1 to 20 degrees Fahrenheit per year and the repeatability can vary from 1 to 8 degrees Fahrenheit depending upon the TC type and application conditions. For critical operations demanding high accuracy, the frequency of sensor calibrations needed is problematic. While a dry block calibrator is faster than a wet bath and can cover a higher temperature range, the removal of the sensor from the process is disruptive to operations, and the time required compared to a simple transmitter calibration is still considerable. The best bet is a single-point temperature check to compensate for the offset due to drift and manufacturing tolerances.
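    The single-point check reduces to simple bias arithmetic; the helper below is a hypothetical illustration (the function name and interface are invented here, not from any calibration package):

```python
# Sketch: single-point offset correction for sensor drift.
# A one-point process check against a reference thermometer yields a
# bias that is applied to subsequent readings.

def make_offset_correction(indicated, reference):
    """Capture the bias found at a single check point and return a
    function that applies it to later readings."""
    bias = reference - indicated
    def corrected(reading):
        return reading + bias
    return corrected
```

    If the loop indicates 152.3 when the reference reads 150.0, subsequent readings are corrected down by 2.3.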

    In a distillation column application, operations were perplexed and more than annoyed at the terrible column performance when the thermocouple was calibrated or replaced. It turns out operations had homed in on a temperature setpoint that had effectively compensated for the offset in the thermocouple measurement. Even after realizing the need for a new setpoint due to a more accurate thermocouple, it would take months to years to find the best setpoint.

    Temperature is critical for column control because it is an inference of composition. It is also critical for reactor control because the reaction rate that determines process capacity and the selectivity that sets process efficiency and product quality are greatly affected by temperature. In these applications, where the operating temperature is below 400 degrees Fahrenheit, a resistance temperature detector (RTD) is a much better choice. Table 1 compares the performance of a thermocouple and an RTD.


    Table 1: Temperature Sensor Precision, Accuracy, Signal, Size and Linearity


    Stepped thermowells should be specified with an insertion length greater than five times the tip diameter (L/D > 5) to minimize the conduction error from heat flowing from the thermowell tip to the pipe or equipment connection, and an insertion length less than 20 times the tip diameter (L/D < 20) to minimize vibration from wake frequencies. Supplier calculations on length should be done to confirm that heat conduction error and vibration damage are not a problem. Stepped thermowells reduce the error and damage and provide a faster response. Spring-loaded grounded thermocouples, as seen in Figure 1, with minimum annular clearance between the sheath and the thermowell interior walls provide the fastest response, minimizing the errors introduced by the sensor tip temperature lagging the actual process temperature.
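    The L/D rule of thumb can be expressed as a quick screening check. This sketch is illustrative only; it does not replace the supplier's wake frequency and conduction calculations (e.g., per the ASME PTC 19.3 TW thermowell standard):

```python
# Screening check of the stepped-thermowell rule of thumb from the
# text: L/D > 5 limits conduction error, L/D < 20 limits
# wake-frequency vibration.  Illustrative only.

def insertion_length_ok(insertion_length, tip_diameter):
    """True when 5 < L/D < 20 (any consistent length units)."""
    ratio = insertion_length / tip_diameter
    return 5.0 < ratio < 20.0
```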


    Figure 1: Spring loaded compression fitting for sheathed TC or RTD


    Thermowell material must provide corrosion resistance and, if possible, a thermal conductivity that minimizes conduction error or response time, whichever is most important. The tapered tip of the thermowell must be close to the center line of the pipe, with the tapered portion of the thermowell completely past the equipment wall, including any baffles. For columns, the location showing the largest and most symmetrical change in temperature for an increase and decrease in manipulated flow should be used. Simulations can help find this, but it is wise to have several connections to confirm the best location by field tests. The tip of the thermowell must see the liquid, which may require a longer extension length or mounting on the opposite side of the downcomer to avoid the tip being in the vapor phase due to the drop in level at the downcomer.

    For TCs above 600 degrees Celsius, ensure the sheath material is compatible with the TC type. For TCs above the temperature limit of sheaths, use the ceramic material with the best thermal conductivity and a design that minimizes measurement lag time. For TCs above the temperature limit of sheaths with gaseous contaminants or reducing conditions, use primary (outer) and secondary (inner) protection tubes, possibly purged, to prevent contamination of the TC element and provide a faster response.

    The best location for a thermowell in small diameter pipelines (e.g., less than 12 inches) is in a pipe elbow facing upstream to maximize the insertion length in the center of the pipe. If abrasion from solids is an issue, the thermowell can be installed in the elbow facing downstream, but a greater length is needed to reduce noise from swirling. If a pipe is half filled, the installation should ensure the narrowed diameter of the stepped thermowell is in liquid and not vapor.

    The location of a thermowell must be sufficiently downstream of a joining of streams or a heat exchanger tube side outlet to enable remixing of the streams. The location must not be too far downstream due to the increase in transportation delay, which is the plug-flow residence time: the pipe volume between the outlet or junction and the sensor location divided by the pipe flow (volume/flow). For a length that is 25 times the pipe diameter (L/D = 25), the increase in loop dead time of a few seconds is not as detrimental as a poor signal-to-noise ratio from poor uniformity. For desuperheaters, to prevent water droplets from creating noise, the thermowell must be located to provide a residence time greater than 0.3 seconds, which for high gas velocities can be much farther than the distance required for liquid heat exchangers.
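    The transportation delay arithmetic (pipe volume between junction and sensor divided by volumetric flow) can be sketched as:

```python
# Sketch of the transportation delay described above: plug-flow
# residence time between the junction and the sensor location.
import math

def transport_delay_s(pipe_id_m, distance_m, flow_m3_per_s):
    """Dead time added by placing the sensor distance_m downstream."""
    pipe_volume_m3 = math.pi * (pipe_id_m / 2.0) ** 2 * distance_m
    return pipe_volume_m3 / flow_m3_per_s
```

    For a 0.1 m inside diameter pipe with the sensor 2.5 m downstream (L/D = 25) and a flow of 0.01 m3/s, the added dead time is about 2 seconds.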

    For greater reliability and better diagnostics, dual isolated sensing elements can be used, but the more effective solution is redundant installations of thermowells and transmitters. Middle signal selection of three completely redundant measurements offers the best reliability and the least effect of drift, noise, repeatability, and slow response. The measurement from middle signal selection remains valid for any type of failure of one measurement. There is also considerable knowledge to be gained to head off problems by comparing each measurement to the middle value.
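    Middle signal selection and the per-sensor deviations used for diagnostics are straightforward to sketch (a generic illustration):

```python
# Sketch of middle (median) signal selection for three redundant
# measurements: the selected value stays valid even if one sensor
# fails high, fails low, or drifts.

def middle_select(a, b, c):
    """Median of three redundant measurements."""
    return sorted((a, b, c))[1]

def deviations_from_middle(a, b, c):
    """Per-sensor deviation from the selected middle value, useful
    for spotting drift in one sensor before it matters."""
    m = middle_select(a, b, c)
    return [x - m for x in (a, b, c)]
```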

    Drift in the sensor shows up as a different average controller output at the same production rate, assuming there is no fouling or change in raw materials. Poor repeatability in the sensor shows up as excessive variability in the temperature controller output. For very tight control where the controller gain is high, sensor variability is most apparent in the controller output, assuming the controller is tuned properly and the valve has a smooth, consistent response.

    For much more on calibration and temperature measurement see the Beamex e-book Calibration Essentials and Rosemount’s The Engineer’s Guide to Industrial Temperature Measurement.


    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).


    About the Author
     Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg:
    LinkedIn


    • 2 Sep 2018

    When is Reducing Variability Wrong?

    The post, When is Reducing Variability Wrong?, first appeared on the ControlGlobal.com Control Talk blog.

    Having the blind wholesale goal of reducing variability can lead to doing the wrong thing, reducing plant safety and performance. Here we look at some common mistakes that users may not realize they are making until they have a better concept of what is really going on. We seek to provide some insightful knowledge here to keep you out of trouble.

    Is a smoother data historian plot or a statistical analysis showing less short-term variability good or bad? The answer is bad for the following situations, which mislead users and data analytics.

    First of all, the most obvious case is surge tank level control. Here we want to maximize the variation in level to minimize the variation in manipulated flow, typically to downstream users. This objective has the positive name of absorption of variability. What this really indicates is the principle that control loops do not make variability disappear but transfer variability from a controlled variable to a manipulated variable. Process engineers often have a problem with this concept because they think of setting flows per a Process Flow Diagram (PFD) and are reluctant to let a controller freely move them per some algorithm they do not fully understand. This is seen in predetermined sequential additions of feeds or heating and cooling in a batch operation, rather than allowing a concentration or temperature controller to do what is needed via fed-batch control. No matter how smart a process engineer is, not all of the situations, unknowns and disturbances can be accounted for continuously. This is why fed-batch control is called semi-continuous. I have seen process engineers, believe it or not, sequence air flows and reagent flows to a batch bioreactor rather than going to dissolved oxygen or pH control. We need to teach chemical and biochemical engineers process control fundamentals, including the transfer of variability.

    The variability of a controlled variable is minimized by maximizing the transfer of variability to the manipulated variable. Unnecessary sharp movements of the manipulated variable can be prevented by a setpoint rate-of-change limit on analog output blocks for valve positioners or VFDs, or directly on other secondary controllers (e.g., flow or coolant temperature), and by the use of external-reset feedback (e.g., dynamic reset limit) with fast feedback of the actual manipulated variable (e.g., position, speed, flow, or coolant temperature). With external-reset feedback, there is no need to retune the primary process variable controller.
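    A setpoint rate-of-change limit is just a per-scan clamp on how far the working setpoint may move toward its target. The sketch below is a generic illustration, not the analog output block of any particular control system:

```python
# Sketch of a setpoint rate-of-change limit: each scan, move the
# working setpoint toward the target by at most max_rate * dt.

def rate_limited_sp(current_sp, target_sp, max_rate_per_s, dt_s):
    """One scan of the rate limiter; units of max_rate_per_s are
    setpoint units per second, dt_s is the scan interval."""
    step = max_rate_per_s * dt_s
    delta = target_sp - current_sp
    if delta > step:
        return current_sp + step
    if delta < -step:
        return current_sp - step
    return target_sp  # within one step of the target: land on it
```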

    Data analytics programs need to use manipulated variables in addition to controlled variables to indicate what is happening. For tight control and infrequent setpoint changes to a process controller, what is really happening is seen in the manipulated variable (e.g., analog output).

    A frequent problem is data compression in a data historian that conceals what is really going on. Hopefully, this only affects the trend displays and not the actual variables being used by a controller.

    The next most common problem has been extensively discussed by me, so at this point you may want to move on to more pressing needs. This problem is the excessive use of signal filters, which may be even more insidious because the controller does not see a developing problem as quickly. A signal filter that is less than the largest time constant in the loop (hopefully in the process) creates dead time. If the signal filter becomes the largest time constant in the loop, the previously largest time constant creates dead time. Since the controller tuning has no idea where the largest time constant resides, the controller gain can be increased, which combined with the smoother trends can lead one to believe the large filter was beneficial. The key here is a noticeable increase in the oscillation period, particularly if the reset time was not increased. Signal filters become increasingly detrimental as the process loses self-regulation. Integrating processes such as level, gas pressure and batch temperature are particularly sensitive. Extremely dangerous is the use of a large filter on the temperature measurement for a highly exothermic reaction. If the PID gain window (ratio of maximum to minimum PID gain) is reduced by measurement lag to the point of not being able to withstand nonlinearities (e.g., ratio less than 6), there is a significant safety risk.
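    The smoothing-versus-delay trade-off can be seen with a discrete first-order filter: after a unit step the filtered value has only reached about 63 percent of the change at one filter time constant, which is the delayed recognition described above.

```python
# Sketch of a discrete first-order (exponential) signal filter,
# showing how a large filter time constant slows recognition of a
# step change in the measurement.

def filter_step_response(tau_s, dt_s, n_steps):
    """Filtered values after a unit input step (input = 1.0 at t=0)."""
    alpha = dt_s / (tau_s + dt_s)  # filter factor per scan
    y = 0.0
    out = []
    for _ in range(n_steps):
        y += alpha * (1.0 - y)
        out.append(y)
    return out
```

    For a 10-second filter sampled every 0.1 seconds, the output reaches only about 0.63 after 10 seconds (100 scans), so a real process excursion is seen late and attenuated.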

    A slow thermowell response, often due to a sensor that is loose or not touching the bottom of the thermowell, causes the same problem as a signal filter. An electrode that is old or coated can have a time constant that is orders of magnitude larger (e.g., 300 sec) than that of a clean new pH electrode. If the velocity is slightly low (e.g., less than 5 fps), pH electrodes become more likely to foul, and if the velocity is very low (e.g., less than 0.5 fps), the electrode time constant can increase by an order of magnitude (e.g., 30 sec) compared to an electrode seeing the recommended velocity. If the thermowell or electrode is hidden by a baffle, the response is smoother but not representative of what is actually going on.

    For gas pressure control, any measurement filter, including that due to transmitter damping, generally needs to be less than 0.2 sec, particularly if volume boosters on the valve positioner output(s) or a variable frequency drive are needed for a faster response.

    Practitioners experienced in Model Predictive Control (MPC) want data compression and signal filters completely removed so that the noise can be seen and a better identification of process dynamics, especially dead time, is possible.

    Virtual plants can show how fast the actual process variables should be changing, revealing poor analyzer or sensor resolution, slow response time, and excessive filtering. In general, you want measurement lags to total less than 10 percent of the total loop dead time or less than 5 percent of the reset time. However, you cannot get a good idea of the loop dead time unless you remove the filter and look for the time it takes to see a change in the right direction, beyond the noise, after a controller setpoint or output change.
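    The 10-percent-of-dead-time and 5-percent-of-reset-time guidelines can be combined into a screening check (an illustrative helper, not a standard function):

```python
# Screening check of the measurement lag budget stated above: total
# lags under 10% of the total loop dead time, or under 5% of the
# reset time.  Illustrative only.

def measurement_lags_ok(lag_time_constants_s, total_dead_time_s, reset_time_s):
    """True when the summed lags meet either guideline."""
    total = sum(lag_time_constants_s)
    return total < 0.10 * total_dead_time_s or total < 0.05 * reset_time_s
```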

    For more on the deception caused by a measurement time constant, see the Control Talk blog “Measurement Attenuation and Deception.”

    • 29 Aug 2018

    Key Insights to Control System Dynamics

    The post Key Insights to Control System Dynamics first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the
    ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.


    Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks some questions about dynamics, which play such a big role in improving control systems. The questions are basic but have enormous practical implications, as seen in the answers.

    Caroline Cisneros’ Question

    Is an increase/decrease in process gain, time constant, dead time, controller gain, reset, and rate good or bad in terms of effects on loop performance?

    Greg McMillan’s Answers

    This is an excellent question with widespread significant implications. I offer here some key insights that can lead to better career and system performance. The first obstacle is terminology that over the years has resulted in considerable misconceptions and missing recognition of the source and nature of problems and the solutions needed. To overcome what is preventing a more common and better understanding see the Control Talk Blog Understanding Terminology to Advance Yourself and the Automation Profession. Also, for much more on how all of these dynamic terms affect what you do with your PID and the consequences in loop performance see the ISA Mentor post How to Optimize PID Settings and Options.

    Process Gain

    Increases in process gain can be helpful but challenging.

    In distillation control, the tray that shows the largest temperature change for a change in reflux to feed ratio (largest process gain) in both directions has the best temperature to be used as the controlled variable. This location offers much better control because of the increased sensitivity of temperature that is an inferential measurement of column composition. Tests are done in simulations and in plants to find the best locations for temperature sensors.
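    A hypothetical scoring of tray responses, rewarding both the magnitude and the symmetry of the temperature change for equal steps up and down in the manipulated flow, might look like the sketch below (the scoring function is invented for illustration and assumes nonzero responses on each tray):

```python
# Hypothetical sketch of control tray selection: given each tray's
# temperature change for an equal increase and decrease in the
# manipulated flow, prefer the largest, most symmetrical response.

def best_tray(delta_up, delta_down):
    """delta_up/delta_down: per-tray temperature changes for +/- steps.
    Returns the index of the tray with the best combined score."""
    def score(up, down):
        magnitude = min(abs(up), abs(down))              # sensitivity both ways
        symmetry = min(abs(up), abs(down)) / max(abs(up), abs(down))
        return magnitude * symmetry
    scores = [score(u, d) for u, d in zip(delta_up, delta_down)]
    return scores.index(max(scores))
```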


    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.


    In pH control, a titration curve (plot of pH versus ratio of reagent added to sample volume) with a slope that goes from flat to incredibly steep due to strong acids and strong bases can create an incredibly large and variable process gain. The X axis (abscissa) is converted to a ratio of reagent flow to influent flow, taking into account engineering units. The shape stays the same, and if volumetric units are used and concentrations are the same in the lab and plant, the X axis has the same numeric values. The slope of the curve is the process gain. The slope, and thus the process gain, can theoretically change by a factor of 10 for every pH unit deviation from neutrality for a strong acid and strong base. The straight, nearly vertical line at 7 pH seen in a plot of a laboratory titration curve is actually another curve if you zoom in on the neutral region, as seen in Figure 1. If only a few data points are provided between 8 and 10 pH (a common problem), you will not see the curve. The lab needs to be instructed to dramatically reduce the size of the reagent addition as the titrated pH gets closer to 7 pH.


    Figure 1: Titration Curve for Strong Acid and Strong Base


    The steep slope provides incredible sensitivity to changes in hydrogen ion concentration, but less than ideal mixing will create enormous noise, and any stiction in the control valve will create enormous oscillations. The amplitude from stiction can be larger than 2 pH for even the best control valve. Even if we had perfect mixing and a perfect control valve, we would not appreciate the orders of magnitude improvement in hydrogen ion control because we are only looking at what we measure, which is pH. Thus, for pH control we seek to have weak acids and weak bases and conjugate salts to moderate the slope of the titration curve. There is also a flow ratio gain that occurs for all composition and temperature control loops, as detailed in the Control Talk blog Hidden Factor in our Most Important Control Loops.
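    The factor-of-10 change in slope per pH unit can be demonstrated from the charge balance for a strong acid and strong base. The excess-acid variable x below stands in for the reagent-to-feed ratio axis; this is a simplified model at 25 degC that neglects activity effects.

```python
# Sketch: process gain (titration curve slope) for a strong acid /
# strong base, from the charge balance [H+] - Kw/[H+] = x, where x is
# the excess strong-acid normality (x < 0 means excess base).
import math

KW = 1.0e-14  # water ion product at 25 degC (simplifying assumption)

def ph_from_excess_acid(x):
    """pH from the positive root of the charge balance quadratic."""
    h = (x + math.sqrt(x * x + 4.0 * KW)) / 2.0
    return -math.log10(h)

def titration_slope(x, dx=1e-9):
    """Local process gain d(pH)/dx by central difference."""
    return (ph_from_excess_acid(x + dx) - ph_from_excess_acid(x - dx)) / (2.0 * dx)
```

    The slope near 5 pH (x = 1e-5) is about ten times the slope near 4 pH (x = 1e-4), matching the factor-of-10-per-pH-unit statement.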

    Often the term “process gain” includes the effect of more than the process. The better term is “open loop gain,” which is the product of the manipulated variable gain (e.g., valve gain), process gain, and measurement gain (e.g., 100%/span). The valve gain (slope of the installed flow characteristic, which is the flow change in engineering units per signal change in percent) must not be too small (e.g., large disk or ball valve rotations where the installed characteristic is flat) or too large (e.g., quick opening characteristic) because the stiction or backlash expressed as a percent of signal translates to a larger amount of errant flow. Oversized valves cause an even greater problem because of operation near the closed position, where stiction is greatest from seal and seat friction. Small measurement spans causing a high measurement gain may be beneficial when the accuracy of the measurement is a percent of span. The use of thermocouple and RTD input cards, rather than transmitters with spans narrowed to the range of interest, introduces too much error. In conclusion, automation system gains must not be too small or too large. Too small a valve gain or measurement gain is problematic because of less sensitivity and greater error, reducing the ability to accurately see and completely correct a process change. Too high a valve gain is also bad from the standpoint of an increase in the size of the flow change associated with backlash and stiction. An increase in this flow change accordingly reduces the precision of a correction for a process change and increases the amplitude of oscillations (e.g., limit cycle).

    Process Time Constant

    An increase in the largest (primary) time constant in a self-regulating process (a process that reaches steady state in manual for no continuing upsets) is beneficial because it enables a large PID gain. The process time constant also slows down process input disturbances, giving the PID more time to catch up. While this proportionally decreases peak and integrated errors, a large time constant is perceived by some as bad. The tuning is more challenging, requiring greater patience and time commitment for open loop tests that seek to identify the primary time constant. The time for identification of the dynamics needed to tune the loop can be reduced by 80 percent or more for some well mixed vessel temperature loops by identifying the dead time and initial ramp rate (treating the process like an integrating process). It has been verified by extensive test results that a loop with a process time constant larger than 4 times the dead time should be classified as near-integrating. Integrating process tuning rules are consequently used to enable more immediate feedback correction that can potentially stop a process excursion within 4 dead times. The tuning parameter changes from a closed loop time constant for self-regulating process tuning rules to an arrest time for integrating process tuning rules, in order to take advantage of the ability to increase the proportional and integral action to reject load disturbances.
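    The classification rule can be stated as a one-line check, with the threshold taken straight from the text (an illustrative helper, not a tuning package function):

```python
# Sketch of the near-integrating classification rule: a primary time
# constant larger than 4x the total loop dead time is treated as
# near-integrating and tuned with integrating-process rules.

def classify_process(primary_time_constant_s, total_dead_time_s):
    if primary_time_constant_s > 4.0 * total_dead_time_s:
        return "near-integrating"
    return "self-regulating"
```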

    While the largest time constant is beneficial if it is in the process, the second largest process time constant effectively creates dead time and is detrimental. It can be largely cancelled by the rate time setting. Going from a single loop to a cascade loop, where a secondary loop encloses a process time constant smaller than the largest time constant, converts a term with a bad effect (the secondary time constant increasing dead time in the original single loop) into a term with a good effect (the primary time constant slowing down disturbances in the secondary loop). The reduction in dead time also decreases the ultimate period of the primary loop.

    For true integrating and runaway processes, any time constant is detrimental. It becomes more important to cancel the time constant by a rate time equal to or larger than the time constant.

    Any time constant in the automation system is detrimental. A measurement and control valve time constant slows down the recognition and correction, respectively, of a disturbance. An automation system time constant also effectively creates dead time. Signal filters and transmitter damping settings add time constants. See Figure 1 to help recognize the many time constants in an automation system.

    A measurement time constant larger than the process time constant can be deceptive in that, for self-regulating processes, it enables a larger PID gain, and the amplitude of oscillations may look smaller due to the filtering action. However, the key realization is that the actual process error (amplitude in engineering units) is larger and the period of the oscillation is longer. All measurement and valve time constants should be less than 10% of the total loop dead time for the effect on loop performance to be negligible. This objective for a valve time constant is difficult to achieve in liquid flow, pressure control, and compressor surge control because the process dead times in these applications are so small. A valve time constant becomes large for large signal changes (e.g., > 40%) due to stroking time, particularly for large valves, and for small signal changes (e.g., < 0.4%) due to backlash, stiction, and poor positioner and actuator sensitivity. For more on how to identify and fix valve response problems, see the article How to specify valves and positioners that don’t compromise control.
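    The 10% rule of thumb above can be sketched as a simple check (an illustrative helper of my own, not from the original):

```python
def lag_is_negligible(lag_time_constant_s: float, total_dead_time_s: float) -> bool:
    """Rule of thumb: a measurement or valve time constant has a
    negligible effect on loop performance when it is under 10% of the
    total loop dead time."""
    if total_dead_time_s <= 0:
        raise ValueError("total dead time must be positive")
    return lag_time_constant_s < 0.10 * total_dead_time_s
```

    A 0.5-second transmitter damping setting would pass for a loop with 10 seconds of dead time but fail badly for a fast flow loop with 1 second of dead time.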

    Dead Time

    Dead time anywhere in the loop is detrimental, creating a delay in the recognition or correction of a change in the process variable. For a setpoint change, dead time in the manipulated variable (e.g., manipulated flow) or process causes a delay in the start of the change in the process, and dead time in the measurement or controller creates additional delays in the appearance of the process variable response to the setpoint change. The minimum possible peak error and integrated error for an unmeasured load disturbance are proportional to the total loop dead time and dead time squared, respectively. The total loop dead time is the sum of all dead times in the loop. The dead time from digital devices and algorithms is ½ the update interval (execution period or scan time) plus the latency (the time required to communicate a change in digital output after a change in digital input). Most digital devices have negligible latency. Simulation tests that always have the disturbance arrive immediately before, instead of after, the PID execution do not show the full adverse effect of PID execution rate, which leads to misconceptions about that effect. On average, the disturbance arrives in the middle of the interval between PID executions, which is consistent with the dead time being ½ the execution interval for negligible latency. The latency for complex modules with complex calculations may approach the update interval. The latency for most at-line analyzers is the analyzer cycle time, since the analysis is not completed until the end of the cycle. The result is a dead time that is 1.5 times the cycle time. Most of a time constant much smaller than the process time constant, or in an integrating process, can be taken as equivalent dead time. Since dead time is nearly always underestimated, I simply sum up all of the small time constants as equivalent dead time. The block diagram in Figure 2 shows many but not all of the sources of dead time.
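    The dead time bookkeeping described above can be sketched as follows (function names and units are illustrative assumptions; the relationships follow the text):

```python
def digital_dead_time(update_interval_s: float, latency_s: float = 0.0) -> float:
    """Dead time from a digital device or algorithm: half the update
    interval plus the latency (often negligible)."""
    return 0.5 * update_interval_s + latency_s

def analyzer_dead_time(cycle_time_s: float) -> float:
    """At-line analyzer: the latency equals the cycle time, since the
    analysis finishes at the end of the cycle, giving 1.5x the cycle time."""
    return digital_dead_time(cycle_time_s, latency_s=cycle_time_s)

def total_loop_dead_time(contributions_s: list) -> float:
    """Total loop dead time is the sum of all pure delays plus small time
    constants conservatively counted as equivalent dead time."""
    return sum(contributions_s)
```

    For example, a 60-second analyzer cycle alone contributes 90 seconds of dead time, usually dwarfing every other term in the sum.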

     

    Figure 2: Automation System and Process Dynamics in a Control Loop

     

    The dead time from backlash and stiction is insidious in that it does not show up for step changes in signal. The dead time is the dead band or resolution limit divided by the signal rate of change.
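    This relationship can be sketched as a minimal helper (percent-of-span units are an assumption of the illustration):

```python
def dead_time_from_dead_band(dead_band_pct: float,
                             signal_rate_pct_per_s: float) -> float:
    """Dead time from backlash or stiction: the dead band (or resolution
    limit) divided by the signal rate of change. A step change (very
    high rate) makes this dead time vanish, which is why step tests
    fail to reveal it; a slowly ramping signal makes it large."""
    if signal_rate_pct_per_s == 0:
        return float("inf")  # a stalled signal never works through the dead band
    return dead_band_pct / abs(signal_rate_pct_per_s)
```

    A 0.5% dead band with a controller output ramping at 0.1%/s adds 5 seconds of dead time that a step test would never show.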

    Simulations typically do not have enough dead time because volumes are perfectly mixed and the dead time is missing from transportation delays (particularly from dip tubes and piping to sensors or sample lines to analyzers), valve response time, backlash, stiction, sensor lags, thermowell lags, transmitter damping, wireless update times, and analyzer cycle times.

    For pH applications with extremely large and nonlinear process gains due to strong acids and strong bases, there is a particularly great need to minimize the total loop dead time. This reduces the pH excursion on the titration curve, reducing the extent of the operating point nonlinearity seen. Poor mixing, piping design, valve response, and coated, dehydrated or old electrodes can introduce incredibly large dead times, killing a pH loop. My early specialty being pH control sensitized me to making sure the total system design, including equipment, agitation, and piping, would enable a pH loop to do its job by minimizing dead time. For much more on the implications for total system design from a very experience-oriented view, see the ISA book Advanced pH Measurement and Control.

    PID Gain

    The proportional mode provides a contribution to the PID output that is the error multiplied by the PID gain. Except for dead time dominant loops, humans tend not to use enough proportional action due to the perceived bad aspects in the reasons listed below to decrease the PID gain. For more on the missed opportunities, see the Control Talk blog post Surprising Gains from PID Gain.

    Reasons to increase PID gain:

    1. Reduce peak and integrated errors from load disturbances.
    2. Add negative feedback action missing in the process (e.g., near and true integrating and runaway processes) and provide the needed overshoot of the PID output's final resting value.
    3. Provide sense of direction since a decrease in error reverses direction of PID output.
    4. Reduce the dead time from dead band (e.g., backlash) and resolution (e.g., stiction).
    5. Reduce limit cycle amplitude from dead band in loops with two integrators.
    6. Eliminate oscillation from poor actuator and positioner sensitivity.
    7. Make setpoint response faster for batch operations potentially reducing cycle time.
    8. Make secondary loop faster in rejecting disturbances and meeting primary loop demands.
    9. Stop slow oscillations in near and true integrating and runaway processes (product of gain and reset time must be greater than twice the inverse of integrating process gain).
    10. Get the right valve open as the PID output approaches the split range point.

    Reasons to decrease PID gain:

    1. Reduce abrupt responses in dead time dominant loops, and abrupt responses to setpoint changes in all loops, that upset operators and other loops. Setpoint rate limits on the analog output or secondary loop setpoint, and external-reset feedback (dynamic reset limit) with the manipulated variable as BKCAL_IN, can smooth out these changes without needing to retune the PID.
    2. Reduce resonance and interaction, which a high PID gain increases. Making the faster loops faster and eliminating oscillations through better tuning and better valves may alleviate this concern.
    3. Increases in the process gain or valve gain and dead time, or decreases in the primary time constant, necessitate a lower PID gain. In general, a gain margin of 6 or more is advised, achieved by a closed loop time constant or arrest time of 3 or more times the dead time.
    4. Eliminate overshoot of the PID output's final resting value in balanced and dead time dominant processes. While this is useful here, overshoot is needed in other processes.
    5. Reduce amplification of noise. A better solution is reducing the source of noise and using a judicious filter that is less than 10% of the dead time. Note that fluctuations in the PID output smaller than the resolution or sensitivity limit do not affect the process.
    6. Reduce faltering as the process variable approaches setpoint. Too much proportional action will momentarily halt the approach until integral action takes over, resuming the approach.

    PID Reset Time

    The integral mode provides a contribution to the PID output that is the integral of the error multiplied by the PID gain and divided by the reset time. External-reset feedback (dynamic reset limit) suspends this action (further changes in output from the integral mode) when the manipulated variable stops changing. Except for dead time dominant loops, humans tend to use too much integral action due to the perceived good aspects in the reasons listed below to decrease the reset time.

    Reasons to increase PID reset time (decrease reset action):

    1. Reduce the lack of a sense of direction that causes continual change for the same error sign.
    2. Reduce continual movement, since reset is never satisfied (the error is never exactly zero).
    3. Reduce overshoot of setpoint.
    4. Prevent SIS and relief activation from high pressure or high temperature.
    5. Stop slow oscillations in near and true integrating and runaway processes (the product of gain and reset time must be greater than twice the inverse of the integrating process gain).
    6. Get the right valve open as the PID output approaches the split range point.
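    The slow-oscillation criterion that appears in both the gain and reset lists (the product of PID gain and reset time must exceed twice the inverse of the integrating process gain) can be sketched as a check (an illustrative helper of my own):

```python
def avoids_slow_oscillations(pid_gain: float, reset_time_s: float,
                             integrating_gain_1_per_s: float) -> bool:
    """Check the criterion for near and true integrating and runaway
    processes: Kc * Ti > 2 / Ki, where Ki is the integrating process
    gain (1/s). Violating it causes slow rolling oscillations."""
    if integrating_gain_1_per_s <= 0:
        raise ValueError("integrating process gain must be positive")
    return pid_gain * reset_time_s > 2.0 / integrating_gain_1_per_s
```

    For an integrating process gain of 0.05 1/s, a gain of 2 with a 100-second reset time passes (200 > 40), while cutting the gain tenfold without lengthening the reset time fails the criterion.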

    Reasons to decrease PID reset time (increase reset action):

    1. Reduce integrated errors from load disturbances.
    2. Eliminate offset from setpoint.
    3. Keep a valve from opening until setpoint is reached. This is sometimes stated as an objective for surge control, but it requires a larger margin between the PID setpoint and the actual surge curve, resulting in less efficient operation until user flows increase, which closes the surge valve. A better solution is a smaller margin and the use of PID gain action to preemptively open the surge valve.
    4. Provide a gradual response with less reaction to noise.
    5. You have a dead time dominant process.
    6. You love Internal Model Control.

    PID Rate Time

    The derivative mode provides a contribution to the PID output that is the derivative of the error (PID on error structure) or the derivative of the process variable (PI on error and D on PV structure) multiplied by the PID gain and the rate time. It provides an anticipatory action, essentially projecting the PV one rate time into the future based on its current rate of change. Some plants have mistakenly decided not to use derivative action anywhere due to the perceived bad aspects in the reasons listed below to decrease the rate time. Good tuning software could have prevented this bad practice of allowing only PI control (rate time always zero).
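    The anticipatory projection described above can be sketched as a one-liner (illustrative only):

```python
def projected_pv(pv: float, pv_rate_per_s: float, rate_time_s: float) -> float:
    """Derivative action's anticipation: project the PV one rate time
    into the future using its current rate of change."""
    return pv + pv_rate_per_s * rate_time_s
```

    A PV at 50 rising at 0.2 units/s with a 10-second rate time is treated as if it were already at 52, which is how derivative action gets ahead of a runaway or an approach to setpoint.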

    Reasons to increase PID rate time:

    1. Provide anticipation of approach to setpoint reducing overshoot.
    2. Cancel out effect of secondary time constant.
    3. Reduce the dead time from backlash and stiction.
    4. Prevent runaway reactions.

    Reasons to decrease PID rate time:

    1. Reduce abrupt responses in dead time dominant loops, and abrupt responses to setpoint changes in all loops, that upset operators and other loops. Setpoint rate limits on the analog output or secondary loop setpoint, and external-reset feedback (dynamic reset limit) with the manipulated variable as BKCAL_IN, can smooth out these changes without needing to retune the PID.
    2. Prevent oscillations from rate time exceeding reset time for ISA Standard Form.
    3. Reduce amplification of noise. A better solution is reducing the source of noise and using a judicious filter that is less than 10% of the dead time. Note that fluctuations in the PID output smaller than the resolution or sensitivity limit do not affect the process.
    4. Reduce kick on setpoint change. A better solution is to use PID structure to eliminate derivative action on setpoint change (e.g., PI on error and D on PV).

     

     

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

     

     

    About the Author
     Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly “Control Talk” columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg:
    LinkedIn

     

    • 15 Aug 2018

    How to Measure pH in Ultra-Pure Water Applications

    The post How to Measure pH in Ultra-Pure Water Applications first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the
    ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

     

    Danny Parrott is an instrumentation and controls specialist at Spallation Neutron Source. Danny is a detail-oriented instrumentation and controls professional experienced in the areas of electrical, electronics and controls specification, installation, maintenance, and project planning. Danny’s question is important in dealing with the many challenges for reliable and accurate pH measurement in ultra-pure water and, more generally, in streams with exceptionally low conductivity.

    Danny Parrott’s Question

    What are some opinions, thoughts, or practical experience relating to pH measurements in ultra-pure water applications?

    Greg McMillan’s Answer

    Ultra-pure water applications pose special problems because of the exceptionally low conductivity of the fluid from the absence of ions. The consequences are extreme sensitivity to fluid velocity and spurious ions, unstable reference junction potentials, sample contamination, and loss of electrical continuity between the reference and measurement electrodes. Ultra-pure water and process fluids with near-zero conductivity threaten the continuity of the electrical circuit between the reference and measurement electrode terminals at the transmitter through an extraordinarily large electrical resistance (R8 in Figure 1). Figure 1, a functional electrical circuit diagram for a combination pH electrode showing the resistances and potentials, is a great way of recognizing nearly all of the potential sources of error in a pH measurement.

     

    Figure 1: pH Electrode Functional Electrical Circuit Diagram

     

    The solution for online measurements is to use a flowing junction reference electrode to provide a small fixed liquid junction potential in a low flow assembly for a combination electrode. The combination electrode assembly ensures a short fixed distance path of reference electrolyte to the measurement electrode and a small fixed fluid velocity. The assembly also provides mounting of an electrolyte reservoir that sustains a small fixed reference junction flow as shown in Figure 2. The flow of reference electrode electrolyte reduces the fluid velocity and electrical resistance (R8) in the fluid path and provides a much more constant liquid junction potential (E5) that does not jump or shift due to the appearance of spurious ions. The resistances and potentials in the diagram provide a wealth of information. The flow assembly also has a special cup holder for calibration with buffer solutions. A solution ground connection reduces the effect of ground potentials. Temperature compensation must be accurate and fast.

     

    Figure 2: Low Flow Assembly With Flowing Reference Junction for Low Conductivity pH Applications

     

    The pH measurement calibration needs to be checked and adjusted before installation and periodically thereafter by inserting the electrode(s) in buffer solutions. Making a pH measurement of a sample is very problematic because of contamination from glass beaker ions, absorption of carbon dioxide creating carbonic acid, and accumulation of electrolyte ions from the flowing junction. The sample volume needs to be large and the measurement made quickly to reduce the effect of accumulating ions. A closed plastic sample container is employed to minimize contamination. The same type of electrode(s) as in the online measurement should be used for the sample measurement so that reference junction potentials are consistent. Since these sample pH measurement requirements are rarely satisfied, buffers instead of process samples should be used for calibration checks.

     

    Join the ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.

     

    In exceptionally low conductivity process fluids, there is often not enough water content to keep the glass measurement electrode hydrated. Also, the activity of the hydrogen ion is severely decreased by the lack of water, and the extremely different dissociation constant of a non-aqueous solvent can cause a pH range that is outside the normal 0 to 14 pH range. For these applications, a flowing reference electrode is also needed, but an automatically retractable insertion assembly is useful to periodically retract, flush, soak, and calibrate the electrodes, reducing process exposure time and hydrating/rejuvenating the measurement electrode’s glass surface. For more on the challenges of semi-aqueous pH measurements, see the Control Talk article The wild side of pH measurement. For a much more complete view of what is needed for pH applications, see the ISA book Advanced pH Measurement and Control.

    For pH measurements used for process control, I recommend three pH assemblies and middle signal selection. Lower lifecycle costs from less frequent and more effective maintenance, plus better process performance, more than pay for the cost of the three measurements. Middle signal selection will inherently ignore a single measurement failure of any type and dramatically reduce the effect of spikes, noise, and the consequences of slow or insensitive glass electrodes. The middle selection also eliminates unnecessary calibration checks and provides much more intelligent knowledge of electrode performance, enabling the optimum time for calibration and replacement.
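    Middle signal selection is simply the median of the three measurements; a minimal sketch (the function name is my own):

```python
def middle_signal(ph_a: float, ph_b: float, ph_c: float) -> float:
    """Middle (median) signal selection for three pH electrodes.

    The middle value inherently ignores a single failure of any kind,
    high or low, and attenuates spikes and noise on any one electrode.
    """
    return sorted([ph_a, ph_b, ph_c])[1]
```

    With two healthy electrodes reading near 7 and one failed high at 12, the selected signal stays near 7 with no voting logic or failure diagnostics required.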

     

    To download a free PDF excerpt from Advanced pH Measurement and Control, click here.


     
