Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.

Tips for New Process Automation Folks
  • What Skill Sets Do You Need to Excel at IIoT Applications in an Automation Industry Career?

The post What Skill Sets Do You Need to Excel at IIoT Applications in an Automation Industry Career? first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Angela Valdes. The Industrial Internet of Things (IIoT) is a hot topic, as seen in the many feature articles published on it. The much greater availability of data is expected to provide the knowledge needed to sustain and improve plant safety, reliability and performance. Here we look at some of the practical issues and resources involved in achieving the expected IIoT benefits.

Angela Valdes is a recently added resource in the ISA Mentor Program. Angela is the automation manager of the Toronto office for SNC-Lavalin. She has over 12 years of experience in project leadership and execution, framed under PMI, lean, agile and stage-gate methodologies. Angela seeks to apply her knowledge of process control and automation in industries such as pharmaceutical, food and beverage, consumer packaged products and chemicals.

Angela Valdes’ Question
What skill sets and ISA standards should I start building and referencing in order to grow in the IIoT space and work field?

Nick Sands’ Answer
The ISA communication division is forming a technical interest group in IIoT. The division has had presentations on the topic for several years at conferences. The leader will be announced in InTech magazine.
The ISA95 standard committee is working on updating the enterprise-to-control system communication standards to better support IIoT concepts.

Jim Cahill’s Answer
One tremendous resource would be to read Jonas Berge’s LinkedIn blog posts. He writes about IIoT and digital communications and the impact they can have on reliability, safety, efficiency and production. I recommend you send him a connection request to see when he posts something new. Another person to connect with is Terrance O’Hanlon of ReliabilityWeb.com. Searching on the #IIoT hashtag on Twitter and LinkedIn is also a very good way to discover new articles and influencers in these areas.

Greg McMillan’s Answer
One of the things we need to be careful about is making sure there are people with the expertise to use the data and the associated software, such as data analytics. One feature article misrepresented IIoT as making the automation engineer obsolete, when in fact the opposite is true. We need more process control engineers, in addition to process analytical technology and IIoT experts, to make the most of the data. The data by itself can be overwhelming, as seen in the series of articles “Drowning in Data; Starving for Information”: Part 1, Part 2, Part 3, and Part 4. Process control engineers with a fundamental knowledge of the process and the automation system need to intelligently analyze the data and make the associated improvements in instrumentation, valves, setpoints, tuning, control strategies, and use of controller features, whether PID or MPC. Often lacking is recognition of the importance of dynamics in the process and particularly in the automation system. For continuous processes, the process inputs must be synchronized with the process outputs before true correlations can be identified. Knowledge of process first principles is also needed to determine whether correlations are really cause and effect.
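The synchronization point deserves emphasis: a process input affects the output only after the loop dead time, so correlating unshifted time series dilutes or hides real relationships. Below is a minimal sketch with synthetic data; the 5-sample transport delay, the sawtooth input and the factor of 2 process gain are assumptions chosen purely for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

delay = 5  # assumed transport delay, in samples

# Synthetic feed-composition input (sawtooth) and a process output that is
# simply the input delayed by the dead time and scaled by the process gain.
x = [float(i % 17) for i in range(100)]
y = [0.0] * delay + [2.0 * v for v in x[:-delay]]

raw = pearson(x, y)                        # unshifted: correlation is diluted
aligned = pearson(x[:-delay], y[delay:])   # input shifted by the dead time
```

Shifting the input by the estimated dead time recovers the full correlation; on real data, the process time constant also smears the response, so filtering as well as delaying the input is needed before correlations mean anything.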
While the solution would seem to be applying expert rules to the IIoT results, a word of caution: attempts to develop and use real-time expert systems in the 1980s and 1990s were largely failures that wasted an incredible amount of time and money. Deficiencies in the conditions, interrelationships and knowledge captured in the implemented rule logic, plus a lack of visibility into the interplay between rules and little ability to troubleshoot them, led to many false alerts; the systems were turned off and eventually abandoned.

Hunter Vegas’ Answer
There have been multiple “data revolutions” over the years, and I consider IIoT to be just another wave in which new information is made available that wasn’t available before. Unfortunately, the problem that bedeviled the previous data revolutions remains today: more data is not necessarily useful unless the right information is delivered at the right time to a person who can act on it. In many cases the operators have too much information now. When something goes wrong they get 1,000 alarms and have to wade through the noise to figure out what went wrong and how to fix it. IIoT data can undoubtedly be useful, but it takes a huge amount of time and effort to create an interface that can effectively present that information, and still more time and effort to keep it up. All too often management reads a few trendy articles and thinks IIoT is something you buy or install and savings just appear. Unfortunately, most fail to appreciate the effort required to implement such a system and keep it working and adding value. Usually money is spent, people celebrate the glorious new system, then it falls out of favor and use and gets eliminated a short time later.

ISA Mentor Program
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.
Click this link to learn more about the ISA Mentor Program.

As far as I know there aren’t any specific standards associated with IIoT, but I do think there are several skill sets that can help you implement it:

  • Knowledge of the latest alarm standards will help you identify the alarm information and data that is useful, and make sure the operators get the important information in a timely fashion without being buried in useless alarm data that doesn’t matter.
  • Knowledge of the new HMI design standards is useful for learning how to present information in a meaningful way that lets the operator quickly understand a situation and correctly react to it.
  • Knowledge of how to get the information into the system. This depends on your particular control system and how data flows into it. Data might come in via OPC, wireless, HART, Modbus, Ethernet, or any number of other paths, and each communication type has its own challenges and security issues that must be addressed.
  • Knowledge of what matters to your plant. In an aging acid plant, corrosion can be a big issue; a handful of small wireless pipe thickness gauges in a few key spots might have significant value. If you have environmental problems and sumps located all over your facility, it might be possible to add wireless analyzers to detect solvent spills and react to them quickly, rather than having a spill hit the river outfall before you detect it.

The key to all of this is to understand the plant’s “pain points” and then determine a way to address them. IIoT may offer an answer, or it may be as simple as retuning a controller or replacing a poorly specified control valve with a better one. Either way, if calling it an “IIoT Project” gets you funding and you solve a problem, you are a hero regardless.
Additional Mentor Program Resources
See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project.

Providing discussion and answers besides Greg McMillan and program co-founder Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis.
Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Webinar Recording: Practical Limits to Control Loop Performance

The post Webinar Recording: Practical Limits to Control Loop Performance first appeared on the ISA Interchange blog site. This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Part 2 provides a quick review of Part 1 and then discusses the contribution of each PID mode, why the reset time is orders of magnitude too small for most composition and temperature loops, the ultimate and practical limits to control loop performance, the critical role of dead time, and when a PID gain that is too high or too low causes more oscillation.
  • Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations

The post Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations first appeared on the ISA Interchange blog site. This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Part 3 (the final part) describes simple tuning methods and the PID features that can be used to prevent the oscillations that plague our most important loops and to achieve the desired degree of tightness or looseness in level control. A general procedure is offered, and a block diagram of the most effective PID structure, not shown anywhere else, is given, followed by questions and answers.
  • Missed Opportunities in Process Control - Part 1

The post, Missed Opportunities in Process Control - Part 1, first appeared on the ControlGlobal.com Control Talk blog. While giving guest lectures and labs to chemical engineering students on PID control, I had an awakening as to how much greater than realized the disconnect is between what is said in the literature and courses and what we need to know as practitioners. We are increasingly messed up. The disparity between theory and practice is growing exponentially because leaders in process control are leaving the stage and today’s users are not given the time to explore and innovate or the freedom to publish. Much of what is out there is a distraction at best. I decided to make a decisive pitch, not holding back for the sake of diplomacy. Here is the start of a point-blank, decisive, comprehensive list in a six-part series. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A posts.

1. Recognizing and addressing the actual load disturbance location. Most of the literature unfortunately shows disturbances entering at the process output, when in reality disturbances mostly enter as process inputs (e.g., feed flow, composition and temperature changes) passing through the primary process time constant. Thinking of disturbances on the process output leads to many wrong conclusions and mistakes, such as: large primary time constants are bad, tuning can be done primarily for setpoint changes, feedforward and ratio control are not important, and algorithms like Internal Model Control are good alternatives to PID control.

2. Tuning and tests to first achieve good load disturbance rejection and then good setpoint response.
While most of the literature focuses on setpoint response tuning and testing, the first objective should be good load disturbance rejection, particularly in chemical processes. Such tuning generally requires more aggressive proportional action. Testing is simply done by momentarily putting the PID in manual, changing the PID output and putting the PID back in auto. Tuning should minimize the peak and integrated errors from load disturbances while taking into account the need to minimize resonance. To prevent overshoot in the setpoint response, a setpoint lead-lag can be used with the lag time equal to the reset time, or a PID structure with proportional and derivative action on PV and integral action on error (PD on PV, I on E) can be used. If a faster setpoint response is needed, the setpoint lead can be increased to ¼ the lag time, or a 2 Degrees of Freedom (2DOF) PID structure can be used with setpoint weight factors for the proportional and derivative modes of 0.5 and 0.25, respectively. Rapid changes in signals to valves or secondary loops that upset other loops due to a higher PID gain setting can be smoothed by setpoint rate limits on analog output blocks and secondary PIDs, with external-reset feedback (ERF) turned on. We will note the many other advantages of ERF, and its facilitation of directional move suppression to intelligently slow down changes of manipulated flows in a disruptive direction, in subsequent months (hope you can wait). In Model Predictive Control, move suppression plays a key role; here we can enable it with the additional intelligence of direction without retuning the PID.

3. Minimum possible peak error is proportional to dead time, and actual peak error is inversely proportional to PID gain. Peak error is important for preventing relief, alarm and SIS activation and environmental violations. The ultimate limit to what you can achieve in minimizing peak error is proportional to the total loop dead time.
The practical limit to what you actually achieve is inversely proportional to the product of the PID gain and the open loop process gain. The maximum PID gain is inversely proportional to the total loop dead time. These relationships hold best for near-integrating, true integrating and runaway processes.

4. Minimum possible integrated error is proportional to dead time squared, and actual integrated error is proportional to reset time and inversely proportional to PID gain. The integrated absolute error is the criterion most commonly cited in the literature. It provides a measure of the amount of process material that is off-spec. The ultimate limit to what you can achieve in minimizing integrated error is proportional to the total loop dead time squared. The practical limit to what you actually achieve is proportional to the reset time and inversely proportional to the product of the PID gain and the open loop process gain. The minimum reset time is proportional, and the maximum PID gain inversely proportional, to the total loop dead time. These relationships hold best for near-integrating, true integrating and runaway processes.

5. Detuning a PID can be evaluated as an increase in implied dead time. The relationships cited in items 3 and 4 above can be understood by realizing that a PID gain smaller, and a reset time larger, than needed to prevent oscillations has the same effect on loop performance as a larger than actual total loop dead time. This implied dead time is basically ½ and ¼ of the sum of Lambda plus the actual dead time, for self-regulating and integrating processes, respectively.

6. The effect of analyzer cycle time and wireless update rate depends on implied dead time and consequently on tuning. You can prove almost any point you want about whether the effect of a discontinuous update is important by how you tune the PID. The dead time from an analyzer cycle time is 1½ times the cycle time.
The dead time from a wireless device update, PID execution rate or sample rate is ½ the time interval between updates, assuming no latency. How important this additional dead time is can be seen in how big it is relative to the implied dead time. The conventional rule of thumb is that the dead time from discontinuous updates should be less than 10% of the total loop dead time (wireless update rates and PID execution rates less than 20% of the dead time). This is only really true if you are pursuing aggressive control where the implied dead time is near the actual dead time. A better recommendation would be a wireless update rate or PID execution rate less than 20% of the “original” implied dead time. I use the word “original” to remind us not to spiral into slowing down update and execution rates by increasing the implied dead time and then further slowing down update and execution rates.

7. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain. Violation of this rule causes very large and very slow oscillations that are only slightly damped, taking hours to days to die out for vessels and columns, respectively. This is a common problem because in control theory courses we learned that high controller gain causes oscillations, yet the actual PID gain permitted for near-integrating, true integrating and runaway processes is quite large (e.g., > 100). Most don’t think such a high PID gain is possible and don’t like sudden large movements in valves. Furthermore, integral action provides gradual action that is always in a direction consistent with the error sign and that seeks to exactly match up the PV and SP, meeting common expectations. The result is a reset time frequently set orders of magnitude too small, making the product of PID gain and reset time less than the inverse of the integrating process gain and causing confusing slow oscillations.

8. The effective rate time should be less than ¼ the effective reset time.
While PID controllers with the Series Form effectively prevented this due to interaction factors in the time domain, this is not the case for the other PID forms. Not enforcing this limit is a common problem in migration projects, since older controllers had the Series Form and most modern controllers use the ISA Standard Form. The result is erratic fast oscillations.

9. Automation system dynamics affect the performance of most loops. This should be good news for us, since these dynamics are much more under the control of the automation engineer and are easier and cheaper to fix than process or equipment dynamics. Flow, pressure, inline temperature and composition (e.g., static mixer) loops, and fluidized bed reactors, are affected by the sensor response time and the final control element (e.g., valve and VFD) response time. Pressure and surge control loops are also affected by the PID execution rate.

10. Reserve the feedforward multiplier and ratio controller ratio correction for sheet lines and plug flow systems. The conventional rule that, on a plot of manipulated variable versus feedforward variable, a change in slope demands a feedforward multiplier and a change in intercept demands a feedforward summer is not really relevant. A feedforward multiplier introduces a change in controller gain that counteracts the change in process gain. However, this is only useful for sheet lines and plug flow systems (e.g., static mixers and extruders), because for vessels and columns the effect of back mixing from agitation and reflux or recirculation creates a process time constant that is proportional to the residence time. For decreases in feed flow, the increase in process time constant from the increase in residence time negates the increase in process gain. Also, the most important error is often a bias error in the measurements. Span errors are mitigated by a large span, showing up mostly as a change in process gain much smaller than the other sources of changes in process gain.
Also, the scaling and filtering of a feedforward summer signal and its correction are much easier.
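Items 3 through 7 reduce to simple proportionalities, so they lend themselves to a back-of-the-envelope calculator. The helper functions below are a sketch of those rules of thumb only: the function names and normalized units are my own, and the error limits are expressed as proportionalities for a unit load step, not as vendor formulas.

```python
def dead_time_from_analyzer(cycle_time):
    """Equivalent dead time added by a discontinuous analyzer: 1.5x the cycle time."""
    return 1.5 * cycle_time

def dead_time_from_update(update_interval):
    """Equivalent dead time from a wireless update or PID execution interval:
    0.5x the interval, assuming no latency."""
    return 0.5 * update_interval

def implied_dead_time(lambda_closed_loop, dead_time, integrating=False):
    """Implied dead time from detuning: roughly 1/2 (self-regulating) or 1/4
    (integrating) of the sum of Lambda plus the actual total loop dead time."""
    factor = 0.25 if integrating else 0.5
    return factor * (lambda_closed_loop + dead_time)

def practical_peak_error(pid_gain, open_loop_gain):
    """Practical peak error limit for a unit load step: inversely proportional
    to the product of PID gain and open loop process gain (item 3)."""
    return 1.0 / (pid_gain * open_loop_gain)

def practical_integrated_error(pid_gain, open_loop_gain, reset_time):
    """Practical integrated error limit: proportional to reset time and
    inversely proportional to PID gain times open loop gain (item 4)."""
    return reset_time / (pid_gain * open_loop_gain)

def slow_oscillation_risk(pid_gain, reset_time, integrating_process_gain):
    """Item 7: PID gain times reset time must exceed the inverse of the
    integrating process gain, or large slow rolling oscillations result."""
    return pid_gain * reset_time <= 1.0 / integrating_process_gain
```

For example, a 60-second analyzer cycle adds 90 seconds of equivalent dead time, while an 8-second wireless update adds only 4 seconds; comparing each against the original implied dead time, per item 6, tells you whether the discontinuous update actually matters for your tuning.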
  • How to Get Started with Effective Use of OPC

The post How to Get Started with Effective Use of OPC first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

Encouraged to ask general questions that would help share knowledge, Nikki Escamillas provided several questions on OPC. Initially, the OPC standard was restricted to the Windows operating system, with the acronym originally designating OLE (object linking and embedding) for process control. OPC is now the acronym for open platform communications, which is much more widely used and plays a key role in automation systems. We are fortunate to have answers to Nikki’s questions from a knowledgeable expert in higher level automation system communications, Tom Freiberger, product manager for industrial Ethernet in R&D engineering for Emerson Automation Solutions.

Nikki Escamillas is a recently added protégé in the ISA Mentor Program. Nikki is an automation process engineer for Republic Cement and Building Materials – Batangas Plant. Nikki specializes in process optimization and automation control, and is committed to minimizing cost and improving product quality through effective time management and efficient use of resources and data analytics. Nikki has excellent knowledge and experience of advanced process control principles and their application to different plant processes, most specifically cement and building materials manufacturing.

Nikki Escamillas’ First Question
How does OPC work?

Tom Freiberger’s Answer
OPC is a client/server protocol. The server has a list of data points (normally in a tree structure) that it provides.
A client can connect to a server and pick a set of data points it wishes to use. The client can then read or write to those data points. OPC is meant to be a common language for integrating products from multiple vendors. The OPC Foundation has a good introduction to OPC DA and UA at their website.

Nikki Escamillas’ Second Question
Does configuration of OPC DA differ from OPC UA?

Tom Freiberger’s Answer
Yes and no. The core concept of client/server and working with a set of data points remains consistent between the two, but the details of how to configure them differ. The security configuration is the primary difference. OPC DA is based on Microsoft’s DCOM technology, which means the security settings of the operating system are used. OPC UA runs on many operating systems, and therefore the security settings are embedded in the configuration of the OPC application. OPC UA applications should use common terminology in their configuration to ease integration between multiple vendors.

Nikki Escamillas’ Third Question
Are there any guidelines to follow when installing and configuring an OPC product based on its type?

Tom Freiberger’s Answer
Installation and configuration guidelines are going to be specific to the products being used. Some products are limited in the number of data points that can be exchanged by a license or other application limitation. Some products may have performance limits. All of these details should be supplied in the documentation of the product.

Nikki Escamillas’ Fourth Question
Could I directly make one computer OPC capable?

Tom Freiberger’s Answer
An OPC server or client by itself is just a means to transfer data.
OPC is not very interesting without another application behind it to supply information. The computer you are attempting to add OPC to needs some other application to provide data, and the vendor of that application needs to build OPC into its product. If the application with the data supports some other protocol for exchanging data (like Modbus TCP, EtherNet/IP, or PROFINET), an OPC protocol converter could be used to interface with other OPC applications. If the application with the data has no means of extracting the information, there is nothing an OPC server or client can do.

Nikki Escamillas’ Fifth Question
Is it also possible to create server-to-server communication between two OPC applications?

Tom Freiberger’s Answer
I believe there are options for this in the OPC protocol specification, but the details would be specific to the product being used. If a product allows server-to-server connections, it should be listed in its documentation.
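Tom’s answers describe OPC’s core model: a server exposes a tree of addressable data points, and clients browse, read and write them. The toy classes below illustrate only that conceptual model in plain Python; the class names and the data-point path are invented for illustration, and a real OPC DA/UA stack adds discovery, subscriptions, and the security configuration discussed in the answers.

```python
class DataPointServer:
    """Toy stand-in for an OPC-style server holding a tree of data points."""

    def __init__(self):
        # Data points addressed by a path such as "plant/kiln/temperature"
        self._points = {}

    def register(self, path, value):
        """Publish a data point so clients can find it."""
        self._points[path] = value

    def browse(self):
        """Return the addresses of all available data points."""
        return sorted(self._points)

    def read(self, path):
        return self._points[path]

    def write(self, path, value):
        if path not in self._points:
            raise KeyError("unknown data point: " + path)
        self._points[path] = value


class DataPointClient:
    """Toy client that picks data points on a server and reads/writes them."""

    def __init__(self, server):
        self._server = server  # stands in for the network connection

    def read(self, path):
        return self._server.read(path)

    def write(self, path, value):
        self._server.write(path, value)


server = DataPointServer()
server.register("plant/kiln/temperature", 850.0)

client = DataPointClient(server)
client.write("plant/kiln/temperature", 855.0)
```

After the write, any client reading "plant/kiln/temperature" from the same server sees the updated value, which is the essence of OPC as a common integration language between vendors.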
  • Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications

    The post Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications first appeared on the ISA Interchange blog site. This educational ISA webinar was presented by  Greg McMillan  in conjunction with the  ISA Mentor Program . Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now  Eastman Chemical ). This is Part 1 of a series on the benefits of knowing your process and PID capability. Part 1 focuses on process behavior, the many loop objectives and different worlds of industrial applications, and the loop component’s contribution to the dynamic response. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program. About the Presenter Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. 
Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • How to Improve Loop Performance for Dead Time Dominant Systems

    The post How to Improve Loop Performance for Dead Time Dominant Systems first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. Dead time is the source of the ultimate limit to control loop performance. For load disturbances, the peak error is proportional to the dead time and the integrated error is proportional to the dead time squared. If there were no dead time and no noise or interaction, perfect control would be theoretically possible. When the total loop dead time is larger than the open loop time constant, the loop is said to be dead time dominant and solutions are sought to deal with the problem. Anuj Narang is an advanced process control engineer at Spartan Controls Limited . He has more than 11 years of experience in academia and industry with a PhD in process control. He has designed and implemented large scale industrial control and optimization solutions to achieve sustainable and profitable process and control performance improvements for customers in the oil and gas, oil sands, power and mining industries. He is a registered Professional Engineer with the Association of Professional Engineers and Geoscientists of Alberta, Canada. Anuj’s Question Is there any other control algorithm available to improve loop performance for dead time dominant systems, other than a Smith predictor or model predictive control (MPC), both of which require identification of a process model? Greg McMillan’s Answer The solution cited for deadtime dominant loops is often a Smith predictor deadtime compensator (DTC) or model predictive control. 
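The scaling claims above (peak error roughly proportional to dead time, integrated error to dead time squared) can be checked with a small closed-loop simulation. The following is a minimal sketch, not from the original discussion: it assumes a first-order-plus-dead-time process under PI control with a SIMC-style tuning where the closed-loop time constant equals the dead time; the function name, tuning rule and parameter values are illustrative choices.

```python
from collections import deque

def load_response(theta, tau=100.0, kp=1.0, dt=0.05, t_end=600.0):
    """Unit load step on a first-order-plus-dead-time process under PI control.
    Returns (peak_error, integrated_absolute_error)."""
    # SIMC-style tuning with the closed-loop time constant set to the dead time
    kc = tau / (kp * 2.0 * theta)
    ti = min(tau, 8.0 * theta)
    n = int(round(theta / dt))
    delay = deque([0.0] * n, maxlen=n)   # dead time on the controller output
    y, integ, peak, iae = 0.0, 0.0, 0.0, 0.0
    d = 1.0                              # load disturbance at the process input
    for _ in range(int(t_end / dt)):
        e = -y                           # setpoint is zero
        u = kc * e + integ
        integ += kc * e * dt / ti        # integral contribution
        delay.append(u)
        # process: first-order lag driven by the delayed output plus the load
        y += dt * (kp * (delay[0] + d) - y) / tau
        peak = max(peak, abs(e))
        iae += abs(e) * dt
    return peak, iae
```

Doubling the dead time from 5 to 10 (with a time constant of 100) roughly doubles the peak error and roughly quadruples the integrated absolute error, consistent with the dead-time-squared relationship.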
There are many counter-intuitive aspects to these solutions. It is often not realized that the improvement from the DTC or MPC is less for deadtime dominant systems than for lag dominant systems. Much more problematic is that both DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for a decrease as well as an increase in the deadtime. Surprisingly, the consequences for the DTC and MPC are much greater for a decrease in plant dead time. For a conventional PID, a decrease in the deadtime just results in more robustness and slower control. For a DTC and MPC, a decrease in plant deadtime by as little as 25 percent can cause a big increase in integrated error and an erratic response. Of course, the best solution is to decrease the many sources of dead time in the process and automation system (e.g., reduce transportation and mixing delays, and use online analyzers with probes in the process rather than at-line analyzers with a sample transportation delay and an analysis delay that is 1.5 times the cycle time). An algorithmic mitigation of the consequences of dead time, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external-reset feedback path (BKCAL) with the deadtime updated to always be slightly less than the actual total loop deadtime. Turning external-reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require the identification of the open loop gain and open loop time constant, as is required for a DTC or MPC. Please note that the external-reset feedback should be the result of a positive feedback implementation of integral action as described in the ISA Mentor Program webinar PID Options and Solutions – Part 3 . 
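The deadtime block in the external-reset feedback path can be sketched in a few lines of code. This is a minimal illustration of the PID+TD idea under stated assumptions (a PI-only controller, the positive feedback implementation of integral action, and a compensator deadtime set slightly below the actual loop deadtime); the class and variable names are mine, not from any vendor implementation.

```python
from collections import deque

class PIDPlusDeadtime:
    """PI controller with integral action implemented as positive feedback
    through a first-order lag, with a dead-time block inserted in the
    external-reset feedback path (an illustrative sketch of PID+TD)."""

    def __init__(self, kc, ti, theta_comp, dt):
        self.kc, self.ti, self.dt = kc, ti, dt
        n = max(1, int(round(theta_comp / dt)))
        self.erf_delay = deque([0.0] * n, maxlen=n)
        self.reset = 0.0   # lagged external-reset signal (integral contribution)

    def update(self, sp, pv):
        e = sp - pv
        u = self.kc * e + self.reset
        # dead-time block in the external-reset path: delay the controller
        # output before it is lagged back in as the reset contribution
        self.erf_delay.append(u)
        erf = self.erf_delay[0]
        # positive feedback implementation of integral action: first-order
        # lag of the (delayed) external-reset signal
        self.reset += (erf - self.reset) * self.dt / self.ti
        return u
```

With the deadtime block in place, the integral time can be set far smaller than conventional dead-time-based tuning rules would allow, because the reset contribution does not wind through the deadtime prematurely.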
ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program. There will be no improvement from a deadtime compensator if the PID tuning settings are left the same as they were before adding the DTC or the deadtime block in external-reset feedback (PID+TD). In fact, the performance can be slightly worse even for an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in the deadtime. The PID gain should also be increased. The equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it predicts an error less than the ultimate limit, which is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making the deadtime disappear. If the deadtime is due to an analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, a new error upon update results in a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of a step disturbance in a single execution after an update in the PID process variable. The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the PID execution time interval) and the positive feedback implementation of integral action must be used with external-reset feedback enabled. 
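The update-driven logic of the enhanced PID just described can be sketched as follows. This is an illustrative PI-only version under stated assumptions (the caller invokes it only when a fresh measurement arrives and passes the elapsed time since the previous one); it is not vendor source code, and the names are mine.

```python
import math

class EnhancedPI:
    """Sketch of an enhanced PI (PIDPlus-style) for slow or irregular
    measurement updates: the controller acts only when a fresh PV arrives,
    and the integral term advances by the elapsed time since the last
    update rather than the controller execution period."""

    def __init__(self, kc, ti):
        self.kc, self.ti = kc, ti
        self.reset = 0.0   # integral (filter) contribution
        self.u = 0.0       # output, held between measurement updates

    def on_update(self, sp, pv, elapsed):
        """Call only when a new measurement arrives; 'elapsed' is the time
        since the previous new measurement."""
        # filter the reset toward the last held output over the elapsed time
        self.reset += (self.u - self.reset) * (1.0 - math.exp(-elapsed / self.ti))
        self.u = self.kc * (sp - pv) + self.reset
        return self.u
```

For a pure-gain self-regulating process with the PID gain set to the inverse of the open loop gain, the first update after a load step produces the full correction, matching the single-execution behavior described above.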
The enhanced PID greatly simplifies tuning while putting the integrated error close to its ultimate limit. Note that you do not see the true error, which could have started at any time in between updates, but only the error measured after the update. For more on the sensitivity to both increases and decreases in the total loop deadtime and open loop time constant, see the ISA books Models Unleashed: A Virtual Plant and Predictive Control Applications (pages 56-70 for MPC) and Good Tuning: A Pocket Guide 4th Edition (pages 118-122 for DTC). For more on the enhanced PID, see the ISA blog post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements and the Control Talk blog post Batch and Continuous Control with At-Line and Offline Analyzers Tips . The following figures from Models Unleashed show how an MPC with two controlled variables (CV1 and CV2) and two manipulated variables for a matrix with condition number three (CN = 3) responds to a doubling and a halving of the plant dead time (delay) when the total loop dead time is greater than the open loop time constant. Figure 1: Dead Time Dominant MPC Test for Doubled Plant Delay Figure 2: Dead Time Dominant MPC Test for Halved Plant Delay Additional Mentor Program Resources See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. 
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. 
He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • How to Setup and Identify Process Models for Model Predictive Control

    The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important for improving evaporator concentration control and minimizing steam consumption. Luis Navas’ Introduction The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions regarding the process testing needed to generate process models for an MPC in the correct way. I know that the MPC process inputs must be perturbed to allow identification and modeling of each process input-output relationship. Figure 1: Variables for model predictive control of a concentrator Luis Navas’ First Question Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or simply let it be, as usual in the process, since this disturbance is always present? Mark Darby’s Answer If it is not difficult, you can try to suppress the disturbance. That can help the model identification for the feed and steam. To get a model for the disturbance, you will want movement of the disturbance outside the noise level (best is four to five times). If possible, this may require making changes upstream (for example, LIC.SP or FIC.SP). Luis Navas’ Second Question What about the steam flow? 
Should it be maintained at a fixed flow (FIC in manual with a fixed percent valve opening) while perturbing the feed flow, and likewise should the feed flow be fixed while perturbing the steam flow? I know some MPC software packages excite their outputs with a PRBS (pseudo-random binary sequence) practically simultaneously while the process test is being executed, and mathematically extract the input-output relationships, finally generating the model. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program. Mark Darby’s Answer Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it will take more setup effort to get the magnitudes and the average switching interval right. An option is to start with a manual test and switch to PRBS after you’ve got a feel for the process and the right step sizes. Note: a pretest should have already been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in Section 9.3 of the McGraw-Hill handbook Process/Industrial Instruments and Control Sixth Edition . Luis Navas’ Last Questions What are the pros and cons for process testing if the manipulated variables are perturbed through FIC setpoints (closed loop) or through FIC outputs (open loop)? Or simply: should it be done according to the MPC design? What are the pros and cons if, in the final design, the FCVs are directly manipulated by the MPC block or through FICs as the MPC’s downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them. Mark Darby’s Answers Correct – do it according to the MPC design. 
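For readers who want to experiment, a pseudo-random switching sequence of the kind Mark describes can be generated in a few lines. This sketch uses simple random switching rather than the shift-register (LFSR) sequence a commercial MPC package would generate, but it exposes the same two knobs Mark mentions: the move magnitude and the average switching interval. All names and defaults are illustrative.

```python
import random

def prbs_like(n, avg_interval, lo=-1.0, hi=1.0, seed=1):
    """Random-switching test signal approximating a PRBS for step testing.
    At each sample the signal flips between the two move levels with
    probability 1/avg_interval, so it holds each level for avg_interval
    samples on average."""
    rng = random.Random(seed)
    level = hi
    out = []
    for _ in range(n):
        if rng.random() < 1.0 / avg_interval:
            level = lo if level == hi else hi
        out.append(level)
    return out
```

The average switching interval is commonly chosen relative to the process settling time so that the test excites both low and high frequency behavior; the move size would be scaled to keep the process variables within acceptable limits.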
Note that sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea as for justifying a cascade – to have the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn’t have to). The faster executing inner loop also helps with linearization (for example, valve position to flow). Additional Mentor Program Resources See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. 
McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • How to Setup and Identify Process Models for Model Predictive Control

    The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions are important for improving evaporator concentration control and minimizing steam consumption.

Luis Navas’ Introduction The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions on how to conduct the process testing so that process models are generated for an MPC in the correct way. I know that MPC process inputs must be perturbed to allow identification and modeling of each input-output relationship.

Figure 1: Variables for model predictive control of a concentrator

Luis Navas’ First Question Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or should it simply be left as usual, since this disturbance is always present in the process?

Mark Darby’s Answer If it is not difficult, you can try to suppress the disturbance. That can help the model identification for the feed and steam. To get a model for the disturbance, you will want movement of the disturbance outside the noise level (four to five times the noise level is best). If possible, this may require making changes upstream (for example, LIC.SP or FIC.SP).

Luis Navas’ Second Question What about the steam flow?
Should a fixed flow be maintained (FIC in manual with a fixed percent-open FCV) while perturbing the feed flow, and likewise should the feed flow be fixed while perturbing the steam flow? I know some MPC software packages excite their outputs with a PRBS (pseudo-random binary sequence) practically simultaneously while the process testing is being executed, and mathematically extract the input-output relationships to generate the model.

Join the ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about how you can join the ISA Mentor Program.

Mark Darby’s Answer Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it will take more setup effort to get the magnitudes and the average switching interval right. An option is to start with a manual test and switch to PRBS after you’ve got a feel for the process and the right step sizes. Note: a pretest should have already been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in Section 9.3 of the McGraw-Hill Process/Industrial Instruments and Controls Handbook, Sixth Edition .

Luis Navas’ Last Questions What are the pros and cons for process testing if the manipulated variables are perturbed through FIC setpoints (closed loop) or through FIC outputs (open loop)? Or simply: should it be done in accordance with the MPC design? What are the pros and cons if, in the final design, the FCVs are manipulated by the MPC block directly or through FICs as the MPC’s downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them.

Mark Darby’s Answers Correct – do according to the MPC design.
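The PRBS excitation discussed above can be sketched in a few lines. This is a minimal illustration, not code from any MPC package; the function name and the amplitude/switching-interval parameters are assumptions for the example:

```python
import random

def prbs(n_samples, switch_interval, amplitude, seed=0):
    """Pseudo-random binary sequence of +/- amplitude moves.

    switch_interval sets how many samples pass between switching
    opportunities; in practice it is chosen relative to the process
    time constant, and amplitude is set several times larger than
    the valve deadband and measurement noise.
    """
    rng = random.Random(seed)  # seeded for a reproducible test sequence
    level = amplitude
    signal = []
    for i in range(n_samples):
        # at each switching opportunity, flip the sign with 50% probability
        if i % switch_interval == 0 and rng.random() < 0.5:
            level = -level
        signal.append(level)
    return signal

# Example: +/- 2% moves with a switching opportunity every 5 samples
moves = prbs(100, 5, 2.0)
```

The sequence would be added to the current setpoint or output during the test, which is why getting the magnitude and average switching interval right matters.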
Note sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea for justifying a cascade – to have the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn’t have to). The faster executing inner loop also helps with linearization (for example, valve position to flow).   Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. 
McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality

    The post Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality first appeared on the ISA Interchange blog site. This educational ISA webinar was presented by  Greg McMillan . Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now  Eastman Chemical ). Understanding the difficulties of batch processing and the new technologies and techniques offered can lead to better automation and control solutions that provide much greater increases in efficiency and capacity than are usually obtained for continuous processes. Industry veteran and author Greg McMillan discusses analyzing batch data, elevating the role of the operator, tuning key control loops, and setting up simple control strategies to optimize batch operations. The presentation concludes with an extensive list of best practices. About the Presenter Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. 
Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • What Types of Process Control Models are Best?

    The post What Types of Process Control Models are Best? first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the  ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Daniel Rodrigues. Daniel Rodrigues is one of our newest protégés in the ISA Mentor Program. Daniel has been working in research & development for Norsk Hydro Brazil since 2016, specializing in:
  • Development of greener, safer, more accurate, and cheaper analytical methods
  • Identification of cost reduction and efficiency enhancement opportunities
  • Process modelling and advanced control logic development and assessment
  • Research methodology development, execution, and planning
  • Statistical analysis of process variables and test results

Daniel Rodrigues’ Question What is your take on process control based on phenomenological models (using first-principle models to guide the predictive part of controllers)? I am aware of the exponential growth of complexity in these, but I’d also like an experienced opinion regarding their reward/effort ratio.

Greg McMillan’s Answer I prefer first principle models to gain a deeper understanding of cause and effect, process relationships, process gains, and the response to abnormal situations. Most of my control system improvements start with first principle models. The incorporation of the actual control system (digital twin) to form a virtual plant has made these models a more powerful tool.
However, most first principle models use perfectly mixed volumes, neglecting mixing delays, and are missing transportation delays and automation system dynamics. For pH systems, including all of the non-ideal dynamics from piping and vessel design, control valves or variable speed pumps, and electrodes is particularly essential. I have consequently partitioned the total vessel volume into a series of plug flow and perfectly back mixed volumes to model the mixing dead times that originate from the agitation pattern and the relative location of input and output streams. I add a transportation delay for reagent piping and dip tubes due to gravity flow or blending. For extremely low reagent flows (e.g., gph), I also add an equilibration time in the dip tube after closure of a reagent valve, associated with migration of the reagent into the process followed by migration of process fluid back up into the dip tube. I add a transportation delay for electrodes in piping. I use a variable dead time block and time constant blocks in series to show the effect of velocity, coating, age, buffering and direction of pH change on electrode response. I use a backlash-stiction block and a variable dead time block to show the resolution and response time of control valves. The important goal is to get the total loop dead time and secondary lag right.

ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program. 
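The basic element being combined here — a back-mixed volume approximated as a first-order lag fed through a transport (dead time) delay — can be sketched as a simple Euler simulation. The function and parameter names below are illustrative, not from any simulation product:

```python
def fopdt_step_response(gain, tau, dead_time, dt, n_steps, u=1.0):
    """Step response of a first-order-plus-dead-time process: a lag of
    time constant tau fed by an input that is delayed by dead_time."""
    n_delay = int(round(dead_time / dt))
    # the input as seen by the lag: zero until the dead time has elapsed
    delayed = [0.0] * n_delay + [u] * n_steps
    y = 0.0
    ys = []
    for k in range(n_steps):
        # Euler step of the first-order lag: dy/dt = (K*u_delayed - y) / tau
        y += dt * (gain * delayed[k] - y) / tau
        ys.append(y)
    return ys

# gain 2, 5-minute lag, 2-minute transport delay, 0.1-minute steps
response = fopdt_step_response(2.0, 5.0, 2.0, 0.1, 500)
```

Chaining several such elements in series is one way to reproduce the mixing and transportation dead times described above; getting the total dead time and secondary lag right matters more than the exact partitioning.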
By having the much more complete model in a virtual plant, the true dynamic behavior of the system can be investigated and the best control system performance achieved by exploring, discovering, prototyping, testing, tuning, justifying, deploying, commissioning, maintaining and continuously improving, as described in the Control magazine feature article Virtual Plant Virtuosity .

Figure 1: Virtual plant that includes automation system dynamics and digital twin controller

Model predictive control is much better at ensuring you have the actual total dynamics, including dead time, lags and lead times, at a particular operating point. However, the identified models do not include the effect of backlash-stiction or of actuator and positioner design on valve response time, and consequently on total loop dead time, because by design the test steps are made several times larger than the deadband and the resolution or sensitivity limits of the control valve. Also, the models identified are for a particular operating point and normal operation. To cover different modes of operation and production rates, multiple models must be used, requiring logic for a smooth transition or recently developed adaptive capabilities. I see an opportunity to use the results from the identification software used by MPC to provide more accurate dead times, lag times and lead times by inserting these in blocks on the measurement of the process variable in first principle models. The identification software would be run for different operating points and operating conditions, enabling the addition of supplemental dynamics in the first principle models. This addresses the fundamental deficiency of dead times, lag times and lead times being too small in first principle models. Statistical models are great at identifying unsuspected relationships, disturbances and variability in the process and measurements. However, these are correlations and not necessarily cause and effect. 
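As a rough illustration of what identification does in its simplest form, the classic two-point (28.3%/63.2%) method fits a first-order-plus-dead-time model to a noise-free open-loop step response. Real MPC identification packages use far more robust regression over PRBS or step-test data; this sketch and its names are assumptions for the example:

```python
import math

def identify_fopdt(t, y, step_size):
    """Two-point (28.3%/63.2%) estimate of gain, time constant, and
    dead time from a noise-free open-loop step response."""
    gain = y[-1] / step_size          # assumes the response has settled
    def time_to_reach(frac):
        target = frac * y[-1]
        for ti, yi in zip(t, y):
            if yi >= target:
                return ti
        return t[-1]
    t28 = time_to_reach(0.283)
    t63 = time_to_reach(0.632)
    tau = 1.5 * (t63 - t28)           # time constant from the two crossings
    dead_time = t63 - tau             # 63.2% point sits one tau past the delay
    return gain, tau, dead_time

# synthetic response of a gain-2, tau-5, dead-time-2 process to a unit step
t = [0.01 * k for k in range(6000)]
y = [0.0 if ti < 2.0 else 2.0 * (1.0 - math.exp(-(ti - 2.0) / 5.0)) for ti in t]
gain, tau, dead_time = identify_fopdt(t, y, step_size=1.0)
```

Running such a fit at several operating points gives the dead times, lags and gains that could then be inserted as supplemental dynamics in a first principle model, as suggested above.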
Also, continuous processes require dynamic compensation of each process input so that it matches, timewise, the dynamic response of each process output being studied. This is often not stated in the literature and is a formidable task. Some methods propose using a dead time on the input, but for large time constants the dynamic response of the predicted output is in error during a transient. These models are designed more for steady state operation, but that is often an ideal situation not realized due to disturbances originating from the control system itself through interactions, resonance, tuning, and limit cycles from stiction, as discussed in the Control Talk blog post The most disturbing disturbances are self-inflicted . Batch processes do not require dynamic compensation of inputs, making data analytics much more useful for predicting batch end points. I think there is a synergy to be gained by using MPC to find missing dynamics and statistical process control to help track down missing disturbances and relationships that are subsequently added to the first principle models. Recent advances in MPC capability (e.g., Aspen DMC3) to automatically identify changes in process gain, dead time and time constant, including the ability to compute and update them online based on first principles, have opened the door to increased benefits from using MPC to improve first principle models and vice versa. Multivariable control and optimization where there are significant interactions and multiple controlled, manipulated and constraint variables are best handled by MPC. The exception is very fast systems where the PID controller is directly manipulating control valves or variable frequency drives for pressure control. Batch end point prediction might also be better implemented by data analytics. 
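One simple form of the dynamic compensation described above is a dead-time shift plus a first-order (exponential) filter applied to each input series before correlating it with the output. The function name and tuning values here are illustrative assumptions, not a prescribed method:

```python
def compensate(u, dead_time_samples, lag_lambda):
    """Shift an input series by a dead time and pass it through a
    first-order exponential filter so it lines up in time with the
    process output it affects (for use in data analytics)."""
    # dead-time shift: pad the front and drop the tail
    delayed = [u[0]] * dead_time_samples + list(u[:len(u) - dead_time_samples])
    y = delayed[0]
    out = []
    for x in delayed:
        # exponential filter; lag_lambda in (0, 1], smaller = heavier lag
        y += lag_lambda * (x - y)
        out.append(y)
    return out

# a step input: compensation delays and smooths it to mimic the output response
aligned = compensate([0.0] * 5 + [1.0] * 15, dead_time_samples=3, lag_lambda=0.5)
```

A dead time alone (the method criticized above) corresponds to lag_lambda = 1; adding the filter is what keeps the predicted output from being in error during a transient when the time constant is large.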
However, in all cases the first principle model should be accordingly improved and used to test the actual configuration and implementation of the MPC and analytics, and to provide training for operators, extended to all engineers and technicians supporting plant operation. For research and development, I would think the ability to gain a deeper and wider understanding of different process relationships for different operating conditions would be extremely important. This knowledge can lead to process improvements and to better equipment and control system design. For pH and biological control systems, this capability is essential. For a greater perspective on the capability of various modeling and control methodologies, see the ISA Mentor Program post with questions by protégé Danaca Jordan and answers by Hunter Vegas and me: What are the New Technologies and Approaches for Batch and Continuous Control? Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. 
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. 
He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Many Objectives, Many Worlds of Process Control

    The post, Many Objectives, Many Worlds of Process Control first appeared on ControlGlobal.com's Control Talk blog. In many publications on process control, the common metric you see is integrated absolute error for a step disturbance on the process output. In many tests for tuning, setpoint changes are made and the most important criterion becomes overshoot of the setpoint. Increasingly, oscillations of any type are looked at as inherently bad. What is really important varies with the loop and the type of process. Here we seek to open minds and develop a better understanding of what is important.

Many Objectives

Minimum PV peak error in load response to prevent:
– Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health

Minimum PV integrated error in load or setpoint response to minimize:
– Total amount of off-spec product, to enable closer operation to the optimum setpoint

Minimum PV overshoot of SP in setpoint response to prevent:
– Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health

Minimum Out overshoot of FRV* in setpoint response to prevent:
– Interaction with heat integration and recycle loops in hydrocarbon gas unit operations

Minimum PV time to reach SP in setpoint response to minimize:
– Batch cycle time, startup time, transition time to new products and operating rates

Minimum split range point crossings to prevent:
– Wasted energy-reactants-reagents, poor cell health (high osmotic pressure)

Maximum absorption of variability in level control (e.g., surge tank) to prevent:
– Passing of changes in input flows to output flows upsetting downstream unit ops

Optimum transfer of variability from controlled variable to manipulated variable to prevent:
– Resonance, interaction and propagation of disturbances to other loops

* FRV is the Final Resting Value of the PID output. Overshoot of the FRV is necessary for setpoint and load response of integrating and runaway processes. 
However, for self-regulating processes not involving highly mixed vessels (e.g., heat exchangers and plug flow reactors), aggressive action in terms of PID output can upset other loops and unit operations that are affected by the flow manipulated by the PID. Not recognized in the literature is that external-reset feedback of the manipulated flow enables setpoint rate limits to smooth out changes in manipulated flows without affecting the PID tuning.

Many Worlds

Hydrocarbon processes and other gas unit operations with plug flow, heat integration and recycle streams (e.g., crackers, furnaces, reformers)
– Fast self-regulating responses, interactions and complex secondary responses with sensitivity to SP and FRV overshoot, split range crossings and utility interactions.

Chemical batch and continuous processes with vessels and columns
– Important loops tend to have slow near- or true-integrating and runaway responses, with minimizing peak and integrated errors and rise time as key objectives.

Utility systems (e.g., boilers, steam headers, chillers, compressors)
– Important loops tend to have fast near- or true-integrating responses, with minimizing peak and integrated errors and interactions as key objectives.

Pulp, paper, food and polymer inline, extrusion and sheet processes
– Fast self-regulating responses and interactions with propagation of variability into the product (little to no attenuation of oscillations by back-mixed volumes) and extreme sensitivity to variability and resonance. Loops (particularly for sheets) can be dead time dominant due to transportation delays unless there are heat transfer lags.

Biological vessels (e.g., fermenters and bioreactors)
– Most important loops tend to have slow near- or true-integrating responses with extreme sensitivity to SP and FRV overshoot, split range crossings and utility interactions. Load disturbances originating from cells are incredibly slow and therefore not an issue. 
A critical insight is that most disturbances are on the process input, not the process output, and are not step changes. The fastest disturbances are generally flow or liquid pressure, but even these have an 86% response time of at least several seconds because of the 86% response time of valves and the tuning of PID controllers. The fastest and most disruptive disturbances are often manual actions by an operator or setpoint changes by a batch sequence. Setpoint rate limits and a 2 Degrees of Freedom (2DOF) PID structure with Beta and Gamma approaching zero can eliminate much of the disruption from setpoint changes by slowing down changes in the PID output from proportional and derivative action. A disturbance to a loop can be considered fast if it has an 86% response time less than the loop deadtime. If you would like to hear more on this, check out the ISA Mentor Program Webinar Recording: PID Options and Solutions Part 1 . If you want to be able to explain this to young engineers, check out the dictionary for translation of slang terms in the Control Talk Column “ Hands-on Labs build real skills .”
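As a rough illustration of how Beta and Gamma near zero suppress the proportional and derivative kick from a setpoint change, here is a minimal 2DOF PID sketch in Python. This is a hypothetical positional-form implementation, not any vendor's block; real PID blocks add derivative filtering, output limits, and external-reset feedback.

```python
# Minimal 2DOF PID sketch (illustrative only). Setpoint weights beta
# (proportional) and gamma (derivative) near zero keep setpoint changes
# from kicking the output; integral action always sees the full error,
# so the PV still reaches the SP.

class TwoDOFPID:
    def __init__(self, kc, ti, td=0.0, beta=0.0, gamma=0.0, dt=1.0):
        self.kc, self.ti, self.td = kc, ti, td
        self.beta, self.gamma = beta, gamma   # 0 <= beta, gamma <= 1
        self.dt = dt
        self.integral = 0.0
        self.prev_d_err = None

    def update(self, sp, pv):
        p_err = self.beta * sp - pv    # weighted SP for proportional
        i_err = sp - pv                # full error for integral
        d_err = self.gamma * sp - pv   # weighted SP for derivative
        self.integral += i_err * self.dt / self.ti
        deriv = 0.0
        if self.prev_d_err is not None and self.td > 0:
            deriv = self.td * (d_err - self.prev_d_err) / self.dt
        self.prev_d_err = d_err
        return self.kc * (p_err + self.integral + deriv)
```

With beta = 0, a setpoint step produces only a gradual integral-driven move of the output; with beta = 1 (conventional structure), the full proportional kick appears immediately.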
  • How to Get Rid of Level Oscillations in Industrial Processes

    The post How to Get Rid of Level Oscillations in Industrial Processes first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the  ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Luis Navas. Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on effectively reducing evaporator level oscillations from an upstream batch operation, so that the level controller can see the true level trajectory, represent a widespread concern in chemical plants where the front end for conversion has batch operations and the back end for separation has continuous operations. Luis Navas’ Questions For an MPC application I need to build a smoothed moving mean from a batch level to use as a controlled variable for my MPC, so the simple moving average is done as depicted below. However, I still need to smooth the signal because some ripple remains. I tried a low-pass filter, achieving some improvement as seen in Figure 1. But perhaps you know a better way to do it, or I simply need to increase the filter time. Figure 1: Old Level Oscillations (blue: actual level and green: level with simple moving mean followed by simple moving mean + first order filter) Greg McMillan’s Initial Answer I use rate limiting when a ripple is significantly faster than a true change in the process variable. The velocity limit would be the maximum possible rate of change of the level. 
The velocity limit should be turned off when maintenance is being done and possibly during startup or shutdown. The standard velocity limit block should offer this option. A properly set velocity limit introduces no measurement lag. A level system (any integrator) is very sensitive to a lag anywhere. If the oscillation stops when the controller is in manual, the oscillation could be from backlash or stiction. In your case, the controller appears to be in auto with a slow rolling oscillation possibly due to a PID reset time being too small. I did a Control Talk Blog that discusses good signal filtering tips from various experts besides my intelligent velocity limit. Mark Darby’s Initial Answer In many cases, I’ve seen signals overly filtered. Often, if the filtered signal looks good to your eye, it’s too much filtering. As Michel Ruel states: If the period is known, a moving average (sum of most recent N values divided by N) will nearly completely remove a uniform periodic cycle. So the issue is how much lag is introduced. Depending on the MPC, one may be able to specify variable CV weights as a function of the magnitude of error, which will decrease the amount of MV movement when the CV weight is low; or the level signal could be brought in as a CV twice with different tuning or filtering applied to each. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program. Greg McMillan’s Follow-Up Answer Since the oscillation is uniform in period and amplitude, the moving average as described by Michel Ruel is best as a starting point. Any subsequent noise from non-uniformity can be removed by an additional filter but nearly all of this filter time becomes equivalent dead time in near and true integrating processes. 
You need to be careful that the reset time is not too small as you decrease the controller gain, whether due to filtering or to absorb variability. The product of PID gain and reset time should be greater than twice the inverse of the integrating process gain (1/sec) to prevent the slow rolling oscillations that decay gradually. Slide 29 of the ISA webinar on PID options and solutions gives the equations for the window of allowable PID gains. Slide 15 shows how to estimate the attenuation of an oscillation by a filter. The webinar presentation and discussion is in the ISA Mentor Program post How to optimize PID controller settings . If you need to minimize dead time introduced by filtering, you could develop a smarter statistical filter such as cumulative sum of measured values (CUSUM). For an excellent review of how to remove unwanted data signal components, see the InTech magazine article Data filtering in process automation systems . Mark Darby’s Follow-Up Answer My experience is that most times a cycle in a disturbance flow is already causing cycling in other variables (due to the multivariable nature of the process).  And advanced control, including MPC, will not significantly improve the situation and may make it worse.  So it is best to fix the cycle before proceeding with advanced control.  Making a measured cyclic disturbance a feedforward to MPC likely won’t help much.  MPC normally assumes the current value of the feedforward variables stays constant over the prediction horizon. What you’d want is to have the future prediction include the cycle.  Unfortunately this is not easily done with the MPC packages today. Often, levels are controlled by a PID loop, not in the MPC.  The exception can be if there are multiple MVs that must be used to control the level (e.g., multiple outlet flows), or the manipulated flow is useful for alleviating a constraint (see the handbook).  Another exception is if there is significant dead time between the flow and the level. 
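The gain-reset window quoted above can be checked numerically. This is a small illustrative helper, not taken from the webinar slides; the function names are assumptions:

```python
# Check of the slow-rolling-oscillation rule quoted above: for a near- or
# true-integrating process with integrating process gain ki (1/sec), the
# product of PID gain and reset time should exceed 2 / ki.

def min_reset_time(kc, ki):
    """Smallest reset time (sec) satisfying kc * ti > 2 / ki."""
    return 2.0 / (ki * kc)

def reset_ok(kc, ti, ki):
    """True if the gain-reset product is outside the oscillation window."""
    return kc * ti > 2.0 / ki
```

For example, with ki = 0.001 1/sec and kc = 5, the reset time must exceed 400 sec; halving the gain without doubling the reset time can push the loop into the slow rolling oscillations described above.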
Luis Navas’ Follow-up Response Thank you for the support. I think the ISA Mentor Program resources are truly an elite support team. By the way, I have already read the blogs about signal filtering. My comments and clarifications: The signal corresponds to a tank level in a batch process, so it has an oscillating behavior (without noise). The downstream process is continuous (an evaporator), and the idea is to control the feed tank level with MPC (using the moving average) through the evaporator flow input. The feed tank level is critical for the evaporator to work properly. I have applied Michel Ruel's statement: if the period is known, a moving average (sum of most recent N values divided by N) will nearly completely remove a periodic cycle. Now the moving average is better, as seen in Figure 2. Figure 2: New Level Oscillations (blue: actual level and green: level with Ruel moving average) Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. 
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. 
He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Webinar Recording: Loop Tuning and Optimization

    The post Webinar Recording: Loop Tuning and Optimization first appeared on the ISA Interchange blog site. This educational ISA webinar was presented by  Greg McMillan  in conjunction with the  ISA Mentor Program . Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now  Eastman Chemical ). In this ISA Mentor Program presentation, Michel Ruel , a process control expert and consultant, provides insight and guidance as to the importance of optimization and how to achieve it through better PID control. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program. About the Presenter Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. 
Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • How to Optimize Industrial Evaporators

    The post How to Optimize Industrial Evaporators first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the  ISA Mentor Program , I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Luis Navas. Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important to improve evaporator concentration control and minimize steam consumption.

Luis Navas’ Questions

Which criteria should I follow to define the final control strategy with model predictive control (MPC) in an existing PID strategy? Only one MPC for all existing PIDs? Or maybe 1 MPC + 1 PID, or 1 MPC + 2 PIDs? What are the criteria to make the correct decision?

What is the step-by-step procedure to deploy the advanced control in the real process in the safest way?

What are your hints, tips, advice and experiences regarding MPC implementations? 
Greg McMillan’s Initial Answer In general you try to include all of the controlled variables (CV), manipulated variables (MV), disturbance variables (DV), and constraint variables (QC) in the same MPC unless the equipment is unrelated, there is a great difference in time horizons, or there is a cascade control opportunity like we see with kiln MPC control, where a slower MPC with the more important controlled variables sends setpoints to a secondary MPC for faster controlled variables. For your evaporator control, this does not appear to be the case. We first discuss advanced PID control and its common limitations before moving to MPC. For optimization, a PID valve position controller could maximize production rate by pushing the steam valve to its furthest effective throttle position. As for increasing efficiency in terms of minimizing steam use, this would generally be achieved by tight concentration control that allows you to operate closer to the minimum concentration spec. The level and concentration responses would be true and near integrating. In both cases, PID integrating process tuning rules should be used. Do not decrease the PID gain computed by these rules without proportionally increasing the PID reset time. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain to prevent slow rolling oscillations, a very common problem. Often the reset time is two or more orders of magnitude too small because the user decreased the PID gain due to noise or thinking oscillations are caused by too high a PID gain. I don’t see constraint control for a simple evaporator, but if there were constraints, an override controller would be set up for each. However, only one constraint would be effectively governing operation at a given time via signal selection. Also, the proper tuning of override controllers and valve position controllers is not well known. 
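The answer above calls for integrating-process tuning rules without stating one. As an illustrative assumption (not from this post), here is the widely used lambda (IMC) rule for integrating processes, which also keeps the gain-reset product well above the inverse of the integrating process gain:

```python
# Lambda (IMC) tuning for an integrating process, shown as an assumed
# example rule: ki is the integrating process gain (1/sec), theta the
# total loop deadtime (sec), lam the chosen closed-loop arrest time (sec).

def lambda_tune_integrating(ki, theta, lam):
    kc = (2 * lam + theta) / (ki * (lam + theta) ** 2)  # controller gain
    ti = 2 * lam + theta                                # reset time (sec)
    return kc, ti
```

With lam = theta, the product kc * ti works out to 2.25 / ki, comfortably above the 1 / ki floor cited above, so the rule inherently avoids the slow rolling oscillations caused by too small a reset time.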
Furthermore, the identification of dynamics for feedback and particularly feedforward control typically requires the expertise of a specialist. Often comparisons are made showing how much better model predictive control is than PID control without good identification and tuning of the feedback and feedforward control parameters. While optimization limitations and typical errors in identification and tuning push your case toward the use of MPC, here are the best practices for PID control of evaporators:

• Measure product concentration by a Coriolis meter on the evaporator system discharge.
• Control product concentration by manipulation of the heat input to product flow ratio.
• Use evaporator level measurements with excellent sensitivity and signal-to-noise ratio. When possible, use radar instead of capillary systems to reduce level noise, drift, and lag.
• Control product concentration by changing the heat input to feed rate ratio.
• If production rate is set by discharge flow, use PID to manipulate heat input. If production rate is set by heat input, use PID to manipulate product flow rate.
• Use near-integrator rules maximizing rate action in PID concentration controller tuning.
• Use a flow feedforward of product flow rate to set feed rate to minimize the response time for production rate or product concentration control.
• For feed concentration disturbances, use feedforward to correct the heat input based on the feed solids concentration computed from density measured by a feed Coriolis meter.
• Display the actual heat to feed ratio and provide operations with manual adjustment of the desired ratio for startup and abnormal operation.
• To provide faster concentration control for small disturbances, use a PD controller to manipulate a small bypass flow whose bias is about 50% of maximum bypass flow.

The use of model predictive control software often does a good job of identifying the dynamics and automatically incorporating them into the controller.
Also, it can simultaneously handle multiple constraints with predictive capability as to violation of constraints. Furthermore, a linear program or other optimizer built into the MPC can find and achieve the optimum intersection of the minimum and maximum values of controlled, constraint, and manipulated variables plotted on a common axis of the manipulated variables. I have asked for more detailed advice on MPC from Mark Darby, a great new resource, who wrote the MPC sections for the McGraw-Hill handbook Hunter and I just finished.

Mark Darby's Initial Answer

It is normally best to keep PID controls in place for basic regulatory control if they perform well, which may require re-tuning or reconfiguration of the strategy. Your case is getting into advanced control and optimization, where the advantage shifts to MPC. Multiple interactions and measured disturbances are best handled by MPC compared to PID decoupling and feedforward control. First principle models should be used to compute smarter disturbance variables, such as solids feed flow rather than separate feed flow and feed concentration disturbance variables. Override control and valve position control schemes are better handled by MPC. More general optimization is also better done with an MPC. Remember to include PID outputs to valves as constraint variables if they can saturate in normal operation. If a valve is operated close to a limit (e.g., 5% or 95%), it may be better to have the MPC manipulate the valve signal directly, using signal characterization based on the installed flow characteristic as needed to linearize the response.

ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone's career. Click this link to learn more about the ISA Mentor Program.
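The feed-density feedforward recommended in the PID best practices above can be sketched in a few lines. The linear density-to-solids calibration and all of the numbers below are hypothetical placeholders; a real application would use a correlation fit to the actual solution:

```python
def solids_fraction(density_kg_m3: float,
                    rho_solvent: float = 998.0,
                    slope: float = 0.0006) -> float:
    """Estimate feed solids mass fraction from Coriolis density.
    The linear calibration (rho_solvent, slope) is hypothetical; fit a
    real correlation for the actual solution."""
    return max(0.0, slope * (density_kg_m3 - rho_solvent))

def heat_input_ff(feed_flow_kg_h: float, density_kg_m3: float,
                  ratio_kj_per_kg_solids: float) -> float:
    """Feedforward heat input = desired ratio * estimated solids feed rate."""
    solids_flow = feed_flow_kg_h * solids_fraction(density_kg_m3)
    return ratio_kj_per_kg_solids * solids_flow

# A denser feed (more solids) calls for proportionally more heat input,
# correcting the disturbance before it reaches the product concentration.
low = heat_input_ff(10_000, 1048.0, 2300.0)   # roughly 3 wt% solids
high = heat_input_ff(10_000, 1098.0, 2300.0)  # roughly 6 wt% solids
assert high > low
```

The desired ratio would normally remain an operator-adjustable setpoint, matching the best practice of displaying the actual ratio and allowing manual adjustment during startup and abnormal operation.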
Here are some MPC best practices from the Process/Industrial Instruments and Controls Handbook, Sixth Edition, by Gregory K. McMillan and Hunter Vegas (co-editors), scheduled to be published in early 2019. This sixth edition is revolutionary in having nearly 50 industry experts focus on the steps needed in all aspects of a successful automation project to maximize the return on investment.

MPC Project Best Practices

• Project team members should include not only control engineers, but also process engineers and operations personnel.
• First level support of MPC requires staff with knowledge of both the MPC and the process. Site staff needs to have sufficient understanding to troubleshoot and answer the questions of operations. Larger companies often have central teams for second level support and to participate in projects.
• Even in companies with experienced teams, it is not unusual to use outside MPC consultants. The right level of usage of outside consultants is rarely 0% or 100%.
• It may be tempting to skip the benefit estimation and/or post audit, especially when a company has a previous successful history with MPC, but doing so carries a risk. New management may not have experience with or understand the value of MPC, leading to the inevitable question: "What is MPC doing for me today?"
• The other temptation is to forgo needed instrumentation or hardware repairs and proceed directly with an MPC project, arguing that MPC can compensate for such deficiencies. This carries the risk of not meeting expectations and MPC getting a bad reputation, which will be difficult to erase.
• Regular reporting of relevant KPIs and benefits is seen as the best way of keeping the organization in the know and motivating additional MPC applications.

MPC Design Best Practices

• Develop a functional design with input from operations, process engineering, economics staff, and instrument techs.
• Update the design as the project progresses, and after the project is completed, to reflect the as-built MPC.
• Not all MPC variables must be determined up front in the project. Most important is identifying the MVs. The final selection of CVs and DVs can be made after plant testing, assuming data for these variables was collected.
• A dynamic simulation can be useful for testing a new regulatory control strategy. It can also be used to test and demonstrate an MPC, which can be quite illustrative and educational, particularly if MPC is being applied for the first time in a facility.
• If filtering of a CV or DV for MPC is required, it needs to be done at the DCS or PLC level. The faster scan times allow effective filtering (usually on the order of seconds) without significantly affecting the underlying dynamics of the signal. In addition, filters associated with the PVs of PID loops should be reviewed to ensure excessive filtering is not being used to mask other problems.
• A steady-state or dynamic simulation can be useful for determining thermo-physical equation parameters for PID calculated control variables (e.g., duty or PCT) and MPC CVs, estimating process gains, and evaluating possible inferential predictors.
• With most MPC products, adding MVs, CVs, and DVs is a straightforward task once models are identified. This allows starting with a smaller MPC on one part of the unit, and later increasing the scope as experience and confidence are gained.
• Inferential models can be developed ahead of the plant test, which allows the model to be evaluated and adjustments made. For data-driven, regression-based inferentials, one needs to have at least confirmed that measurements exist that correlate with the analyzed value. Final determination of model inputs can be made during the modeling phase.
• A challenge with lab-based inferentials is accurately knowing when a lab sample is collected.
A technique for automating this is to install a thermocouple directly in the line at the sample point; a spike in the temperature measurement is used to detect a sample collection.
• When implementing a steady-state inferential model online, it is often useful to filter inputs to the calculation to remove phantom effects such as overshoot or inverse response.

MPC Model Development Best Practices

Plant Testing

• A test plan should be developed with operations and process engineering. It will need to be flexible to accommodate the needs of the modeling as well as operational issues that may arise.
• Data collected for model identification should not use data compression. A separate data collection is recommended to minimize the likelihood of latency effects such as PVs exhibiting changes before SPs.
• The data collection should include all pertinent tags for the units being tested. This allows integrity checks to be made, and models to be identified for new CVs and DVs that may be added later to the MPC.
• Model identification runs should be done frequently, typically at least once per day. This allows the testing to be modified to emphasize MV-CV models that are insufficiently identified.
• The plant test is an opportunity to answer operational or process questions on which there are differing opinions, such as the effect of a recycle on recovery or on a constraint. This can help to develop consensus on a new strategy.
• MPC products that include automatic closed-loop testing should provide the necessary logic to change step sizes, bring MVs in and out of test mode, and displays to follow the testing.
• Lab sample collection for an inferential model: include multiple samples collected at the same time to assess sample container differences (reproducibility) and lab repeatability. When multiple sample containers are used, record the container used for each sample. Coordinate lab sampling with collection personnel and record the times samples are collected from the process.
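The lab-sample detection technique described above (a thermocouple in the sample line producing a temperature spike when process fluid is drawn) reduces to a simple edge detector. This is an illustrative sketch; the threshold and the example trace are made up:

```python
def detect_sample_times(temps, threshold=5.0):
    """Flag lab-sample collections as indices where the sample-point
    temperature jumps by more than `threshold` degrees over the previous
    reading. The 5-degree threshold is illustrative; tune it so ambient
    drift never triggers a false sample event."""
    events = []
    for i in range(1, len(temps)):
        if temps[i] - temps[i - 1] > threshold:
            events.append(i)
    return events

# Flat ambient baseline with one hot excursion while fluid flows past the TC.
trace = [25.0] * 10 + [80.0, 78.0] + [25.0] * 10
assert detect_sample_times(trace) == [10]  # scan index 10 = sample time
```

The detected timestamp is then paired with the lab result so the inferential model is trained against the composition that actually existed at collection time.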
Model Identification

• The MPC identification package should automatically handle the required scaling of MVs and CVs and differencing and/or de-trending.
• The ability to slice or section out bad data is a necessary feature. Note that each section of data that is excluded requires a re-initialization of the identification algorithm for the next section of good data.
• A useful technique for deciding which MVs and DVs are significant in a model, and should therefore be included, is to compare their contributions to the CV response based on the average move sizes made during the plant test.
• Model assessment tools to guide model quality assessments are desirable. These may be error bounds on the step responses or other techniques to grade the model. A common technique for assessing model errors is Bode error plots, which express errors as a function of frequency. They can be useful for modifying the test to improve certain aspects of the model (e.g., reducing errors at low or high frequencies).
• Features to assist in the development of nonlinear transformations are desirable. Ideally, the necessary pre- and post-controller calculations to support transformations are a standard option in the MPC.
• Features that help document the various model runs and the construction of the final MPC model are desirable.
• Even if an MPC includes online options for removing weak degrees of freedom, it is recommended that known consistency relationships be imposed as part of model identification.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants.
See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. 
Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • How to Calibrate a Thermocouple

The post How to Calibrate a Thermocouple first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Daniel Brewer.

Daniel Brewer, one of our newest protégés, has over six years of industry experience as an I&E technician. He attended the University of Kansas process instrumentation and control online courses. Daniel's questions focus on aspects affecting thermocouple accuracy.

Daniel Brewer's Questions

How do you calibrate a thermocouple transmitter? How do you simulate a thermocouple? When do you use a zero degree reference junction? What if your measuring junction temperature varies?

Hunter Vegas' Answer

Most people use a thermocouple simulator to calibrate temperature transmitters. You can usually set them to generate a wide selection of thermocouple types. Just make sure the thermocouple lead you use to connect the simulator to the transmitter is the right kind of wire. "Calibrating" the thermocouple itself is another matter, because realistically it either works or it doesn't. You can pull it and put it in a bath, though very few people actually do that. However, if the measurement is critical, most will take the time to either put the thermocouple in a bath or dry block, or at least cross-check the reading against another thermocouple or some other means. The zero degree junction is a bit more complicated.
Basically any time two dissimilar metals are connected a slight millivolt signal is generated. That is what a thermocouple is – two dissimilar metals welded together which generate varying voltages depending on the temperature at the junction. When you run a thermocouple circuit you try to use the same metals as the thermocouple for the whole circuit – that is, you run thermocouple wire that matches the thermocouple and you use special thermocouple terminal blocks that are the same kind. This eliminates any extra junctions – the same metal is always connected to itself. However at some point you have to hook up to some kind of device that has copper terminal blocks (transmitter, indicator, etc.). Unfortunately this creates another thermocouple junction where the copper touches the wires. That junction will impact the reading and will also fluctuate with temperature, so the error will be variable. To fix this most devices have a cold junction compensation circuit built in that automatically senses the temperature of the terminal block and subtracts the effect from the reading. Nearly every transmitter and readout device has it built in as a standard feature now – only older equipment would lack it. Greg McMillan’s Answer The error from a properly calibrated smart temperature transmitter with the correct span is generally negligible compared to the noise and errors from the sensor and signal wiring and connections. The use of Class 1 special grade instead of Class 2 standard grade thermocouples and extension lead wires enables an accuracy that is 50% better. The use of thermocouple input cards instead of smart transmitters introduces large errors due to the large spans and inability to individualize the calibrations. Thermocouple (TC) drift can vary from 1 to 20 degrees Fahrenheit per year and the repeatability can vary from 1 to 8 degrees Fahrenheit depending upon the TC type and application conditions. 
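Hunter's description of cold junction compensation can be made concrete with a short sketch. It assumes a linearized type K thermocouple with a constant Seebeck coefficient (roughly 41 µV/°C near ambient); real devices apply the full NIST polynomials, so treat this as illustrative only.

```python
# Sketch of cold junction compensation (CJC), assuming a linearized
# type K thermocouple with a constant Seebeck coefficient of about
# 41 uV/degC near ambient. Real devices use the NIST ITS-90 polynomials;
# this linear model is only for illustration.

SEEBECK_UV_PER_C = 41.0  # approximate type K sensitivity, uV/degC

def cjc_process_temp_c(measured_uv: float, terminal_block_c: float) -> float:
    """Add back the EMF 'lost' at the copper terminal junction.

    measured_uv: raw EMF seen at the device terminals, in microvolts
    terminal_block_c: terminal block temperature from the CJC sensor
    """
    # EMF the terminal junction suppresses relative to a 0 degC reference
    cold_junction_uv = SEEBECK_UV_PER_C * terminal_block_c
    # Total EMF referenced to 0 degC, then converted back to temperature
    total_uv = measured_uv + cold_junction_uv
    return total_uv / SEEBECK_UV_PER_C
```

For a process at 100 °C with the terminal block at 25 °C, the device only sees the EMF for the 75 °C difference; adding back the cold junction EMF recovers the true process temperature.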
For critical operations demanding high accuracy, the frequency of sensor calibrations needed is problematic. While a dry block calibrator is faster than a wet bath and can cover a higher temperature range, the removal of the sensor from the process is disruptive to operations and the time required compared to a simple transmitter calibration is still considerable. The best bet is a single point temperature check to compensate for the offset due to drift and manufacturing tolerances. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program. In a distillation column application, operations were perplexed and more than annoyed at the terrible column performance when the thermocouple was calibrated or replaced. It turns out operations had homed in on a temperature setpoint that had effectively compensated for the offset in the thermocouple measurement. Even after realizing the need for a new setpoint due to a more accurate thermocouple, it would take months to years to find the best setpoint. Temperature is critical for column control because it is an inference of composition. It is also critical for reactor control because the reaction rate, which determines process capacity, and the selectivity, which sets process efficiency and product quality, are greatly affected by temperature. In these applications where the operating temperature is below 400 degrees Fahrenheit, a resistance temperature detector (RTD) is a much better choice. Table 1 compares the performance of a thermocouple and RTD. 
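The single point temperature check mentioned above amounts to capturing a bias at one operating point against a trusted reference and applying it thereafter. A minimal sketch, with illustrative names (no vendor tool is implied):

```python
# Minimal sketch of a single point temperature check: capture the offset
# between a reference (e.g., dry block or cross-checked sensor) and the
# installed transmitter reading at one operating point, then apply that
# offset as a bias to future readings. Names are illustrative only.

def compute_offset(reference_c: float, reading_c: float) -> float:
    """Offset to add to future readings (reference minus reading)."""
    return reference_c - reading_c

def corrected(reading_c: float, offset_c: float) -> float:
    """Apply the stored single point offset to a raw reading."""
    return reading_c + offset_c
```

This compensates for the combined drift and manufacturing tolerance offset without pulling the sensor for a full multi-point calibration.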
Table 1: Temperature Sensor Precision, Accuracy, Signal, Size and Linearity

Stepped thermowells should be specified with an insertion length greater than five times the tip diameter (L/D > 5) to minimize conduction error from heat flowing from the thermowell tip to the pipe or equipment connection, and an insertion length less than 20 times the tip diameter (L/D < 20) to minimize vibration from wake frequencies. Supplier calculations on insertion length should be done to confirm that heat conduction error and vibration damage are not a problem. Stepped thermowells reduce the error and damage and provide a faster response. Spring loaded grounded thermocouples as seen in Figure 1, with minimum annular clearance between the sheath and thermowell interior walls, provide the fastest response, minimizing the errors introduced by the sensor tip temperature lagging the actual process temperature.

Figure 1: Spring loaded compression fitting for sheathed TC or RTD

Thermowell material must provide corrosion resistance and, if possible, a thermal conductivity that minimizes conduction error or response time, whichever is most important. The tapered tip of the thermowell must be close to the center line of the pipe and the tapered portion of the thermowell completely past the equipment wall, including any baffles. For columns, the location showing the largest and most symmetrical change in temperature for an increase and decrease in manipulated flow should be used. Simulations can help find this but it is wise to have several connections to confirm the best location by field tests. The tip of the thermowell must see the liquid, which may require a longer extension length or mounting on the opposite side of the downcomer to avoid the tip being in the vapor phase due to the drop in level at the downcomer. For TCs above 600 degrees Celsius, ensure the sheath material is compatible with the TC type. 
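The L/D guideline above is easy to encode as a first-pass screening check; the supplier's wake frequency and conduction calculations (e.g., per ASME PTC 19.3 TW) are still required. Function name and units are illustrative:

```python
# Sketch of the stepped thermowell insertion length guideline from the
# text: L/D above 5 to limit tip-to-wall conduction error and L/D below
# 20 to limit vibration from wake frequencies. This is only a screening
# check; a supplier wake frequency calculation is still required.

def thermowell_ld_ok(insertion_length_mm: float, tip_diameter_mm: float) -> bool:
    """True if the insertion length falls in the 5 < L/D < 20 window."""
    ld = insertion_length_mm / tip_diameter_mm
    return 5.0 < ld < 20.0
```

A 100 mm insertion with a 10 mm tip (L/D = 10) passes; 40 mm (L/D = 4) risks conduction error and 300 mm (L/D = 30) risks vibration damage.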
For TCs above the temperature limit of sheaths, use the ceramic material with the best thermal conductivity and a design that minimizes measurement lag time. For TCs above the temperature limit of sheaths with gaseous contaminants or reducing conditions, use primary (outer) and secondary (inner) protection tubes, possibly purged, to prevent contamination of the TC element and provide a faster response. The best location for a thermowell in small diameter pipelines (e.g., less than 12 inches) is in a pipe elbow facing upstream to maximize insertion length in the center of the pipe. If abrasion from solids is an issue, the thermowell can be installed in the elbow facing downstream, but a greater length is needed to reduce noise from swirling. If a pipe is half filled, the installation should ensure the narrowed diameter of the stepped thermowell is in liquid and not vapor. The location of a thermowell must be sufficiently downstream of a joining of streams or a heat exchanger tube side outlet to enable remixing of streams. The location must not be too far downstream due to the increase in transportation delay, which is the residence time for plug flow: the pipe volume between the outlet or junction and the sensor location divided by the pipe flow (volume/flow). For a length that is 25 times the pipe diameter (L/D = 25), the increase in loop deadtime of a few seconds is not as detrimental as a poor signal to noise ratio from poor uniformity. For desuperheaters, to prevent water droplets from creating noise, the thermowell must provide a residence time that is greater than 0.3 seconds, which for high gas velocities can mean a distance much greater than that required for liquid heat exchangers. For greater reliability and better diagnostics, dual isolated sensing elements can be used, but the more effective solution is redundant installations of thermowells and transmitters. 
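The transportation delay defined above (pipe volume between the junction and the sensor divided by the volumetric flow) can be sketched as:

```python
# Sketch of the plug flow transportation delay from the text: the pipe
# volume between the stream junction (or exchanger outlet) and the
# sensor location, divided by the volumetric flow. Useful for checking
# both remixing distance and the dead time added to the loop.
import math

def transport_delay_s(pipe_id_m: float, length_m: float,
                      flow_m3_per_s: float) -> float:
    """Residence time in seconds for plug flow over the given length."""
    volume_m3 = math.pi * (pipe_id_m / 2.0) ** 2 * length_m
    return volume_m3 / flow_m3_per_s
```

For a 0.1 m pipe with the sensor 2.5 m downstream (L/D = 25) at 0.01 m³/s, the added dead time is about 2 seconds, consistent with the "few seconds" noted above.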
The middle signal selection of three completely redundant measurements offers the best reliability and the least effect of drift, noise, repeatability and slow response. The measurement from middle signal selection will be valid for any type of failure of one measurement. There is also considerable knowledge gained to head off problems from comparison of each measurement to the middle. Drift in the sensor shows up as a different average controller output at the same production rate, assuming there is no fouling or change in raw materials. Poor repeatability in the sensor shows up as excessive variability in the temperature controller output. For very tight control where the controller gain is high, sensor variability is most apparent in the controller output, assuming the controller is tuned properly and the valve has a smooth consistent response. For much more on calibration and temperature measurement see the Beamex e-book Calibration Essentials and Rosemount’s The Engineer’s Guide to Industrial Temperature Measurement . Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. 
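The middle signal selection described at the start of this answer is simply a median of three, and the deviation of each sensor from the middle provides the diagnostic knowledge mentioned. A minimal sketch:

```python
# Sketch of middle (median) signal selection for three redundant
# temperature measurements. The selected value rides through any single
# sensor failure (high, low, or frozen), and each sensor's deviation
# from the middle is a useful maintenance diagnostic for drift.

def middle_select(a: float, b: float, c: float) -> float:
    """Return the middle of three redundant measurements."""
    return sorted((a, b, c))[1]

def deviations_from_middle(a: float, b: float, c: float):
    """Per-sensor deviation from the selected middle value."""
    mid = middle_select(a, b, c)
    return [a - mid, b - mid, c - mid]
```

With readings of 100.2, 100.0 and a failed-high 150.0, the selected value is 100.2 and the 49.8 degree deviation flags the bad sensor without disturbing control.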
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. 
He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg :
  • When is Reducing Variability Wrong?

    The post, When is Reducing Variability Wrong? , first appeared on the ControlGlobal.com Control Talk blog. Having the blind wholesale goal of reducing variability can lead to doing the wrong thing that can reduce plant safety and performance. Here we look at some common mistakes made that users may not realize until they have a better concept of what is really going on. We seek to provide some insightful knowledge here to keep you out of trouble. Is a smoother data historian plot or a statistical analysis showing less short term variability good or bad? The answer is that it is bad in the following situations, which mislead users and data analytics. First of all, the most obvious case is surge tank level control. Here we want to maximize the variation in level to minimize the variation in manipulated flow, typically to downstream users. This objective has a positive name of absorption of variability. What this is really indicative of is the principle that control loops do not make variability disappear but transfer variability from a controlled variable to a manipulated variable. Process engineers often have a problem with this concept because they think of setting flows per a Process Flow Diagram (PFD) and are reluctant to let a controller freely move them per some algorithm they do not fully understand. This is seen in predetermined sequential additions of feeds or heating and cooling in a batch operation rather than allowing a concentration or temperature controller to do what is needed via fed-batch control. No matter how smart a process engineer is, not all of the situations, unknowns and disturbances can be accounted for continuously. This is why fed-batch control is called semi-continuous. I have seen process engineers, believe it or not, sequence air flows and reagent flows to a batch bioreactor rather than going to Dissolved Oxygen or pH control. We need to teach chemical and biochemical engineers process control fundamentals including the transfer of variability. 
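As a rough illustration of absorbing variability in a surge tank, the smallest proportional gain that just arrests the worst expected inflow upset within the available level range moves the manipulated outflow as gently as possible. This is a simplified sketch under stated assumptions (a proportional-only view in consistent percent units; names are illustrative, and it is not a complete integrating process tuning rule):

```python
# Hedged sketch of surge tank absorption of variability: with a
# proportional-only view in percent units, the smallest gain that still
# fully uses the available level range to absorb the worst expected
# inflow upset transfers the least variability to the outflow.
# Illustrative reasoning only, not a full integrating process tuning rule.

def min_absorption_gain(max_flow_upset_pct: float,
                        level_range_pct: float) -> float:
    """Dimensionless gain (% outflow change per % level change) that
    just spans the worst flow upset over the allowed level range."""
    return max_flow_upset_pct / level_range_pct
```

For a 20% worst-case inflow upset and 40% of level range available, a gain near 0.5 lets the level swing absorb the upset; a higher gain would needlessly pass the variability downstream.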
The variability of a controlled variable is minimized by maximizing the transfer of variability to the manipulated variable. Unnecessary sharp movements of the manipulated variable can be prevented by a setpoint rate of change limit on analog output blocks for valve positioners or VFDs, or directly on other secondary controllers (e.g., flow or coolant temperature), and the use of external-reset feedback (e.g., dynamic reset limit) with fast feedback of the actual manipulated variable (e.g., position, speed, flow, or coolant temperature). With external-reset feedback there is no need to retune the primary process variable controller. Data analytics programs need to use manipulated variables in addition to controlled variables to indicate what is happening. For tight control and infrequent setpoint changes to a process controller, what is really happening is seen in the manipulated variable (e.g., analog output). A frequent problem is data compression in a data historian that conceals what is really going on. Hopefully, this is only affecting the trend displays and not the actual variables being used by a controller. The next most common problem has been extensively discussed by me, so at this point you may want to move on to more pressing needs. This problem is the excessive use of signal filters, which may be even more insidious because the controller does not see a developing problem as quickly. A signal filter that is less than the largest time constant in the loop (hopefully in the process) creates dead time. If the signal filter becomes the largest time constant in the loop, the previously largest time constant creates dead time. Since controller tuning based on the largest time constant has no idea the filter has taken over that role, the controller gain can be increased, which combined with the smoother trends can lead one to believe the large filter was beneficial. 
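The signal filter discussed above is typically a discrete first-order (exponential) filter. A minimal sketch shows how it smooths a step while delaying the controller's view of it:

```python
# Sketch of a discrete first-order (exponential) signal filter, the kind
# the text warns about: it smooths the trend but delays the controller's
# view of a developing upset. A small filter time constant acts
# approximately as added loop dead time.

def exp_filter(signal, dt_s: float, tau_s: float):
    """Apply a first-order filter with time constant tau_s to samples
    taken every dt_s seconds. Returns the filtered series."""
    alpha = dt_s / (tau_s + dt_s)  # discrete filter coefficient
    out = [signal[0]]
    for x in signal[1:]:
        out.append(out[-1] + alpha * (x - out[-1]))
    return out
```

For a unit step with a 10 second filter and a 1 second scan, the filtered value reaches only about 60% of the step after 10 scans, so a developing upset of the same shape is seen late and attenuated, much like added dead time plus lag.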
The key here is a noticeable increase in the oscillation period, particularly if the reset time was not increased. Signal filters become increasingly detrimental as the process loses self-regulation. Integrating processes such as level, gas pressure and batch temperature are particularly sensitive. Extremely dangerous is the use of a large filter on the temperature measurement for a highly exothermic reaction. If the PID gain window (ratio of maximum to minimum PID gain) is reduced by measurement lag to the point of not being able to withstand nonlinearities (e.g., ratio less than 6), there is a significant safety risk. A slow thermowell response, often due to a sensor that is loose or not touching the bottom of the thermowell, causes the same problem as a signal filter. An electrode that is old or coated can have a time constant that is orders of magnitude larger (e.g., 300 sec) than that of a clean new pH electrode. If the velocity is slightly low (e.g., less than 5 fps), pH electrodes become more likely to foul, and if the velocity is very low (e.g., less than 0.5 fps), the electrode time constant can increase by one order of magnitude (e.g., 30 sec) compared to an electrode seeing the recommended velocity. If the thermowell or electrode is hidden by a baffle, the response is smoother but not representative of what is actually going on. For gas pressure control, any measurement filter, including that due to transmitter damping, generally needs to be less than 0.2 sec, particularly if volume boosters on a valve positioner output(s) or a variable frequency drive are needed for a faster response. Practitioners experienced in Model Predictive Control (MPC) want data compression and signal filters completely removed so that the noise can be seen and a better identification of the process dynamics, especially the dead time, is possible. 
Virtual plants can show how fast the actual process variables should be changing, revealing poor analyzer or sensor resolution and response time and excessive filtering. In general, you want measurement lags to total less than 10% of the total loop dead time or less than 5% of the reset time. However, you cannot get a good idea of the loop dead time unless you remove the filter and look for the time it takes to see a change in the right direction beyond the noise after a controller setpoint or output change. For more on the deception caused by a measurement time constant, see the Control Talk Blog “ Measurement Attenuation and Deception .”
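The measurement lag budget above can be encoded as a quick screening check. A minimal sketch matching the stated 10% of dead time or 5% of reset time guideline (function name and units are illustrative):

```python
# Sketch of the measurement lag budget rule of thumb from the text:
# total measurement lags should be less than 10% of the total loop dead
# time or less than 5% of the PID reset time. Screening check only.

def lag_budget_ok(total_lag_s: float, deadtime_s: float,
                  reset_s: float) -> bool:
    """True if the summed measurement lags meet either guideline."""
    return (total_lag_s < 0.10 * deadtime_s) or (total_lag_s < 0.05 * reset_s)
```

A 0.5 second total lag on a loop with 10 seconds of dead time and a 20 second reset time passes; a 2 second lag on the same loop fails both guidelines and warrants reducing the filter or fixing the sensor installation.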