Posts on this page are from the Control Talk blog, which is one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.
The post Basic Guidelines for Control Valve Selection and Sizing first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hiten Dalal.
Hiten Dalal, PE, PMP, is senior automation engineer for products pipeline at Kinder Morgan, Inc. Hiten has extensive experience in pipeline pressure and flow control.
Are there basic rule-of-thumb guidelines for control valve sizing beyond relying on the valve supplier and the manufacturer’s sizing program?
Selecting and sizing control valves seems to have become a lost art. Most engineers toss it over the fence to the vendor along with a handful of (mostly wrong) process data values, and a salesperson plugs the values into a vendor program which spits out a result. Control valves often determine the capability of the control system, and a poorly sized and selected control valve will make tight control impossible regardless of the control strategy or tuning employed. Selecting the right valve matters!
There are several aspects of sizing/selecting a control valve that must be addressed:
Note that gathering this data is probably the hardest part. It often takes a sketch of the piping, an understanding of the process hydraulics, and examination of the system pump curves to determine the real pressure drops under various conditions. Note too that the DP may change when you select a valve, since a valve smaller than the line requires pipe reducers/expanders, which change the effective pressure drop.
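To make the hydraulics concrete, here is a minimal sketch in Python of computing the pressure drop available to the valve from a pump curve and piping losses; the quadratic coefficients and destination pressure are entirely hypothetical placeholders for real pump curve and piping data:

```python
# Hypothetical quadratic pump curve and piping friction losses (psi);
# real numbers must come from the pump curves and the piping sketch.
def valve_dp_available(q_gpm, p_destination_psi=20.0):
    p_pump = 100.0 - 0.0005 * q_gpm**2   # pump discharge pressure falls with flow
    dp_pipe = 0.0003 * q_gpm**2          # friction losses rise with the square of flow
    return p_pump - dp_pipe - p_destination_psi

for q in (100.0, 200.0, 300.0):
    print(f"{q:5.0f} gpm -> {valve_dp_available(q):5.1f} psi available to the valve")
```

Note how the drop available to the valve shrinks rapidly at higher flows, which is why the DP used for sizing must be evaluated at minimum, normal, and maximum flow.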
This can be another difficult task. Ideally the control valve response should be linear from the control system’s perspective: if the PID output changes 5%, the process should respond the same way regardless of where the output is (a move from 15% to 20% should ideally generate the same process response as a move from 85% to 90%). If the valve response is nonlinear, control becomes much more difficult; you can tune for one process condition, but if conditions change the dynamics change and the tuning no longer works nearly as well. The valve response is determined by a number of items including:
The user has to understand all of these conditions so he/she can pick the right valve plug. Ideally you pick a valve characteristic that will offset the non-linear effects of the process and make the overall response of the system linear.
That complicates matters still further, because now you’ll need to know a lot more about the process fluid itself. If you are faced with cavitation or flashing you may need to know the vapor pressure and critical pressure of the fluid. This information may be readily available, or not if the fluid is a mix of products. Choked flow conditions are usually accompanied by noise problems and will also require additional fluid data to perform the calculations. Realize too that the selection of the valve internals will have a big impact on the flow rates, response, etc. (You’ll be looking at anti-cavitation trim, diffusers, etc.)
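As a rough illustration of the choked flow check, here is a minimal sketch using the standard ISA liquid sizing relationships; the FL, P1, Pv, and Pc values are hypothetical stand-ins for real valve and fluid data:

```python
import math

def choked_pressure_drop(p1_psia, pv_psia, pc_psia, fl):
    """Allowable (choked) liquid pressure drop per the ISA sizing equations.
    fl is the valve's liquid pressure recovery factor from vendor data."""
    ff = 0.96 - 0.28 * math.sqrt(pv_psia / pc_psia)  # liquid critical pressure ratio factor
    return fl**2 * (p1_psia - ff * pv_psia)

# Hypothetical water-like service
dp_choked = choked_pressure_drop(p1_psia=115.0, pv_psia=5.0, pc_psia=3206.0, fl=0.90)
dp_actual = 80.0
status = "choked - evaluate cavitation/flashing trim" if dp_actual >= dp_choked else "not choked"
print(f"dP_choked = {dp_choked:.1f} psi -> {status}")
```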
Usually the vendor’s program is a good place to start, but some programs are much better than others: some have more process data built in and include the advanced calculations required to handle cavitation, flashing, choked flow, and noise, while others are simplistic and may not handle these conditions at all. Theoretically you could use any vendor’s program to size any valve, but a vendor program typically has only its own valve data built in, so if you use a different program you’ll have to enter that data (if you can find it!). One caution: some vendors use different valve constants, which can be difficult to convert.
Hope this helped. It was probably more than you wanted, but control valve selection and sizing is a lot more complicated than most realize.
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.
Hunter did a great job of providing detailed, concise advice. My offering here is to help avoid the common problems from an inappropriate focus on maximizing valve capacity and minimizing valve pressure drop, leakage, and cost. All these things have resulted in “on-off valves” posing as “throttling valves,” creating problems of poor actuator and positioner sensitivity, excessive backlash and stiction, unsuspected nonlinearity, poor rangeability, and smart positioners giving dumb diagnostics.
While certain applications, such as pH control, are particularly sensitive to these valve problems, nearly all loops will suffer when backlash and stiction exceed 5% (quite common with many “on-off valves”), causing limit cycles that can spread through the process. These “on-off valves” are quite attractive because of their high capacity and low pressure drop, leakage, and cost. To address leakage requirements, a separate tight shutoff valve should be used in series with a good throttling valve, coordinated to open and close so the throttling valve can smoothly do its job.
Unfortunately there is nothing on a valve specification sheet that requires the valve to have a reasonably precise and timely response to signals, or to not create oscillations from a loop simply being in automatic, which makes us extremely vulnerable to common misconceptions. The most threatening one in selection and sizing is that rangeability is determined by how well a minimum Cv matches the theoretical characteristic. In reality, the minimum Cv cannot be less than the backlash and stiction near the seat. Most valve suppliers will not provide backlash and stiction for positions less than 40% because of the great increase from the sliding stem valve plug riding the seat or the rotary disk or ball rubbing the seal. Also, tests by the supplier are for loose packing. Many think piston actuators are better than diaphragm actuators.
Maybe the physical size and cost are less and the capability for thrust and torque higher, but the sensitivity is an order of magnitude less and the vulnerability to actuator seal problems much greater. Higher pressure diaphragm actuators are now available, enabling use on larger valves and pressure drops. One more major misconception is that boosters should be used instead of positioners on fast loops. This is downright dangerous due to positive feedback between flexure of the diaphragm slightly changing actuator pressure and the extremely high booster outlet port sensitivity. To reduce response time, the booster should be put on the positioner output with a bypass valve opened just enough to stop high frequency oscillations by allowing the positioner to see the much greater actuator and booster volume.
The following excerpt from the Control Talk blog Sizing up valve sizing opportunities provides some more detailed warnings:
We are pretty diligent about making sure the valve can supply the maximum flow. In fact, we can become so diligent we choose a valve size much greater than needed, thinking bigger is better in case we ever need more. What we often do not realize is that the process engineer has already built in a factor to make sure there is more than enough flow in the given maximum (e.g., 25% more than needed). Since valve size and valve leakage are the prominent requirements on the specification sheet once the materials of construction are clear, we are set up for a bad scenario of buying a larger valve with higher friction.
The valve supplier is happy to sell a larger valve, and the piping designer is happier that little or no pipe reducer is needed for valve installation and the pump size may be smaller. The process is not happy. The operators are not happy looking at trend charts unless the trend chart time and process variable scales are so large that the limit cycle looks like noise. Eventually everyone will be unhappy.
The limit cycle amplitude is large because of greater friction near the seat and the higher valve gain. The amplitude in flow units is the percent resolution (e.g., % stick-slip) multiplied by the valve gain (e.g., delta pph per delta % signal). You get a double whammy from a larger resolution limit and a larger valve gain. If you further decide to reduce the pressure drop allocated to the valve as a fraction of total system pressure drop to less than 0.25, a linear characteristic becomes quick opening, greatly increasing the valve gain near the closed position. For a fraction much less than 0.25 and an equal percentage trim, you may be literally and figuratively bottoming out for the given R factor that sets the rangeability of the inherent flow characteristic (e.g., R=50).
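A minimal sketch of how this distortion can be computed, assuming a constant total system pressure drop and the textbook inherent characteristics (the valve drop ratios below are illustrative):

```python
import numpy as np

def inherent(x, trim="linear", r=50.0):
    """Inherent characteristic f(x) for fractional travel x in (0, 1]."""
    return x if trim == "linear" else r ** (x - 1.0)   # equal percentage: R**(x-1)

def installed_flow(x, phi, trim="linear", r=50.0):
    """Installed fractional flow for a valve drop to total system drop
    ratio phi (taken at full open), assuming constant total pressure drop."""
    f = inherent(x, trim, r)
    return f / np.sqrt(phi + (1.0 - phi) * f**2)

x = np.linspace(0.02, 1.0, 50)
for phi in (1.0, 0.25, 0.05):
    gain = np.gradient(installed_flow(x, phi, trim="linear"), x)
    print(f"phi={phi:4.2f}: linear trim gain at 10% travel={gain[4]:.2f}, at 90%={gain[-6]:.2f}")
```

For a pressure drop ratio well below 0.25, the gain at 10 percent travel is tens of times the gain at 90 percent travel, which is exactly the quick-opening distortion described above.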
What can you do to lead the way and become the “go to” resource for intelligent valve sizing?
You need to compute the installed flow characteristic for various valve and trim sizes as discussed in the Jan 2016 Control Talk post Why and how to establish installed valve flow characteristics. You should take advantage of supplier software and your company’s mechanical engineer’s knowledge of the piping system design and details.
You must choose the right inherent flow characteristic. If the pressure drop available to the control valve is relatively constant, then linear trim is best because the installed flow characteristic is then the inherent flow characteristic. The valve pressure drop can be relatively constant for a variety of reasons, most notably pressure control loops or negligible changes in pressure in the rest of the piping system (frictional losses in the system piping negligible). For more on this see the 5/06/2015 Control Talk blog Best Control Valve Flow Characteristic Tips.
On the installed flow characteristic you need to make sure the valve gain in percent (% flow per % signal) from minimum to maximum flow does not change by more than a factor of 4 (e.g., 0.5 to 2.0) with the minimum gain greater than 0.25 and the maximum gain less than 4. For sliding stem valves, this valve gain requirement corresponds to minimum and maximum valve positions of 10% and 90%. For many rotary valves, this requirement corresponds to minimum and maximum disk or ball rotations of 20 degrees and 50 degrees.
Furthermore, the limit cycle amplitude, which is the resolution in percent multiplied by the valve gain in flow units (e.g., pph per %) and by the process gain in engineering units (e.g., pH per pph), must be less than the allowable process variability (e.g., pH). The amplitude and conditions for a limit cycle from backlash are a bit more complicated but still computable. For sliding stem valves, you have more flexibility in that you may be able to change out trim sizes as the process requirements change. Plus, sliding stem valves generally have much better resolution if you have a sensitive diaphragm actuator with plenty of thrust or torque and a smart positioner.
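As a quick arithmetic sketch of this check with purely hypothetical numbers:

```python
# Limit cycle amplitude from stiction = resolution (%) x valve gain x process gain
resolution_pct = 0.5   # % stick-slip near the operating position
valve_gain = 40.0      # pph per % signal from the installed flow characteristic
process_gain = 0.02    # pH per pph near the setpoint (slope of the titration curve)
allowable_ph = 0.2     # allowable pH variability

amplitude_ph = resolution_pct * valve_gain * process_gain
verdict = "acceptable" if amplitude_ph < allowable_ph else "too large - better valve/trim needed"
print(f"limit cycle amplitude ~ {amplitude_ph:.2f} pH ({verdict})")
```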
The books Tuning and Control Loop Performance Fourth Edition and Essentials of Modern Measurements and Final Elements have simple equations to compute the installed flow characteristic and the minimum possible Cv for controllability based on the theoretical inherent flow characteristic, the ratio of valve drop to total system pressure drop, and the resolution limit.
Here is some guidance from “Chapter 4 – Best Control Valves and Variable Frequency Drives” of Process/Industrial Instruments and Controls Handbook Sixth Edition that Hunter and I just finished with the contributions of 50 experts in our profession to address nearly all aspects of achieving the best automation project performance.
The effects of resolution limits from stiction and dead band from backlash are most noticeable for changes in controller output less than 0.4%, and the effect of rate limiting is greatest for changes greater than 40%. For PID output changes of 2%, a poor valve or VFD design and setup are not very noticeable. An increase in PID gain resulting in changes in PID output greater than 0.4% can reduce oscillations from poor positioner design and dead band.
The requirements in terms of 86% response time and travel gain (change in valve position divided by change in signal) should be specified for small, medium, and large signal changes. In general, the travel gain requirement is relaxed for small signal changes due to the effect of backlash and stiction, and the 86% response time requirement is relaxed for large signal changes due to the effect of rate limiting. The measurement of actual valve travel is problematic for on-off valves posing as throttling valves because the shaft movement is not the disk or ball movement. The resulting difference between shaft position and actual ball or disk position has been observed in several applications to be as large as 8 percent.
Use sizing software with physical properties for worst case operating conditions. The minimum valve position must be greater than the backlash and dead band. For a relatively good installed flow characteristic (valve drop to system pressure drop ratio greater than 0.25), there are minimum and maximum positions during sizing that keep the valve gain nonlinearity to less than 4:1. For sliding stem valves, the minimum and maximum valve positions are typically 10% and 90%, respectively. For many rotary valves, the minimum and maximum disk or ball rotations are typically 20 degrees and 50 degrees, respectively. The range between minimum and maximum positions or rotations can be extended by signal characterization to linearize the installed flow characteristic.
For much more on valve response see the Control feature article How to specify valves and positioners that do not compromise control.
The best book I have for understanding the many details of valve design is Control Valves for the Chemical Process Industries, written by Bill Fitzgerald and published by McGraw-Hill. The book specifically focused on this Q&A topic is Control Valve Selection and Sizing, written by Les Driskell and published by ISA. Most of the books in my office are old, like me. Sometimes newer versions do not exist or are not as good.
See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
About the Author

Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.
The post What Are the Opportunities for Nonlinear Control in Process Industry Applications? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. These questions come from Flavio Briquente and Syed Misbahuddin.
Model predictive control (MPC) has a proven history of providing extensive multivariable control and optimization. The applications in refineries are extensive, forcing the PID in most cases to take a backseat. These processes tend to employ very large MPC matrices and extensive optimization by linear programs (LP). The models are linear and may be switched for different product mixtures. The plants tend to have more constant production rates and greater linearity than seen in specialty chemical and biological processes.
MPC is also widely used in petrochemical plants. The applications in other parts of the process industry are increasing but tend to use much smaller MPC matrices focused on a unit operation. MPC offers dynamic decoupling, disturbance control, and constraint control. To do the same with PID requires dynamic compensation of decoupling and feedforward signals plus override control. The software to accomplish dynamic compensation for the PID is not well explained or widely used. Also, interactions and override control involving more than two process variables are more challenging than most practitioners can address. MPC is easier to tune and has an integrated LP for optimization.
Flavio Briguente is an advanced process control consultant at Evonik in North America, and is one of the original protégés of the ISA Mentor Program. Flavio has expertise in model predictive control and advanced PID control. He has worked at Rohm and Haas Company and Monsanto Company. At Monsanto, he was appointed to the manufacturing technologist program, and served as the process control lead at the Sao Jose dos Campos plant in Brazil and as a technical reference for the company’s South American sites. During his career, Flavio focused on different manufacturing processes, and made major contributions in optimization, advanced control strategies, Six Sigma and capital projects. He earned a chemical engineering degree from the University of São Paulo, a post-graduate degree in environmental engineering from FAAP, a master’s degree in automation and robotics from the University of Taubate, and a PhD in material and manufacturing processes from the Aeronautics Institute of Technology.
Syed Misbahuddin is an advanced process control engineer for a major specialty chemicals company with experience in model predictive control and advanced PID control. Before joining industry, he received a master’s degree in chemical engineering with a focus on neural network-based controls. Additionally, he is trained as a Six Sigma Black Belt, which focuses on utilizing statistical process control for variability reduction. This combination helps him implement controls utilizing physics-based as well as data-driven methods.
The considerable experience and knowledge of Flavio and Syed blur the line between protégé and resource, leading to exceptionally technical and insightful questions and answers.
Can the existing MPC/APC techniques be applied to batch operation? Is there a nonlinear MPC application available? Is there a known case in operation in the chemical industry? What are the pros and cons of linear versus nonlinear MPC?
MPC was originally developed for continuous or semi-continuous processes. It is based on a receding horizon where the prediction and control horizons are fixed and shifted forward at each execution of the controller. Most MPCs include an optimizer that optimizes the steady state at the end of the horizon, which the dynamic part of the MPC steers toward.
Batch processes are by definition non-steady-state and typically have an end-point condition that must be met at batch end and usually have a trajectory over time that controlled variables (CVs) are desired to follow. As a result, the standard MPC algorithm is not appropriate for batch processes and must be modified (note: there may be exceptions to this based on the application). I am aware of MPC batch products available in the market, but I have no experience with them. Due to the nonlinear nature of batch processes, especially those involving exothermic reaction, a nonlinear MPC may be necessary.
By far, the majority of MPCs applied industrially utilize a linear model. Many of the commercial linear packages include provisions for managing nonlinearities, such as using linearizing transformations or changing the gains, dynamics, or the models themselves. A typical approach is to apply a nonlinear static transformation to a manipulated variable or a controlled variable, commonly called Hammerstein and Wiener transformations, respectively. An example is characterizing the valve-flow relationship or controlling the logarithm of a distillation composition. Transformations are performed before or after the MPC engine (optimization) so that a linear optimization problem is retained.
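A minimal sketch of the two transformation types, assuming a hypothetical distillation impurity CV and a tabulated installed valve characteristic:

```python
import numpy as np

# Wiener-type (output) transformation: the linear MPC controls the logarithm
# of a distillation impurity so the CV response is more nearly linear.
def cv_transform(impurity_ppm):
    return np.log(impurity_ppm)

def cv_inverse(z):
    return np.exp(z)   # back to engineering units for display and limit checking

# Hammerstein-type (input) transformation: linearize the valve-flow
# relationship by inverting a tabulated installed characteristic
# (breakpoints below are hypothetical).
travel_pct = [0.0, 10.0, 25.0, 50.0, 75.0, 100.0]   # valve signal
flow_pct   = [0.0, 30.0, 55.0, 75.0, 90.0, 100.0]   # installed flow

def mv_characterize(flow_target_pct):
    return np.interp(flow_target_pct, flow_pct, travel_pct)

print(mv_characterize(60.0))   # valve signal needed for 60% flow
```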
Given the success of modeling chemical processes, it may be surprising that linear, empirically developed models are still the norm. The reason is that it is still quicker and cheaper to develop an empirical model, and linear models most often perform well for the majority of processes, especially with the nonlinear capabilities mentioned previously.
Nonlinear MPC applications tend to be reserved for those applications where nonlinearities are present in both system gains and dynamic responses and the controller must operate at significantly different targets. Nonlinear MPC is routinely applied in polymer manufacturing. These applications typically have less than five manipulated variables (MVs). A range of models have been used in nonlinear MPC, including neural nets, first principles, and hybrid models that combine first principle and empirical models.
A potential disadvantage of developing a nonlinear MPC application is the time necessary to develop and validate the model. If a first principle model is used, lower level PID loops must also be modeled if their dynamics are significant (i.e., cannot be ignored). With empirical modeling, the dynamics of the PID loops are embedded in the plant responses. Compared to a linear model, a nonlinear model will also require more computation time, so one would need to ensure that the controller can meet the required execution period based on the dynamics of the process and disturbances. In addition, there may be decisions around how to update the model, i.e., which parameters or biases to adjust. For these reasons, nonlinear MPC is reserved for those applications that cannot be adequately controlled with linear MPC.
My opinion is that we’ll be seeing more nonlinear applications once it becomes easier to develop nonlinear models. I see hybrid models being critical to this. Known information would be incorporated and the unknown parts would be described using empirical models built with a range of techniques that might include machine learning. Such an approach might actually reduce the time of model development compared to linear approaches.
MPC for batch operations can be achieved by translating the controlled variable from a batch temperature or composition with a unidirectional response (e.g., increasing temperature or composition) to the slope of the batch profile (temperature or composition rate of change), as noted in my article Get the Most out of Your Batch. You then have a continuous type of process with a bi-directional response. There is still potentially a nonlinearity issue. For a perspective on the many challenges see my blog Why batch processes are difficult.
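A minimal sketch of the slope translation, assuming evenly sampled batch temperature or composition data (the window length is an arbitrary noise-suppression choice):

```python
import numpy as np

def profile_slope(pv_history, dt_s, window=12):
    """Least-squares slope (EU per second) over the last `window` samples,
    giving a bidirectional CV from a unidirectional batch profile."""
    y = np.asarray(pv_history[-window:], dtype=float)
    t = np.arange(len(y)) * dt_s
    return np.polyfit(t, y, 1)[0]

# The MPC or PID then controls this rate of change to a setpoint trajectory
# instead of trying to control the one-way batch temperature itself.
```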
I agree with Mark Darby that the use of hybrid systems where nonlinear models are integrated could be beneficial. My preference would be in the following order in terms of ability to understand and improve:
There is an opportunity to use principal components for neural network inputs to eliminate correlations between inputs and to reduce the number of inputs. You are much more vulnerable with black box approaches like neural networks to inadequacies in the training data. More details about the use of NN and recent advances are discussed in a subsequent question by Syed.
There is some synergy to be gained by using the best of what each of the above has to offer. In the literature and in practice, experts in a particular technology often do not see the benefit of other technologies. There are exceptions, as seen in papers referenced in my answer to the next question. I personally see benefits in running a first principle model (FPM) to understand causes and effects and to identify process gains. Not generally realized is that the FPM parameters in a virtual plant that uses a digital twin running in real time, with the same setpoints as the actual plant, can be adapted by use of an MPC. In the next section we will see how a NN can be used to help a FPM.
Signal characterization is a valuable tool to address nonlinearities in the valve and process, as detailed in my blog Unexpected benefits of signal characterizers. I tried using a NN to predict pH for a mixture of weak acids and bases and found better results from the simple use of a signal characterizer. Part of the problem is that the process gain is inversely proportional to production rate, as detailed in my blog Hidden factor in our most important control loops.
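A minimal sketch of a signal characterizer as a piecewise-linear translation of measured pH to a pseudo reagent-demand controlled variable; the breakpoints are purely illustrative, not a real titration curve:

```python
import numpy as np

# Breakpoints of a hypothetical titration curve: pH versus % of curve span
ph_points     = [2.0, 4.0, 6.0, 7.0, 8.0, 10.0, 12.0]
demand_points = [0.0, 15.0, 40.0, 50.0, 60.0, 85.0, 100.0]

def characterize(ph_measured):
    """Translate pH into a nearly linear 'reagent demand' signal for the PID."""
    return np.interp(ph_measured, ph_points, demand_points)

print(characterize(6.5))   # the loop controls this instead of raw pH
```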
Since dead time mismatch has a big effect on MPC performance as detailed in the ISA Mentor Post How to Improve Loop Performance for Dead Time Dominant Systems, an intelligent update of dead time simply based on production rate for a transportation delay can be beneficial.
Recently, there has been an increased focus on the use of deep neural networks for artificial intelligence (AI) applications. Deep signifies many hidden layers. Recurrent neural networks have also been able in some cases to ensure relationships are cause and effect rather than just correlations. They use a rather black box approach with models built from training data. How successful are deep neural networks in process control?
Pavilion Technologies in Austin has integrated neural networks with model predictive control. Successful applications in the optimization of ethanol processes were reported a decade ago. In the Pavilion 1996 white paper “The Process Perfector: The next step to Multivariable Control and Optimization,” it appears that process gains, possibly from step testing of a FPM or bump testing of the actual process for an MPC, were used as the starting point. The NN was then able to provide a nonlinear model of the dynamics given the steady state gains. I am not sure what complexity of dynamics can be identified. The predictions of NN for continuous processes have the most notable successes in plug flow processes where there is no appreciable process time constant and the process dynamics simplify to a transportation delay. Examples of successes of NN for plug flow include dryer moisture, furnace CO, and kiln or catalytic reactor product composition prediction. Possible applications also exist for inline systems and sheets in pulp and paper processes and for extruders and static mixers.
While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has the specific growth rate and product formation rate as the result of a multiplication of factors, each between 0 and 1, to detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die. It does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to one and as constant as possible except for the factor of greatest interest. It has been shown data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly one.
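A minimal sketch of this multiplicative structure; the factor forms and numbers are generic, with the glucose factor as the Michaelis-Menten term mentioned above:

```python
def specific_growth_rate(mu_max, f_temp, f_ph, f_do, glucose, ks):
    """Each factor is between 0 and 1, so any single limitation dominates."""
    f_glucose = glucose / (ks + glucose)   # Michaelis-Menten (Monod) glucose factor
    return mu_max * f_temp * f_ph * f_do * f_glucose

# A temperature factor near zero stops growth regardless of the other factors:
print(specific_growth_rate(0.05, f_temp=0.02, f_ph=0.95, f_do=0.9, glucose=5.0, ks=0.5))
print(specific_growth_rate(0.05, f_temp=0.98, f_ph=0.95, f_do=0.9, glucose=5.0, ks=0.5))
```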
Process control is about changes in process inputs and consequential changes in process outputs. If there is no change, you cannot identify the process gain or dynamics. We know this is necessary in the identification of models for MPC and PID tuning and feedforward control. We often forget this in the data sets used to develop data models. A smart Design of Experiments (DOE) is really the best way to get data sets that show changes in process outputs for changes in process inputs and that cover the range of interest. If setpoints are changed for different production rates and products, existing historical data may be rich enough if carefully pruned. Remember that neural network models, like statistical models, are correlations and not cause and effect. Review by people knowledgeable in the process and control system is essential.
Time synchronization of process inputs with process outputs is needed for continuous models but not necessarily for batch models, explaining the notable successes in predicting batch end points. Often delays are inserted on continuous process inputs. This is sufficient for plug flow volumes, such as dryers, where the dynamics are principally a transport delay. For back mixed volumes such as vessels and columns, a time lag and delay should be used that are dependent upon production rate. Neural network (NN) models are more difficult to troubleshoot than data analytic models and are vulnerable to correlated inputs (data analytics benefits from principal component analysis and drill down to contributors). NN models can introduce localized reversal of slope and bizarre extrapolation beyond the training data not seen in data analytics. Data analytics’ piecewise linear fit can successfully model nonlinear batch profiles. To me this is similar in principle to the use of signal characterizers to provide a piecewise fit of titration curves.
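A minimal sketch of synchronizing a continuous process input with the output before model fitting, assuming a pure transport delay plus a first-order lag for back mixed volumes (both of which should be scheduled with production rate):

```python
import numpy as np

def synchronize(u, dead_time_s, tau_s, dt_s):
    """Shift the input array u by the transport delay, then apply a
    first-order lag to represent a back mixed volume."""
    u = np.asarray(u, dtype=float)
    shift = int(round(dead_time_s / dt_s))
    u_delayed = np.concatenate([np.full(shift, u[0]), u[:-shift]]) if shift else u.copy()
    a = dt_s / (tau_s + dt_s)              # lag filter coefficient
    y = np.empty_like(u_delayed)
    y[0] = u_delayed[0]
    for k in range(1, len(u_delayed)):
        y[k] = y[k - 1] + a * (u_delayed[k] - y[k - 1])
    return y
```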
Process inputs and outputs that are coincidental are an issue for process diagnostics and predictions by MVSPC and NN models. Coincidences can come and go and never appear again. They can be caused by unmeasured disturbances (e.g., concentrations of unrecognized inhibitors and contaminants), operator actions (e.g., largely unpredictable and unrepeatable), operating states (e.g., controllers not in the highest mode or at output limits), weather (e.g., blue northers), poor installations (e.g., an unsecured capillary blowing in the wind), and just bad luck.
I found a 1998 Hydrocarbon Processing article by Aspen Technology Inc. “Applying neural networks” that provides practical guidance and opportunities for hybrid models.
The dynamics can be adapted and cause and effect relationships increased by advancements associated with recurrent neural networks as discussed in Chapter 2 Neural Networks with Feedback and Self-Organization in The Fundamentals of Computational Intelligence: System Approach by Mikhail Z. Zgurovsky and Yuriy P. Zaychenko (Springer 2016).
The companies best known for neural net-based controllers are Pavilion (now Rockwell) and AspenTech. There have been multiple papers and presentations by these companies over the past 20 years with many successful applications in polymers. It’s clear from reading these papers that their approaches have continued to evolve over time and standard approaches have been developed. Today both approaches incorporate first principles models and make extensive use of historical data. For polymer reactor applications, the FPM involves dynamic reaction heat and mass balance equations and historical data is used to develop steady-state property predictions. Process testing time is needed only to capture or confirm dynamic aspects of the models.
Enhancements to the neural networks used in control applications have been reported. AspenTech addressed the extrapolation challenges of neural nets with bounded derivatives. Pavilion makes use of constrained neural nets in their fitting of models.
Rockwell describes a different approach to the modeling and control of a fed-batch ethanol process in a presentation made at the 2009 American Control Conference, titled “Industrial Application of Nonlinear Model Predictive Control Technology for Fuel Ethanol Fermentation.” The first step was the development of a kinetic model based on the structure of a FPM. Certain reaction parameters in the nonlinear state space model were modeled using a neural net. The online model is a more computationally efficient nonlinear model, fit from the initial model, that handles nonlinear dynamics. Parameters are fit by a gain-constrained neural net. The nonlinear model is described in a Hydrocarbon Processing article titled Model predictive control for nonlinear processes with varying dynamics.
To Syed’s follow-up question about deep neural networks: deep neural networks require more parameters, but techniques have been developed that help deal with this. I have not seen results in process control applications, but it will be interesting to see if these enhancements developed and used by the Google types will be useful for our industries.
In addition to Greg’s citations, I wanted to mention a few other articles that describe approaches to nonlinear control. A FPM-based nonlinear controller was developed by ExxonMobil, primarily for polymer applications. It is described in a paper presented at the Chemical Process Control VI conference (2001) titled “Evolution of a Nonlinear Model Predictive Controller,” and in a subsequent paper presented at another conference, Assessment and future directions of nonlinear model predictive control (2005), entitled NLMPC: A Platform for Optimal Control of Feed- or Product-Flexible Manufacturing. The motivation for a first principles model-based MPC for polymers included the nonlinearity associated with both gains and dynamics, constraint handling, control of new grades not previously produced, and the portability of the model/controller to other plants. In the modeling step, the estimation of model parameters in the FPM (parameter estimation) was cited as a challenge. State estimation of the CVs, in light of unmeasured disturbances, is considered essential for the model update (feedback step). Finally, the increased skills necessary to support and maintain the nonlinear controller were mentioned, in particular to diagnose and correct convergence problems.
A hybrid modeling approach to batch processes is described in a 2007 presentation at the 8th International IFAC Symposium on Dynamics and Control of Process Systems by IPCOS, titled “An Efficient Approach for Efficient Modeling and Advanced Control of Chemical Batch Processes.” The motivation for the nonlinear controller is the nonlinear behavior of many batch processes. Here, fundamental relationships were used for the mass and energy balances and an empirical model for the reaction energy (which includes the kinetics), which was fit from historical data. The controller used the MPC structure, modified for the batch process. Future predictions of the CVs in the controller were made using the hybrid model, whereas the dynamic controller incorporated linearizations of the hybrid model.
I think it is fair to say that there is a lack of nonlinear solvers tailored to hybrid modeling. Exceptions are the freely available software environments APMonitor and GEKKO, developed by John Hedengren’s group at BYU. They solve dynamic optimization problems with first principle or hybrid models and have built-in functions for model building, updating, and control. Here is a link to the website that contains references and videos for a range of nonlinear applications, including a batch distillation application.
I worked with neural networks quite a bit when they first came out in the late 1990s. I have not worked with them much since, but I will pass on my findings, which I expect are as applicable now as they were then.
Neural networks sound useful in principle: give a neural network a pile of training data, let it ‘discover’ correlations between the inputs and the output data, then reverse those correlations to create a model which can be used for control. Unfortunately, actually creating such a neural network and using it for control is much harder than it looks. Some reasons for this are:
I am not saying neural networks do not work; I actually had very good success with them. However, when all was said and done, I pretty much figured out the correlations myself through trial and error and was able to use that information to improve control. I wrote a paper on the topic and won an ISA award because neural networks were all the rage at that time, but the reality was I just used the software to reinforce what I learned during the ‘network training’ process.
The post, Missed Opportunities in Process Control - Part 4, first appeared on the ControlGlobal.com Control Talk blog.
Here is the fourth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think, and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A posts.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition, capturing the expertise of 50 leaders in industry.
Eliminate the air gap in thermowells to make the temperature response much faster. Contrary to popular opinion, the type of sensor is not a significant factor in the speed of the temperature response in the process industry. While an RTD may be a few seconds slower than a TC, the annular clearance around the sheath can cause an order of magnitude larger measurement time lag. Additionally, a tip not touching the bottom of the thermowell can be even worse. Air is a great insulator, as seen in the design of more energy efficient windows. Spring loaded, tight fitting sheathed sensors in stepped metal thermowells of the proper insertion length are best. Ceramic protection tubes cause a large measurement lag due to poor thermal conductivity. Low fluid velocities can increase the lag as well. See the Control Talk column “A meeting of minds” on how to get the most precise and responsive temperature measurement.
Use the best glass and sufficient velocity to keep pH measurement fast by reducing aging and coatings. An aged glass electrode, due to even moderately high temperature (e.g., > 30 °C), chemical attack from strong acids or strong bases (e.g., caustic), or dehydration from not being wetted or from exposure to non-aqueous solvents, can have its sensor lag time increased by orders of magnitude. High temperature glass and specific ion resistant glasses are incredibly beneficial to sustain accuracy and a clean, healthy electrode sensor lag of just a few seconds. Velocities must be greater than 1 fps for a fast response and greater than 5 fps to prevent fouling, since almost imperceptible coatings can also increase the sensor lag by orders of magnitude. This is helpful for thermowells as well, but the adverse effects in terms of slower response time are not as dramatic as seen for pH. Electrodes must be kept wetted, and exposure to non-aqueous solvents and harsh process conditions reduced by automatically retractable assemblies with the ability to soak in buffer solutions. See the Control Talk column “Meeting of minds encore” on how to get the most precise and responsive pH measurement.
Avoid the measurement lag becoming the primary lag. If the measurement lag becomes larger than the largest process time constant, the trend charts may look better due to attenuation of the oscillation amplitude by the filtering effect. The PID gain may even be able to be increased because the PID does not know where the primary lag came from. The key is that the actual amplitude of the process oscillation and the peak error are larger (often unknown unless a special separate fast measurement is installed). What is seen on the trend charts is that the period of oscillation is larger, possibly to the point of creating a sustained oscillation. Besides slow electrodes and thermowells, this situation can occur simply due to transmitter damping or signal filter time settings. For compressor surge and many gas pressure control systems, the filter time and transmitter damping settings must not exceed 0.2 sec. For a much greater understanding, see the Control Talk blog “Measurement Attenuation and Deception Tips”.
Real rangeability of a control valve depends upon the ratio of valve drop to system pressure drop, actuator and positioner sensitivity, backlash, and stiction. Often rangeability is based on the deviation from an inherent flow characteristic, leading to statements that a rotary valve often designed for on-off control has the greatest rangeability. The real definition should depend upon the minimum controllable flow, which is a function of the installed flow characteristic and the sensitivity, backlash, and stiction near the closed position, all of which are generally worse for these on-off valves that supposedly have the best rangeability. The best valve rangeability is achieved with a valve drop to system pressure drop ratio greater than 0.25, generously sized diaphragm actuators, a digital positioner tuned with high gain and no integral action, low friction packing (e.g., Enviro-Seal), and a sliding stem valve. If a rotary valve must be used, there should be a splined shaft-to-stem connection and a stem integrally cast with the ball or disk to minimize backlash, and a low friction seal or ideally no seal to minimize stiction. A graduated v-notch ball or contoured butterfly should be used to improve the flow characteristic. Equations to compute the actual valve rangeability based on pressure drop ratio and resolution are given in Tuning and Control Loop Performance Fourth Edition.
Real rangeability of a variable frequency drive (VFD) depends upon the ratio of static pressure to system pressure drop, motor design, inverter type, input card resolution, and dead band setting. The best VFD rangeability and response is achieved by a static pressure to system pressure drop ratio less than 0.25, a generously sized TEFC motor with 1.15 service factor and Class F insulation, a pulse width modulated inverter, speed to torque cascade control in the inverter, and no dead band or rate limiting in the inverter setup.
Identify and minimize transportation delays. The delay for a temperature or composition change to propagate from the point of manipulation to the process or sensor is simply the process volume divided by the process flow rate. Normal process design procedures do not recognize the detrimental effect of dead time. The biggest example is equipment design guidelines that have a dip tube designed to be large in diameter extending down toward the impeller. Missing is an understanding of the incredibly large dead time for pH control where the reagent flow is a gph or less and the dip tube volume is a gallon or more. When the reagent valve is closed, the dip tube is back filled with process fluid from migration of high to low concentrations. To get the reagent to displace the process fluid takes more than an hour. When the reagent valve shuts off, it may take hours before reagent stops dripping and migrating into the process. To go from acid to base in split range control may take hours to displace the acid in the dip tube. The same thing happens going from base to acid. The stiction is also highest at the closure position. When you consider how sensitive pH is, it is no wonder that pH systems oscillate across the split range point.
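The arithmetic is as simple as it is overlooked; using the dip tube numbers above:

```python
# Transportation delay = volume / flow
dip_tube_volume_gal = 1.0   # dip tube volume of a gallon or more
reagent_flow_gph = 1.0      # reagent flow of a gph or less
delay_hours = dip_tube_volume_gal / reagent_flow_gph
print(f"reagent transport delay ~ {delay_hours:.0f} hour(s) just to displace the dip tube")
```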
The real rangeability of flow meters depends upon the signal to noise ratio at low flows, the minimum velocity, and whether accuracy is a percent of scale or of reading. The best flow rangeability is achieved by meters with accuracy in percent of reading, minimal noise at low flows, and the least effect of low velocities, including the possible transition to laminar flow. Consequently, Coriolis flow meters have the best rangeability (e.g., 200:1) and magmeters have the next best rangeability (e.g., 50:1). Most rangeability statements for other meters are based on a ratio of maximum to minimum meter velocity and turbulent flow, and do not take into account that the actual maximum flow experienced is much less than the meter capacity.
Use Coriolis flow meters for stoichiometric control and heating value control. The Coriolis flowmeter has the greatest accuracy with a mass flow measurement independent of composition. This capability is key to keeping flows in the right ratio, particularly for reactants per the factors in the stoichiometric equation for the reaction (mole flow rate is simply mass flow rate divided by the molecular weight of the reactant). For waste fuels, the heat release rate upon combustion is a strong function of the mass flow, greatly facilitating optimization of supplemental fuel use. Nearly all ratio control systems could benefit from true mass flow measurements with great accuracy and rangeability. For more on what you need to know to achieve what the Coriolis meter is capable of, see the Control Talk column “Knowing the best is the best”.
Identify and minimize the total dead time. Dead time is easily identified on a properly scaled trend chart as simply the time delay between a manual output change or setpoint change and the start of the change in the process variable being controlled. The least disruptive test is usually simply putting the PID momentarily in manual and making a small output change simulating a load disturbance. The test should be done at different production rates and run times. The dead time tends to be largest at low production rates due to larger transportation delays and slower heat transfer rates and sensor response. Dead time also tends to increase with production run time due to fouling or frosting of heat transfer surfaces. See the Control Talk blog “Deadtime, the Simple Easy Key to Better Control” for a more extensive explanation of why I would be out of a job if the dead time was zero.
Identify and minimize the ultimate period. This goes hand in hand with knowing and reducing the total loop dead time. The ultimate period in most loops is simply 4 times the dead time in a first order approximation where a secondary time constant is taken as creating additional dead time. Dead time dominant loops have a smaller ultimate period that approaches 2 times the dead time for a pure dead time loop (extremely rare). Input oscillations with a period between ½ and twice the ultimate period result in resonance, requiring less aggressive tuning. Input oscillations with a period less than ½ the ultimate period can be considered noise, requiring filtering and less aggressive tuning. Oscillation periods greater than twice the ultimate period are attenuated by more aggressive tuning. Note that input oscillations persist when the PID is in manual. For damped oscillations that only appear when the PID is in auto, an oscillation period close to the ultimate period indicates too high a PID gain, and more than twice the ultimate period indicates too low a PID reset time. A damped oscillation period approaching or exceeding 10 times the ultimate period indicates a violation of the gain window for near-integrating, true integrating, or runaway processes. Oscillations greater than four times the ultimate period with constant amplitude are limit cycles due to backlash (dead band) or stiction (resolution limit). See the Control Talk blogs “Controller Attenuation and Resonance Tips” and “Processes with no Steady State in PID Time Frame Tips” for more guidance.
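The diagnostic rules in this item are readily captured in a sketch like the following, which is a direct transcription of the rules above rather than a substitute for the cited references:

```python
def diagnose_oscillation(period, ultimate_period, persists_in_manual, constant_amplitude):
    """Classify an oscillation by its period relative to the ultimate period
    (roughly 4x the total loop dead time for most loops)."""
    if persists_in_manual:                    # an input oscillation, not the PID
        if period < 0.5 * ultimate_period:
            return "noise: filter and use less aggressive tuning"
        if period <= 2.0 * ultimate_period:
            return "resonance: use less aggressive tuning"
        return "slow disturbance: attenuate with more aggressive tuning"
    if constant_amplitude and period > 4.0 * ultimate_period:
        return "limit cycle: backlash (dead band) or stiction (resolution limit)"
    if period >= 10.0 * ultimate_period:
        return "gain window violation (near/true integrating or runaway process)"
    if period > 2.0 * ultimate_period:
        return "PID reset time too small"
    return "PID gain too high"

print(diagnose_oscillation(200.0, 40.0, persists_in_manual=False, constant_amplitude=True))
```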
The post How to Implement Effective Safety Instrumented Systems for Process Automation Applications first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hariharan Ramachandran.
Hariharan starts an enlightening conversation introducing platform independent key concepts for an effective safety instrumented system with the Mentor Program resource Len Laskowski, a principal technical SIS consultant, and Hunter Vegas, co-founder of the Mentor Program.
Hariharan Ramachandran, a recent resource added to the ISA Mentor Program, is a control and safety systems professional with various levels of experience in the field of industrial control, safety, and automation. He has worked for various companies and executed global projects for the oil and gas and petrochemical industries, gaining experience in the entire life cycle of industrial automation and safety projects.
Len Laskowski is a principal technical SIS consultant for Emerson Automation Solutions, and is a voting member of ISA84, Instrumented Systems to Achieve Functional Safety in the Process Industries.
Hunter Vegas, P.E., has worked as an instrument engineer, production engineer, instrumentation group leader, principal automation engineer, and unit production manager. In 2001, he entered the systems integration industry and is currently working for Wunderlich-Malec as an engineering project manager in Kernersville, N.C. Hunter has executed thousands of instrumentation and control projects over his career, with budgets ranging from a few thousand to millions of dollars. He is proficient in field instrumentation sizing and selection, safety interlock design, electrical design, advanced control strategy, and numerous control system hardware and software platforms. Hunter earned a B.S.E.E. degree from Tulane University and an M.B.A. from Wake Forest University.
How is the safety integrity level (SIL) of a critical safety system maintained throughout the lifecycle?
The answer might sound a bit trite, but the simple answer is diligently following the lifecycle steps from beginning to end. Perform the design correctly and verify that it has been executed correctly. The SIS team should not blindly accept HAZOP and LOPA results at face value. The design that the LOPAs drive is no better than the team that determined the LOPA and the information they were provided. Often the LOPA results are based on incomplete or possibly misleading information. I believe a good SIS design team should question the LOPA and seek to validate its assumptions. I have seen LOPAs declare that there is no hazard because XYZ equipment protects against it, but a walk in the field later discovered that the equipment was taken out of service a year ago and had not yet been replaced. Obviously getting the LOPA/HAZOP right is the first step.
The second step is to make sure one does a robust design and specifies good quality instruments that are a good fit for the application. For example, a vortex meter may be a great meter for some applications but a poor choice for others. Similarly, certain valve designs may have limited value as a safety shutdown valve. Inexperienced engineers may specify Class VI shutoff for on-off valves thinking they are making the system safer, but Class V metal seat valves would stand up to the service much better in the long run, since the soft elastomer seats can easily be destroyed in less than a month of operation. The third leg of this triangle is exercising the equipment and routinely testing the loop. Partial stroke testing of the valves is a very good idea to keep valves from sticking. Also, for new units that do not have extensive experience with a process, the SIF components (valves and sensors) should be inspected at the first shutdown to assess their condition. This needs to be done until a history with the installation can be established. Diagnostics also fall into this category: deviation alarms, stroke times, and any other diagnostics that can help determine the SIS health are important.
The safety instrumented function has to be monitored and managed throughout its lifecycle. Each layer in a safety protection system must have the ability to be audited. The SIS verification and validation process provides a high level of assurance that the SIS will operate in accordance with its safety requirements specification (SRS). Proof testing must be carried out periodically at the intervals specified in the safety requirements specification. There should be a mechanism for recording SIF life event data (proof test results, failures, and demands) for comparison of actual to expected performance. Continuous evaluation and improvement is the key concept in maintaining the SIS efficiently.
What is the best approach to eliminate the common cause failures in a safety critical system?
There are many ways that common cause failures can creep into a safety system design. Some of the more common ways include:
Both random and systematic events can induce common cause failure (CCF) in the form of single points of failure or the failure of redundant devices.
Random hardware failures are addressed by design architecture, diagnostics, estimation (analysis) of probabilistic failures, and design techniques and measures (per IEC 61508-7).
Systematic failures are best addressed through the implementation of a protective management system, which overlays a quality management system with a project development process. A rigorous system is required to decrease systematic errors and enhance safe and reliable operation. Each verification, functional assessment, audit, and validation is aimed at reducing the probability of systematic error to a sufficiently low level.
The management system should define work processes, which seek to identify and correct human error. Internal guidelines and procedures should be developed to support the day-to-day work processes for project engineering and on-going plant operation and maintenance. Procedures also serve as a training tool and ensure consistent execution of required activities. As errors or failures are detected, their occurrence should be investigated, so that lessons can be learned and communicated to potentially affected personnel.
When an incident happens at a process plant, what are all the engineering aspects that need to be verified during the investigation?
I would start at the beginning of the lifecycle and look at the HAZOP and LOPAs to see that they were done properly. Look to see that the documentation is correct: P&IDs, SRS, C&Es, MOC records, and test logs and procedures. Look to see where the breakdown occurred. Were things specified correctly? Were the designs verified? Was the system correctly validated? Was proper training given? Look for test records once the system was commissioned.
Usually the first step is to determine exactly what happened, separating conjecture from fact. Gather alarm logs, historian data, etc. while they are available. Individually interview any personnel involved as soon as possible to lock in the details. With that information in hand, begin to work backwards, determining exactly what initiated the event and what subsequent failures occurred to allow it to happen. In most cases there will be a cascade of failures that actually enabled the event. Then examine each failure to understand what happened and how it can be avoided in the future. Often there will be a number of changes implemented. If the SIS system failed, then Len's answer provides a good list of items to check.
Also verify that the device/equipment is used appropriately, within its design intent.
What are all the critical factors involved in decommissioning a control system?
The most critical factor is good documentation. You need to know what is going to happen to your unit and other units in the plant once an instrument, valve, loop or interlock is decommissioned. A proper risk and impact assessment has to be carried out prior to the decommissioning. One must ask very early on in a project's development if all units controlled by the system are planning to shut down at the same time. This is needed for maintenance and upgrades. Power distribution and other utilities are critical. One may not be able to demolish a system because it would affect other units. In many cases, a system cannot be totally decommissioned until the next shutdown of the operating unit, and it may require simultaneous shutdowns of neighboring units as well. Waste management strategy, regulatory framework and environmental safety control are the other factors to be considered.
The post Webinar Recording: The Amazing World of ISA Standards first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Historically, predictive maintenance required very expensive technology and resources, like data scientists and domain experts, to be effective. Thanks to artificial intelligence (AI) methods such as machine learning making their way into the mainstream, predictive maintenance is now more achievable than ever. Our webinar will explore how machine learning is changing the game and greatly reducing the need for data scientists and domain experts. These technologies self-learn and autonomously monitor for data pattern anomalies. Not only does this make predictive maintenance far more practical than what was historically possible, but predictions 30 days in advance are now the norm. Don't let the old way of doing predictive maintenance cost you productivity any longer.
This webinar covers:
About the Featured Presenter
Nicholas P. Sands, P.E., CAP, serves as senior manufacturing technology fellow at DuPont, where he applies his expertise in automation and process control for the DuPont Safety and Construction business (Kevlar, Nomex, and Tyvek). During his career at DuPont, Sands has worked on or led the development of several corporate standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety. Nick is: an ISA Fellow; co-chair of the ISA18 committee on alarm management; a director of the ISA101 committee on human machine interface; a director of the ISA84 committee on safety instrumented systems; and secretary of the IEC (International Electrotechnical Commission) committee that published the alarm management standard IEC 62682. He is a former ISA Vice President of Standards and Practices and former ISA Vice President of Professional Development, and was a significant contributor to the development of ISA's Certified Automation Professional program. He has written more than 40 articles and papers on alarm management, safety instrumented systems, and professional development, and is co-author of the new edition of A Guide to the Automation Body of Knowledge. Nick is a licensed engineer in the state of Delaware. He earned a bachelor of science degree in chemical engineering at Virginia Tech.
About the Presenter
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.
Here is the third part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The following list reveals common misconceptions that need to be understood to seek real solutions that actually address the opportunities.
The post Solutions for Unstable Industrial Processes first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Caroline Cisneros.
Negative resistance, also known as positive feedback, can cause processes to jump, accelerate and oscillate, confusing the control system and the operator. These are characterized as open loop unstable processes. Not properly addressing these situations can result in equipment damage and plant shutdowns, besides the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive.
Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks a question about the dynamics that cause unstable processes. The deeper understanding gained as to the sources of instability can lead to process and control system solutions to minimize risk and to increase process performance.
What causes processes to be unstable when controllers are in manual?
Fortunately, most processes are self-regulating by virtue of having negative feedback that provides a resistance to excursions (e.g., flow, liquid pressure, and continuous composition and temperature). These processes come to a steady state when the controller is in manual. Somewhat less common are processes that have no such feedback, which results in a ramp (e.g., batch composition and temperature, gas pressure and level). Fortunately, the ramp rate is quite slow except for gas pressure, giving the operator time to intervene.
There are a few processes where the deviation from setpoint can accelerate when in manual due to positive feedback. These processes should never be left in manual. We can appreciate how positive feedback causes problems in sound systems (e.g., microphones too close to speakers). We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control.
The compressor characteristic curves from the compressor manufacturer, plots of compressor pressure rise versus suction flow for each speed or suction vane position, show a decreasing pressure rise whose slope magnitude increases as the suction flow increases in the normal operating region. The pressure rise consequently decreases more as the flow increases, opposing additional increases in compressor flow and creating a positive resistance to flow. Not commonly seen is that the slope of the compressor characteristic curve to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve; as the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow.
When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign, creating negative resistance. The compressor flow jumps in about 0.03 seconds from the start of negative resistance to some point of positive resistance. The result is a jump in 0.03 seconds to negative flow across the negative resistance, a slower transition along positive resistance to zero flow, then a jump in 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds. If this seems confusing, don't feel alone. The PID controller is confused as well.
Once a compressor gets into surge, the very rapid jumps and oscillations are too much for a conventional PID loop. Even a very fast measurement, PID execution rate and control valve response can't deal with it alone. Consequently, the oscillation persists until an open loop backup activates and holds open the surge valves until the operating point is sustained well to the right of the surge curve for about 10 seconds, at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID working bumplessly with an open loop backup that detects a zero slope indicating an approach to surge or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point.
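A bare-bones sketch of that open loop backup logic might look like the following (the thresholds, the surge margin signal, and the discrete form are all illustrative assumptions, not a vendor algorithm):

```python
# Illustrative open loop backup: trip on a rapid dip in suction flow (actual
# surge) or a near-zero characteristic slope (approach to surge), hold the
# surge valve open, and release back to PID only after the operating point
# stays well to the right of the surge curve for about 10 seconds.
DIP_LIMIT = -0.05    # fractional flow drop per execution indicating surge (assumed)
SLOPE_LIMIT = 1e-4   # near-zero slope indicating approach to surge (assumed)
MARGIN = 0.1         # required margin to the right of the surge curve (assumed)
HOLD_TIME = 10.0     # seconds of sustained margin before transfer back to PID

def open_loop_backup(flow, prev_flow, slope, margin_to_surge, latched, timer, dt):
    """One execution; returns (hold_surge_valve_open, timer)."""
    dip = (flow - prev_flow) / max(prev_flow, 1e-6)
    if dip < DIP_LIMIT or abs(slope) < SLOPE_LIMIT:
        latched, timer = True, 0.0                 # surge detected or imminent
    elif latched:
        timer = timer + dt if margin_to_surge > MARGIN else 0.0
        if timer >= HOLD_TIME:
            latched = False                        # bumpless transfer back to PID
    return latched, timer
```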
For much more on compressor surge control see the article Compressor surge control: Deeper understanding, simulation can eliminate instabilities.
The same shape, but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users.
A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured.
Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat of reaction. However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential where the time constants and dead time associated with heat transfer in the cooling surface and thermowell and the loop response are much less than the positive feedback time constant.
Derivative action must be maximized and integral action must be minimized. In some cases a proportional plus derivative controller is used. The runaway response of such reactors is characterized by a positive feedback time constant, as shown in Figure 1 for an open loop response. The positive feedback time constant is calculated from the ordinary differential equations for the energy balance as shown in Appendix F of 101 Tips for a Successful Automation Career. The point of acceleration cannot be measured in practice because it is unsafe to have the controller in manual. A PID gain that is too low will allow a reactor to run away since the PID controller is not adding enough negative feedback. There is a window of allowable PID gains that closes as the time constants from the heat transfer surface and thermowell and the total loop dead time approach the positive feedback time constant.
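To see why this window matters, a toy integration (illustrative only, with an assumed positive feedback time constant) shows how the open loop deviation grows exponentially, doubling roughly every 0.69 times the positive feedback time constant:

```python
# Toy open loop runaway: the deviation grows as exp(t / tau_p) because the
# rate of change is proportional to the deviation itself (positive feedback).
import math

tau_p = 600.0   # positive feedback time constant in seconds (assumed value)
dt = 1.0        # integration step, s
T_dev = 0.1     # initial temperature deviation, deg C
for _ in range(3600):
    T_dev += dt * T_dev / tau_p        # rate grows with the deviation
print(round(T_dev, 1), round(0.1 * math.exp(3600.0 / tau_p), 1))  # nearly equal
```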
Figure 1: Positive Feedback Process Open Loop Response
Positive feedback can also occur when parallel heat exchangers have a common process fluid input, each with an outlet temperature controller whose setpoint is close to the boiling point or to a temperature resulting in vaporization of a component in the process fluid. Each temperature controller is manipulating a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating, causing bubbles to form and expand back into the exchanger, causing an increase in back pressure and hence a further decrease in process flow through this hot exchanger.
The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration and possibly damage to any discontinuity in its path from slugs of water. When nearly all of the water is pushed out of the hot exchanger, its temperature drops, drawing in feed that was going to the cold heat exchanger, which causes the hot exchanger to overheat, repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in the flow to one exchanger do not affect another, and a lower temperature setpoint.
To summarize, to eliminate oscillations, the best solution is a process and equipment design that eliminates negative resistance and positive feedback. When this cannot provide the total solution, operating points may need to be restricted, loop dead time and thermowell time constant minimized and the controller gain increased with integral action decreased or suspended.
The post, Missed Opportunities in Process Control - Part 2, first appeared on the ControlGlobal.com Control Talk blog.
Here is the second part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail.
If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The post What Skill Sets Do You Need to Excel at IIoT Applications in an Automation Industry Career? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Angela Valdes.
The Industrial Internet of Things (IIoT) is the hot topic, as seen in the many feature articles published. The much greater availability of data is expected to provide the knowledge needed to sustain and improve plant safety, reliability and performance. Here we look at some of the practical issues and resources in achieving the expected IIoT benefits.
Angela Valdes is a recently added resource in the ISA Mentor Program. Angela is the automation manager of the Toronto office for SNC-Lavalin. She has over 12 years of experience in project leadership and execution, framed under PMI, lean, agile and stage-gate methodologies. Angela seeks to apply her knowledge in process control and automation in different industries such as pharmaceutical, food and beverage, consumer packaged products and chemicals.
What skill sets and ISA standards shall I start building/referencing in order to grow in the IIoT space and field of work?
The ISA communication division is forming a technical interest group in IIoT. The division has had presentations on the topic for several years at conferences. The leader will be announced in InTech magazine. The ISA95 standards committee is working on updating enterprise-to-control system communication to better support IIoT concepts.
One tremendous resource would be to read most of Jonas Berge’s LinkedIn blog posts. He writes about IIoT and digital communications and the impact they can have on reliability, safety, efficiency and production. I recommend you send him a connection request to see when he has new things to post. One other person to connect with includes Terrance O’Hanlon of ReliabilityWeb.com. Searching on the #IIoT hashtag in Twitter and LinkedIn is also a very good way to discover new articles and influencers in these areas.
One of the things we need to be careful about is to make sure there are people with the expertise to use the data and associated software, such as data analytics. There was a misrepresentation in a feature article that IIoT would make the automation engineer obsolete when in fact the opposite is true. We need more process control engineers besides process analytical technology and IIoT experts to make the most out of the data. The data by itself can be overwhelming as seen in the series of articles “Drowning in Data; Starving for Information”: Part 1, Part 2, Part 3, and Part 4.
Process control engineers with a fundamental knowledge of the process and the automation system need to intelligently analyze and make the associated improvements in instrumentation, valves, setpoints, tuning, control strategies, and use of controller features whether PID or MPC. Often lacking is the recognition of the importance of dynamics in the process and particularly the automation system. The process inputs must be synchronized with the process outputs for continuous processes before true correlations can be identified.
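As a small illustration of that synchronization step (the column names and delay are hypothetical), each process input can be shifted by its estimated dead time, plus a rough allowance for the time constant, before computing a correlation with the process output:

```python
# Shift a process input later in time so the output at time t is compared
# with the input value that actually caused it, then correlate.
import pandas as pd

def synchronized_corr(df: pd.DataFrame, inp: str, out: str, delay_samples: int) -> float:
    shifted = df[inp].shift(delay_samples)   # move the input forward by its delay
    return shifted.corr(df[out])             # pairwise correlation, NaNs dropped

# df = pd.read_csv("historian_export.csv")   # hypothetical historian export
# print(synchronized_corr(df, "feed_flow", "column_temp", delay_samples=30))
```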
Knowledge of process first principles is also needed to determine whether correlations are really cause and effect. While the solution would seem to be employing expert rules to the IIoT results, a word of caution here is that the attempts to develop and use real time expert systems in the 1980s and 1990s were largely failures wasting an incredible amount of time and money. Deficiencies in conditions, interrelationships and knowledge in the rules of logic implemented plus lack of visibility of interplay between rules and ability to troubleshoot rules led to a lot of false alerts resulting in the systems being turned off and eventually abandoned.
There have been multiple “data revolutions” over the years, and I consider IIoT to be just another wave where new information is made available that wasn’t available before. Unfortunately the problem that bedeviled the previous data revolutions still remains today. More data is not necessarily useful unless the right information is delivered at the right time to a person who can act on it. In many cases the operators have too much information now – when something goes wrong they get 1000 alarms and have to wade through the noise to try to figure out what went wrong and how to fix it.
IIoT data can undoubtedly be useful, but it takes a huge amount of time and effort to create an interface that can effectively present that information, and still more time and effort to keep it up. All too often management reads a few trendy articles and thinks IIoT is something you buy or install and savings should just appear. Unfortunately most fail to appreciate the effort required to implement such a system and keep it working and adding value. Usually money is spent, people celebrate the glorious new system, then it falls out of favor and use and gets eliminated a short time later.
As far as I know there aren't any specific standards associated with IIoT. I do think that there are several skill sets that can help you implement it:
The post Webinar Recording: Practical Limits to Control Loop Performance first appeared on the ISA Interchange blog site.
Part 2 provides a quick review of Part 1 and then discusses the contribution of each PID mode, why reset time is orders of magnitude too small for most composition and temperature loops, the ultimate and practical limits to control loop performance, the critical role of dead time, and when PID gain that is too high or too low causes more oscillation.
The post Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Part 3 (the final part) describes simple tuning methods and the PID features that can be used to prevent the oscillations that plague our most important loops and to achieve the desired degree of tightness or looseness in level control. A general procedure is offered and a block diagram of the most effective PID structure, not shown anywhere else, is given followed by questions and answers.
The post, Missed Opportunities in Process Control - Part 1, first appeared on the ControlGlobal.com Control Talk blog.
I had an awakening as to the much greater than realized disconnect between what is said in the literature and courses and what we need to know as practitioners while I was giving guest lectures and labs to chemical engineering students on PID control. We are increasingly messed up. The disparity between theory and practice is growing exponentially because leaders in process control are leaving the stage and users today are not given the time to explore and innovate or the freedom to publish. Much of what is out there is a distraction at best. I decided to make a decisive pitch, not holding back for the sake of diplomacy. Here is the start of a point blank decisive comprehensive list in a six part series.
Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The post How to Get Started with Effective Use of OPC first appeared on the ISA Interchange blog site.
Encouraged to ask general questions that would help share knowledge, Nikki Escamillas provided several questions on OPC. Initially, the OPC standard was restricted to the Windows operating system, with the acronym originally designating OLE (object linking and embedding) for process control. OPC now stands for open platform communications, which is much more widely used and plays a key role in automation systems. We are fortunate to have answers to Nikki's questions from a knowledgeable expert in higher level automation system communications, Tom Freiberger, product manager for industrial Ethernet in R&D engineering for Emerson Automation Solutions.
Nikki Escamillas is a recently added protégé in the ISA Mentor Program. Nikki is an automation process engineer for Republic Cement and Building Materials – Batangas Plant. Nikki specializes in process optimization and automation control, committed to minimizing cost and improving product quality through effective time management and efficient use of resources and data analytics. Nikki has excellent knowledge and experience of advanced process control principles and their application to different plant processes, specifically cement and building materials manufacturing.
How does OPC work?
OPC is a client/server protocol. The server has a list of data points (normally in a tree structure) that it provides. A client can connect to a server and pick a set of data points it wishes to use. The client can then read or write those data points. OPC is meant to be a common language for integrating products from multiple vendors. The OPC Foundation has a good introduction to OPC DA and UA on their website.
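For a feel of what this looks like in practice, here is a minimal client sketch using the open-source FreeOpcUa python-opcua package (the endpoint URL and node id are hypothetical; a real server determines both):

```python
# Minimal OPC UA client: connect, pick a data point, read and write it.
from opcua import Client

client = Client("opc.tcp://localhost:4840/freeopcua/server/")  # hypothetical endpoint
client.connect()
try:
    node = client.get_node("ns=2;i=2")   # a data point from the server's tree (hypothetical id)
    value = node.get_value()             # read
    node.set_value(value)                # write (demo: writes the same value back)
    print(value)
finally:
    client.disconnect()
```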
Does configuration of OPC DA differ from OPC UA?
Yes and no. The core concept of client/server and working with a set of data points remains consistent between the two, but the details of how to configure them differ. The security configuration is the primary difference. OPC DA is based on Microsoft's DCOM technology, which means the security settings in the operating system are used. OPC UA runs on many operating systems, and therefore the security settings are embedded into the configuration of the OPC application. OPC UA applications should use common terminology in their configuration to ease integration between multiple vendors.
Are there any guidelines to follow when installing and configuring an OPC product based upon its type?
Installation and configuration guidelines are going to be specific to the products being used. Some products are going to be limited on the number of data points that can be exchanged by a license or other application limitation. Some products may have performance limits. All of these details should be supplied in the documentation of the product.
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.
Could I directly make a computer OPC capable?
An OPC server or client by itself is just a means to transfer data. OPC is not very interesting without another application behind it to supply information. The computer you are attempting to add OPC to would need some other application to provide data. The vendor of that application would need to build OPC into their product. If the application with the data supports some other protocol to exchange data (like Modbus TCP, Ethernet/IP, or PROFINET) an OPC protocol converter could be used to interface with other OPC applications. If the application with the data has no means of extracting the information, there is nothing an OPC server or client can do.
Is it also possible to create server-to-server communication between two OPC applications?
I believe there are options for this in the OPC protocol specification, but the details would be specific to the product being used. If it allows server to server connections, it should be listed in its documentation.
The post Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications first appeared on the ISA Interchange blog site.
This is Part 1 of a series on the benefits of knowing your process and PID capability. Part 1 focuses on process behavior, the many loop objectives and different worlds of industrial applications, and the loop component’s contribution to the dynamic response.
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about how you can join the ISA Mentor Program.
The post How to Improve Loop Performance for Dead Time Dominant Systems first appeared on the ISA Interchange blog site.
Dead time is the source of the ultimate limit to control loop performance. The peak error is proportional to the dead time and the integrated error is proportional to the dead time squared for load disturbances. If there were no dead time and no noise or interaction, perfect control would be theoretically possible. When the total loop dead time is larger than the open loop time constant, the loop is said to be dead time dominant and solutions are sought to deal with the problem.
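In compact form (a summary of the statements above plus the ultimate limit discussed later in this post, with theta the total loop dead time, E_max the peak error, and IE the integrated error for a load disturbance):

```latex
E_{\max} \propto \theta, \qquad IE \propto \theta^{2}, \qquad IE \ge E_{\max}\,\theta
```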
Anuj Narang is an advanced process control engineer at Spartan Controls Limited. He has more than 11 years of experience in academia and industry with a PhD in process control. He has designed and implemented large scale industrial control and optimization solutions to achieve sustainable and profitable process and control performance improvements for customers in the oil and gas, oil sands, power and mining industries. He is a registered Professional Engineer with the Association of Professional Engineers and Geoscientists of Alberta, Canada.
Is there any other control algorithm available to improve loop performance for dead time dominant systems other than using a Smith predictor or model predictive control (MPC), both of which require identification of a process model?
The solution cited for deadtime dominant loops is often a Smith predictor deadtime compensator (DTC) or model predictive control. There are many counter-intuitive aspects in these solutions. Not realized is that the improvement by the DTC or MPC is less for deadtime dominant systems than for lag dominant systems. Much more problematic is that both DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for a decrease as well as an increase in the deadtime. Surprisingly, the consequences for the DTC and MPC are much greater for a decrease in plant dead time. For a conventional PID, a decrease in the deadtime just results in more robustness and slower control. For a DTC and MPC, a decrease in plant deadtime by as little as 25 percent can cause a big increase in integrated error and an erratic response.
Of course, the best solution is to decrease the many sources of dead time in the process and automation system (e.g., reduce transportation and mixing delays and use online analyzers with probes in the process rather than at-line analyzers with a sample transportation delay and an analysis delay that is 1.5 times the cycle time). An algorithmic mitigation of the consequences of dead time, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external-reset feedback path (BKCAL), with the deadtime updated to always be slightly less than the actual total loop deadtime. Turning external-reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require the identification of the open loop gain and open loop time constant as is required for a DTC or MPC. Please note that the external-reset feedback should be the result of a positive feedback implementation of integral action as described in the ISA Mentor Program webinar PID Options and Solutions – Part 3.
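A discrete sketch of this PID+TD structure is shown below (an assumed simplified form for illustration, not a vendor implementation): the integral contribution is the positive feedback of the external-reset signal through a first-order filter whose time constant is the reset time, and a dead time block slightly less than the loop dead time is inserted in that feedback path:

```python
# PID+TD sketch: proportional action plus integral action realized as positive
# feedback of the delayed external-reset signal through a first-order filter.
from collections import deque

def make_pid_td(kc, ti, dead_time, dt):
    buffer = deque([0.0] * max(1, int(dead_time / dt)))   # dead time block in ERF path
    state = {"filter": 0.0}

    def pid(sp, pv):
        e = sp - pv
        ers = buffer.popleft()                            # delayed external-reset signal
        state["filter"] += (dt / ti) * (ers - state["filter"])  # reset-time filter
        out = kc * e + state["filter"]                    # P + positive-feedback integral
        buffer.append(out)
        return out
    return pid
```

With the buffer reduced to a single sample (dead time near zero), this is essentially the conventional positive feedback implementation of integral action.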
There will be no improvement from a deadtime compensator if the PID tuning settings are left the same as they were before the DTC or by a deadtime block in external-reset feedback (PID+TD). In fact the performance can be slightly worse for even an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in deadtime. The PID gain should also be increased. The equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it predicts an error less than the ultimate limit that is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making deadtime disappear.
If the deadtime is due to analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, a new error upon update results in a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of a step disturbance in a single execution after an update in the PID process variable.
The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the PID execution time interval), and the positive feedback implementation of integral action must be used with external reset feedback enabled. The enhanced PID greatly simplifies tuning besides putting the integrated error close to its ultimate limit. Note that you do not see the true error, which could have started at any time in between updates, but only the error measured after the update.
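A simplified sketch of this enhanced PID behavior (an assumed form for illustration; PIDPlus itself is a vendor implementation) holds the output between updates and advances the reset filter only when a new measurement arrives, using the elapsed time since the last update:

```python
# Enhanced PID sketch: output is held between measurement updates, and the
# reset filter moves toward the last output by the actual elapsed time.
import math

def make_enhanced_pid(kc, ti):
    state = {"f": 0.0, "out": 0.0, "elapsed": 0.0}

    def pid(sp, pv, new_measurement, dt):
        state["elapsed"] += dt
        if new_measurement:
            a = 1.0 - math.exp(-state["elapsed"] / ti)   # filter advance for elapsed time
            state["f"] += a * (state["out"] - state["f"])
            state["out"] = kc * (sp - pv) + state["f"]   # correction ~ Kc times new error
            state["elapsed"] = 0.0
        return state["out"]                              # held constant between updates
    return pid
```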
For more on the sensitivity to both increases and decrease in the total loop deadtime and open loop time constant, see the ISA books Models Unleashed: A Virtual Plant and Predictive Control Applications (pages 56-70 for MPC) and Good Tuning: A Pocket Guide 4th Edition (pages 118-122 for DTC). For more on the enhanced PID, see the ISA blog post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements and the Control Talk blog post, Batch and Continuous Control with At-Line and Offline Analyzers Tips.
The following figures from Models Unleashed show how an MPC with two controlled variables (CV1 and CV2) and two manipulated variables for a matrix with condition number three (CN = 3) responds to a doubling and a halving of the plant dead time (delay) when the total loop dead time is greater than the open loop time constant.
Figure 1: Dead Time Dominant MPC Test for Doubled Plant Delay
Figure 2: Dead Time Dominant MPC Test for Halved Plant Delay
The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site.
Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis' questions on evaporator control are important to improve evaporator concentration control and minimize steam consumption.
The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions regarding process testing in order to generate process models for an MPC in the correct way. I know that the MPC process inputs must be perturbed to allow identification and modeling of each process input and output relationship.
Figure 1: Variables for model predictive control of a concentrator
Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or should it simply be left as usual in the process, since this disturbance is always present?
If it is not difficult, you can try to suppress the disturbance. That can help the model identification for the feed and steam. To get a model for the disturbance, you will want movement of the disturbance outside the noise level (best is four to five times). If possible, this may require making changes upstream (for example, LIC.SP or FIC.SP).
What about the steam flow? Should it be maintained at a fixed flow (FIC in manual with a fixed percent open FCV) while perturbing the feed flow, and in the same way, should the feed flow be fixed while perturbing the steam flow? I know some MPC software packages excite their outputs in a PRBS (pseudo random binary sequence) practically at the same time while the process testing is being executed, and through mathematics catch the input and output relationships, finally generating the model.
Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it will take more setup effort to get the magnitudes and the average switching interval right. An option is to start with a manual test and switch to PRBS after you've got a feel for the process and the right step sizes. Note: a pretest should have already been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in Section 9.3 of the McGraw-Hill handbook Process/Industrial Instruments and Control, Sixth Edition.
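If you do move to PRBS, a simple generator looks like the sketch below (illustrative only; the amplitude and average switching interval must come from the pretest and the manual steps just described):

```python
# Pseudo random binary sequence: +/- amplitude held for random multiples of
# the execution period, averaging roughly one switch per switch_interval.
import random

def prbs(n_samples, switch_interval, amplitude):
    seq, level, hold = [], amplitude, 0
    for _ in range(n_samples):
        if hold <= 0:
            level = amplitude if random.random() < 0.5 else -amplitude
            hold = random.randint(1, 2 * switch_interval)
        seq.append(level)
        hold -= 1
    return seq

# e.g., superimpose prbs(3600, 60, 0.5) on the steam flow setpoint
```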
What are the pros and cons for process testing if the manipulated variables are perturbed through FIC setpoints (closed loop) or through FIC outputs (open loop)? Or simply: should it be done in accordance with the MPC design? What are the pros and cons if, in the final design, the FCVs are directly manipulated by the MPC block or through FICs as the MPC's downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them.
Correct – test according to the MPC design. Note that sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea as justifying a cascade – have the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn't have to). The faster executing inner loop also helps with linearization (for example, valve position to flow).
See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant) and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
The post Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Understanding the difficulties of batch processing and the new technologies and techniques offered can lead to solutions by better automation and control that offer much greater increases in efficiency and capacity than usually obtained for continuous processes. Industry veteran and author Greg McMillan discusses analyzing batch data, elevating the role of the operator, tuning key control loops, and setting up simple control strategies to optimize batch operations. The presentation concludes with an extensive list of best practices.
The post What Types of Process Control Models are Best? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Daniel Rodrigues.
Daniel Rodrigues is one of our newest protégés in the ISA Mentor Program. Daniel has been working in research & development for Norsk Hydro Brazil since 2016 specializing in:
What is your take on process control based on phenomenological models (using first-principle models to guide the predictive part of controllers)? I am aware of the exponential growth of complexity in these, but I’d also like to have an experienced opinion regarding the reward/effort of these.
I prefer first principle models to gain a deeper understanding of cause and effect, process relationships, process gains, and the response to abnormal situations. Most of my control system improvements start with first principle models. The incorporation of the actual control system (digital twin) to form a virtual plant has made these models a more powerful tool. However, most first principle models use perfectly mixed volumes, neglecting mixing delays, and are missing transportation delays and automation system dynamics. For pH systems, including all of the non-ideal dynamics from piping and vessel design, control valves or variable speed pumps, and electrodes is particularly essential. I have consequently partitioned the total vessel volume into a series of plug flow and perfectly back mixed volumes to model the mixing dead times that originate from the agitation pattern and the relative location of input and output streams. I add a transportation delay for reagent piping and dip tubes due to gravity flow or blending. For extremely low reagent flows (e.g., gph), I also add an equilibration time in the dip tube after closure of a reagent valve, associated with migration of the reagent into the process followed by migration of process fluid back up into the dip tube. I add a transportation delay to electrodes in piping. I use a variable dead time block and time constant blocks in series to show the effect of velocity, coating, age, buffering and direction of pH change on electrode response. I use a backlash-stiction block and a variable dead time block to show the resolution and response time of control valves. The important goal is to get the total loop dead time and secondary lag right.
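The partitioning idea can be sketched in a few lines (a deliberately bare-bones, assumed form; a real virtual plant model includes many more effects): a plug flow delay in series with several perfectly back mixed volumes reproduces the mixing dead time and secondary lags that a single well-mixed volume misses:

```python
# Vessel mixing sketch: plug flow (transport delay) followed by N back mixed
# volumes modeled as first-order lags in series.
from collections import deque

def make_vessel(plug_delay_s, n_mixed, mixed_tau_s, dt):
    pipe = deque([0.0] * max(1, int(plug_delay_s / dt)))   # plug flow delay
    tanks = [0.0] * n_mixed                                # back mixed volumes

    def step(inlet):
        pipe.append(inlet)
        x = pipe.popleft()
        for i in range(n_mixed):
            tanks[i] += (dt / mixed_tau_s) * (x - tanks[i])  # first-order lag
            x = tanks[i]
        return x
    return step
```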
By having the much more complete model in a virtual plant, the true dynamic behavior of the system can be investigated and the best control system performance achieved by exploring, discovering, prototyping, testing, tuning, justifying, deploying, commissioning, maintaining and continuously improving, as described in the Control magazine feature article Virtual Plant Virtuosity.
Figure 1: Virtual Plant that includes Automation System Dynamics and Digital Twin Controller
Model predictive control is much better at ensuring you have the actual total dynamics, including dead time, lags and lead times, at a particular operating point. However, the models do not include the effect of backlash-stiction or actuator and positioner design on valve response time, and consequently on total loop dead time, because by design the steps are made several times larger than the deadband and resolution or sensitivity limits of the control valve. Also, the models identified are for a particular operating point and normal operation. To cover different modes of operation and production rates, multiple models must be used, requiring logic for a smooth transition or recently developed adaptive capabilities. I see an opportunity to use the results from the identification software used by MPC to provide a more accurate dead time, lag time and lead time by inserting these in blocks on the measurement of the process variable in first principle models. The identification software would be run for different operating points and operating conditions, enabling the addition of supplemental dynamics in the first principle models. This addresses the fundamental deficiency of dead times, lag times and lead times being too small in first principle models.
Statistical models are great at identifying unsuspected relationships, disturbances and variability in the process and measurements. However, these are correlations and not necessarily cause and effect. Also, continuous processes require dynamic compensation of each process input so that it matches, timewise, the dynamic response of each process output being studied. This is often not stated in the literature and is a formidable task. Some methods propose using a dead time on the input, but for large time constants the dynamic response of the predicted output is in error during a transient. These models are designed more for steady state operation, but this is often an ideal situation not realized due to disturbances originating from the control system itself through interactions, resonance, tuning, and limit cycles from stiction, as discussed in the Control Talk blog The most disturbing disturbances are self-inflicted. Batch processes do not require dynamic compensation of inputs, making data analytics much more useful in predicting batch end points.
I think there is a synergy to be gained by using MPC to find missing dynamics and statistical process control to help track down missing disturbances and relationships that are subsequently added to the first principle models. Recent advances in MPC capability (e.g., Aspen DMC3) to automatically identify changes in process gain, dead time and time constant, including the ability to compute and update them online based on first principles, have opened the door to increased benefits from using MPC to improve first principle models and vice versa. Multivariable control and optimization where there are significant interactions and multiple controlled, manipulated and constraint variables are best handled by MPC. The exception is very fast systems where the PID controller is directly manipulating control valves or variable frequency drives for pressure control. Batch end point prediction might also be better implemented by data analytics. However, in all cases the first principle model should be accordingly improved and used to test the actual configuration and implementation of the MPC and analytics, and to provide training of operators extended to all engineers and technicians supporting plant operation.
I would think for research and development, the ability to gain a deeper and wider understanding of different process relationships for different operating conditions would be extremely important. This knowledge can lead to process improvements and to better equipment and control system design. For pH and biological control systems, this capability is essential.
For a greater perspective on the capability of various modeling and control methodologies, see the ISA Mentor Program post with questions by protégé Danaca Jordan and answers by Hunter Vegas and me: What are the New Technologies and Approaches for Batch and Continuous Control?
The post, Many Objectives, Many Worlds of Process Control first appeared on ControlGlobal.com's Control Talk blog.
In many publications on process control, the common metric you see is integrated absolute error for a step disturbance on the process output. In many tests for tuning, setpoint changes are made and the most important criteria becomes overshoot of setpoint. Increasingly, oscillations of any type are looked at as inherently bad. What is really important varies because of the different loops and types of processes. Here we seek to open minds and develop a better understanding of what is important.
– Compressor surge, SIS activation, relief activation, undesirable reactions, poor cell health
– Total amount of off-spec product to enable closer operation to optimum setpoint
– Interaction with heat integration and recycle loops in hydrocarbon gas unit operations
– Batch cycle time, startup time, transition time to new products and operating rates
– Wasted energy-reactants-reagents, poor cell health (high osmotic pressure)
– Passing of changes in input flows to output flows upsetting downstream unit ops
– Resonance, interaction and propagation of disturbances to other loops
* FRV is the Final Resting Value of the PID output. Overshoot of the FRV is necessary for setpoint and load response for integrating and runaway processes. However, for self-regulating processes not involving highly mixed vessels (e.g., heat exchangers and plug flow reactors), aggressive action in terms of PID output can upset other loops and unit operations that are affected by the flow manipulated by the PID. Not recognized in the literature is that external-reset feedback of the manipulated flow enables setpoint rate limits to smooth out changes in manipulated flows without affecting the PID tuning.
– Fast self-regulating responses, interactions and complex secondary responses with sensitivity to SP and FRV overshoot, split range crossings and utility interactions.
– Important loops tend to have slow near or true integrating and runaway responses with minimizing peak and integrated errors and rise time as key objectives.
– Important loops tend to have fast near or true integrating responses with minimizing peak and integrated errors and interactions as key objectives.
– Fast self-regulating responses and interactions with propagation of variability into product (little to no attenuation of oscillations by back mixed volumes) with extreme sensitivity to variability and resonance. Loops (particularly for sheets) can be dead time dominant due to transportation delays unless there are heat transfer lags.
– Most important loops tend to have slow near or true integrating responses with extreme sensitivity to SP and FRV overshoot, split range crossings and utility interactions. Load disturbances originating from cells are incredibly slow and therefore not an issue.
A critical insight is that most disturbances are on the process input, not the process output, and are not step changes. The fastest disturbances are generally flow or liquid pressure, but even these have an 86% response time of at least several seconds because of the 86% response time of valves and the tuning of PID controllers. The fastest and most disruptive disturbances are often manual actions by an operator or setpoint changes by a batch sequence. Setpoint rate limits and a 2 degrees of freedom (2DOF) PID structure with beta and gamma approaching zero can eliminate much of the disruption from setpoint changes by slowing down changes in the PID output from proportional and derivative action. A disturbance to a loop can be considered fast if it has an 86% response time less than the loop deadtime.
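For reference, a common textbook form of the 2DOF PID (not tied to any particular vendor) weights the setpoint r by beta in the proportional term and gamma in the derivative term, so that with beta and gamma at zero a setpoint change acts only through the integral term:

```latex
u(t) = K_c\left[\left(\beta\,r - y\right) + \frac{1}{T_i}\int_0^t \left(r - y\right)d\tau
     + T_d\,\frac{d\left(\gamma\,r - y\right)}{dt}\right]
```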
If you would like to hear more on this, check out the ISA Mentor Program Webinar Recording: PID Options and Solutions Part 1.
If you want to be able to explain this to young engineers, check out the dictionary for translation of slang terms in the Control Talk Column “Hands-on Labs build real skills.”
The post How to Get Rid of Level Oscillations in Industrial Processes first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the U.S. This question comes from Luis Navas.
For an MPC application I need to build a smoothed moving mean from a batch level to use as a controlled variable for my MPC, so the simple moving average is done as depicted below. However, I need to smooth the signal further, because there is still some signal ripple. I tried a low-pass filter, achieving some improvement as seen in Figure 1. But perhaps you know a better way to do it, or I simply need to increase the filter time.
Figure 1: Old Level Oscillations (blue: actual level and green: level with simple moving mean followed by simple moving mean + first order filter)
I use rate limiting when a ripple is significantly faster than a true change in the process variable. The velocity limit would be the maximum possible rate of change of the level. The velocity limit should be turned off when maintenance is being done and possibly during startup or shutdown. The standard velocity limit block should offer this option. A properly set velocity limit introduces no measurement lag. A level system (any integrator) is very sensitive to a lag anywhere.
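A minimal discrete form of this velocity limit (assumed units and structure, for illustration) simply clamps the step between samples to the maximum physically possible rate of level change:

```python
# Velocity (rate-of-change) limit: reject ripple faster than the process can
# actually move, without adding measurement lag like a conventional filter.
def velocity_limit(new_pv, last_pv, max_rate_per_s, dt, enabled=True):
    if not enabled:              # disable for maintenance, startup, or shutdown
        return new_pv
    max_step = max_rate_per_s * dt
    return last_pv + max(-max_step, min(max_step, new_pv - last_pv))
```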
If the oscillation stops when the controller is in manual, the oscillation could be from backlash or stiction. In your case, the controller appears to be in auto with a slow rolling oscillation possibly due to a PID reset time being too small.
I did a Control Talk Blog that discusses good signal filtering tips from various experts besides my intelligent velocity limit.
In many cases, I've seen signals overly filtered. Often, if the filtered signal looks good to your eye, it's too much filtering. As Michel Ruel states: if the period is known, a moving average (sum of the most recent N values divided by N) will nearly completely remove a uniform periodic cycle. So the issue is how much lag is introduced. Depending on the MPC, one may be able to specify variable CV weights as a function of the magnitude of the error, which will decrease the amount of MV movement when the CV weight is low; or the level signal could be brought in as a CV twice, with different tuning or filtering applied to each.
Since the oscillation is uniform in period and amplitude, the moving average as described by Michel Ruel is the best starting point. Any subsequent noise from non-uniformity can be removed by an additional filter, but nearly all of this filter time becomes equivalent dead time in near and true integrating processes. You need to be careful that the reset time is not too small as you decrease the controller gain, whether due to filtering or to absorb variability. The product of PID gain and reset time should be greater than twice the inverse of the integrating process gain (1/sec) to prevent slow rolling oscillations that decay gradually. Slide 29 of the ISA webinar on PID options and solutions gives the equations for the window of allowable PID gains. Slide 15 shows how to estimate the attenuation of an oscillation by a filter. The webinar presentation and discussion is in the ISA Mentor Program post How to optimize PID controller settings.
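A sketch of Michel Ruel's moving average over one known ripple period (assumed discrete form; N is the period divided by the sample time) is shown below:

```python
# Moving average over exactly one ripple period: the sum of the most recent
# N values divided by N nearly cancels a uniform periodic cycle.
from collections import deque

def make_moving_average(period_s, dt):
    n = max(1, int(period_s / dt))
    window = deque(maxlen=n)

    def smooth(pv):
        window.append(pv)
        return sum(window) / len(window)
    return smooth
```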
If you need to minimize dead time introduced by filtering, you could develop a smarter statistical filter such as cumulative sum of measured values (CUSUM). For an excellent review of how to remove unwanted data signal components, see the InTech magazine article Data filtering in process automation systems.
My experience is that most times a cycle in a disturbance flow is already causing cycling in other variables (due to the multivariable nature of the process). And advanced control, including MPC, will not significantly improve the situation and may make it worse. So it is best to fix the cycle before proceeding with advanced control. Making a measured cyclic disturbance a feedforward to MPC likely won’t help much. MPC normally assumes the current value of the feedforward variables stays constant over the prediction horizon. What you’d want is to have the future prediction include the cycle. Unfortunately this is not easily done with the MPC packages today.
Often, levels are controlled by a PID loop, not in the MPC. The exception can be if there are multiple MVs that must be used to control the level (e.g., multiple outlet flows), or the manipulated flow is useful for alleviating a constraint (see the handbook). Another exception is if there is significant dead time between the flow and the level.
Thank you for the support. I think the ISA Mentor Program resources are a truly elite support team. By the way, I have already read the blogs about signal filtering.
My comments and clarifications:
Figure 2: New Level Oscillations (blue: actual level and green: level with Ruel moving average)