Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.
The post, Residing on Residence Time, first appeared on ControlGlobal.com's Control Talk blog.
The time spent residing on this column is time well spent if you want to become famous for improving process performance with the side benefit of becoming best buds with the process engineer. The implications are enormous in terms of process efficiency and capacity from the straightforward concept of how much time a fluid resides in process equipment.
The residence time is simply the equipment volume divided by the fluid volumetric flow rate. The fluid can be back mixed (swirling in opposite direction of flow) in the volume due to agitation, recirculation or boiling. A lot of back mixing makes nearly all of the residence time a process time constant. If there is hardly any back mixing, we have plug flow and nearly all of the residence time becomes deadtime (transportation delay).
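To make the arithmetic concrete, here is a minimal sketch (my own illustration with made-up numbers, not from the original post):

```python
# Hypothetical sketch: residence time and its split between process
# time constant and dead time based on an assumed back-mixed fraction.
def residence_time_split(volume, flow, backmix_fraction):
    """volume and flow in consistent units (e.g., m3 and m3/min).

    backmix_fraction ~1.0 for a well-mixed volume (mostly time constant),
    ~0.0 for plug flow (mostly transportation delay).
    """
    residence_time = volume / flow
    time_constant = backmix_fraction * residence_time
    dead_time = (1.0 - backmix_fraction) * residence_time
    return time_constant, dead_time

# Well-mixed 10 m3 vessel at 2 m3/min: mostly a ~5 min time constant
print(residence_time_split(10.0, 2.0, 0.95))  # (4.75, 0.25) minutes
```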
Deadtime is always bad. The ultimate limits to the peak and integrated errors for a load disturbance are proportional to the deadtime and deadtime squared, respectively.
A particular process time constant can be good or bad. If the process time constant in question is the largest time constant in the loop, it slows down disturbances on the process input and enables a larger PID gain. The process variability can be dramatically reduced for a process time constant much larger than the total loop deadtime. The slower time to reach setpoint can be sped up by the higher PID gain, provided there is proportional action on error and not just on PV in the PID structure.
If the process time constant in question is smaller than another process time constant, possibly due to volumes in series or heat transfer lags, a portion of the smaller time constants becomes effective deadtime in a first order approximation. Thus, heat transfer lags and volumes between the manipulated variable and controlled variable create detrimental time constants. A time constant due to transmitter damping or signal filtering will add effective deadtime and should be just large enough to keep fluctuations in the PID output due to noise from exceeding the valve or variable speed drive deadband and resolution, whichever is larger.
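A common way to quantify this lumping (my own sketch of the approximation with assumed values, not code from the post; Skogestad's half rule would count only half of the second-largest lag):

```python
# FOPDT approximation: keep the largest time constant as the process
# lag and lump the pure delay plus the smaller time constants into
# effective dead time (a conservative shortcut; Skogestad's half rule
# would add only half of the second-largest lag).
def fopdt_lumping(pure_delay, time_constants):
    lags = sorted(time_constants, reverse=True)
    tau = lags[0]
    effective_dead_time = pure_delay + sum(lags[1:])
    return tau, effective_dead_time

# 2 s delay with 50 s, 5 s and 3 s lags -> tau = 50 s, dead time = 10 s
print(fopdt_lumping(2.0, [50.0, 5.0, 3.0]))
```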
At low production rates, the residence time gets larger, which is helpful if the volume is back mixed, but the process gain increases dramatically for temperature and composition control. If the volume is plug flow, we are in dire straits because the larger residence time creates a larger transportation delay resulting in a double whammy of high process gain and high deadtime causing oscillations as explained in the Control Talk Blog “Hidden factor in Our Most Important Loops”. For gas volumes (e.g., catalytic reactors), the residence time is usually very small (e.g., few seconds) and the effect is mitigated.
If you want more information on opportunities to learn what is really important, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new McGraw-Hill 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition, capturing the expertise of 50 leaders in industry.
The post What Factors Affect Dead Time Identification for a PID Loop? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Adrian Taylor.
Adrian Taylor is an industrial control systems engineer at Phillips 66.
One would expect dead time to be the easiest parameter to estimate, yet when using software tools that identify the process model in closed loop I find identification of dead time is inconsistent. Furthermore when using software identification tools on simulated processes where the exact dead time is actually known, I find on occasions the estimate of dead time is very inaccurate. What factors affect dead time identification for a PID loop?
When we identify dead time associated with tuning a PID loop, it is normally part of a model such as First Order Plus Dead Time (FOPDT) or the slightly more complicated Second Order Plus Dead Time (SOPDT). Normally, and traditionally, we generate the data by step testing: start at steady state (SS), make a step-and-hold in the manipulated variable (MV, the controller output), then observe the response of the controlled variable (CV). We pretend that there were no uncontrolled disturbances and make the simple linear model best fit the data. This procedure has served us well for the 80 years or so that we have used models for tuning PID feedback controllers or setting up feedforward devices, but there are many issues that can lead to inconsistent or unexpected results.
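For reference, the FOPDT step response has a simple closed form; here is a minimal sketch of it in Python (my illustration with assumed parameter values, not code from the post):

```python
import numpy as np

# FOPDT step response: after the dead time theta, the CV approaches
# K*du with first-order time constant tau.
def fopdt_step(K=2.0, tau=30.0, theta=9.0, du=1.0, dt=1.0, t_end=150.0):
    t = np.arange(0.0, t_end, dt)
    y = np.where(t < theta, 0.0, K * du * (1.0 - np.exp(-(t - theta) / tau)))
    return t, y

t, y = fopdt_step()  # the kind of data a tuning tool would fit a model to
```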
One of the issues is that these models do not exactly match the process behavior. The process may be of higher order than the model. Consider a simple flow rate response. If the i/p device driving the valve has a first-order response, and the valve has a first-order response, and there is a noise filter on the measurement, then the flow rate measurement has a third-order response to the controller output. The distributed nature of heat exchangers and thermowells, and the multiple trays in distillation all lead to high order responses.
So, the FOPDT model will not exactly represent the process; the optimization algorithm in the modeling approach seeks the simple model that best fits the overall process response. In a zero dead time, high-order process, the best model will delay the modeled response so that the subsequent first-order part of the model can best fit the remaining data. The best model will report a dead time even if there is none. The model does not report the process dead time, but provides a pseudo-delay that makes the rest of the model best fit the process response. The model dead time is not the time point where one can first observe the CV change.
A second issue is that processes are usually nonlinear, and the linear FOPDT model cannot match the process. Accordingly, steps up or down from a nominal MV value, or testing at alternate operating conditions will experience different process gains and dynamics, which will lead to linear models of different pseudo-dead time values.
A third issue is that the best fit might be in a least squares sense over all the response data, or it might be on a two-point fit of mid-response data. The classic hand-calculated “reaction curve” models use the point of highest slope of the response to get the delay and time-constant by extrapolating the slope from that point to where it intersects the initial and final CV values. A “parametric” method might use the points when the CV rose one-quarter and three-quarters of the way from the initial to the final steady state values and estimate delay and time-constant from those two points. By contrast, a least squares approach would seek to make the model best fit all the response data not just a few points. The two-point methods will be more sensitive to noise or uncontrolled disturbances. My preference is to use regression to best fit the model over all the data to minimize the confounding aspects of process noise.
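As a concrete example of a two-point fit, here is a sketch of the 28.3 percent/63.2 percent variant, which uses the fact that an FOPDT response reaches those fractions at θ + τ/3 and θ + τ (my illustration, assuming a clean, monotonic response):

```python
import numpy as np

# Two-point method sketch: for an FOPDT response, 28.3% of the change
# is reached at theta + tau/3 and 63.2% at theta + tau, so
# tau = 1.5*(t632 - t283) and theta = t632 - tau.
def two_point_fopdt(t, cv, cv0, cv_final):
    frac = (cv - cv0) / (cv_final - cv0)
    t283 = t[np.argmax(frac >= 0.283)]   # first time 28.3% is reached
    t632 = t[np.argmax(frac >= 0.632)]   # first time 63.2% is reached
    tau = 1.5 * (t632 - t283)
    theta = t632 - tau
    return tau, theta
```

On noisy data the two crossing times jump around, which is exactly the sensitivity noted above.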
A fourth issue is that the step testing might not have started at steady-state, SS, nor ended at SS. If the process was initially changing because of its response to prior adjustments, then the step test CV response might initially be moving up or down. This will confound estimating the pseudo-delay and time-constant of any modeling approach. If the process does not settle to a SS, but is continuing to slowly rise, then the gain will be in error, and if gain is used in the estimation procedure for the pseudo-delay, it will also include that error. If replicate trials have a different background beginning, a different residual trend, then the models will be inconsistent.
A fifth issue relates to the assumption of no disturbances. If a disturbance is affecting the process then, similar to the case of not starting at SS, the model will be affected by the disturbance, not just the MV.
Here is a sixth. Delay is nonlinear, and in a sampled-data model it is an integer number of intervals. If the best value for the pseudo-delay was 8.7 seconds but the data were on a 1-sec sample interval, the delay would either be rounded or truncated. It might be reported as 8 or as 9 sec. This is a bit inconsistent. Further, even if the model is linear in differential equation terminology, the search for an optimum pseudo-delay is nonlinear. Most optimizers end up in a local minimum, which depends on the initialization values. In my explorations, the 8.7-sec ideal value might be reported within a 0- to 10-sec range on any one particular optimization trial. Optimizers need to be run from many initial values to find the global optimum.
So, there are many reasons for the inconsistent and inaccurate results.
You might sense that I don’t particularly like the classic single step response approach. But I have to admit that it is fully functional. Even if a control action is only 70 percent right because the model was in error, the next controller correction will reduce the 30 percent error by 70 percent. And, after several control actions, the feedback aspect will get the controller on track.
Although fully functional, I think that the classic step-and-hold modeling approach can be improved. I used to recommend 4 MV steps – up-down-down-up. This keeps the CV in the vicinity of the nominal value, and the 4 steps temper the effect of noise, nonlinearity, disturbances, and a not-at-SS beginning. However, it takes time to complete 4 steps, production usually gets upset with the extended CV deviations, and it requires an operator to monitor the test and determine when to start each new step.
My preference now is to use a “skyline” MV sequence, which is patterned after the MV sequence used to develop models for model-predictive control, MPC, also termed advanced process control, APC. In the skyline testing, the MV makes steps to random values within a desired range, at random time intervals ranging from about ½ to 2 time-constants. In this way, in the same time interval for the 4-step up-down-down-up response, the skyline generates about 10 responses, can be automated, and does not push the process as far or for an extended period from the nominal value as traditional step testing. The large number of responses does a better job of tempering noise and disturbances, while requiring less attention and causing smaller process upsets.
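A skyline sequence generator can be as simple as the following sketch (my illustration; the range, step count, and the roughly ½-to-2 time-constant hold times are the knobs described above):

```python
import random

# Skyline MV test sequence: random levels within a desired range, held
# for random durations between about 0.5 and 2 open-loop time constants.
def skyline_sequence(mv_lo, mv_hi, tau, n_steps, dt=1.0):
    mv = []
    for _ in range(n_steps):
        level = random.uniform(mv_lo, mv_hi)         # random MV value
        hold = random.uniform(0.5 * tau, 2.0 * tau)  # random hold time
        mv.extend([level] * max(1, round(hold / dt)))
    return mv

mv = skyline_sequence(mv_lo=40.0, mv_hi=60.0, tau=30.0, n_steps=10)
```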
Because the skyline input sequence does not create step-and-hold responses from one SS to another, the two-point methods for reaction curve modeling cannot be used. But regression certainly can be used. What is needed is an approach to nonlinear regression (to find the global minimum in the presence of local optima), and a nonlinear optimizer that can handle the integer aspects of the delay. I offer open-code software on my web site in Visual Basic for Applications, free to any visitor. Visit r3eda.com and use the menu item “Regression” then the sub-item “FOPDT Modeling.”
You can enter your data in the Excel spreadsheet and press the run button to let the optimizer find the best model. The model includes both the reference values for the MV and CV (FOPDT models are deviations from a reference) and initial values (in case the data does not start at an ideal SS). The optimizer is Leapfrogging, one of the newer multi-player direct search algorithms that can cope with multiple optima, nonlinearity, and discontinuities. It seeks to minimize the sum of squared deviations, SSD, over all the data. The optimizer is reinitialized as many times as you wish to ensure that the global is found, and the software reports the cumulative distribution of SSD values to reveal confidence that the global best has been found.
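As a rough Python analogue of the idea (my sketch, not the r3eda VBA code), one can sweep the integer delay explicitly and keep the global minimum SSD instead of trusting a single optimizer start:

```python
import numpy as np

# FOPDT identification sketch: sweep the integer delay (in samples) and,
# for each candidate, fit the discrete first-order model
#   y[k+1] = a*y[k] + b*u[k-d]
# by least squares; keep the delay with the smallest SSD.
# Assumes u and y are deviations from their reference values.
def fit_fopdt(u, y, dt, max_delay):
    best = None
    n = len(y)
    for d in range(max_delay + 1):
        Y = y[d + 1:]
        X = np.column_stack([y[d:n - 1], u[:n - 1 - d]])
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        a, b = coef
        ssd = float(np.sum((Y - X @ coef) ** 2))
        if 0.0 < a < 1.0 and (best is None or ssd < best[0]):
            tau = -dt / np.log(a)   # continuous time constant
            gain = b / (1.0 - a)    # steady-state gain
            best = (ssd, gain, tau, d * dt)
    return best  # (SSD, gain, time constant, dead time)
```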
Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.
Many thanks for your very detailed response. I look forward to having a play with your skyline test + regression method. I have previously set up a spreadsheet to carry out the various two-point methods using points at 25 percent/75 percent, 35.3 percent/85.3 percent and 28.3 percent/63.2 percent. As your regression method minimizes the errors and should always give the best fit, it will also be interesting to compare the various two-point methods to see which most closely matches your regression best fit method for various different types of process dynamics. For the example code given in your guidance notes: I presume r is a random number between 0 and 1? I note the open loop settling time is required. Is the procedure to still carry out an open loop step test initially to establish the open loop settling time, and then in turn use this to generate the skyline test?
Yes, RND is a uniformly distributed random number on the 0-1 interval. It is not necessary to have an exact number for the settling time. In a nonlinear process, it changes with operating conditions; and the choice of where the process settles is dependent on the user’s interpretation of a noisy or slowly changing signal. An intuitive estimate from past experience is fully adequate. If you have any problems with the software, let me know.
See the ISA Mentor Program webinar Loop Tuning and Optimization for tips. Usually the dead time is easily identified in closed loop techniques but in open loop you can miss a chunk of it. Most modern tools analyze the process response in the frequency domain and in this case, dead time corresponds to high frequencies. Tests using a series of pulses (or double pulses) are rich in high frequencies and in this case dead time is well identified (if we use a first or second order + dead time, remember that dead time represents real dead time plus small time constants).
Many thanks for your response. While I have experienced the problem with various different identification tools, the dead time estimate when using a relay test based identification tool seemed to be particularly inconsistent. My understanding now is that while the relay test method is very good at identifying the ultimate gain/ultimate period, attempts to convert to an FOPDT model can be more problematic for this method.
Model identification results can be poor due to the quality of the test/data as well as the capabilities of the model identification technology/software. Insufficient step sizes can lead to poor results; for example, not making big enough test moves relative to valve limitations (dead band and stick/slip) and the noise level of the measurements you want to model. Also, to get good results, multiple steps may be needed to minimize the impact of unmeasured disturbances.
Another factor is the identification algorithm itself and capabilities of the software. Not all are equivalent and there is a wide range of approaches used, including how dead time is estimated. One needs to know if the identification approach works with closed-loop data. Not all do. Some include provisions for pre-filtering the data to minimize the impact of unmeasured disturbances by removing slow trends. This is known as high pass filtering, in contrast to low pass filtering which removes higher frequency disturbances.
If a sufficient number of steps is done, most identification approaches will obtain good model estimates, including dead time. Dead time estimates can usually be improved by making higher frequency moves (e.g., fractions of the estimated steady-state response time).
As indicated in my response to the question by Vilson, the user will often need to specify whether the process is integrating. Estimates of process model parameters can be used to check or constrain the identification. As mentioned, one may be able to obtain model estimates from historical data – either by eyeball or by using selected historical data in the model identification – and thereby avoid a process test.
Digital devices including historians create a dead time that is one-half the scan time or execution rate plus latency. If the devices are executing in series and the test signal is introduced as a change in controller output, then you can simply add up the dead times. Often test setups do not have the same latency or same order of execution or process change injection point as in the actual field application. If the arrival time is different within a digital device execution, the dead time can vary by as much as the scan time or execution rate.
If there is compression, backlash or stiction, there is also a dead time equal to the dead band or resolution limit divided by the rate of change of the signal, assuming the signal change is larger than the dead band or resolution limit. If there is noise or disturbances, the dead time estimate could be smaller or larger depending upon whether the induced change is in the same or opposite direction, respectively.
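Putting rough numbers on these two contributions (a back-of-the-envelope sketch with assumed values, not figures from the post):

```python
# Back-of-the-envelope automation dead time (assumed example values).
def automation_dead_time(scan_times, latency, dead_band_pct, rate_pct_per_s):
    # each digital device in series adds about half its scan/execution
    # time, plus any latency
    digital = sum(0.5 * ts for ts in scan_times) + latency
    # dead band (or resolution limit) divided by the signal rate of
    # change, valid when the signal change exceeds the dead band
    mechanical = dead_band_pct / rate_pct_per_s
    return digital + mechanical

# 0.5 s module execution, 1 s historian scan, 0.1 s latency, 0.5% dead
# band, signal moving at 0.2%/s -> about 3.35 s of added dead time
print(automation_dead_time([0.5, 1.0], 0.1, 0.5, 0.2))
```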
Some systems have a slow execution or large latency compared to the process dead time. Identification is particularly problematic for fast systems (e.g., flow, pressure) and for any loop where the largest sources of dead time are in the automation system, resulting in errors of several hundred percent. Electrode and thermowell lags can be incredibly large, varying with velocity, direction of change, and fouling of the sensor. Proven fast software directly connected to the signals and designed to identify the open loop response (e.g., Entech Toolkit), plus multiple tests with different perturbation sizes and directions at different operating conditions (e.g., production rates, setpoints and degrees of fouling), is best.
I created a simple module in Mimic that offers a rough, fast estimate of the dead time, ramp rate and integrating process gain for near-integrating and true integrating processes within 6 dead times, accurate to about 20 percent if the process dead time is much larger than the software execution rate. While the relay method is not able to identify the open loop gain and time constant, it can identify the dead time. I have done this in the Mimic “Rough n Ready” tuner I developed. Some auto tuning software may be too slow, or may take a conservative approach using the largest observed delay between an MV change and a PV change plus a maximum assumed update rate, possibly with a deranged algorithm thinking larger is better.
McMillan, Gregory K., Good Tuning: A Pocket Guide.
See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
About the Author: Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.
The post, Variability Appearance and Disappearance originally appeared on the ControlGlobal.com Control Talk blog.
Particularly confusing is variability that seems to come out of nowhere and then disappear. There are not presently good tools to track down the sources, because most of them turn out to be self-inflicted, involving automation system deficiencies, dynamics and transfer of variability. You need to understand the possible causes to be able to identify and correct the problem. Here we provide the fundamental knowledge needed on the major sources of this particularly confusing variability.
One of the most prevalent and confusing problems is oscillations that break out and disappear in cascade control loops and in loops manipulating large valves and variable frequency drives (VFDs). The key thing to look for is whether these oscillations only start for large changes in controller output. A slow secondary loop, valve or VFD can keep up with the requested changes when small setpoint changes or small disturbances spread small changes in controller output over several controller executions. For large changes, oscillations break out. The amplitude and settling time increase with the degree of mismatch between the requested rate of change and the rate-of-change capability of the secondary loop, valve or VFD.

The best solution is of course to make whatever is being manipulated faster. You can make a secondary loop faster by decreasing its dead time and lag times (faster sensors, filters, damping, and update rates) and by making the secondary loop tuning faster. You can make a control valve faster with a higher gain and no integral action in the positioner, by putting a volume booster on the positioner output(s) with the booster bypass valve slightly open to provide booster stability, and by increasing the size of the air supply lines and, if necessary, the actuator air connections. You can make VFDs faster by making sure there is no speed rate limiting in the drive setup, keeping fast speed control in the VFD in the equipment room (not moving it into a much slower control system controller), and increasing the motor rating and size as needed. If the problem persists, turning on external-reset feedback with fast, accurate readback of the process variable of the manipulated secondary loop, the actual valve position and the VFD speed can stop the oscillations.
Another confusing trigger for oscillations is a low production rate. The process gain and dead time both increase at low production rates, causing oscillations as explained in the Control Talk blog “Hidden factor in Our Most Important Loops”. Also, stiction is much greater as the valve operating point approaches the closed position, due to higher friction from sealing and seating surfaces. Valve actuators may also be undersized for operating with the higher pressure drops near closure. Stiction oscillation size and persistence increase with valves designed to reduce leakage. Most valve suppliers do not want to do valve response testing below 20% output because the valve dead band and resolution are worse there.

The installed flow characteristic of linear trim distorts to quick opening when the ratio of valve pressure drop to system pressure drop at maximum flow is less than 0.25. The amplification of oscillations from backlash and stiction, and the instability from the steep slope (high valve gain) of the quick opening installed characteristic, make the oscillations larger. Even more insidious is the not commonly recognized reality that a VFD has an installed flow characteristic that becomes quick opening if the ratio of static head to system pressure drop is greater than 0.25, triggering the same sort of problems. Signal characterization can help linearize the loop, but you still need adaptation of the controller tuning settings for the increase in process gain from the hidden factor and the increase in dead time from transportation delays, and you are still stuck with stiction. Besides the multiplying effect of the VFD gain on the open loop gain, there is amplification of oscillations from the 0.35% resolution limit of the traditional VFD I/O card and from the dead band introduced in the VFD setup in a misguided attempt to reduce reaction to noise.

Then there are oscillations from erratic signals and noise from measurement rangeability problems, discussed in last month’s Control Talk blog “Lowdown on Turndown”. Low production rates can also cause operation near the split range point and crisscrossing of the split range point, causing persistent oscillations from the severe nonlinearities and discontinuities besides greater stiction. Again, external-reset feedback can help, but the better solution is control strategies and configurations that eliminate unnecessary crossings of the split range point, as discussed in the Control Talk column “Ways to improve split range control”.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new McGraw-Hill 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition, capturing the expertise of 50 leaders in industry.
The post Alarm Management and DCS versus PLC/SCADA Systems first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Aaron Doxtator.
Aaron Doxtator is a process control EIT with XPS | Expert Process Solutions. He has experience providing process engineering and controls engineering services, primarily for clients in the mining and mineral processing sector.
I am working on a project that I believe many other sites may wish to undertake, and I was looking for best practice information.
Using ISA 18.2, we are performing an alarm audit using the rationalization and documentation process for all plant alarms. This has been working well, but an examination of the plant’s “bad actors” has slowed down the process. These bad actors are a significant portion of the annunciated alarms, but many of them are considered redundant in certain scenarios and could only be mitigated with state-based alarming.
While the rationalization and documentation process is useful for examining many of the nuisance process alarms, it quickly becomes much more complicated when state-based alarming is to be considered.
The high-level process to implement these changes is as follows:
Is there a recommended best practice that one could follow in order to document the state-based alarm changes?
ISA has a series of technical reports that give guidance on implementation of the standard. TR4 covers advanced and enhanced alarming, including state-based alarming.
There is guidance that includes:
In my experience, there are very few if any chemical or refinery units that would not benefit from state-based alarming (SBA). The basic problem is that most alarm systems are configured for only a single process condition (usually running at steady state), but real processes must operate through a variety of states: starting up, shutting down, product transitions, regeneration, partial bypass, etc. In these situations, alarm systems can and will produce multiple alarms that are meaningless to the operator (nuisance alarms). Alarm floods are the natural result. Alarm floods can be problematic in that they tend to distract the operator from the more important task at hand, can be misleading, and can hide important information.
Now, for a look at actually answering your question: ISA-18.2 TR4 as Nick cited is a good starting point for information on SBA. I will add some pointers and caveats as well:
One final observation, I note that you referred to a bad actors review as slowing down the process. My normal approach is to not concentrate on the bad actors, but to conduct a comprehensive rationalization (adding SBA in the process) that includes all tags in the control system. The bad actors will be covered in this process.
I suggest you check out the Control Talk columns with Nick Sands Alarm management is more than just rationalization and Darwin Logerot The dynamic world of alarms.
While most of my experience has involved using PLC/DCS (or just DCS) for plant control, some clients have expressed interest in shifting away from using a DCS altogether and utilizing exclusively PLC/SCADA. Aside from client preference, are there recommendations for when one solution (or one combination of solutions) may be preferred over the other?
I wanted to address your question about DCS versus PLC/SCADA. Historically, DCS and PLC systems were very different solutions. A DCS was generally very expensive, had slow processing/scan times, and was specifically designed to control large, continuous processes (i.e., refineries, petrochemical plants) with minimal downtime. PLCs boasted very high speed processing, were designed for digital IO and sequencing, and were typically utilized for machine control and smaller processes. Over the years, both DCS and PLC manufacturers have modified their products to expand into the “middle ground” between the two systems. DCSs were made more scalable (to make them competitive in small applications) and added extensive sequencing logic to make them better suited for digital control. At the same time, PLCs added much more analog logic and the ability to program in function blocks and other languages, and began incorporating a graphical layer to make them look and feel more like a DCS.
While the two systems are undeniably much more similar than they were in the past, there are definitely some significant differences between the two technologies that make one better suited than the other in a variety of situations. Generally we try to remain “vendor agnostic” in our answers, so I won’t specifically name names, but I will say that the offerings of the DCS and PLC vendors vary widely and some systems have much more capability than others. That being said, I’ll try to keep my answer fairly generic.
DCS systems were specifically designed to allow online changes because they were designed for plants that can run years without a shutdown. In such a plant the ability to make programming changes, add cards and racks of IO, and even upgrade software while continuing to run is paramount. PLCs generally have some ability to make online changes but there can be extensive limitations to what can be changed while running. Unfortunately many PLC vendors will say “there are virtually no limits to the changes you can make while running” – and you typically find out the hard way that this is not true even when running redundant processors. If you are looking at installing a control system on a process that must run continuously for long periods, spend a lot of time talking with users (not salespeople) to understand what you truly can (and cannot) do while running. Sometimes the solution can be as simple as creating dummy racks and IO while you are down so you can add racks later.
DCS systems typically have much slower processing speeds/scan times than PLCs. While some very recent DCS processors boast high speeds, most controllers can only process a limited number of modules at high speeds and even that speed (50ms or so) is slow compared to a PLC. If the process is extremely fast, a PLC will likely outperform a DCS.
DCS systems are usually much better at handling various networks and fieldbuses though PLCs have been improving in this regard and several third party manufacturers are now selling PLC compatible network cards. If you have existing bus systems (ASI, Foundation Fieldbus, Profibus PA, Devicenet, Bacnet, Modbus, etc) look at the system carefully and make sure it can communicate with your network. Fortunately the IOT buzz has driven both DCS and PLC manufacturers to communicate over an increasingly large array of networks so most systems are getting better in this regard.
Batch capabilities vary significantly by manufacturer so that capability is hard to define on a PLC vs DCS level. I can name DCS manufacturers who have great batch functionality and others that have minimal capability and are very difficult to program. Similarly some PLCs have good batch capabilities and others have virtually none. If you have batch and are looking at a new control system take the time to dig deep and talk extensively with people who program those systems regularly. The better systems offer extensive aliasing capabilities, have few limits in executing logic in phases, have a good batch operator interface, have an integrated tag database, and allow changes to phases/operations even as a recipe is running. Weaker systems have limited ability to alias (you must create a copy of every phase even if they are identical other than tag names), have limitations in what logic can run in the phases, have poor interfaces, and limit online changes.
Probably the last major point is cost and what you are trying to do with the data. Historically, DCSs have had a much better capability to handle classical analog control, advanced control algorithms, and batch processing. Because of that, they typically utilize a tag count based pricing model. This pricing strategy can become very expensive if the system is mostly being utilized to bring in reams of data for display and historization but not using it specifically for control. If the process has very large tag counts but doesn’t require extensive control capability, a PLC/SCADA system can be a cheaper alternative.
I hope this helps. If you have any questions about a specific vendor, ask me directly and I can share my experience.
Most of the PID capability I find valuable in terms of advanced features, most notably external-reset feedback and the enhancements to deal with large wireless update times and analyzer cycle times, is not available in PLCs. The preferred PID Standard (Ideal) Form is less common, and multiple PID structures set by setpoint weights for the proportional and derivative modes and the ability to bumplessly write to the gain setting may not exist either. Some PLCs use the Parallel or Independent Form, which negates conventional tuning practices. Even worse, computation of the PID modes in a few PLCs uses signals in engineering units rather than percent of scale, leading to bizarre tuning requirements.
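To see why the form matters for tuning, here is a minimal sketch of the two forms and the conversion between them (an illustration, not any particular vendor's algorithm):

```python
# Standard (Ideal) form: the controller gain Kc acts on all modes:
#   out = Kc * (e + (1/Ti) * integral(e) dt + Td * de/dt)
# Parallel (Independent) form: each mode has its own gain:
#   out = Kp * e + Ki * integral(e) dt + Kd * de/dt
# Settings from conventional (Standard) tuning must be converted:
def standard_to_parallel(Kc, Ti, Td):
    Kp = Kc
    Ki = Kc / Ti   # integral gain in 1/s
    Kd = Kc * Td   # derivative gain in s
    return Kp, Ki, Kd

# Kc = 2, Ti = 50 s, Td = 5 s  ->  Kp = 2, Ki = 0.04 1/s, Kd = 10 s
print(standard_to_parallel(2.0, 50.0, 5.0))
```

Note that in a Parallel form, changing Kp alone no longer moves the integral and derivative contributions, which is why conventional tuning rules break down.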
A pneumatically actuated control valve in the loop is much slower than a DCS that can execute every 100 milliseconds. If the loop manipulates a variable frequency drive speed without deadband and rate limiting, or with speed to torque cascade control, and the process deadtime is less than 100 milliseconds and the sum of time constants from signal filtering and transmitter damping is less than 200 milliseconds, the DCS may not be fast enough; but this is a lot of “ifs” rarely seen in the process industry where fluids are flowing through a pipeline. It is a different story in parts and silicon wafer manufacturing.
Bill R. Hollifield and Eddie Habibi, Alarm Management: A Comprehensive Guide
Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP., A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.
The post, Lowdown on Turndown, appeared first on the ControlGlobal.com Control Talk blog.
There are a lot of misconceptions on what is the turndown capability of measurements and final control elements (e.g., control valves and variable frequency drives). Here is a very frank concise discussion of what really determines turndown and things to watch for in terms of limiting factors. For flow measurements and final control elements, the term rangeability is often used.
The turndown of vortex meters and magmeters is determined by a minimum velocity. The actual turndown experienced is typically a lot less than stated in publications because the maximum velocity for the meter size is usually greater than the maximum velocity for the process. Much larger than needed meters are often chosen because of conservative factors built into the stated requirements by process and piping engineers and the desire to minimize pressure drops across the meters. Less than optimum straight runs of upstream and downstream piping can also reduce rangeability for vortex meters and particularly for differential head meters (flow being the square root of pressure drop), because of the introduction of flow sensor noise that becomes large relative to the size of the sensor signal at low flows. Also, physical properties of the fluid, most notably kinematic viscosity for vortex meters and conductivity for magmeters, can significantly reduce turndown capability.
The turndown for valves, more commonly stated as rangeability, is severely limited by backlash and stiction that are often a factor of two or more greater near the seat than at the mid stroke range where response testing is normally done. Also, valve actuator sizes should provide at least 150% of the maximum torque or thrust requirement to deal with less than ideal conditions and tightening of stem packing. Valve rangeability is also greatly limited by the installed flow characteristic, particularly if the valve to system pressure drop ratio at max flow is less than 0.25 in a misguided attempt to reduce pressure drop and provide more flow capacity than is actually needed.
The literature does not alert users to the fact that variable frequency drives can have a very nonlinear installed flow characteristic and poor turndown. To maximize rangeability of variable frequency drives, use a pulse width modulated inverter with slip control, speed to torque cascade control in the field (not control room), a pump head that is at least 4 times the maximum static head, totally enclosed fan cooled inverter rated motor, high resolution signal card, and minimal dead band setting in drive setup.
Transmitters and some sensors have an error that is expressed as a percent of span, which reduces turndown. Transmitters with a range narrowed to be closer to the actual maximum consequently improve turndown. The use of thermocouple (TC) and resistance temperature detector (RTD) input cards instead of transmitters introduces a huge error and resolution limit and a reduction in real rangeability due to the large spans. The use of TCs instead of RTDs severely reduces rangeability due to larger sensitivity errors and drift, provided the temperature is in the recommended range for RTDs.
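The span effect on turndown is easy to quantify (a sketch with assumed numbers, not figures from the post):

```python
# Percent-of-span error grows as percent-of-reading at low flow,
# capping usable turndown (assumed example numbers).
def usable_turndown(error_pct_span, allowed_pct_reading):
    # reading (as a fraction of span) where the span-based error
    # equals the allowed percent-of-reading error
    min_reading_fraction = error_pct_span / allowed_pct_reading
    return 1.0 / min_reading_fraction

# 0.5% of span error with 5% of reading allowed -> ~10:1 usable turndown
print(usable_turndown(0.5, 5.0))
```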
You can achieve greater rangeability by putting small and large flow meters and control valves in parallel. The process control loop manipulates the smaller valve, using the smaller flow meter for cascade control, for more precise control. A valve position controller (VPC) manipulates the large valve to keep the small valve in a good throttle range. External-reset feedback is used to reduce interactions and provide a fast correction if the small valve is moving toward the lower or upper end of its installed flow characteristic. Feedforward or flow ratio control can be used to provide quicker correction. See the Control article “Don’t Overlook PID in APC” for much more on the many uses of VPC. Note that the use of split range control is not as good, because you are normally manipulating the large valve and using the large flow meter, whose error and resolution limitations are large due to being a percent of span.
Going for more flow capacity, lower pressure drop or a cheaper installation generally hurts turndown. Remember bigger is not better and cheaper is not really cheaper in the long run.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new McGraw-Hill Process/Industrial Instruments and Controls Handbook Sixth Edition, capturing the expertise of 50 leaders in industry.
The post How to Manage Control Valve Response Issues in the Field first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Mohd Zhafran A. Hamid.
Mohd Zhafran A. Hamid is a senior instrument engineer from Malaysia working in an EPC company, Toyo Engineering Corporation. He has worked in the field of control and instrumentation for about 10 years mostly in both engineering design and involvement at site/field.
If you have selected a control valve whose installed flow characteristics significantly deviates from linear (either by mistake or forced to select due to certain circumstances), what is a practical way in the field after installation to linearize the installed flow characteristic?
You need a sensitive flow measurement to identify the installed flow characteristic online. If you have a flow measurement and make changes in the manual controller output 5 times larger than the dead band or resolution limit, spaced out by a time interval greater than the response time, the slope of the installed flow characteristic is the change in percent flow divided by the change in percent signal. You need at least 20 points identified on the installed flow characteristic.
A signal characterizer is then inserted on the controller output to convert the flow in percent of scale to percent signal, giving a piecewise linear fit that linearizes the characteristic as far as the controller is concerned. The controller output and the linearized signal to the valve should both be displayed. This linearization can be done in a positioner, but I prefer it be done in the DCS or PLC for better visibility and maintainability. For much more on signal characterizers, see my Control Talk blog Unexpected benefits of signal characterizers.
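Here is a minimal sketch of such a characterizer built from identified points (hypothetical data standing in for the 20-plus test points mentioned above):

```python
import numpy as np

# Piecewise-linear signal characterizer sketch: the identified installed
# characteristic (% flow vs % signal, hypothetical quick-opening data)
# is inverted so the PID output, treated as desired % flow, is mapped
# to the % signal sent to the valve.
signal_pct = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
flow_pct   = np.array([0, 25, 42, 55, 65, 73, 80, 86, 91, 96, 100])

def characterize(pid_out_pct):
    # interpolate on the inverted curve: desired flow -> required signal
    return np.interp(pid_out_pct, flow_pct, signal_pct)

print(characterize(50.0))  # signal % required for 50% flow (~26%)
```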
I recently read the addendum “Valve Response Truth or Consequences” in Greg’s article How to specify valves and positioners that do not compromise control. I am curious: for a fast loop where the control valve is used with a volume booster but without a positioner, how come you can move the stem/shaft by hand even though the valve size is big? Would you mind sharing the overall schematic? Also, would you share the schematic of using a positioner with a booster and booster bypass?
Positive feedback from a very sensitive booster outlet port is greatly assisting attempts to move the shaft either manually or due to fluid forces on a butterfly disk as described in item 5 of my Control Talk blog Missed opportunities in process control – Part 6. There is a schematic of the proper installation in slide 18 of the ISA Mentor Program webinar How to Get the Most out of Control Valves. I don’t have a schematic of the wrong thing to do where the volume booster input is connected to current to pneumatic transducer (I/P) output.
For new high pressure diaphragm actuators or boosters with lower outlet port sensitivity, this may not happen, since diaphragm flexure and the consequent change in pressure from the change in actuator volume may be less than the booster outlet port sensitivity, but it is not worth the risk in my book. The rule that positioners should not be used on fast loops is mostly bogus, as explained in point 4 of the same Control Talk blog. If you need a response time faster than 0.5 seconds, you should use a variable frequency drive with a pulse width modulated inverter.
Greg highlighted the importance of specifying the valve gain requirement. Is there any publicly available modeling software that we design engineers can utilize to perform valve gain analysis? So far, I have encountered only one valve manufacturer that provides publicly available control valve sizing software with a valve gain graph feature. This manufacturer calculates the process model based on the principle that the pressure losses in a piping system are approximately proportional to flow squared.
The Control Talk column Why and how to establish installed flow characteristic describes how one practitioner uses Excel to compute the installed flow characteristic. The analysis of all the friction losses in a piping system can be quite complicated because of the effect of process fluid properties and fouling determined by process conditions and operating history and the piping system including fittings, elbows, inline equipment (e.g., heat exchangers and filters), and valves.
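For the simple case that manufacturer assumes (constant total drop, losses proportional to flow squared), the distortion can be computed directly. Here is a sketch using the standard constant-total-drop relation, where beta is the ratio of valve pressure drop to total system pressure drop at maximum flow:

```python
import numpy as np

# Installed characteristic from the inherent characteristic f (fraction
# of max Cv) and the valve pressure drop ratio beta at max flow, for a
# constant total system drop: q = f / sqrt(beta + (1 - beta) * f**2)
def installed_characteristic(f, beta):
    return f / np.sqrt(beta + (1.0 - beta) * f ** 2)

x = np.linspace(0.01, 1.0, 100)    # stroke fraction
f = x                              # linear inherent trim
q = installed_characteristic(f, beta=0.1)
valve_gain = np.gradient(q, x)     # fraction flow per fraction stroke
# With beta = 0.1 the gain is ~3 near the seat and ~0.1 near full open:
# the linear trim has distorted to quick opening.
```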
A dynamic model in a Digital Twin that includes system pressure drops and the effect of fouling and the ability to enter the inherent flow characteristic perhaps by a piecewise linear fit can show how the valve gain changes for more complex and realistic scenarios. Ideally, there would be flow and pressure measurements to show key pressure drops particularly where fouling is a concern so that resistance coefficients can be back calculated.
The fouling of heat transfer surfaces can be detected by an increase in the difference needed between the process and utility temperature to compensate for the decrease in heat transfer coefficient. A slow ramp of the valve signal followed by a slow ramp in a flow measurement could reveal the installed flow characteristic by a plot of flow ramp versus the signal ramp assuming there are no pressure disturbances and flow measurement has sufficient signal to noise ratio and rangeability.
The post How to Use Industrial Simulation to Increase Learning and Innovation first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Damien Hurley.
Damien Hurley is a control and instrumentation (C&I) engineer for Fluor in the UK. He is currently involved in the detailed design phase of a project to build a new energy plant in an existing refinery in Scotland. His chief responsibility is C&I interface coordinator with construction, the existing site C&I contractor and the client.
How can I begin implementing process simulations in my learning? My background is in drone control, where all learning has a significant emphasis on simulation and testing, usually via programs such as MATLAB. Upon starting in the oil and gas engineering, procurement and construction (EPC) industry, I began getting to grips with the wide array of final elements, and my knowledge of process simulation has suffered as a result.
I’m also not exposed to simulations on a daily basis, as I was previously in the unmanned aerial vehicle (UAV) industry. How can I get started with simulation again? Specifically is the simulation of processes relevant to our industry? Can you point me in the direction of a good resource to begin getting to grips with this worthwhile subject?
Dynamic simulation is the key to most of deep learning and significant innovation in my 50-year career. Simulation has played a big role in industrial processes, especially in refining and energy plants. There are a lot of basic and advanced modeling objects for the unit operations in these plants. You can learn a lot about what process inputs and parameters are important in the building of first principle models. Even if the simulations are built for you, the practice of changing process inputs and seeing the effect on process outputs is a great learning experience. You are free to experiment and see results where your desire to learn is the main limit.
You can also learn a lot about what affects process control. Here it is critical to include all of the automation system dynamics often ignored in the literature, despite their most often being the biggest source of control loop dead time and a significant contributor to the open loop gain and nonlinearity by way of the installed flow characteristic of control valves and variable frequency drives (VFDs).
You need to add variable filter times to simulate sensors particularly thermowell and electrode lags, transmitter damping, and signal filters. You need to add variable dead time blocks to simulate transportation delays associated with injection of manipulated fluids into the unit operation and to the sensor for measurement of the controlled variables. The variable deadtime block is also needed for simulating the effect of positioners with poor sensitivity where the response time increases by two orders of magnitude for changes in signal less than 0.25 percent. You need backlash-stiction blocks to simulate the deadband and resolution limits of control valves as detailed in the Control article How to specify control valves and positioners that don’t compromise control.
VFDs can have a surprisingly large deadband introduced in the setup in a misguided attempt to reduce reaction to noise, and a resolution limit caused by an 8-bit signal input card. You also need to add rate of change limits to model the slewing rates of large control valves and the rate limits introduced in the VFD setup in a misguided attempt to reduce motor overload instead of properly sizing the motor. You need software that will provide PID tuning settings with proper identification of the total loop dead time. Finally, a performance metrics block to identify the integrated and peak errors for load disturbances and the rise time, overshoot, undershoot, and settling time for setpoint changes is a way of judging how well you are doing.
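Here is a sketch of the valve blocks named above, with deliberate simplifications (backlash as a dead band, stiction as a resolution limit, plus a slewing-rate limit); it is an illustration, not the Mimic implementation:

```python
# One execution of a simplified final-element model: backlash (dead
# band), stiction approximated as a resolution (stick-slip) limit, and
# a slewing-rate limit on travel per execution. All signals in percent.
def valve_step(u, pos, bl_state, dead_band, resolution, rate_limit, dt):
    # backlash: the internal target follows u only after the dead band
    # has been traversed on a reversal
    if u > bl_state + dead_band / 2.0:
        bl_state = u - dead_band / 2.0
    elif u < bl_state - dead_band / 2.0:
        bl_state = u + dead_band / 2.0
    # stiction simplified as quantization to the resolution limit
    target = resolution * round(bl_state / resolution)
    # slewing-rate limit on the travel achievable this execution
    move = max(-rate_limit * dt, min(rate_limit * dt, target - pos))
    return pos + move, bl_state
```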
A couple of years ago I helped develop a dynamic simulation of the control system and the many headers, boilers, and users at a large plant to optimize cogeneration and minimize the disruption to the steam system from large changes in steam use and generation in all the headers for the whole plant. ISA Mentor Program resource James Beall and protégé Syed Misbahuddin were part of the team. Over 30 feedforward and decoupling signals were developed and thoroughly tested by dynamic simulation, resulting in a smooth implementation of a much more efficient and safe system. I learned via the simulation in one case that a feedforward I thought was needed for a boiler caused more harm than good, due to changes in header pressure preceding the supposedly proactive feedforward to a header letdown valve meant to compensate for the effect of a change in firing rate demand.
First principle process models using material and energy balances of volumes in series can model the many unanticipated changes. I recently was alerted to the fact that a bypass valve around a heat exchanger provides first a fast response, from the change in the flow bypassing and going through the exchanger, followed by a delayed response in the opposite direction, caused by the same utility flow rate heating or cooling a different flow rate through the exchanger. Unless a feedforward changes the utility flow, the tuning of the PID for the temperature of the blended stream must not overreact to the initial temperature change.
Often there are leads besides lags in the temperature response associated with inline temperature control loops for jackets. For heat exchangers in a recirculation line for a volume, the self-regulating response of the exchanger outlet temperature controller is followed by a slow integrating response from recirculation of the changes in the volume temperature. Also, feedforward signals that arrive too soon can create an inverse response, and those that arrive too late create a second disturbance that makes control worse than the original feedback control. Getting the dynamics right by including the automation system dynamics besides the process dynamics is critical.
We learn the most from our mistakes. To avoid the price of making them in the field, we can use dynamic simulation as a safe way of hands-on learning for exploration and prototyping of existing and new systems, finding good and bad effects, with much more flexibility and no intrusion on the process. Dynamic models using the digital twin enable a deeper process understanding to be gained and used to make much more intelligent automation. See the Control Talk blog Simulation breeds innovation for an insightful history and future of opportunities for a safe sandbox allowing creativity by synergy of process and automation system knowledge.
Often simulation fidelity is simply stated as low, medium or high. I prefer defining at least five levels as seen below in the chapter Tip #98: How to Achieve Process Simulation Fidelity in the ISA book 101 Tips for a Successful Automation Career. Note that the term “virtual plant” I have been using for decades should be replaced with the term “digital twin” in my books and articles prior to 2018 to be in tune with the terminology for digitalization and digital transformation.
A lot of learning is possible using Fidelity Level 3 models. Fidelity Level 4 and 5 simulations with advanced modeling objects are generally needed for complex unit operations where components are being separated or formed, such as biological and chemical reactors and distillation columns, or to match dynamic response trajectories in enough detail for advanced process control, including PID control that involves feedforwards, decouplers, and state based control. Developing and testing inferential measurements, data analytics, performance metrics, and MPC and RTO applications generally requires Level 5.
In all cases I recommend a digital twin that has the blocks addressing nearly every type of automation system dynamics and metrics often neglected in dynamic simulation packages. The digital twin should have the same PID Form, Structure and options used in the process industry and a tool like the Mimic Rough-n-Ready tuner to get started with reasonable PID tuning settings.
Many software packages that were not developed by automation professionals may unfortunately seriously mess you up by not having the many sources of dead time, lags, and nonlinearities, and by employing a PID with a Parallel (Independent) Form working in engineering units instead of percent signals. A fellow protégé also in the UK who is now an automation engineer at Phillips 66 can relate his experiences in using Mimic software. If you pursue this dynamic simulation opportunity, we can do articles and Control Talk blogs together to share the understanding gained to help advance our profession.
McMillan, Gregory K., and Vegas, Hunter, 101 Tips for a Successful Automation Career.
The post, Biggest Valve Sizing Mistake, appeared first on ControlGlobal.com's Control Talk blog.
There is a common mistake made in the sizing of most control valves. The intentions that lead to this mistake may be good, but the results are insidiously bad. While you would think that the proliferation of improvements in technology and communications would lead to better awareness, the problem appears to be getting worse because of pervasive, persistent misconceptions fostered by missing fields on valve specification forms.
Presently, valve specification forms have fields for maximum flow, available pressure drop and leakage. Most people filling out the form would think that a valve that can easily handle a greater flow with a lower pressure drop and less leakage would be better. This often leads to rotary valves with tight shutoff seals. These valves are cheaper than sliding stem valves, and the actuators often included are designed for a rotary stem and can handle greater shutoff pressures. The resulting ball and butterfly valves have piston actuators designed more for on-off action. These valves are usually already in the piping specification, used extensively for automated sequential actions and shutdown. While rangeability may not be on the valve specification, it is thought to be extraordinarily great for these valves due to a prevalent definition of rangeability as a maximum flow divided by a minimum flow whose Cv is within the specified inherent flow characteristic. Gosh, you don’t even need to be concerned with piping reducers. What seals the deal is the very attractive price.

Not understood is that these on-off valves posing as throttling valves are a disaster. A low valve pressure drop to system pressure drop ratio distorts the installed flow characteristic, making it much more nonlinear. The backlash of the actuator linkages and the keylock connections from actuator shaft to stem to ball or disk is excessive, the stiction from the ball or disk seal and shaft packing is terrible, and the resolution of the piston actuator is poor. The result is limit cycles and a real rangeability that is lousy to the point of being a disaster for any loop where control better than within 5% of setpoint is desired. Often the oscillations are blamed on other sources due to lack of understanding. The real rangeability is drastically reduced to perhaps 10% of the stated rangeability due to distortion of the nonlinear installed flow characteristic and the backlash and stiction that get worse near the closed position. Operating valve positions are much less than expected due to conservative factors built into pump sizing and the maximum flow specified. Most valve suppliers will not do response testing, and if requested, the testing will not be done below 10% valve position because of the deterioration in response. The user is set up for a terrible scenario of limit cycling.
So what can we do? Please add backlash and resolution (e.g., 0.5%), response time (e.g., 2 sec), and installed flow characteristic valve gain (e.g., 0.5 to 2.0% flow per % signal) requirements at 10%, 50% and 90% positions for step changes of 0.25%, 0.5%, 1%, and 2% for all valves, plus a 50% step change for surge valves and gas pressure valves. To achieve these specification requirements, use splined shaft connections, integrally cast stem and ball or disk, v-notch balls and contoured disks, and low friction seals for rotary valves; a valve pressure drop that is at least 25% of the system pressure drop at maximum flow; low friction packing (e.g., ultra low friction (ULF) packing); sensitive diaphragm actuators (now available for much higher actuator pressures); and digital positioners tuned with maximum gain and no integral action for all valves. The installed flow characteristic should be plotted with the help of process and piping engineers for the worst case. When asked why the valve cost is higher, tell them the cheaper valve will cause sloppy control that puts plant safety seriously at risk.
Remember bigger is not better and cheaper is not really cheaper in the long run.
For much more than you have ever wanted on valve response, check out the Control article “How to specify valves and positioners that don’t compromise control” and the associated white paper “Valve Response - Truth or Consequences”.
For much more on valve specification, see the ISA Mentor Program Q&A post “Basic Guidelines for Control Valve Selection and Sizing”.
You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.
The post How to Avoid Using Multivariable Flow Transmitters first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Jeff Downen.
Jeff Downen is an I&C commissioning engineer with cross-training in DCS and high voltage electrical testing. His expertise is in start-up and commissioning of natural gas combined cycle power plants.
Our multivariable flow transmitters on new construction sites fail a lot. If the transmitter loses the RTD, the whole 4-20 loop goes bad quality along with the HART variables. I like the three devices being separate and their signals joined in the DCS logic much more. I understand that it is more expensive. I want to see if there was any other reasoning behind it on the engineering side and how I can help get a better up front design.
How can we avoid the increasing use of multivariable flow transmitters as an industry standard despite a significant loss in reliability, accuracy, and diagnostic and computational capability from not having individual separate pressure, temperature and flow sensors and transmitters?
I like Jeff’s question on multivariable flow transmitters, as it would be relevant to control engineers, maintenance/reliability engineers, as well as maintenance personnel. What is the application? What are the accuracy requirements? Can you bring the individual variables back to the DCS/PLC through additional variable assignment? Would the increased cost of infrastructure justify the increased expense of a true mass flowmeter? This could be addressed from so many different viewpoints it could be a great discussion topic.
I suggest you explain to plant and project personnel the advantages of separate measurements and true mass flowmeters. Separate flow, temperature and pressure measurements offer better diagnostics, reliability, sensor choices, and installation locations, the last being particularly important for temperature (e.g., an RTD in a tapered thermowell with the tip centered in a pipe with a good velocity profile). They can provide faster and perhaps more accurate and maintainable measurements that could be used for performance monitoring calculations and safety instrumented systems.
Coriolis meters provide the only true mass flow measurements, offering an incredibly accurate density measurement as well. Most people don’t realize that pressure and temperature compensation of volumetric flow meters to get a mass flow measurement only works if the concentration is constant and known. The Coriolis mass flow is not affected by component concentrations or physical properties in the same phase. Density can provide an inferential measurement of concentration for a two component process fluid. The Coriolis meter accuracy and rangeability are the best by far, as noted in the Control Talk column Knowing the best is the best.
Using dedicated and separated measurements also allows for the use of hybrid virtual flowmeters in complex process applications where, for example, the technology for inline multiphase flow metering is not yet mature enough, or where physical units will greatly increase the cost of the associated facilities.
With the digital transformation initiatives associated with Industry 4.0, the use of distributed instrumentation, data-driven learning algorithms, and physical flow models is being tested and explored more and more in the process industries, especially in upstream oil & gas wellsite applications.
The post Webinar Recording: Lessons Learned During the Migration to a New DCS first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
This ISA webinar is introduced by Greg McMillan and presented by Hector Torres, in conjunction with the ISA Mentor Program. Hector is a recipient of ISA’s John McCarney Award for his article on opportunities and challenges for enabling new automation engineers, and has been a member of the ISA Mentor Program since its inception. In this webinar, he provides a detailed view of how to use key PID controller features that can greatly expand what you can achieve: the setting of anti-reset windup (ARW) limits, dynamic reset limit, eight different structures, integral dead band, and setpoint filter. Feedforward and rate limiting are covered with some innovative application examples.
Principal ISA Mentor Program mentee Hector Torres shares his extensive knowledge gained after migrating a plant from a 1980s vintage DCS to a state-of-the-art new DCS. The following important topics are covered: the proper setting of tuning parameters, controller output scales, anti-reset windup limits, and the many grounding, wiring, and configuration practices found to be essential in a migration project that exceeded expectations.
Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.
The post, Missed Opportunities in Process Control - Part 6, first appeared on the ControlGlobal.com Control Talk blog.
Here is the Sixth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The post How Often Do Measurements Need to Be Calibrated? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Greg Breitzke.
Greg Breitzke is an E&I reliability specialist – instrumentation/electrical for Stepan. Greg has focused his career on project construction and commissioning as a technician, supervisor, or field engineer. This is his first in-house role, and he is tasked with reviewing and updating plant maintenance procedures for I&E equipment.
I am working through an issue that can be beneficial to other Mentor Program participants. NFPA 70B provides a detailed description of the prescribed maintenance and frequency based on equipment type, making the electrical portion fairly straightforward. The instrumentation is another matter. We are working to consolidate an abundance of current procedures based on make/model to a reduced list based on technology. The strategy is to “right size” frequencies for calibration and functional testing, decreasing non-value maintenance to increase value-added activities within the existing head count.
My current plan for the instrumentation consists of:
I am trying to provide a reference baseline for review of these frequencies, but am having little luck with the industry standards I have access to. Is there a standard or RAGAGEP for calibration and functional testing frequency min/max by technology that I can reference for a baseline?
The ISA recommended practice is not on the process of calibration but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems. While I contributed, Leo Staples would be a good person for more explanation.
For SIS, there is a requirement to perform calibration, which is comparison against a standard device, within a documented frequency and with documented limits, and correction when outside of limits. This is also required by OSHA for critical equipment under the PSM regulation. EPA has similar requirements under PSRM of course. Correction when out of limits is considered a failed proof test of the instrument in some cases, potentially affecting the reliability of the safety function. Paul Gruhn would be a good person for more explanation.
The ISA/IEC 61511 standard is performance based and does not mandate specific frequencies. Devices must be tested at some interval to make sure they perform as intended. The frequency required will be based on many different factors (e.g., SIL (performance) target, failure rate of the device in that service, diagnostic coverage, redundancy used (if any), etc.).
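To make the interplay of those factors concrete, here is a minimal sketch (my illustration, not from ISA/IEC 61511) using the common 1oo1 approximation PFDavg ≈ λDU × TI / 2 to back out the largest proof test interval that still meets a PFDavg target; the failure rate and target below are hypothetical, and a real analysis would also credit diagnostics and redundancy.

```python
# Sketch: first-cut proof test interval for a single (1oo1) SIS device using
# the common approximation PFDavg = lambda_DU * TI / 2 (no diagnostics or
# redundancy credit). All numbers are illustrative.

def max_test_interval_years(pfd_avg_target, lambda_du_per_hr):
    """Largest proof test interval (years) meeting the PFDavg target."""
    ti_hours = 2.0 * pfd_avg_target / lambda_du_per_hr
    return ti_hours / 8760.0

# Hypothetical SIL 2 budget of 5e-3 allocated to a sensor with
# lambda_DU = 2e-7 dangerous undetected failures per hour:
print(max_test_interval_years(5e-3, 2e-7))  # ~5.7 years
```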
Section 5.6 of ISA-RP105.00.01-2017 addresses in detail calibration verification intervals or frequencies. Users should establish calibration intervals for a loop/component based on the following:
Exceptions include SIS related devices where calibration intervals are established to meet SIL requirements. Other factors that can drive calibration intervals include contracts and regulatory requirements.
The idea for the technical report came about after years of frustration dealing with ambiguous gas measurement contracts and government regulations. In many cases these simply stated that users should follow good industry practices when addressing all aspects of calibration.
Calibration intervals alone do not address the other major factors that affect measurement accuracy. These include the accuracy of the calibration equipment, knowledge of the calibration personnel, adherence to defined calibration procedures, and knowledge of the personnel responsible for the calibration program. I have lots of war stories if anyone is interested.
One of the last things that I did at my company before I retired was develop a Calibration Program Standard Operating Procedure (SOP) based on ISA-RP105.00.01-2017. The SOP was designed for use in the Generation, Transmission & Distribution, and other Divisions of the Company. Some of you may find this funny, but it was even used to determine the calibration frequency for NERC CIP physical security entry control point devices. Initially, personnel from the Physical Security Department were testing these devices monthly only because that was what they had always done. While this was before the SOP was established, my team used the concepts in establishing the calibration intervals for these devices. This work was well received by the auditors. As a side note, the review of monthly calibration intervals for these devices found the practice caused more problems than it prevented.
The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.
The measurement drift can provide considerable guidance: when the number of months between calibrations multiplied by the drift per month approaches the allowable error, it is time for a calibration check. Most transmitters today have a low drift rate, but thermocouples and most electrodes have a drift rate much larger than the transmitter. The past records of calibration results will provide an update on actual drift for an application. Also, fouling of sensors, particularly electrodes, is an issue revealed by the 86% response time during calibration tests (often overlooked). The sensing element is the most vulnerable component in nearly all measurements. Calibration checks should be made more frequently at the beginning to establish a drift rate and near the end of the sensor life when drift and failure rates accelerate. Sensor life for pH electrodes can decrease from a year to a few weeks due to high temperature, solids, strong acids and bases (e.g., caustic) and poisonous ions (e.g., cyanide). For every 25°C increase in temperature, the electrode life is cut in half unless a high temperature glass is used.
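As a small sketch of the drift rule in the first sentence above (the rates are hypothetical, and real intervals would also reflect the other factors discussed in this post):

```python
# Sketch: schedule a calibration check before cumulative drift approaches
# the allowable error, with a safety factor so the check comes early.

def months_until_check(allowable_error, drift_per_month, safety_factor=0.8):
    """Months before (drift per month x months) nears the allowable error."""
    return safety_factor * allowable_error / drift_per_month

# A thermocouple drifting ~0.2 degC/month against a 2 degC allowable error:
print(months_until_check(2.0, 0.2))   # 8.0 months
# A low-drift transmitter at 0.01 degC/month:
print(months_until_check(2.0, 0.01))  # 160.0 months (other factors govern)
```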
Accuracy is particularly important for primary loops (e.g., composition, pH, and temperature) to ensure you are at the right operating point. For secondary loops whose setpoint is corrected by a primary loop, accuracy is less of an issue. For all loops, the 5 Rs (reliability, resolution, repeatability, rangeability and response time) are important for measurements and valves.
Drift in a primary loop sensor shows up as a different average controller output for a given production rate, assuming no changes in raw materials, utilities, or equipment. Fouling of a sensor shows up as an increase in dead time and in the loop oscillation period.
Middle signal selection using 3 separate sensors provides an incredible amount of additional intelligence and reliability reducing unnecessary maintenance. Drift shows up as a sensor with a consistently increasing average deviation from the middle value. The resulting offset is obvious. Coating shows up as a sensor lagging changes in the middle value. A decrease in span shows up as a sensor falling short of middle value for a change in setpoint.
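A minimal sketch of middle signal selection (the sensor values are hypothetical):

```python
# Sketch: the middle (median) of three sensors inherently ignores a single
# failure of any type, high or low, with no voting logic required.

def middle_signal(a, b, c):
    """Return the middle of three redundant sensor values."""
    return sorted((a, b, c))[1]

# One electrode failed high; the loop PV is unaffected:
print(middle_signal(7.01, 7.05, 12.0))  # 7.05
# Drift diagnostic: a sensor consistently far from the middle is suspect.
print(abs(12.0 - middle_signal(7.01, 7.05, 12.0)))  # 4.95 deviation
```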
The installed accuracy greatly depends upon installation details and the process fluid, particularly sensor location in terms of seeing a representative indication of the process with minimal measurement noise. Changes in phase can be problematic for nearly all sensors. Impulse lines and capillary systems are a major source of poor measurement performance as detailed in the Control Talk columns Prevent pressure transmitter problems and Your DP problems could be a result of improper use of purges, fills, capillaries and seals.
At the end of this post, I give a lot more details on how to minimize drift and maximize accuracy and repeatability by better temperature and pH sensors and through middle signal selection.
For an additional educational resource, download Calibration Essentials, an informative eBook produced by ISA and Beamex. The free e-book provides vital information about calibrating process instruments today. To download the eBook, click this link.
There is no easy answer to this very complicated question. Unfortunately the answer is ‘it depends’ but I’ll do my best to cover the main points in this short reply.
1) Yes, there are some instrument technologies that have a tendency to drift more than others. A partial list of ‘drifters’ might include:
2) Some instrument technologies don’t drift as much. I’ve had good success with Coriolis and radar. (Radar doesn’t usually drift as much as it just cuts out. Coriolis usually works or it doesn’t. Obviously there are situations where either can drift but they are better than most.) DP in clean service with no diaphragm seals is usually pretty trouble free, especially the newer transmitters that are much more stable.
3) The criticality of the service obviously impacts how often one needs to calibrate. Any of these issues could dramatically impact the frequency:
4) Obviously if a frequency is dictated by the service then that is the end of that. Once those are out of the way one can usually look at the service and come up with at least a reasonable calibration frequency as a starting point. Start calibrating at that frequency and then monitor history. If you are checking a meter every six months and have checked it 4 times in the last two years and the drift has remained less than 50% of the tolerance, then dropping back to a 12 month calibration cycle makes perfect sense (see the sketch after this list). Similarly if you calibrate every 6 months and find the meter drift is > 50% every calibration then you probably need to calibrate more often. However if the meter is older it may be cheaper to replace it with a newer transmitter which is more stable.
5) The last comment I’ll make is to make sure you are actually calibrating something that matters. I could go on for pages about companies who are diligently calibrating their instrumentation but aren’t actually calibrating their instrumentation. In other words they go through the motions, fill out the paperwork, and can point to reams of calibration logs, yet they aren’t adequately testing the instrument loop and it could still be completely wrong. (For instance, shooting a temperature transmitter loop but not actually checking the RTD or thermocouple that feeds it, or using a simulator to shoot a 4-20 mA signal into the DCS to check the DCS reading but not actually testing the instrument itself.) They often check one small part of the loop and after a successful test, consider the whole loop ‘calibrated’.
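Here is a small sketch of the interval adjustment rule in point 4 (the doubling and halving policy is my illustration of the 50%-of-tolerance guideline, not a standard):

```python
# Sketch: lengthen the calibration interval when observed drift stays well
# under the tolerance, shorten it when drift routinely exceeds 50% of it.

def next_interval_months(current_months, worst_drift, tolerance):
    """Suggest the next calibration interval from drift history."""
    ratio = worst_drift / tolerance
    if ratio < 0.5:
        return current_months * 2        # e.g., 6 -> 12 months
    return max(1, current_months // 2)   # calibrate more often

print(next_interval_months(6, 0.2, 1.0))  # 12
print(next_interval_months(6, 0.7, 1.0))  # 3
```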
The Process/Industrial Instruments and Controls Handbook Sixth Edition 2019, edited by me and Hunter Vegas, provides insight on how to maximize accuracy and minimize drift for most types of measurements. The following excerpt written by me is for temperature:
The repeatability, accuracy and signal strength are two orders of magnitude better for an RTD compared to a TC. The drift for an RTD below 400°C is also two orders of magnitude less than for a TC. The 1 to 20°C drift per year of a TC is of particular concern for biological and chemical reactor and distillation control because of the profound effect on product quality from control at the wrong operating point. The already exceptional accuracy for a Class A RTD of 0.1°C can be improved to 0.02°C by “sensor matching,” where the four constants of a Callendar-Van Dusen (CVD) equation provided by the supplier for the sensor are entered into the transmitter. The main limit to accuracy of an RTD is the wiring.
The use of three extension lead wires between the sensor and transmitter or input card can enable the measurement to be compensated for changes in resistance in the lead wires due to temperature, assuming the change is exactly the same for both lead wires. The use of four extension lead wires enables total compensation that accounts for the inevitable uncertainty in resistance of lead wires. Standard lead wires have a tolerance of 10% in resistance. For 500 feet of 20 gauge lead wire, the error could be as large as 26°C for a 2-wire RTD and 2.6°C for a 3-wire RTD. The best practice is to use a 4-wire RTD unless the transmitter is located close to the sensor, preferably on the sensor. The transmitter accuracy is about 0.1°C.
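The quoted lead wire errors can be checked with back-of-envelope arithmetic, assuming roughly 0.010 Ω per foot for 20 gauge copper and a Pt100 sensitivity of about 0.385 Ω/°C (both assumed typical values):

```python
# Sketch: lead wire error for a Pt100 RTD with 500 ft of 20 gauge wire.

OHMS_PER_FT_20AWG = 0.010     # assumed typical 20 AWG copper resistance
PT100_OHMS_PER_DEGC = 0.385   # approximate Pt100 sensitivity

length_ft = 500
lead_ohms = 2 * length_ft * OHMS_PER_FT_20AWG  # both leads in series

# 2-wire: all lead resistance reads as extra element resistance.
error_2wire = lead_ohms / PT100_OHMS_PER_DEGC
# 3-wire: compensation cancels the nominal lead resistance, but the 10%
# wire tolerance (lead mismatch) remains uncompensated.
error_3wire = 0.10 * lead_ohms / PT100_OHMS_PER_DEGC

print(round(error_2wire), round(error_3wire, 1))  # 26 and 2.6 (degC)
```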
A handheld signal generator of resistance and voltage can be used to simulate the sensor to check or change a transmitter calibration. The sensor connected to the transmitter with linearization needs to be inserted in a dry block simulator. A bath can be used at low temperatures to test thermowell response time, but a dry block is better for calibration. The reference temperature sensor in the block or bath should be 4 times more accurate than the sensor being tested. The block or bath readout resolution must be better than the best possible precision of the sensor. The block or bath calibration system should have accuracy traceable to the National Metrology Institute of the user country (NIST in the USA).
The accuracy at the normal setpoint to ensure the proper process operating point must be confirmed by a temperature test with a block. For factory assembled and calibrated sensor and thermowell with integral temperature transmitter, a single point temperature test in a dry block is usually sufficient with minimal zero or offset adjustment needed. For an RTD with “sensor matching,” adjustment is often not needed. For field calibration, the temperature of the block must be varied to cover the calibration range to set the linearization, span and zero adjustments. For field assembly, it would be wise to check the 63% response time in a bath.
The best solution in terms of increasing reliability, maintainability, and accuracy for all sensors with different durations of process service is automatic selection of the middle value for the loop process variable (PV). A very large chemical intermediates plant extended middle signal selection to all measurements, which in combination with triple redundant controllers essentially eliminated what had been one or more spurious trips per year. Middle signal selection was a requirement for all pH loops in Monsanto and Solutia.
The return on investment for the additional electrodes from improved process performance and reduced life cycle costs is typically more than enough to justify the additional capital costs for biological and chemical processes if the electrode life expectancy has been proven to be acceptable in lab tests for harsh conditions. The use of the middle signal inherently ignores a single failure of any type including the most insidious failure that gives a pH value equal to the set point. The middle value reduces noise without the introduction of the lag from damping adjustment or signal filter and facilitates monitoring the relative speed of the response and drift, which are indicative of measurement and reference electrode coatings, respectively. The middle value used as the loop PV for well-tuned loops will reside near the set point regardless of drift.
A drift in one of the other electrodes is indicative of a plugging or poisoning of its reference. If both of the other electrodes are drifting in the same direction, the middle value electrode probably has a reference problem. If the change in pH for a set point change is slower or smaller for one of the other electrodes, it indicates a coating or loss in efficiency, respectively, for the subject glass electrode. Loss of pH glass electrode efficiency results from deterioration of the glass surface due to chemical attack, dehydration, non-aqueous solvents, and aging accelerated by high process temperatures. Decreases in glass electrode shunt resistance caused by exposure of O-rings and seals to a harsh or hot process can also cause a loss in electrode efficiency.
Here is some detailed guidance on pH electrode calibration from the ISA book Essentials of Modern Measurements and Final Control Elements.
Buffer calibrations use two buffer solutions, usually at least 3 pH units apart, which allow the pH analyzer to calculate a new slope and zero value corresponding to the particular characteristics of the sensor, to more accurately derive pH from the millivolt and temperature signals.
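A minimal sketch of the slope and zero calculation from two buffer readings (the millivolt values are hypothetical; a real analyzer also temperature-compensates the slope):

```python
# Sketch: derive electrode slope (mV/pH) and zero offset (mV at pH 7) from
# two buffer readings, then convert a process mV reading to pH.

def two_point_calibration(ph1, mv1, ph2, mv2):
    """Return (slope in mV/pH, offset in mV at pH 7)."""
    slope = (mv2 - mv1) / (ph2 - ph1)
    offset = mv1 - slope * (ph1 - 7.0)
    return slope, offset

def mv_to_ph(mv, slope, offset):
    return 7.0 + (mv - offset) / slope

slope, offset = two_point_calibration(4.01, 168.0, 7.00, -2.0)
print(round(slope, 1), round(offset, 1))         # -56.9 mV/pH, -2.0 mV
print(round(mv_to_ph(-60.0, slope, offset), 2))  # 8.02 pH
```

The recovered slope (about -57 mV/pH here) can be compared against the theoretical Nernst slope of about -59.2 mV/pH at 25°C; a badly degraded slope indicates loss of electrode efficiency.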
Standardization is a simple zero adjustment of a pH analyzer to match the reading of a sample of the process solution made using a laboratory or portable pH analyzer. Standardization eliminates the removal and handling of electrodes and the upset to the equilibrium of the reference electrode junction. Standardization also takes into account the liquid junction potential from high ionic strength solutions and non-aqueous solvents in chemical reactions that would not be seen in buffer solutions. For greatest accuracy, samples should be immediately measured at the sample point with a portable pH meter.
If a lab sample measurement value is used, it must be time stamped and the lab value compared to a historical online value for a calibration adjustment. The middle signal selected value from three electrodes of different ages can be used instead of a sample pH provided that a dynamic response to load disturbances or setpoint changes of at least two electrodes is confirmed. If more than one electrode is severely coated, aged, broken or poisoned, the middle signal is no longer representative of the actual process pH.
The calibration of pH electrodes for non-aqueous solutions is even more challenging as discussed in the Control Talk column The wild side of pH measurement.
Here is the Fifth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The post Basic Guidelines for Control Valve Selection and Sizing first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hiten Dalal.
Hiten Dalal, PE, PMP, is senior automation engineer for products pipeline at Kinder Morgan, Inc. Hiten has extensive experience in pipeline pressure and flow control.
Are there basic rule-of-thumb guidelines for control valve sizing, outside of relying on the valve supplier and using the valve manufacturer’s sizing program?
Selecting and sizing control valves seems to have become a lost art. Most engineers toss it over the fence to the vendor along with a handful of (mostly wrong) process data values, and a salesperson plugs the values into a vendor program which spits out a result. Control valves often determine the capability of the control system, and a poorly sized and selected control valve will make tight control impossible regardless of the control strategy or tuning employed. Selecting the right valve matters!
There are several aspects of sizing/selecting a control valve that must be addressed:
Note that gathering this data is probably the hardest part. It often takes a sketch of the piping, an understanding of the process hydraulics, and examination of the system pump curves to determine the real pressure drops under various conditions. Note too that the DP may change when you select a valve, since it might require pipe reducers/expanders to be installed in a pipe that is sized larger.
This can be another difficult task. Ideally the control valve response should be linear (from the control system’s perspective). If the PID output changes 5%, the process should respond in a similar fashion regardless of where the output is. (In other words 15% to 20% or 85% to 90% should ideally generate the same process response). If the valve response is non-linear, control becomes much more difficult. (You can tune for one process condition but if conditions change the dynamics change and now the tuning doesn’t work nearly as well.) The valve response is determined by a number of items including:
The user has to understand all of these conditions so he/she can pick the right valve plug. Ideally you pick a valve characteristic that will offset the non-linear effects of the process and make the overall response of the system linear.
That complicates matters still further because now you’ll need to know a lot more about the process fluid itself. If you are faced with cavitation or flashing you may need to know the vapor pressure and critical pressure of the fluid. This information may be readily available, or not if the fluid is a mix of products. Choked flow conditions are usually accompanied by noise problems and will also require additional fluid data to perform the calculations. Realize too that the selection of the valve internals will have a big impact on the flow rates, response, etc. (You’ll be looking at anti-cav trim, diffusers, etc.)
Usually the vendor’s program is a good place to start, but some programs are much better than others because some have more process data ‘built in’ and have the advanced calculations required to handle cavitation, flashing, choked flow, and noise. Others are very simplistic and may not handle the more advanced conditions. Theoretically you could use any vendor’s program to size any valve, but obviously the vendor program will typically have only its own valve data built in, so if you use a different program you’ll have to enter that data (if you can find it!). One caution about this: some vendors have different valve constants which can be difficult to convert.
Hope this helped. It was probably a bit more than you were wanting but control valve selection and sizing is a lot more complicated than most realize.
Hunter did a great job of providing detailed concise advice. My offering here is to help avoid the common problems from an inappropriate focus on maximizing valve capacity, minimizing valve pressure drop, minimizing valve leakage and minimizing valve cost. All these things have resulted in “on-off valves” posing as “throttling valves” creating problems of poor actuator and positioner sensitivity, excessive backlash and stiction, unsuspected nonlinearity, poor rangeability, and smart positioners giving dumb diagnostics.
While certain applications, such as pH control, are particularly sensitive to these valve problems, nearly all loops will suffer from backlash and stiction exceeding 5% (quite common with many “on-off valves”) causing limit cycles that can spread through the process. These “on-off valves” are quite attractive because of the high capacity and low pressure drop, leakage and cost. To address leakage requirements, a separate tight shutoff valve should be used in series with a good throttling valve and coordinated to open and close to enable a good throttling valve to smoothly do its job.
Unfortunately there is nothing on a valve specification sheet that requires the valve have a reasonably precise and timely response to signals and not create oscillations from a loop simply being in automatic making us extremely vulnerable to common misconceptions. The most threatening one that comes to mind in selection and sizing is that rangeability is determined by how well a minimum Cv matches the theoretical characteristic. In reality, the minimum Cv cannot be less than the backlash and stiction near the seat. Most valve suppliers will not provide backlash and stiction for positions less than 40% because of the great increase from the sliding stem valve plug riding the seat or the rotary disk or ball rubbing the seal. Also, tests by the supplier are for loose packing. Many think piston actuators are better than diaphragm actuators.
Maybe the physical size and cost are less and the capability for thrust and torque higher, but the sensitivity is an order of magnitude less and the vulnerability to actuator seal problems much greater. Higher pressure diaphragm actuators are now available, enabling use on larger valves and pressure drops. One more major misconception is that boosters should be used instead of positioners on fast loops. This is downright dangerous due to positive feedback between flexure of the diaphragm slightly changing actuator pressure and the extremely high booster outlet port sensitivity. To reduce response time, the booster should be put on the positioner output with a bypass valve opened just enough to stop high frequency oscillations by allowing the positioner to see the much greater actuator and booster volume.
The following excerpt from the Control Talk blog Sizing up valve sizing opportunities provides some more detailed warnings:
We are pretty diligent about making sure the valve can supply the maximum flow. In fact, we can become so diligent we choose a valve size much greater than needed, thinking bigger is better in case we ever need more. What we often do not realize is that the process engineer has already built in a factor to make sure there is more than enough flow in the given maximum (e.g., 25% more than needed). Since valve size and valve leakage are the prominent requirements on the specification sheet once the materials of construction requirements are clear, we are set up for a bad scenario of buying a larger valve with higher friction.
The valve supplier is happy to sell a larger valve and the piping designer is happier that not much or any of a pipe reducer is needed for valve installation and the pump size may be smaller. The process is not happy. The operators are not happy looking at trend charts unless the trend chart time and process variable scales are so large the limit cycle looks like noise. Eventually everyone will be unhappy.
The limit cycle amplitude is large because of greater friction near the seat and the higher valve gain. The amplitude in flow units is the percent resolution (e.g., % stick-slip) multiplied by the valve gain (e.g., delta pph per delta % signal). You get a double whammy from a larger resolution limit and a larger valve gain. If you further decide to reduce the pressure drop allocated to the valve as a fraction of total system pressure drop to less than 0.25, a linear characteristic becomes quick opening, greatly increasing the valve gain near the closed position. For a fraction much less than 0.25 and an equal percentage trim, you may be literally and figuratively bottoming out for the given R factor that sets the rangeability of the inherent flow characteristic (e.g., R=50).
What can you do to lead the way and become the “go to” resource for intelligent valve sizing?
You need to compute the installed flow characteristic for various valve and trim sizes as discussed in the Jan 2016 Control Talk post Why and how to establish installed valve flow characteristics. You should take advantage of supplier software and your company’s mechanical engineer’s knowledge of the piping system design and details.
You must choose the right inherent flow characteristic. If the pressure drop available to the control valve is relatively constant, then linear trim is best because the installed flow characteristic is then the inherent flow characteristic. The valve pressure drop can be relatively constant due to a variety of reasons, most notably pressure control loops or changes in pressure in the rest of the piping system being negligible (frictional losses in the system piping negligible). For more on this see the 5/06/2015 Control Talk blog Best Control Valve Flow Characteristic Tips.
On the installed flow characteristic you need to make sure the valve gain in percent (% flow per % signal) from minimum to maximum flow does not change by more than a factor of 4 (e.g., 0.5 to 2.0) with the minimum gain greater than 0.25 and the maximum gain less than 4. For sliding stem valves, this valve gain requirement corresponds to minimum and maximum valve positions of 10% and 90%. For many rotary valves, this requirement corresponds to minimum and maximum disk or ball rotations of 20 degrees and 50 degrees.
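To check this gain guideline before buying the valve, the installed flow characteristic can be computed from the standard constant-system-drop relation q/qmax = f(x)/√(dr + (1−dr)·f(x)²), where f(x) is the inherent characteristic and dr is the valve drop to system drop ratio at maximum flow. Here is a minimal sketch for an equal percentage valve (R and dr are illustrative; real work should use the supplier software and piping knowledge mentioned above):

```python
# Sketch: installed gain of an equal percentage valve at 10%, 50% and 90%
# positions for a given valve drop to system drop ratio (dr).

R = 50      # inherent equal percentage rangeability factor
dr = 0.25   # valve pressure drop / total system drop at maximum flow

def installed_flow(x):
    f = R ** (x - 1.0)                        # inherent characteristic, 0..1
    return f / (dr + (1.0 - dr) * f * f) ** 0.5

def installed_gain(x, dx=0.001):
    """Approximate % flow per % signal by central difference."""
    return (installed_flow(x + dx) - installed_flow(x - dx)) / (2 * dx)

for pos in (0.10, 0.50, 0.90):
    print(f"{pos:.0%} position: gain ~ {installed_gain(pos):.2f}")
# ~0.23, ~1.01 and ~1.45 for these numbers: the 10% gain falls just below
# the 0.25 minimum, showing why the low end of travel deserves scrutiny.
```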
Furthermore, the limit cycle amplitude, being the resolution in percent multiplied by the valve gain in flow units (e.g., pph per %) and by the process gain in engineering units (e.g., pH per pph), must be less than the allowable process variability (e.g., pH). The amplitude and conditions for a limit cycle from backlash are a bit more complicated but still computable. For sliding stem valves, you have more flexibility in that you may be able to change out trim sizes as the process requirements change. Plus, sliding stem valves generally have a much better resolution if you have a sensitive diaphragm actuator with plenty of thrust or torque and a smart positioner.
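A minimal sketch of the limit cycle amplitude estimate in the first sentence (all numbers are illustrative):

```python
# Sketch: limit cycle amplitude from stick-slip, in engineering units.

resolution_pct = 0.5    # % stick-slip near the operating position
valve_gain = 120.0      # pph per % signal, from installed characteristic
process_gain = 0.004    # pH per pph, from the titration curve slope

amplitude = resolution_pct * valve_gain * process_gain
print(f"limit cycle amplitude ~ {amplitude:.2f} pH")  # 0.24 pH
```

If the resulting 0.24 pH exceeds the allowable variability, the combination of valve resolution, trim size, and operating point has to change.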
The books Tuning and Control Loop Performance Fourth Edition and Essentials of Modern Measurements and Final Elements have simple equations to compute the installed flow characteristic and the minimum possible Cv for controllability based on the theoretical inherent flow characteristic, the valve drop to total system pressure drop ratio, and the resolution limit.
Here is some guidance from “Chapter 4 – Best Control Valves and Variable Frequency Drives” of Process/Industrial Instruments and Controls Handbook Sixth Edition that Hunter and I just finished with the contributions of 50 experts in our profession to address nearly all aspects of achieving the best automation project performance.
The effects of resolution limits from stiction and dead band from backlash are most noticeable for changes in controller output less than 0.4%, and the effect of rate limiting is greatest for changes greater than 40%. For PID output changes of 2%, a poor valve or VFD design and setup are not very noticeable. An increase in PID gain resulting in changes in PID output greater than 0.4% can reduce oscillations from poor positioner design and dead band.
The requirements in terms of 86% response time and travel gain (change in valve position divided by change in signal) should be specified for small, medium and large signal changes. In general, the travel gain requirement is relaxed for small signal changes due to effect of backlash and stiction, and the 86% response time requirement is relaxed for large signal changes due to the effect of rate limiting. The measurement of actual valve travel is problematic for on-off valves posing as throttling valves because the shaft movement is not disk or ball movement. The resulting difference between shaft position and actual ball or disk position has been observed in several applications to be as large as 8 percent.
Use sizing software with physical properties for worst case operating conditions. The minimum valve position must be greater than the backlash and dead band. For a relatively good installed flow characteristic (valve drop to system pressure drop ratio greater than 0.25), there are minimum and maximum positions during sizing that keep the gain nonlinearity to less than 4:1. For sliding stem valves, the minimum and maximum valve positions are typically 10% and 90%, respectively. For many rotary valves, the minimum and maximum disk or ball rotations are typically 20 degrees and 50 degrees, respectively. The range between minimum and maximum positions or rotations can be extended by signal characterization to linearize the installed flow characteristic.
For much more on valve response see the Control feature article How to specify valves and positioners that do not compromise control.
The best book I have for understanding the many details of valve design is Control Valves for the Chemical Process Industries written by Bill Fitzgerald and published by McGraw-Hill. The book that is specifically focused on this Q&A topic is Control Valve Selection and Sizing written by Les Driskell and published by ISA. Most of my books in my office are old like me. Sometimes newer versions do not exist or are not as good.
The post What Are the Opportunities for Nonlinear Control in Process Industry Applications? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. These questions come from Flavio Briquente and Syed Misbahuddin.
Model predictive control (MPC) has a proven successful history of providing extensive multivariable control and optimization. The applications in refineries are extensive, forcing the PID in most cases to take a backseat. These processes tend to employ very large MPC matrices and extensive optimization by linear programs (LP). The models are linear and may be switched for different product mixtures. The plants tend to have more constant production rates and greater linearity than seen in specialty chemical and biological processes.
MPC is also widely used in petrochemical plants. The applications in other parts of the process industry are increasing but tend to use much smaller MPC matrices focused on a unit operation. MPC offers dynamic decoupling, disturbance and constraint control. To do the same with PID requires dynamic compensation of decoupling and feedforward signals and override control. The software to accomplish dynamic compensation for the PID is not well explained or widely used. Also, interactions and override control involving more than two process variables are more challenging than most practitioners can address. MPC is easier to tune and has an integrated LP for optimization.
Flavio Briguente is an advanced process control consultant at Evonik in North America, and is one of the original protégés of the ISA Mentor Program. Flavio has expertise in model predictive control and advanced PID control. He has worked at Rohm and Haas Company and Monsanto Company. At Monsanto, he was appointed to the manufacturing technologist program, and served as the process control lead at the Sao Jose dos Campos plant in Brazil and a technical reference for the company’s South American sites. During his career, Flavio focused on different manufacturing processes, and made major contributions in optimization, advanced control strategies, Six Sigma and capital projects. He earned a chemical engineering degree from the University of São Paulo, a post-graduate degree in environmental engineering from FAAP, a master’s degree in automation and robotics from the University of Taubate, and a PhD in material and manufacturing processes from the Aeronautics Institute of Technology.
Syed Misbahuddin is an advanced process control engineer for a major specialty chemicals company with experience in model predictive control and advanced PID control. Before joining industry, he received a master’s degree in chemical engineering with a focus on neural network-based controls. Additionally, he is trained as a Six Sigma Black Belt, which focuses on utilizing statistical process control for variability reduction. This combination helps him implement controls utilizing physics-based as well as data-driven methods.
The considerable experience and knowledge of Flavio and Syed blurs the line between protégé and resource leading to exceptionally technical and insightful questions and answers.
Can the existing MPC/APC techniques be applied for batch operation? Is there a non-linear MPC application available? Is there a known case in operation for chemical industry? What are the pros and cons of linear versus nonlinear MPC?
MPC was originally developed for continuous or semi-continuous processes. It is based on a receding horizon where the prediction and control horizons are fixed and shifted forward at each execution of the controller. Most MPCs include an optimizer that optimizes the steady state at the end of the horizon, which the dynamic part of the MPC steers towards.
Batch processes are by definition non-steady-state and typically have an end-point condition that must be met at batch end and usually have a trajectory over time that controlled variables (CVs) are desired to follow. As a result, the standard MPC algorithm is not appropriate for batch processes and must be modified (note: there may be exceptions to this based on the application). I am aware of MPC batch products available in the market, but I have no experience with them. Due to the nonlinear nature of batch processes, especially those involving exothermic reaction, a nonlinear MPC may be necessary.
By far, the majority of MPCs applied industrially utilize a linear model. Many of the commercial linear packages include provisions for managing nonlinearities, such as using linearizing transformations or changing the gains, dynamics, or the models themselves. A typical approach is to apply a nonlinear static transformation to a manipulated variable or a controlled variable, commonly called Hammerstein and Wiener transformations, respectively. An example is characterizing the valve-flow relationship or controlling the logarithm of a distillation composition. Transformations are performed before or after the MPC engine (optimization) so that a linear optimization problem is retained.
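As a small sketch of an output-side (Wiener-type) transformation, here is the logarithm-of-composition example; the function names and numbers are mine, not from any MPC package:

```python
# Sketch: the linear MPC controls log10(impurity) so the gain it sees is
# more uniform across the operating range; ppm values are illustrative.
import math

def cv_transform(impurity_ppm):
    """Transformed controlled variable given to the linear MPC."""
    return math.log10(max(impurity_ppm, 1e-6))  # guard against zero

def cv_untransform(log_ppm):
    """Back-transform for operator display and ppm setpoint entry."""
    return 10.0 ** log_ppm

sp_ppm = 50.0
print(cv_transform(sp_ppm))                  # MPC works on ~1.70
print(cv_untransform(cv_transform(sp_ppm)))  # ~50.0 ppm round trip
```

A Hammerstein (input-side) transformation works the same way on a manipulated variable, for example characterizing the valve-flow relationship before the signal reaches the model.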
Given the success of modeling chemical processes, it may be surprising that linear, empirically developed models are still the norm. The reason is that it is still quicker and cheaper to develop an empirical model, and linear models most often perform well for the majority of processes, especially with the nonlinear capabilities mentioned previously.
Nonlinear MPC applications tend to be reserved for those applications where nonlinearities are present in both system gains and dynamic responses and the controller must operate at significantly different targets. Nonlinear MPC is routinely applied in polymer manufacturing. These applications typically have less than five manipulated variables (MVs). A range of models have been used in nonlinear MPC, including neural nets, first principles, and hybrid models that combine first principle and empirical models.
A potential disadvantage of developing a nonlinear MPC application is the time necessary to develop and validate the model. If a first principle model is used, lower level PID loops must also be modeled if the dynamics are significant (i.e., cannot be ignored). With empirical modeling, the dynamics of the PID loops are embedded in the plant responses. Compared to a linear model, a nonlinear model will also require more computation time, so one would need to ensure that the controller can meet the required execution period based on the dynamics of the process and disturbances. In addition, there may be decisions around how to update the model, i.e., which parameters or biases to adjust. For these reasons, nonlinear MPC is reserved for those applications that cannot be adequately controlled with linear MPC.
My opinion is that we’ll be seeing more nonlinear applications once it becomes easier to develop nonlinear models. I see hybrid models being critical to this. Known information would be incorporated and unknown parts would be described using empirical models built with a range of techniques that might include machine learning. Such an approach might actually reduce the time of model development compared to linear approaches.
MPC for batch operations can be achieved by translating the controlled variable from batch temperature or composition with a unidirectional response (e.g., increasing temperature or composition) to the slope of the batch profile (temperature or composition rate of change), as noted in my article Get the Most out of Your Batch. You then have a continuous type of process with a bi-directional response. There is still potentially a nonlinearity issue. For a perspective on the many challenges see my blog Why batch processes are difficult.
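A minimal sketch of the slope translation, using a filtered rate of change of batch temperature as the loop PV (the execution period and filter time are illustrative):

```python
# Sketch: turn a unidirectional batch temperature into a bi-directional
# slope PV by differencing and first-order filtering.

def make_slope_pv(dt=1.0, tau_f=30.0):
    state = {"last": None, "slope": 0.0}
    alpha = dt / (tau_f + dt)          # first-order filter coefficient
    def update(pv):
        if state["last"] is not None:
            raw = (pv - state["last"]) / dt        # degC per second
            state["slope"] += alpha * (raw - state["slope"])
        state["last"] = pv
        return state["slope"]
    return update

slope_pv = make_slope_pv(dt=2.0, tau_f=60.0)
for temp in (50.0, 50.1, 50.25, 50.45, 50.7):
    print(round(slope_pv(temp), 4))    # slope PV rises as heat-up accelerates
```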
I agree with Mark Darby that the use of hybrid systems where nonlinear models are integrated could be beneficial. My preference would be in the following order in terms of ability to understand and improve:
There is an opportunity to use principal components for neural network inputs to eliminate correlations between inputs and to reduce the number of inputs. You are much more vulnerable with black box approaches like neural networks to inadequacies in training data. More details about the use of NN and recent advances will be discussed in a subsequent question by Syed.
There is some synergy to be gained by using the best of what each of the above has to offer. In the literature and in practice, experts in a particular technology often do not see the benefit of other technologies. There are exceptions, as seen in papers referenced in my answer to the next question. I personally see benefits in running a first principle model (FPM) to understand causes and effects and to identify process gains. Not realized is that the FPM parameters in a virtual plant that uses a digital twin running in real time with the same setpoints as the actual plant can be adapted by use of an MPC. In the next section we will see how a NN can be used to help a FPM.
Signal characterization is a valuable tool to address nonlinearities in the valve and process as detailed in my blog Unexpected benefits of signal characterizers. I tried using NN to predict pH for a mixture of weak acids and bases and found better results from the simple use of a signal characterizer. Part of the problem is that the process gain is inversely proportional to production rate as detailed in my blog Hidden factor in our most important control loops.
Since dead time mismatch has a big effect on MPC performance as detailed in the ISA Mentor Post How to Improve Loop Performance for Dead Time Dominant Systems, an intelligent update of dead time simply based on production rate for a transportation delay can be beneficial.
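A sketch of such an update for a plug flow transportation delay, which is simply volume divided by volumetric flow (the volume and flows are illustrative):

```python
# Sketch: rescale the model dead time online from the measured feed rate.

def transport_delay_s(pipe_volume_m3, flow_m3_per_h):
    """Plug flow transportation delay in seconds."""
    return 3600.0 * pipe_volume_m3 / max(flow_m3_per_h, 1e-6)

for flow in (40.0, 20.0, 10.0):   # turndown doubles, then quadruples delay
    print(flow, "m3/h ->", round(transport_delay_s(2.0, flow)), "s")
```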
Recently, there has been an increased focus on the use of deep neural networks for artificial intelligence (AI) applications. Deep signifies many hidden layers. Recurrent neural networks have also been able in some cases to ensure relationships are cause and effect rather than just correlations. They use a rather black box approach with models built from training data. How successful are deep neural networks in process control?
Pavilion Technologies in Austin has integrated neural networks with model predictive control. Successful applications in the optimization of ethanol processes were reported a decade ago. In the Pavilion 1996 white paper “The Process Perfector: The next step to Multivariable Control and Optimization,” it appears that process gains, possibly from step testing of a FPM or bump testing of the actual process for an MPC, were used as the starting point. The NN was then able to provide a nonlinear model of the dynamics given the steady state gains. I am not sure what complexity of dynamics can be identified. The predictions of NN for continuous processes have the most notable successes in plug flow processes where there is no appreciable process time constant and the process dynamics simplify to a transportation delay. Examples of successes of NN for plug flow include dryer moisture, furnace CO, and kiln or catalytic reactor product composition prediction. Possible applications also exist for inline systems and sheets in pulp and paper processes and for extruders and static mixers.
While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has specific growth rate and product formation rate computed as a multiplication of factors, each between 0 and 1, that detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die; it does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to one and as constant as possible except for the factor of greatest interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly one.
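A minimal sketch of the multiplicative structure (all parameter values are hypothetical; the glucose term uses the Michaelis-Menten form):

```python
def specific_growth_rate(mu_max, temp_f, ph_f, do_f, glucose, Ks,
                         inhibitor, Ki):
    """Specific growth rate as a product of dimensionless factors (0 to 1).

    Glucose limitation uses the Michaelis-Menten (Monod) form S/(Ks+S);
    inhibition uses the common form Ki/(Ki+I). The multiplicative
    structure is the point - every factor scales every other factor.
    """
    glucose_f   = glucose / (Ks + glucose)
    inhibitor_f = Ki / (Ki + inhibitor)
    return mu_max * temp_f * ph_f * do_f * glucose_f * inhibitor_f

# If the temperature factor collapses to 0.05, ample glucose cannot help:
print(specific_growth_rate(0.5, 0.05, 0.95, 0.9, glucose=10, Ks=0.2,
                           inhibitor=0.1, Ki=2.0))   # ~0.02 1/h
# With all other factors near one, the glucose term dominates identification:
print(specific_growth_rate(0.5, 0.98, 0.95, 0.9, glucose=0.1, Ks=0.2,
                           inhibitor=0.0, Ki=2.0))   # glucose-limited
```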
Process control is about changes in process inputs and consequential changes in process outputs. If there is no change, you cannot identify the process gain or dynamics. We know this is necessary in the identification of models for MPC and for PID tuning and feedforward control. We often forget this in the data sets used to develop data models. A smart design of experiments (DOE) is really best to get data sets that show changes in process outputs for changes in process inputs and to cover the range of interest. If setpoints are changed for different production rates and products, existing historical data may be rich enough if carefully pruned. Remember, neural network models, like statistical models, are correlations and not cause and effect. Review by people knowledgeable in the process and control system is essential.
Time synchronization of process inputs with process outputs is needed for continuous but not necessarily for batch models, explaining the notable successes in predicting batch end points. Often delays are inserted on continuous process inputs. This is sufficient for plug flow volumes, such as dryers, where the dynamics are principally a transport delay. For back mixed volumes such as vessels and columns, a time lag and delay should be used that are dependent upon production rate. Neural network (NN) models are more difficult to troubleshoot than data analytic models and are vulnerable to correlated inputs (data analytics benefits from principal component analysis and drill down to contributors). NN models can introduce localized reversals of slope and bizarre extrapolation beyond training data not seen in data analytics. Data analytics’ piecewise linear fit can successfully model nonlinear batch profiles. To me this is similar in principle to the use of signal characterizers to provide a piecewise fit of titration curves.
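A minimal sketch of synchronizing a continuous input before computing a correlation (synthetic data; the dead time and lag values are assumptions):

```python
import numpy as np

def synchronized_corr(u, y, dead_time, tau, dt):
    """Correlate a process input with a process output after shifting the
    input by the dead time and applying a first-order lag (approximating
    a back mixed volume). dead_time, tau, and dt share the same units."""
    shift = int(round(dead_time / dt))
    u_d = np.roll(u, shift)
    u_d[:shift] = u[0]                        # pad the start of the record
    alpha = dt / (tau + dt)
    u_sync = np.empty_like(u_d)
    u_sync[0] = u_d[0]
    for i in range(1, len(u_d)):              # first-order lag filter
        u_sync[i] = u_sync[i-1] + alpha * (u_d[i] - u_sync[i-1])
    return np.corrcoef(u_sync, y)[0, 1]

# Hypothetical data: the output responds to the input 30 samples later
rng = np.random.default_rng(1)
u = rng.normal(size=2000)
y = 0.8 * np.roll(u, 30) + rng.normal(scale=0.5, size=2000)
print(np.corrcoef(u, y)[0, 1])                        # ~0 without sync
print(synchronized_corr(u, y, dead_time=30, tau=1, dt=1))  # much higher
```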
Process inputs and outputs that are coincidental are an issue for process diagnostics and predictions by MVSPC and NN models. Coincidences can come and go and may never appear again. They can be caused by unmeasured disturbances (e.g., concentrations of unrealized inhibitors and contaminants), operator actions (e.g., largely unpredictable and unrepeatable), operating states (e.g., controllers not in highest mode or at output limits), weather (e.g., blue northers), poor installations (e.g., unsecured capillary blowing in wind), and just bad luck.
I found a 1998 Hydrocarbon Processing article by Aspen Technology Inc. “Applying neural networks” that provides practical guidance and opportunities for hybrid models.
The dynamics can be adapted and cause and effect relationships increased by advancements associated with recurrent neural networks as discussed in Chapter 2 Neural Networks with Feedback and Self-Organization in The Fundamentals of Computational Intelligence: System Approach by Mikhail Z. Zgurovsky and Yuriy P. Zaychenko (Springer 2016).
The companies best known for neural net-based controllers are Pavilion (now Rockwell) and AspenTech. There have been multiple papers and presentations by these companies over the past 20 years with many successful applications in polymers. It’s clear from reading these papers that their approaches have continued to evolve over time and standard approaches have been developed. Today both approaches incorporate first principles models and make extensive use of historical data. For polymer reactor applications, the FPM involves dynamic reaction heat and mass balance equations and historical data is used to develop steady-state property predictions. Process testing time is needed only to capture or confirm dynamic aspects of the models.
Enhancements to the neural networks used in control applications have been reported. AspenTech addressed the extrapolation challenges of neural nets with bounded derivatives. Pavilion makes use of constrained neural nets in their fitting of models.
Rockwell describes a different approach to the modeling and control of a fed-batch ethanol process in a presentation made at the 2009 American Control Conference, titled “Industrial Application of Nonlinear Model Predictive Control Technology for Fuel Ethanol Fermentation.” The first step was the development of a kinetic model based on the structure of a FPM. Certain reaction parameters in the nonlinear state space model were modeled using a neural net. The online model is a more efficient nonlinear model, fit from the initial model, that handles the nonlinear dynamics. Parameters are fit by a gain constrained neural net. The nonlinear model is described in a Hydrocarbon Processing article titled Model predictive control for nonlinear processes with varying dynamics.
To answer Syed’s follow-up question about deep neural networks: deep neural networks require more parameters, but techniques have been developed that help deal with this. I have not seen results in process control applications, but it will be interesting to see if these enhancements developed and used by the Google types will be useful for our industries.
In addition to Greg’s citations, I wanted to mention a few other articles that describe approaches to nonlinear control. A FPM-based nonlinear controller was developed by ExxonMobil, primarily for polymer applications. It is described in a paper presented at the Chemical Process Control VI conference (2001) titled “Evolution of a Nonlinear Model Predictive Controller,” and in a subsequent paper presented at another conference, Assessment and Future Directions of Nonlinear Model Predictive Control (2005), entitled “NLMPC: A Platform for Optimal Control of Feed- or Product-Flexible Manufacturing.” The motivation for a first principles model-based MPC for polymers included the nonlinearity associated with both gains and dynamics, constraint handling, control of new grades not previously produced, and the portability of the model/controller to other plants. In the modeling step, the estimation of model parameters in the FPM (parameter estimation) was cited as a challenge. State estimation of the CVs, in light of unmeasured disturbances, is considered essential for the model update (feedback step). Finally, the increased skills necessary to support and maintain the nonlinear controller were mentioned, in particular to diagnose and correct convergence problems.
A hybrid modeling approach to batch processes is described in a 2007 conference presentation at the 8th International IFAC Symposium on Dynamics and Control of Process Systems by IPCOS, titled “An Efficient Approach for Efficient Modeling and Advanced Control of Chemical Batch Processes.” The motivation for the nonlinear controller is the nonlinear behavior of many batch processes. Here, fundamental relationships were used for the mass and energy balances and an empirical model, fit from historical data, for the reaction energy (which includes the kinetics). The controller used the MPC structure, modified for the batch process. Future prediction of the CVs in the controller was made using the hybrid model, whereas the dynamic controller incorporated linearizations of the hybrid model.
I think it is fair to say that there is a lack of nonlinear solvers tailored to hybrid modeling. An exception is the freely available software environments APMonitor and GEKKO developed by John Hedengren’s group at BYU. They solve dynamic optimization problems with first principle or hybrid models and have built-in functions for model building, updating, and control. Here is a link to the website that contains references and videos for a range of nonlinear applications, including a batch distillation application.
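For a flavor of what these environments look like, here is a minimal GEKKO sketch: a made-up first-order process (gain 0.5, time constant 3) driven to a setpoint in GEKKO's dynamic control mode. It is not one of the referenced applications, just an illustration of the interface:

```python
# pip install gekko
from gekko import GEKKO
import numpy as np

m = GEKKO(remote=False)          # solve locally
m.time = np.linspace(0, 20, 41)  # horizon of 20 time units

u = m.MV(value=0, lb=0, ub=100)  # manipulated variable
u.STATUS = 1                     # let the optimizer move it
u.DCOST = 0.1                    # penalize rapid moves

x = m.CV(value=1)                # controlled variable
x.STATUS = 1
x.SP = 10                        # setpoint
m.options.CV_TYPE = 2            # squared-error objective

# First principle (or hybrid/fitted) model: first-order dynamics
m.Equation(3 * x.dt() == -x + 0.5 * u)

m.options.IMODE = 6              # dynamic control (MPC) mode
m.solve(disp=False)
print(u.value[:5], x.value[:5])
```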
I worked with neural networks quite a bit when they first came out in the late 1990s. I have not tried working with them much since, but I will pass on my findings, which I expect are as applicable now as they were then.
Neural networks sound useful in principle. Give a neural network a pile of training data, let it ‘discover’ correlations between the inputs and the output data, then reverse those correlations in order to create a model which can be used for control. Unfortunately, actually creating such a neural network and using it for control is much harder than it looks, for a number of reasons.
I am not saying neural networks do not work – I actually had very good success with them. However, when all was said and done, I pretty much figured out the correlations myself through trial and error and was able to utilize that information to improve control. I wrote a paper on the topic and won an ISA award because neural networks were all the rage at that time, but the reality was I just used the software to reinforce what I learned during the ‘network training’ process.
The post, Missed Opportunities in Process Control - Part 4, first appeared on the ControlGlobal.com Control Talk blog.
Here is the fourth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
Eliminate air gap in thermowells to make a temperature response much faster. Contrary to popular opinion, the type of sensor is not a significant factor in the speed of the temperature response in the process industry. While an RTD may be a few seconds slower than a TC, the annular clearance around the sheath can cause an order of magnitude larger measurement time lag. Additionally, a tip not touching the bottom of the thermowell can be even worse. Air is a great insulator as seen in the design of more energy efficient windows. Spring loaded tight fitting sheathed sensors in stepped metal thermowells of the proper insertion length are best. Ceramic protection tubes cause a large measurement lag due to poor thermal conductivity. Low fluid velocities can cause an increase in the lag as well. See the Control Talk column “A meeting of minds” on how to get the most precise and responsive temperature measurement.
Use the best glass and sufficient velocity to keep pH measurement fast by reducing aging and coatings. An aged glass electrode, due to even moderately high temperature (e.g., > 30 °C), chemical attack from strong acids or strong bases (e.g., caustic), or dehydration from not being kept wetted or from exposure to non-aqueous solvents, can have its sensor lag time increased by orders of magnitude. High temperature glass and specific ion resistant glasses are incredibly beneficial to sustain accuracy and a clean healthy electrode sensor lag of just a few seconds. Velocities must be greater than 1 fps for fast response and greater than 5 fps to prevent fouling, which can also increase sensor lag by orders of magnitude through almost imperceptible coatings. This is helpful for thermowells as well, but the adverse effects in terms of slower response time are not as dramatic as seen for pH. Electrodes must be kept wetted, and exposure to non-aqueous solvents and harsh process conditions reduced by automatically retractable assemblies with the ability to soak in buffer solutions. See the Control Talk column “Meeting of minds encore” on how to get the most precise and responsive pH measurement.
Avoid the measurement lag becoming the primary lag. If the measurement lag becomes larger than the largest process time constant, the trend charts may look better due to attenuation of the oscillation amplitude by the filtering effect. The PID gain may even be able to be increased because the PID does not know where the primary lag came from. The key point is that the actual amplitude of the process oscillation and the peak error are larger (often unknown unless a special separate fast measurement is installed). What is seen on the trend charts is that the period of oscillation is larger, possibly to the point of creating a sustained oscillation. Besides slow electrodes and thermowells, this situation can occur simply due to transmitter damping or signal filter time settings. For compressor surge and many gas pressure control systems, the filter time and transmitter damping settings must not exceed 0.2 sec. For a much greater understanding, see the Control Talk Blog “Measurement Attenuation and Deception Tips”.
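The deception is quantified by the first-order amplitude ratio; a minimal sketch:

```python
import math

def amplitude_ratio(tau, period):
    """Attenuation of a sinusoidal oscillation by a first-order
    measurement lag (filter or damping) of time constant tau."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * tau / period) ** 2)

# Hypothetical: a 10 s filter on a loop oscillating with a 30 s period
ar = amplitude_ratio(10.0, 30.0)
print(f"observed amplitude = {ar:.2f} x actual")  # ~0.43: the trend looks
# better, but the real process amplitude is ~2.3 times what the chart shows
```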
Real rangeability of a control valve depends upon the ratio of valve drop to system pressure drop, actuator and positioner sensitivity, backlash, and stiction. Often rangeability is based on the deviation from an inherent flow characteristic, leading to statements that a rotary valve, often designed for on-off control, has the greatest rangeability. The real definition should depend upon the minimum controllable flow, which is a function of the installed flow characteristic, sensitivity, backlash, and stiction near the closed position, all of which are generally worse for these on-off valves that supposedly have the best rangeability. The best valve rangeability is achieved with a valve drop to system pressure drop ratio greater than 0.25, generously sized diaphragm actuators, a digital positioner tuned with high gain and no integral action, low friction packing (e.g., Enviro-Seal), and a sliding stem valve. If a rotary valve must be used, there should be a splined shaft-to-stem connection and a stem integrally cast with the ball or disk to minimize backlash, and a low friction seal or ideally no seal to minimize stiction. A graduated v-notch ball or contoured butterfly should be used to improve the flow characteristic. Equations to compute the actual valve rangeability based on pressure drop ratio and resolution are given in Tuning and Control Loop Performance Fourth Edition.
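A sketch of how the installed characteristic, and with it the minimum controllable flow, degrades as the valve drop to system pressure drop ratio shrinks (equal percentage trim with an inherent rangeability of 50 is assumed for illustration; this is not the book's full rangeability equation):

```python
import numpy as np

def installed_characteristic(f, beta):
    """Installed flow fraction for inherent characteristic value f and
    beta = valve pressure drop / total system drop at full open.
    Derived from q = Cv*f(x)*sqrt(dPv) with dPv = dP0 - k*q^2."""
    return f / np.sqrt(beta + (1.0 - beta) * f**2)

x = np.linspace(0.05, 1.0, 20)          # valve travel fraction
f_eq = 50.0 ** (x - 1.0)                # equal percentage, rangeability 50
for beta in (0.5, 0.25, 0.05):
    q = installed_characteristic(f_eq, beta)
    gain = np.gradient(q, x)            # installed gain along travel
    print(f"beta={beta}: installed gain varies "
          f"{gain.min():.2f} to {gain.max():.2f}")
# As beta shrinks, the characteristic distorts toward quick opening and
# the gain flattens at high travel, shrinking the usable throttling range.
```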
Real rangeability of a variable frequency drive (VFD) depends upon ratio of static pressure to system pressure drop, motor design, inverter type, input card resolution and dead band setting. The best VFD rangeability and response is achieved by a static pressure to system pressure drop ratio less than 0.25, a generously sized TEFC motor with 1.15 service factor and Class F insulation, pulse width modulated inverter, speed to torque cascade control in inverter, and no dead band or rate limiting in the inverter setup.
Identify and minimize transportation delays. The delay for a temperature or composition change to propagate from the point of manipulated change to the process or sensor is simply the process volume divided by the process flow rate. Normal process design procedures do not recognize the detrimental effect of dead time. The biggest example is equipment design guidelines that have a dip tube designed to be large in diameter, extending down toward the impeller. Missing is an understanding of the incredibly large dead time for pH control where the reagent flow is a gph or less and the dip tube volume is a gallon or more. When the reagent valve is closed, the dip tube is back filled with process fluid from migration of high to low concentrations. To get the reagent to displace the process fluid takes more than an hour. When the reagent valve shuts off, it may take hours before reagent stops dripping and migrating into the process. To go from acid to base in split range control may take hours to displace the acid in the dip tube. The same thing happens going from base to acid. The stiction is also highest at the closure position. When you consider how sensitive pH is, it is no wonder that pH systems oscillate across the split range point.
The real rangeability of flow meters depends upon the signal to noise ratio at low flows, the minimum velocity, and whether accuracy is a percent of scale or of reading. The best flow rangeability is achieved by meters with accuracy in percent of reading, minimal noise at low flows, and the least effect of low velocities, including the possible transition to laminar flow. Consequently, Coriolis flow meters have the best rangeability (e.g., 200:1) and magmeters have the next best rangeability (e.g., 50:1). Most rangeability statements for other meters are based on a ratio of maximum to minimum meter velocity and turbulent flow, and do not take into account that the actual maximum flow experienced is much less than the meter capacity.
Use Coriolis flow meters for stoichiometric control and heating value control. The Coriolis flowmeter has the greatest accuracy, with a mass flow measurement independent of composition. This capability is key to keeping flows in the right ratio, particularly for reactants per the factors in the stoichiometric equation for the reaction (mole flow rate is simply mass flow rate divided by the molecular weight of the reactant). For waste fuels, the heat release rate upon combustion is a strong function of the mass flow, greatly facilitating optimization of supplemental fuel use. Nearly all ratio control systems could benefit from true mass flow measurements with great accuracy and rangeability. For more on what you need to know to achieve what the Coriolis meter is capable of, see the Control Talk Column “Knowing the best is the best”.
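The ratio calculation is simple arithmetic on the Coriolis mass flows; a minimal sketch with hypothetical molecular weights and stoichiometry:

```python
def reactant_b_setpoint(mass_flow_a, mw_a, mw_b, stoich_b_per_a):
    """Mass flow setpoint for reactant B to hold the stoichiometric ratio.

    mole flow = mass flow / molecular weight, so
    B mass SP = (A mass flow / MW_A) * (mol B per mol A) * MW_B
    """
    mole_flow_a = mass_flow_a / mw_a
    return mole_flow_a * stoich_b_per_a * mw_b

# Hypothetical: 1000 kg/h of A (MW 60) reacting 1:2 with B (MW 17)
print(reactant_b_setpoint(1000.0, 60.0, 17.0, 2.0))  # ~566.7 kg/h of B
```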
Identify and minimize the total dead time. Dead time is easily identified on a properly scaled trend chart as simply the time delay between a manual output change or setpoint change and the start of the change in the process variable being controlled. The least disruption usually comes from simply putting the PID momentarily in manual and making a small output change simulating a load disturbance. The test should be done at different production rates and run times. The dead time tends to be largest at low production rates due to larger transportation delays and slower heat transfer rates and sensor response. Dead time also tends to increase with production run time due to fouling or frosting of heat transfer surfaces. See the Control Talk Blog “Deadtime, the Simple Easy Key to Better Control” for a more extensive explanation of why I would be out of a job if the dead time was zero.
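A minimal sketch of pulling the dead time out of step test data (hypothetical data; the noise band value is an assumption you would set from the quiet PV trend):

```python
import numpy as np

def estimate_dead_time(t, pv, t_step, noise_band):
    """Estimate loop dead time from a manual step test: the time from the
    output change (at t_step) until the PV first leaves its noise band."""
    baseline = pv[t < t_step].mean()
    after = t >= t_step
    moved = np.abs(pv[after] - baseline) > noise_band
    if not moved.any():
        return None                       # PV never responded
    return t[after][moved][0] - t_step

# Hypothetical test: output stepped at t = 50 s, true dead time 25 s
t = np.arange(0.0, 400.0, 1.0)
pv = 50 + np.where(t > 75, 10 * (1 - np.exp(-(t - 75) / 60.0)), 0.0)
pv += np.random.default_rng(2).normal(scale=0.05, size=t.size)
print(estimate_dead_time(t, pv, t_step=50.0, noise_band=0.2))  # ~26 s
```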
Identify and minimize the ultimate period. This goes hand in hand with knowing and reducing the total loop dead time. The ultimate period in most loops is simply 4 times the dead time in a first order approximation where a secondary time constant is taken as creating additional dead time. Dead time dominant loops have a smaller ultimate period that approaches 2 times the dead time for a pure dead time loop (extremely rare). Input oscillations with a period between ½ and twice the ultimate period result in resonance, requiring less aggressive tuning. Input oscillations less than ½ the ultimate period can be considered to be noise, requiring filtering and less aggressive tuning. Oscillation periods greater than twice the ultimate period are attenuated by more aggressive tuning. Note that input oscillations persist when the PID is in manual. For damped oscillations that only appear when the PID is in auto, an oscillation period close to the ultimate period indicates too high a PID gain and more than twice the ultimate period indicates too low a PID reset time. A damped oscillation period approaching or exceeding 10 times the ultimate period indicates a violation of the gain window for near-integrating, true integrating or runaway processes. Oscillations greater than four times the ultimate period with constant amplitude are limit cycles due to backlash (dead band) or stiction (resolution limit). See the Control Talk Blogs “Controller Attenuation and Resonance Tips” and “Processes with no Steady State in PID Time Frame Tips” for more guidance.
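These rules of thumb can be collected into a simple screening function (a sketch; the thresholds follow the text and the ultimate period is approximated as four times the dead time):

```python
def diagnose_oscillation(period, dead_time, constant_amplitude=False,
                         persists_in_manual=False):
    """Classify a loop oscillation using the rules of thumb above.
    Heuristics for screening, not guarantees."""
    tu = 4.0 * dead_time                  # approximate ultimate period
    if persists_in_manual:
        return "input disturbance or noise (not caused by tuning)"
    if constant_amplitude and period > 4.0 * tu:
        return "limit cycle: suspect backlash or stiction"
    if period < 0.5 * tu:
        return "noise: filter and tune less aggressively"
    if period <= 2.0 * tu:
        return "near ultimate period: PID gain likely too high"
    if period < 10.0 * tu:
        return "period > 2x ultimate: PID reset time likely too small"
    return "period ~10x ultimate or more: possible gain window violation"

print(diagnose_oscillation(period=120.0, dead_time=10.0))
```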
The post How to Implement Effective Safety Instrumented Systems for Process Automation Applications first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hariharan Ramachandran.
Hariharan starts an enlightening conversation introducing platform independent key concepts for an effective safety instrumented system with the Mentor Program resource Len Laskowski, a principal technical SIS consultant, and Hunter Vegas, co-founder of the Mentor Program.
Hariharan Ramachandran, a recent resource added to the ISA Mentor Program, is a control and safety systems professional with various levels of experience in the field of Industrial control, safety and automation. He has worked for various companies and executed global projects for oil and gas and petrochemical industries gaining experience in the entire life cycle of industrial automation and safety projects.
Len Laskowski is a principal technical SIS consultant for Emerson Automation Solutions, and is a voting member of ISA84, Instrumented Systems to Achieve Functional Safety in the Process Industries.
Hunter Vegas, P.E., has worked as an instrument engineer, production engineer, instrumentation group leader, principal automation engineer, and unit production manager. In 2001, he entered the systems integration industry and is currently working for Wunderlich-Malec as an engineering project manager in Kernersville, N.C. Hunter has executed thousands of instrumentation and control projects over his career, with budgets ranging from a few thousand to millions of dollars. He is proficient in field instrumentation sizing and selection, safety interlock design, electrical design, advanced control strategy, and numerous control system hardware and software platforms. Hunter earned a B.S.E.E. degree from Tulane University and an M.B.A. from Wake Forest University.
How is the safety integrity level (SIL) of a critical safety system maintained throughout the lifecycle?
The answer might sound a bit trite, but the simple answer is: diligently follow the lifecycle steps from beginning to end. Perform the design correctly and verify that it has been executed correctly. The SIS team should not blindly accept HAZOP and LOPA results at face value. The design that the LOPAs drive is no better than the team that determined the LOPA and the information they were provided. Often the LOPA results are based on incomplete or possibly misleading information. I believe a good SIS design team should question the LOPA and seek to validate its assumptions. I have seen LOPAs declare that there is no hazard because XYZ equipment protects against it, but a walk in the field later discovered that the equipment was taken out of service a year ago and had not yet been replaced. Obviously getting the LOPA/HAZOP right is the first step.
The second step is to make sure one does a robust design and specifies good quality instruments that are a good fit for the application. For example, a vortex meter may be a great meter for some applications but a poor choice for others. Similarly, certain valve designs may have limited value as a safety shutdown valve. Inexperienced engineers may specify Class VI shutoff for on-off valves thinking they are making the system safer, but Class V metal seat valves would stand up to the service much better in the long run, since the soft elastomer seats can easily be destroyed in less than a month of operation. The third leg of this triangle is exercising the equipment and routinely testing the loop. Partial stroke testing of the valves is a very good idea to keep valves from sticking. Also, for new units that do not have extensive experience with a process, the SIF components (valves and sensors) should be inspected at the first shutdown to assess their condition. This needs to be done until a history with the installation can be established. Diagnostics also fall into this category: deviation alarms, stroke times, and any other diagnostics that can help determine SIS health are important.
The safety instrumented function has to be monitored and managed throughout its lifecycle. Each layer in a safety protection system must have the ability to be audited. The SIS verification and validation process provides a high level of assurance that the SIS will operate in accordance with its safety requirements specification (SRS). Proof testing must be carried out periodically at the intervals specified in the safety requirements specification. There should be a mechanism for recording SIF life event data (proof test results, failures, and demands) for comparison of actual to expected performance. Continuous evaluation and improvement is the key concept in maintaining the SIS efficiently.
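As a back-of-the-envelope illustration of why the proof test interval in the SRS matters, here is the widely used simplified 1oo1 screening formula PFDavg ≈ λDU · TI / 2 (the failure rate is hypothetical, and the formula ignores common cause, repair time, and proof test coverage):

```python
def pfd_avg(lambda_du, test_interval_hr):
    """Simplified average probability of failure on demand for a single
    (1oo1) SIF element: PFDavg ~ lambda_DU * TI / 2. A screening number
    only - real verification calculations include many more terms."""
    return lambda_du * test_interval_hr / 2.0

# Hypothetical dangerous undetected failure rate of 2e-6 per hour
for ti_years in (1, 2, 5):
    pfd = pfd_avg(2e-6, ti_years * 8760)
    print(f"TI = {ti_years} yr: PFDavg = {pfd:.1e}")  # SIL 2 needs < 1e-2
```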
What is the best approach to eliminate the common cause failures in a safety critical system?
There are many ways that common cause failures can creep into a safety system design. Some of the more common ways and countermeasures are described below.
Both random and systematic events can induce common cause failures (CCF) in the form of single points of failure or the failure of redundant devices.
Random hardware failures are addressed by design architecture, diagnostics, estimation (analysis) of probabilistic failures, and design techniques and measures (per IEC 61508-7).
Systematic failures are best addressed through the implementation of a protective management system, which overlays a quality management system with a project development process. A rigorous system is required to decrease systematic errors and enhance safe and reliable operation. Each verification, functional assessment, audit, and validation is aimed at reducing the probability of systematic error to a sufficiently low level.
The management system should define work processes, which seek to identify and correct human error. Internal guidelines and procedures should be developed to support the day-to-day work processes for project engineering and on-going plant operation and maintenance. Procedures also serve as a training tool and ensure consistent execution of required activities. As errors or failures are detected, their occurrence should be investigated, so that lessons can be learned and communicated to potentially affected personnel.
When an incident happens at a process plant, what engineering aspects need to be verified during the investigation?
I would start at the beginning of the lifecycle and look at the HAZOP and LOPAs to see that they were done properly. Look to see that documentation is correct: P&IDs, SRS, C&Es, MOC and test logs and procedures. Look to see where the breakdown occurred. Were things specified correctly? Were the designs verified? Was the system correctly validated? Was proper training given? Look for test records once the system was commissioned.
Usually the first step is to determine exactly what happened, separating conjecture from facts. Gather alarm logs, historian data, etc., while they are available. Individually interview any personnel involved as soon as possible to lock in the details. With that information in hand, begin to work backwards, determining exactly what initiated the event and what subsequent failures occurred to allow it to happen. In most cases there will be a cascade of failures that actually enabled the event. Then examine each failure to understand what happened and how it can be avoided in the future. Often there will be a number of changes implemented. If the SIS failed, then Len’s answer provides a good list of items to check.
Also verify that the device/equipment was used appropriately within the design intent.
What are the critical factors involved in decommissioning a control system?
The most critical factor is good documentation. You need to know what is going to happen to your unit and other units in the plant once an instrument, valve, loop or interlock is decommissioned. A proper risk and impact assessment has to be carried out prior to the decommissioning. One must ask very early in a project’s development if all units controlled by the system are planning to shut down at the same time. This is needed for maintenance and upgrades. Power distribution and other utilities are critical. One may not be able to demolish a system because it would affect other units. In many cases, a system cannot be totally decommissioned until the next shutdown of the operating unit, and it may require simultaneous shutdowns of neighboring units as well. Waste management strategy, regulatory framework and environmental safety controls are the other factors to be considered.
The post Webinar Recording: The Amazing World of ISA Standards first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Historically, predictive maintenance required very expensive technology and resources, like data scientists and domain experts, to be effective. Thanks to artificial intelligence (AI) methods such as machine learning making their way into the mainstream, predictive maintenance is now more achievable than ever. Our webinar will explore how machine learning is changing the game and greatly reducing the need for data scientists and domain experts. These technologies self-learn and autonomously monitor for data pattern anomalies. Not only does this make predictive maintenance far more practical than what was historically possible, but predictions 30 days in advance are now the norm. Don’t let the old way of doing predictive maintenance cost you productivity any longer.
This webinar covers:
About the Featured Presenter
Nicholas P. Sands, P.E., CAP, serves as senior manufacturing technology fellow at DuPont, where he applies his expertise in automation and process control for the DuPont Safety and Construction business (Kevlar, Nomex, and Tyvek). During his career at DuPont, Sands has worked on or led the development of several corporate standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety. Nick is: an ISA Fellow; co-chair of the ISA18 committee on alarm management; a director of the ISA101 committee on human machine interface; a director of the ISA84 committee on safety instrumented systems; and secretary of the IEC (International Electrotechnical Commission) committee that published the alarm management standard IEC62682. He is a former ISA Vice President of Standards and Practices and former ISA Vice President of Professional Development, and was a significant contributor to the development of ISA’s Certified Automation Professional program. He has written more than 40 articles and papers on alarm management, safety instrumented systems, and professional development, and is co-author of the new edition of A Guide to the Automation Body of Knowledge. Nick is a licensed engineer in the state of Delaware. He earned a bachelor of science degree in chemical engineering at Virginia Tech.
About the Presenter
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.
Here is the third part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The following list reveals common misconceptions that need to be understood to seek real solutions that actually address the opportunities.
The post Solutions for Unstable Industrial Processes first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Caroline Cisneros.
Negative resistance, also known as positive feedback, can cause processes to jump, accelerate, and oscillate, confusing the control system and the operator. These are characterized as open loop unstable processes. Not properly addressing these situations can result in equipment damage and plant shutdowns, besides the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive.
Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks a question about the dynamics that cause unstable processes. The deeper understanding gained as to the sources of instability can lead to process and control system solutions to minimize risk and to increase process performance.
What causes processes to be unstable when controllers are in manual?
Fortunately, most processes are self-regulating by virtue of having negative feedback that provides a resistance to excursions (e.g., flow, liquid pressure, and continuous composition and temperature). These processes come to a steady state when the controller is in manual. Somewhat less common are processes that have no such feedback, resulting in a ramp (e.g., batch composition and temperature, gas pressure and level). Fortunately, the ramp rate is quite slow except for gas pressure, giving the operator time to intervene.
There are a few processes where the deviation from setpoint can accelerate when in manual due to positive feedback. These processes should never be left in manual. We can appreciate how positive feedback causes problems in sound systems (e.g., microphones too close to speakers). We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control.
The compressor characteristic curves from the compressor manufacturer, a plot of compressor pressure rise versus suction flow, show for each speed or suction vane position a decreasing pressure rise whose slope magnitude increases as the suction flow increases in the normal operating region. The pressure rise consequently decreases more as the flow increases, opposing additional increases in compressor flow and creating a positive resistance to flow. Not commonly seen is that the characteristic curve slope to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve; then, as the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow.
When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign, creating negative resistance. The compressor flow jumps in about 0.03 seconds from the start of negative resistance to some point of positive resistance. The result is a jump in 0.03 seconds to negative flow across the negative resistance, a slower transition along positive resistance to zero flow, then a jump in 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance back to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds. If this seems confusing, don’t feel alone. The PID controller is confused as well.
Once a compressor gets into surge, the very rapid jumps and oscillations are too much for a conventional PID loop. Even a very fast measurement, PID execution rate and control valve response can’t deal with it alone. Consequently, the oscillation persists until an open loop backup activates and holds open the surge valves until the operating point is sustained well to the right of the surge curve for about 10 seconds, at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID working bumplessly with an open loop backup that detects a zero slope indicating an approach to surge or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point.
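A sketch of the open loop backup logic described above (the dip threshold and flow margin here are illustrative assumptions, not recommended settings):

```python
def surge_backup(flow, prev_flow, dt, surge_flow, latched, hold_timer):
    """Sketch of an open loop backup working with the surge PID: latch the
    surge valve open on a rapid flow dip or on crossing the surge line, and
    release only after the flow has stayed well right of the surge curve
    for 10 s (hold time per the text). The 5%-per-scan dip threshold and
    the 10% flow margin are illustrative assumptions."""
    rapid_dip = (prev_flow - flow) > 0.05 * surge_flow   # precipitous drop
    if rapid_dip or flow <= surge_flow:
        return True, 0.0                  # surge detected: latch valve open
    if latched:
        if flow > 1.10 * surge_flow:
            hold_timer += dt
            if hold_timer >= 10.0:
                return False, 0.0         # bumpless transfer back to PID
        else:
            hold_timer = 0.0              # margin lost: restart the hold
        return True, hold_timer
    return False, 0.0

# One scan of the logic: a sudden dip from 55 to 48 near a surge flow of 50
latched, timer = surge_backup(flow=48.0, prev_flow=55.0, dt=0.1,
                              surge_flow=50.0, latched=False, hold_timer=0.0)
print(latched)  # True: the backup latches the surge valve open
```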
For much more on compressor surge control see the article Compressor surge control: Deeper understanding, simulation can eliminate instabilities.
The same shape, but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users.
A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured.
Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat release rate. However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential, where the time constants and dead time associated with heat transfer in the cooling surface and thermowell and the loop response are much less than the positive feedback time constant.
Derivative action must be maximized and integral action must be minimized. In some cases a proportional plus derivative controller is used. The runaway response of such reactors is characterized by a positive feedback time constant, as shown in Figure 1 for an open loop response. The positive feedback time constant is calculated from the ordinary differential equations for the energy balance, as shown in Appendix F of 101 Tips for a Successful Automation Career. The point of acceleration cannot be measured in practice because it is unsafe to have the controller in manual. A PID gain that is too low will allow a reactor to run away, since the PID controller is not adding enough negative feedback. There is a window of allowable PID gains that closes as the time constants from the heat transfer surface and thermowell and the total loop dead time approach the positive feedback time constant.
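A one-line model shows why the positive feedback time constant is so unforgiving: the open loop deviation grows exponentially (illustrative numbers):

```python
import numpy as np

def open_loop_runaway(dT0, tau_p, t):
    """Open loop deviation for a runaway (positive feedback) process:
    d(dT)/dt = dT / tau_p, so the deviation grows as exp(t / tau_p).
    tau_p is the positive feedback time constant."""
    return dT0 * np.exp(t / tau_p)

t = np.arange(0, 600, 10.0)              # seconds
dev = open_loop_runaway(dT0=0.5, tau_p=120.0, t=t)
print(dev[-1])  # a 0.5 deg deviation grows to ~74 deg in 10 minutes
```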
Figure 1: Positive Feedback Process Open Loop Response
Positive feedback can also occur when parallel heat exchangers have a common process fluid input, each with an outlet temperature controller whose setpoint is close to the boiling point or to a temperature resulting in vaporization of a component in the process fluid. Each temperature controller is manipulating a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating, causing bubbles to form and expand back into the exchanger, increasing the back pressure and hence further decreasing the process flow through this hot exchanger.
The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration and possibly damage at any discontinuity in its path from slugs of water. When nearly all of the water is pushed out of the hot exchanger, its temperature drops, drawing feed that was going to the cold heat exchanger, which then causes the hot exchanger to overheat, repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in the flow to one exchanger do not affect another, and a lower temperature setpoint.
To summarize, to eliminate oscillations, the best solution is a process and equipment design that eliminates negative resistance and positive feedback. When this cannot provide the total solution, operating points may need to be restricted, loop dead time and thermowell time constant minimized and the controller gain increased with integral action decreased or suspended.
The post, Missed Opportunities in Process Control - Part 2, first appeared on the ControlGlobal.com Control Talk blog.
Here is the second part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail.
If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.
The post What Skill Sets Do You Need to Excel at IIoT Applications in an Automation Industry Career? first appeared on the ISA Interchange blog site.
In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Angela Valdes.
The Industrial Internet of Things (IIoT) is a hot topic, as seen in the many feature articles published. The much greater availability of data is hoped to provide the knowledge needed to sustain and improve plant safety, reliability and performance. Here we look at some of the practical issues and resources in achieving the expected IIoT benefits.
Angela Valdes is a recently added resource in the ISA Mentor Program. Angela is the automation manager of the Toronto office for SNC-Lavalin. She has over 12 years of experience in project leadership and execution, framed under PMI, lean, agile and stage-gate methodologies. Angela seeks to apply her knowledge in process control and automation in different industries such as pharmaceutical, food and beverage, consumer packaged products and chemicals.
What skill sets and ISA standards should I start building and referencing in order to grow in the IIoT space and work field?
The ISA communication division is forming a technical interest group in IIoT. The division has had presentations on the topic for several years at conferences. The leader will be announced in InTech magazine. The ISA95 standard committee is working on updating the enterprise – control system communication to better support IIoT concepts.
One tremendous resource would be to read most of Jonas Berge’s LinkedIn blog posts. He writes about IIoT and digital communications and the impact they can have on reliability, safety, efficiency and production. I recommend you send him a connection request to see when he has new things to post. One other person to connect with includes Terrance O’Hanlon of ReliabilityWeb.com. Searching on the #IIoT hashtag in Twitter and LinkedIn is also a very good way to discover new articles and influencers in these areas.
One of the things we need to be careful about is to make sure there are people with the expertise to use the data and associated software, such as data analytics. There was a misrepresentation in a feature article that IIoT would make the automation engineer obsolete when in fact the opposite is true. We need more process control engineers besides process analytical technology and IIoT experts to make the most out of the data. The data by itself can be overwhelming as seen in the series of articles “Drowning in Data; Starving for Information”: Part 1, Part 2, Part 3, and Part 4.
Process control engineers with a fundamental knowledge of the process and the automation system need to intelligently analyze and make the associated improvements in instrumentation, valves, setpoints, tuning, control strategies, and use of controller features whether PID or MPC. Often lacking is the recognition of the importance of dynamics in the process and particularly the automation system. The process inputs must be synchronized with the process outputs for continuous processes before true correlations can be identified.
Knowledge of process first principles is also needed to determine whether correlations are really cause and effect. While the solution would seem to be applying expert rules to the IIoT results, a word of caution here: the attempts to develop and use real time expert systems in the 1980s and 1990s were largely failures, wasting an incredible amount of time and money. Deficiencies in the conditions, interrelationships, and knowledge in the rules of logic implemented, plus a lack of visibility of the interplay between rules and of the ability to troubleshoot rules, led to a lot of false alerts, resulting in the systems being turned off and eventually abandoned.
There have been multiple “data revolutions” over the years, and I consider IIoT to be just another wave where new information is made available that wasn’t available before. Unfortunately the problem that bedeviled the previous data revolutions still remains today. More data is not necessarily useful unless the right information is delivered at the right time to a person who can act on it. In many cases the operators have too much information now – when something goes wrong they get 1000 alarms and have to wade through the noise to try to figure out what went wrong and how to fix it.
IIoT data can undoubtedly be useful, but it takes a huge amount of time and effort to create an interface that can effectively present that information, and still more time and effort to keep it up. All too often management reads a few trendy articles and thinks IIoT is something you buy or install and savings should just appear. Unfortunately most fail to appreciate the effort required to implement such a system and keep it working and adding value. Usually money is spent, people celebrate the glorious new system, then it falls out of favor and use and gets eliminated a short time later.
As far as I know there aren’t any specific standards associated with IIoT. I do think there are several skill sets that can help you implement it.
The post Webinar Recording: Practical Limits to Control Loop Performance first appeared on the ISA Interchange blog site.
Part 2 provides a quick review of Part 1 and then discusses the contribution of each PID mode, why reset time is orders of magnitude too small for most composition and temperature loops, the ultimate and practical limits to control loop performance, the critical role of dead time, and when PID gain that is too high or too low causes more oscillation.