*Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan's contributions to the ISA Interchange blog.

Tips for New Process Automation Folks
  • IEEE Tech Talk – control system device cyber security is missing in government and engineering societies

    IEEE (www.IEEE.org) is one of the most influential professional engineering societies in the world. It is well-regarded and has played an important role in the development of many control system standards. On December 14, 2021, I was honored to give an IEEE Tech Talk to the Seattle Chapter of the IEEE Power and Energy Society on control system cyber security. The link to the recording is https://www.youtube.com/watch?v=eZqkCC6wqcE. The themes of the presentation were:
- The focus of cyber security is IT malware and ransomware, not physical damage and injuries
- All physical infrastructures ("critical" or not) are monitored and controlled using instrumentation and control systems
- For physical infrastructure cyber security:
  - Measurements are the input, but instrumentation is ignored
  - Control systems are used to control physics, but physics is ignored
  - Cyber forensics and attribution do not exist for control system devices
  - Cyber security training is generally not available for control system engineers
  - Control system cyber security is about 5-10 years behind IT cyber security
  - The culture is broken between Engineering and IT/OT

In the Q&A session following my presentation, it was mentioned that IEEE-USA issued a position paper on cyber security in which control systems were not addressed. According to the representative from IEEE-USA, this is a major hole that needs to be addressed (around the 1:15:00 time frame in the recording). Addressing this gap will require collaboration between the International Society of Automation (ISA), IEEE, and other industry organizations. It will also require CISA (including TSA), DOE, FERC, NRC, and other government organizations to address the control systems gap. The continuing singular focus on networks is making the US very vulnerable to extended outages, equipment damage, and deaths.

Gaps in Standards

Even the most sophisticated engineering associations can suffer from blind spots when it comes to the complex interactions of multi-level systems. This is especially true when situations like cyber security bring together organizations with different goals, nomenclature, etc. In this case, the networking and engineering organizations attempting to secure engineering networks have not addressed the Level 0,1 engineering devices (process sensors, actuators, drives, etc.). Examples where gaps occur include:
- Electric – NERC CIP excludes Level 0,1 devices
- Water/wastewater – AWWA doesn't address Level 0,1 devices
- IEC TC57 – Doesn't address Level 0,1 devices
- Food – FSMA doesn't address control system cyber security
- TSA Pipelines – Doesn't address control system pipeline issues
- TSA Transportation – Doesn't address control system issues
- ISA 62443 – Doesn't address process sensor integrity
- NIST 800-53, 800-82, 160 – Don't address Level 0,1
- NIST Cyber Security Framework – Can't "detect"
- IEC TC65 – Functional safety communication protocols do not address cyber security

Why Level 0,1 Matters

It is not possible to cyber secure, or assure the safety of, the physical infrastructures when the Level 0,1 devices have no cyber security, authentication, or cyber logging. Yet cyber security of Level 0,1 devices continues to be ignored by the IT and OT networking communities. The following examples from multiple sectors illustrate why we are at such high risk from insecure process sensors:
- The Oak Ridge National Laboratory (ORNL), Pacific Northwest National Laboratory (PNNL), and National Renewable Energy Laboratory (NREL) issued a report on sensor issues in buildings.
  A typical situation could include sensor data being modified by hackers and sent to the control loops, resulting in extreme control actions. To the best of the authors' knowledge, no such study has examined this challenge.
- "We have several temperature, pressure and flow sensors on a new medical-device cleaning skid that we are developing. These instruments are connected to a PLC as 4-20 mA inputs, and there is also a 4-20 mA output used to control a pump motor speed. A recent failure of a flow sensor brought the process skid instrumentation to my company's quality manager's attention. He asked how do we know that the temperatures, pressures, and flow are accurate, and how do we know that we are cleaning properly?" FDA has not addressed these issues.
- Process sensors were not reliable or safe in a refinery operation. Almost 50% of the nuisance alarms were reranged sensors (that is, the sensor settings had been changed so they had lost their ability to initiate safety systems when needed). Because this was done by insiders, it was assumed the reranging was unintentional.
- One sensor failure in a combined cycle plant in Florida caused a 200MW load swing at the plant that rippled across the Eastern Interconnect, causing a 50MW load swing in New England.
- Russia, China, and Iran are aware of the gap in cyber security of process sensors, and, in some cases, are already exploiting this gap.

Moreover, monitoring the electrical characteristics of the process sensors (sensor health monitoring) provides benefits beyond cyber security. As the process sensors are the "eyes of the process", sensor health monitoring provides a predictive maintenance capability, improved performance and productivity, and improved safety. Additionally, sensor monitoring becomes a check of the network monitoring systems. If the sensor health monitoring does not directly match the network monitoring, the network monitoring needs to be examined.

The presentation included actual control system cyber incidents, including pipeline ruptures that are not addressed by the TSA pipeline cyber security directive and train crashes not covered by the TSA rail cyber security directive. What makes critical infrastructures different from retail and other IT-centric organizations are the control system devices. It is also what makes them dangerous. So, why is TSA ignoring the control system devices?

The lack of attention to control system issues can be seen in the government and industry's response to the Log4j (Apache open-source software) vulnerability disclosed December 10, 2021. It is similar to the government and industry's response to SolarWinds, with the entire focus on the networks to the exclusion of control systems. The 2004 ICS Cyber Security Conference in Idaho Falls was held in conjunction with the ribbon cutting for the INL SCADA Testbed. As part of the conference, INL did a cyberattack demonstration that exploited a zero-day (it wasn't called a zero-day in 2004) buffer overflow in Apache open-source software. The attack sent exploit code from the Sandia National Laboratory (SNL) business network to the INL business network to the INL SCADA Testbed network. The firewalls did not block the compromised scripts.
The attack demonstration:
- Remotely opened and closed a relay (Aurora)
- Remotely opened and closed multiple relays (2015 Ukrainian cyberattack)
- Remotely opened a relay but without indication the relay was open (2003 Northeast outage)
- Left the relay unchanged, but changed the status indication (Stuxnet)

The Apache Log4j vulnerability could potentially cause the same issues, yet control systems are being ignored in the government and industry guidance currently issued. Off-line monitoring of the sensors would not be affected by ransomware or the Log4j types of vulnerabilities.

Recommendations

Each society has to put together a section calling out the other, just as was done for the NEC (National Electrical Code – power), which added a whole section 800 on communications a few years back. IEEE has to at least recognize IT networking in its SCADA/control systems work, and ISA has to recognize the existence of the power/physics work in IEEE. The same can be said for ASME, AIChE, ASCE, SAE, INCOSE, and other industry societies. The IT networking societies should not be making recommendations for securing control systems without assuring that recommendations for IT will not cause harm to control systems, as has happened in the past. As an aside, Nadine Miller and Rob Stephens from JDS Energy and Mining and I will have a paper in the January issue of IEEE Computer magazine titled "Control System Cyber Incidents Are Real—and Current Prevention and Mitigation Strategies Are Not Working". Joe Weiss
  • June 8th and 9th virtual keynotes to cyber security conferences – gaps between networking and engineering

    Given the virtual world we live in, I am able to support two important cyber security conferences – June 8th is the Cyber Observatory IOT and ICS Conference and June 8th and 9th is the New York State Cyber Security Conference. As control system-unique cyber issues are still misunderstood by many in the mainstream cyber security community, my presentations will be an engineer's view of control system cyber security with a focus on actual impacts.

There have been almost 12 million control system cyber incidents. Yet there has been an alarming reluctance by government and industry to identify control system cyber incidents as being "cyber". Examples include the 2003 Davis-Besse Slammer worm incident, where NRC wouldn't use the word "cyber", or the more than 350 control system cyber incidents in the North American electric system that NERC wouldn't identify as "cyber". Even the most recent NERC Lessons Learned refuses to call a power plant control system incident that affected the entire Eastern Interconnect a cyber incident. This event started with 200MW swings because of a sensor and control system problem at a power plant in Florida and ended up with 50 MW swings in New England! ( https://www.controlglobal.com/blogs/unfettered/process-sensor-issues-continue-to-be-ignored-and-are-placing-the-country-at-extreme-risk ). Consequently, it should be evident that government initiatives that require identification of control system cyber incidents aren't being met. This should be of concern given the increasing cyber oversight by insurance and credit rating agencies.

To date, the government guidance provided following control system cyber incidents has been generic – such as don't connect IT and OT networks, or practice good cyber hygiene – and does not address the root cause of the incidents. The lack of guidance on the root cause has two ramifications: a false sense of security from only doing the basics, and facilities left open for the incidents to recur because the root cause is not addressed. Moreover, most of the root causes were not unique to just one facility. Control system devices such as process sensors, actuators, and drives have no cyber security, authentication, or cyber logging, and so it takes more than just network security to address them. Additionally, these devices are not capable of meeting the requirements in the Cybersecurity Executive Order (EO) 14028 or the TSA pipeline cyber security requirements. Understanding control system cyber security is critical as Russia, China, and Iran are aware of these deficiencies and some of these gaps are currently being exploited.

On June 8th, I will be giving a keynote at the Cyber Observatory IOT and ICS conference ( https://www.cyberinnovationsummits.com/industrial-cybersecurity-iiot-event/ ). I also will be participating in an executive roundtable – "The critical infrastructure supply chain: how can this massive operational and cyber security challenge be addressed?" The Chinese hardware backdoors in large electric transformers bring up hardware challenges that do not appear to be addressed in the ongoing supply chain initiatives.

On June 8th, I will also be participating in a panel session at the New York State (NYS) Cyber Security Conference at 11AM Eastern with Matt Nielsen of GE R&D and Sanjay Goel from SUNY Albany. The panel will address "Threats to the Energy Infrastructure of the United States".
The panel will discuss some of the recent cyberattacks on our power grid and what we should be doing to mitigate the threat to our power infrastructure.

On June 9th, I will be giving a keynote at the 2021 NYS Cyber Security Conference, which is held in conjunction with the Annual Symposium on Information Assurance ( http://www.albany.edu/iasymposium ). My presentation will provide an engineer's complement to Kevin Mandia's Tuesday June 9th keynote on the state of cyber security. The presentations will address some of the most significant recent control system cyber security incidents: SolarWinds and its impact on control systems, the Chinese hardware backdoors in large electric transformers, Chinese hidden control system networks in a pharma facility, the Colonial Pipeline hack, the Oldsmar water hack, counterfeit process sensors, and building hacks. They will identify some of the gaps between real incidents and EO 14028 and the TSA pipeline requirements, and will provide recommendations to improve control system cyber security. Joe Weiss
  • Residing on Residence Time

    The post, Residing on Residence Time, first appeared on ControlGlobal.com's Control Talk blog. The time spent residing on this column is time well spent if you want to become famous for improving process performance with the side benefit of becoming best buds with the process engineer. The implications are enormous in terms of process efficiency and capacity from the straightforward concept of how much time a fluid resides in process equipment.

The residence time is simply the equipment volume divided by the fluid volumetric flow rate. The fluid can be back mixed (swirling in the opposite direction of flow) in the volume due to agitation, recirculation or boiling. A lot of back mixing makes nearly all of the residence time a process time constant. If there is hardly any back mixing, we have plug flow and nearly all of the residence time becomes deadtime (transportation delay). Deadtime is always bad. The ultimate limits to the peak and integrated errors for a load disturbance are proportional to the deadtime and deadtime squared, respectively.

A particular process time constant can be good or bad. If the process time constant in question is the largest time constant in the loop, it slows down disturbances on the process input and enables a larger PID gain. The process variability can be dramatically reduced for a process time constant much larger than the total loop deadtime. The slower time to reach setpoint can be sped up by the higher PID gain provided there is proportional action on error and not just on PV in the PID structure. If the process time constant in question is smaller than another process time constant, possibly due to volumes in series or heat transfer lags, a portion of the smaller time constants becomes effectively deadtime in a first-order approximation. Thus, heat transfer lags and volumes between the manipulated variable and controlled variable create detrimental time constants. A time constant due to transmitter damping or signal filtering will add effective deadtime and should be just large enough to keep fluctuations in the PID output due to noise from exceeding the valve or variable speed drive deadband and resolution, whichever is larger.

At low production rates, the residence time gets larger, which is helpful if the volume is back mixed, but the process gain increases dramatically for temperature and composition control. If the volume is plug flow, we are in dire straits because the larger residence time creates a larger transportation delay, resulting in a double whammy of high process gain and high deadtime causing oscillations, as explained in the Control Talk blog "Hidden factor in Our Most Important Loops". For gas volumes (e.g., catalytic reactors), the residence time is usually very small (e.g., a few seconds) and the effect is mitigated.

If you want more information on opportunities to learn what is really important, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts. You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new McGraw-Hill 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.
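To put rough numbers on the residence time relationship above, here is a minimal Python sketch that computes residence time as equipment volume divided by volumetric flow rate and splits it into an effective time constant and deadtime using an assumed back-mixed fraction (1.0 for a well back-mixed volume, 0.0 for pure plug flow). The vessel size, flows, and split fraction are hypothetical, chosen only to illustrate the arithmetic.

```python
# Minimal sketch: residence time and a simple split into process time
# constant vs. deadtime based on an assumed back-mixed fraction
# (1.0 = perfectly mixed, 0.0 = pure plug flow). Numbers are hypothetical.

def residence_time(volume_m3, flow_m3_per_min):
    """Residence time = equipment volume / volumetric flow rate."""
    return volume_m3 / flow_m3_per_min

def split_dynamics(tau_res_min, back_mixed_fraction):
    """Illustrative split: the back-mixed portion acts like a process time
    constant; the plug-flow portion acts like deadtime (transportation delay)."""
    time_constant = back_mixed_fraction * tau_res_min
    deadtime = (1.0 - back_mixed_fraction) * tau_res_min
    return time_constant, deadtime

# Example: a 10 m3 vessel at 2 m3/min gives a 5 minute residence time.
tau = residence_time(10.0, 2.0)
for frac in (1.0, 0.5, 0.0):
    tc, dt = split_dynamics(tau, frac)
    print(f"back-mixed fraction {frac:.1f}: time constant {tc:.1f} min, deadtime {dt:.1f} min")

# Halving the flow to 1 m3/min doubles the residence time to 10 minutes,
# which helps a back-mixed volume but doubles the transportation delay
# of a plug-flow volume.
print(f"residence time at half flow: {residence_time(10.0, 1.0):.1f} min")
```

Consistent with the post, cutting production rate in half helps the well back-mixed case (a larger time constant) but doubles the transportation delay in the plug flow case.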
  • What Factors Affect Dead Time Identification for a PID Loop?

    The post What Factors Affect Dead Time Identification for a PID Loop? first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program, I am providing guidance for extremely talented individuals from Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Adrian Taylor. Adrian Taylor is an industrial control systems engineer at Phillips 66.

Adrian Taylor's Question

One would expect dead time to be the easiest parameter to estimate, yet when using software tools that identify the process model in closed loop I find identification of dead time is inconsistent. Furthermore, when using software identification tools on simulated processes where the exact dead time is actually known, I find on occasions the estimate of dead time is very inaccurate. What factors affect dead time identification for a PID loop?

Russ Rhinehart's Answer

When we identify dead time associated with tuning a PID loop, it is normally part of a model such as First Order Plus Dead Time (FOPDT), or a slightly more complicated second order model (SOPDT). Normally, and traditionally, we generate the data by step-testing: start at steady state (SS), make a step-and-hold in the manipulated variable, MV (the controller output), then observe the response of the controlled variable, CV. We pretend that there were no uncontrolled disturbances, and make the simple linear model best fit the data. This procedure has served us well for the 80 years or so that we've used models for tuning PID feedback controllers or setting up feedforward devices; but there are many issues that would lead to inconsistent or unexpected results.

One of the issues is that these models do not exactly match the process behavior. The process may be of higher order than the model. Consider a simple flow rate response. If the I/P device driving the valve has a first-order response, and the valve has a first-order response, and there is a noise filter on the measurement, then the flow rate measurement has a third-order response to the controller output. The distributed nature of heat exchangers and thermowells, and the multiple trays in distillation, all lead to high order responses. So, the FOPDT model will not exactly represent the process; the optimization algorithm in the modeling approach seeks the simple model that best fits the overall process response. In a zero dead time, high-order process, the best model will delay the modeled response so that the subsequent first-order part of the model can best fit the remaining data. The best model will report a dead time even if there is none. The model does not report the process dead time, but provides a pseudo-delay that makes the rest of the model best fit the process response. The model dead time is not the time point where one can first observe the CV change.

A second issue is that processes are usually nonlinear, and the linear FOPDT model cannot match the process.
Accordingly, steps up or down from a nominal MV value, or testing at alternate operating conditions, will experience different process gains and dynamics, which will lead to linear models with different pseudo-dead time values.

A third issue is that the best fit might be in a least squares sense over all the response data, or it might be a two-point fit of mid-response data. The classic hand-calculated "reaction curve" models use the point of highest slope of the response to get the delay and time constant by extrapolating the slope from that point to where it intersects the initial and final CV values. A "parametric" method might use the points where the CV rose one-quarter and three-quarters of the way from the initial to the final steady state values and estimate delay and time constant from those two points. By contrast, a least squares approach would seek to make the model best fit all the response data, not just a few points. The two-point methods will be more sensitive to noise or uncontrolled disturbances. My preference is to use regression to best fit the model over all the data to minimize the confounding aspects of process noise.

A fourth issue is that the step testing might not have started at steady state (SS), nor ended at SS. If the process was initially changing because of its response to prior adjustments, then the step test CV response might initially be moving up or down. This will confound estimating the pseudo-delay and time constant of any modeling approach. If the process does not settle to a SS, but is continuing to slowly rise, then the gain will be in error, and if gain is used in the estimation procedure for the pseudo-delay, it will also include that error. If replicate trials have a different background beginning, a different residual trend, then the models will be inconsistent.

A fifth issue relates to the assumption of no disturbances. If a disturbance is affecting the process then, similar to the case of not starting at SS, the model will be affected by the disturbance, not just the MV.

Here is a sixth. Delay is nonlinear, and it is an integer number of sample intervals. If the best value for the pseudo-delay was 8.7 seconds, but the data were sampled on a 1-sec interval, the delay would either be rounded or truncated. It might be reported as 8 or as 9 sec. This is a bit inconsistent. Further, even if the model is linear in differential equation terminology, the search for an optimum pseudo-delay is nonlinear. Most optimizers end up in a local minimum, which depends on the initialization values. In my explorations, the 8.7-sec ideal value might be reported within a 0- to 10-sec range on any one particular optimization trial. Optimizers need to be run from many initial values to find the global optimum.

So, there are many reasons for the inconsistent and inaccurate results. You might sense that I don't particularly like the classic single step response approach. But I have to admit that it is fully functional. Even if a control action is only 70 percent right because the model was in error, the next controller correction will reduce the 30 percent error by 70 percent. And, after several control actions, the feedback aspect will get the controller on track. Although fully functional, I think that the classic step-and-hold modeling approach can be improved. I used to recommend 4 MV steps – up-down-down-up. This keeps the CV in the vicinity of the nominal value, and the 4 steps temper the effect of noise, nonlinearity, disturbances, and a not-at-SS beginning.
However, it takes time to complete 4 steps, production usually gets upset with the extended CV deviations, and it requires an operator to monitor the test to determine when to start each new step. My preference now is to use a "skyline" MV sequence, which is patterned after the MV sequence used to develop models for model predictive control (MPC), also termed advanced process control (APC). In skyline testing, the MV makes steps to random values within a desired range, at random time intervals ranging from about ½ to 2 time constants. In this way, in the same time interval as the 4-step up-down-down-up response, the skyline test generates about 10 responses, can be automated, and does not push the process as far from the nominal value, or for as long, as traditional step testing. The large number of responses does a better job of tempering noise and disturbances, while requiring less attention and causing smaller process upsets.

Because the skyline input sequence does not create step-and-hold responses from one SS to another, the two-point methods for reaction curve modeling cannot be used. But regression certainly can be used. What is needed is an approach to nonlinear regression (to find the global minimum in the presence of local optima), and a nonlinear optimizer that can handle the integer aspects of the delay. I offer open-code software on my web site in Visual Basic for Applications, free to any visitor. Visit r3eda.com and use the menu item "Regression" then the sub-item "FOPDT Modeling." You can enter your data in the Excel spreadsheet and press the run button to let the optimizer find the best model. The model includes both the reference values for the MV and CV (FOPDT models are deviations from a reference) and initial values (in case the data does not start at an ideal SS). The optimizer is Leapfrogging, one of the newer multi-player direct search algorithms that can cope with multiple optima, nonlinearity, and discontinuities. It seeks to minimize the sum of squared deviations (SSD) over all the data. The optimizer is reinitialized as many times as you wish to ensure that the global is found, and the software reports the cumulative distribution of SSD values to reveal confidence that the global best has been found.

ISA Mentor Program Posts & Webinars

Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.

Adrian Taylor's Reply to Russ Rhinehart

Many thanks for your very detailed response. I look forward to having a play with your skyline test plus regression method. I have previously set up a spreadsheet to carry out the various two-point methods using points at 25 percent/75 percent, 35.3 percent/85.3 percent and 28.3 percent/63.2 percent. As your recursive method minimizes the errors and should always give the best fit, it will also be interesting to compare it to the various two-point methods to see which of them most closely matches your recursive best fit method for various different types of process dynamics. For the example code given in your guidance notes: I presume r is a random number between 0 and 1? I note the open loop settling time is required. Is the procedure to still carry out an open loop step test initially to establish the open loop settling time, and then in turn use this to generate the skyline test?

Russ Rhinehart's Reply to Adrian Taylor

Yes, RND is a uniformly distributed random number on the 0-1 interval.
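As an illustration of the skyline sequence described above, here is a minimal Python sketch (not the r3eda.com Visual Basic software) in which the MV steps to random values within a desired range and holds each value for a random interval of roughly one-half to two time constants, with random.random() playing the role of the uniformly distributed RND. The MV range, time constant estimate, and step count below are hypothetical.

```python
# Minimal sketch of a "skyline" MV test sequence: random MV levels within a
# desired range, each held for a random interval of about 0.5 to 2 time
# constants. random.random() plays the role of RND on the 0-1 interval.
import random

def skyline_sequence(mv_low, mv_high, tau_est, n_steps, dt):
    """Return a list of MV values, one per control interval dt (seconds)."""
    mv_profile = []
    for _ in range(n_steps):
        mv = mv_low + random.random() * (mv_high - mv_low)    # random MV level
        hold = (0.5 + 1.5 * random.random()) * tau_est        # hold 0.5 to 2 tau
        mv_profile.extend([mv] * max(1, int(round(hold / dt))))
    return mv_profile

# Example: MV between 40% and 60%, estimated time constant 120 s, 10 steps,
# 1 s control interval. An intuitive estimate of the time constant is adequate.
profile = skyline_sequence(40.0, 60.0, 120.0, 10, 1.0)
print(len(profile), "samples, first MV value", round(profile[0], 1))
```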
It is not necessary to have an exact number for the settling time. In a nonlinear process, it changes with operating conditions; and the choice of where the process settles is dependent on the user's interpretation of a noisy or slowly changing signal. An intuitive estimate from past experience is fully adequate. If you have any problems with the software, let me know.

Michel Ruel's Answer

See the ISA Mentor Program webinar Loop Tuning and Optimization for tips. Usually the dead time is easily identified with closed loop techniques, but in open loop you can miss a chunk of it. Most modern tools analyze the process response in the frequency domain, and in this case dead time corresponds to high frequencies. Tests using a series of pulses (or double pulses) are rich in high frequencies and in this case dead time is well identified (if we use a first or second order plus dead time model, remember that dead time represents real dead time plus small time constants).

Adrian Taylor's Reply to Michel Ruel

Many thanks for your response. While I have experienced the problem with various different identification tools, the dead time estimate from a relay test based identification tool seemed to be particularly inconsistent. My understanding now is that while the relay test method is very good at identifying ultimate gain/ultimate period, attempts to convert to an FOPDT model can be more problematic for this method.

Mark Darby's Answer

Model identification results can be poor due to the quality of the test/data as well as the capabilities of the model identification technology/software. Insufficient step sizes can lead to poor results – for example, not making big enough test moves relative to valve limitations (dead band and stick/slip) and the noise level of the measurements you want to model. Also, to get good results, multiple steps may be needed to minimize the impact of unmeasured disturbances.

Another factor is the identification algorithm itself and the capabilities of the software. Not all are equivalent and there is a wide range of approaches used, including how dead time is estimated. One needs to know if the identification approach works with closed-loop data. Not all do. Some include provisions for pre-filtering the data to minimize the impact of unmeasured disturbances by removing slow trends. This is known as high pass filtering, in contrast to low pass filtering, which removes higher frequency disturbances. If a sufficient number of steps is done, most identification approaches will obtain good model estimates, including dead time. Dead time estimates can usually be improved by making higher frequency moves (e.g., fractions of the estimated steady-state response time). As indicated in my response to the question by Vilson, the user will often need to specify whether the process is integrating. Estimates of process model parameters can be used to check or constrain the identification. As mentioned, one may be able to obtain model estimates from historical data – either by eyeball or by using selected historical data in the model identification – and thereby avoid a process test.

Greg McMillan's Answer

Digital devices, including historians, create a dead time that is one-half the scan time or execution rate plus latency. If the devices are executing in series and the test signal is introduced as a change in controller output, then you can simply add up the dead times.
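A minimal sketch of adding up these automation system dead time contributions, with each digital device contributing one-half of its scan or execution interval plus its latency, is shown below. The device list and numbers are hypothetical, chosen only to illustrate the arithmetic.

```python
# Minimal sketch: each digital device in series contributes roughly one-half
# of its scan or execution interval plus its latency, and the contributions
# simply add. The devices and numbers below are hypothetical.

devices = [
    # (name, scan or execution interval in seconds, latency in seconds)
    ("wireless transmitter", 8.0, 0.5),
    ("DCS controller module", 0.5, 0.0),
    ("historian used for identification", 10.0, 1.0),
]

total_dead_time = sum(0.5 * scan + latency for _, scan, latency in devices)
for name, scan, latency in devices:
    print(f"{name}: {0.5 * scan + latency:.2f} s")
print(f"automation system dead time: {total_dead_time:.2f} s")
```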
Often test setups do not have the same latency, order of execution, or process change injection point as the actual field application. If the arrival time is different within a digital device execution, the dead time can vary by as much as the scan time or execution rate. If there is compression, backlash or stiction, there is also a dead time equal to the dead band or resolution limit divided by the rate of change of the signal, assuming the signal change is larger than the dead band or resolution limit. If there is noise or disturbances, the dead time estimate could be smaller or larger depending upon whether the induced change is in the same or opposite direction, respectively.

Some systems have a slow execution or large latency compared to the process dead time. Identification is particularly problematic for fast systems (e.g., flow, pressure) and any loop where the largest sources of dead time are in the automation system, resulting in errors of several hundred percent. Electrode and thermowell lags can be incredibly large, varying with velocity, direction of change, and fouling of the sensor. Proven fast software directly connected to the signals and designed to identify the open loop response (e.g., Entech Toolkit), along with multiple tests using different perturbation sizes and directions at different operating conditions (e.g., production rates, setpoints and degrees of fouling), is best. I created a simple module in Mimic that offers a rough, fast estimate of dead time, ramp rate, and the integrating process gain for near-integrating and true integrating processes within 6 dead times that is accurate to about 20 percent if the process dead time is much larger than the software execution rate. While the relay method is not able to identify the open loop gain and time constant, it can identify the dead time. I have done this in the Mimic "Rough n Ready" tuner I developed. Some auto tuning software may be too slow or may take a conservative approach using the largest observed delay between an MV change and a PV change plus a maximum assumed update rate, and possibly use a deranged algorithm thinking larger is better.

For Additional Reference: McMillan, Gregory K., Good Tuning: A Pocket Guide.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project.
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Variability Appearance and Disappearance

    The post, Variability Appearance and Disappearance, originally appeared on the ControlGlobal.com Control Talk blog. Particularly confusing is variability that seems to come out of nowhere and then disappear. There are not presently good tools to track down the sources because it turns out most of them are self-inflicted, involving automation system deficiencies, dynamics and transfer of variability. You need to understand the possible causes to be able to identify and correct the problem. Here we provide the fundamental knowledge needed on the major sources of these particularly confusing types of variability.

One of the most prevalent and confusing problems is oscillations that break out and disappear in cascade control loops and loops manipulating large valves and variable frequency drives (VFDs). The key thing to look for is whether these oscillations only start for large changes in controller output. For small changes in controller output over a period of several controller executions, from small setpoint changes or small disturbances, even a slow secondary loop, valve or VFD can keep up with the requested changes. For large changes, oscillations break out. The amplitude and settling time increase with the degree of mismatch between the requested rate of change and the rate of change capability of the secondary loop, valve or VFD.

The best solution is of course to make the capabilities of what is being manipulated faster. You can make a secondary loop faster by decreasing the secondary loop's dead time and lag times (faster sensors, filters, damping, and update rates) and making the secondary loop tuning faster. You can make the control valve faster by using a higher gain and no integral action in the positioner, putting a volume booster on the positioner output(s) with the booster bypass valve slightly open to provide booster stability, and increasing the size of the air supply lines and, if necessary, the actuator air connections. You can make VFDs faster by making sure there is no speed rate limiting in the drive setup, keeping fast speed control with the VFD in the equipment room (not putting it into a much slower control system controller) and increasing the motor rating and size as needed. If the problem persists, turning on external-reset feedback with a fast, accurate readback of the manipulated secondary loop's process variable, the actual valve position, or the VFD speed can stop the oscillations.

Another confusing trigger for oscillations is a low production rate. The process gain and dead time both increase at low production rates, causing oscillations as explained in the Control Talk blog "Hidden factor in Our Most Important Loops". Also, stiction is much greater as the valve operating point approaches the closed position due to higher friction from sealing and seating surfaces. Valve actuators may also be undersized for operating with the higher pressure drops near closure. The size and persistence of stiction oscillations increase with valves designed to reduce leakage. Most valve suppliers do not want to do valve response testing below 20% output because the valve dead band and resolution are worse there. The installed flow characteristic of linear trim distorts to quick opening when the valve drop to system pressure drop ratio at maximum flow is less than 0.25. The amplification of oscillations from backlash and stiction, and the instability from the steep slope (high valve gain) of the quick opening installed characteristic, cause the oscillations to be larger.
Even more insidious is the not commonly recognized reality that a VFD has an installed flow characteristic that becomes quick opening if the static head to system pressure drop ratio is greater than 0.25, triggering the same sort of problems. Signal characterization can help linearize the loop, but you still need adaptation of the controller tuning settings for the increase in the process gain from the hidden factor and the increase in dead time from transportation delays, and you are still stuck with stiction. Besides the multiplying effect of the increased VFD gain on the open loop gain, there is an amplification of oscillations from the 0.35% resolution limit of a traditional VFD I/O card and from the dead band introduced in the VFD setup in a misguided attempt to reduce reaction to noise. Then there are oscillations from erratic signals and noise from measurement rangeability problems discussed in last month's Control Talk blog "Lowdown on Turndown".

Low production rates can also cause operation near the split range point and crisscrossing of the split range point, causing persistent oscillations from the severe nonlinearities and discontinuities besides greater stiction. Again, external-reset feedback can help, but the better solution is control strategies and configurations that eliminate the unnecessary crossings of the split range point, as discussed in the Control Talk column "Ways to improve split range control".

If you want more information on opportunities to learn what is really important, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts. You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new McGraw-Hill 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.
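To illustrate the quick opening distortion described above, here is a minimal Python sketch using the common constant supply head approximation, in which the pressure drop not taken by the valve varies with flow squared, so the installed fractional flow is q = f(x)/sqrt(beta + (1 - beta)*f(x)^2) for an inherent characteristic f(x) and a valve drop to total system drop ratio beta taken at maximum flow. The beta values below are illustrative.

```python
# Minimal sketch of how a linear inherent characteristic distorts toward
# quick opening at low valve-drop-to-system-drop ratios (beta, taken at
# maximum flow), assuming the non-valve pressure drop varies with flow squared:
#   q = f(x) / sqrt(beta + (1 - beta) * f(x)**2)
# with f(x) the inherent characteristic (f(x) = x for linear trim) and
# q, x the fractional flow and fractional travel.
import math

def installed_flow(x, beta, inherent=lambda x: x):
    f = inherent(x)
    return f / math.sqrt(beta + (1.0 - beta) * f * f)

for beta in (1.0, 0.5, 0.25, 0.1):
    q_10 = installed_flow(0.10, beta)
    print(f"beta={beta:4.2f}: 10% travel already passes {100 * q_10:4.1f}% of max flow")
```

With beta of 0.1, a nominally linear valve passes roughly 30% of maximum flow at only 10% travel in this approximation, which is the steep, high-gain region near the seat where backlash and stiction are amplified.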
  • Alarm Management and DCS versus PLC/SCADA Systems

    The post Alarm Management and DCS versus PLC/SCADA Systems first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Aaron Doxtator. Aaron Doxtator is a process control EIT with XPS | Expert Process Solutions. He has experience providing process engineering and controls engineering services, primarily for clients in the mining and mineral processing sector.

Aaron Doxtator's First Question

I am working on a project that I believe many other sites may wish to undertake, and I was looking for best practice information. Using ISA 18.2, we are performing an alarm audit using the rationalization and documentation process for all plant alarms. This has been working well, but an examination of the plant's "bad actors" has slowed down the process. These bad actors are a significant portion of the annunciated alarms, but many of them are considered redundant in certain scenarios and could only be mitigated with state-based alarming. While the rationalization and documentation process is useful for examining many of the nuisance process alarms, it quickly becomes much more complicated when state-based alarming is to be considered. The high-level process to implement these changes is as follows: identify the need for a state-based alarm; perform a risk review/MOC process with all stakeholders; implement the changes; and document the state-based alarms. Is there a recommended best practice that one could follow in order to document the state-based alarm changes?

Nick Sands' Answer

ISA has a series of technical reports that give guidance on implementation of the standard. TR4 is on Advanced and Enhanced alarming, including state-based alarming. The guidance includes:
- Use redundant indications of states to minimize state-based logic failures.
- Be cautious about suppression of alarms that can indicate the transfer of material or energy to an undesired location (examples are high temperature alarms on columns, high levels on tanks…).

ISA Mentor Program Posts & Webinars

Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.

Darwin Logerot's Answer

In my experience, there are very few if any chemical or refinery units that would not benefit from state-based alarming (SBA). The basic problem is that most alarm systems are configured for only a single process condition (usually running at steady state), but real processes must operate through a variety of states: starting up, shutting down, product transitions, regeneration, partial bypass, etc. In these situations, alarm systems can and will produce multiple alarms that are meaningless to the operator (nuisance alarms). Alarm floods are the natural result. Alarm floods can be problematic in that they tend to distract the operator from the more important task at hand, can be misleading, and can hide important information.
Now, for a look at actually answering your question: ISA-18.2 TR4, as Nick cited, is a good starting point for information on SBA. I will add some pointers and caveats as well:
- SBA at its core is a relatively simple concept – determine the current operating state of the process or system, then apply appropriate alarm attribute modifications. But, as in many situations, "the devil is in the details". So, the best advice is to consult with a knowledgeable practitioner before embarking on an SBA project.
- Apply SBA only to a well-rationalized alarm system, or in parallel with a thorough and principles-based rationalization.
- Apply state transition techniques to prevent nuisance alarms when the process transitions into a running state.
- Utilize commercially available software for SBA, rather than trying to develop custom logic and coding in the control system.

Another available resource is the Alarm Management chapter in the recently published Instrument and Controls Handbook. One final observation: I note that you referred to a bad actors review as slowing down the process. My normal approach is to not concentrate on the bad actors, but to conduct a comprehensive rationalization (adding SBA in the process) that includes all tags in the control system. The bad actors will be covered in this process.

Greg McMillan's Answer

I suggest you check out the Control Talk columns with Nick Sands, Alarm management is more than just rationalization, and with Darwin Logerot, The dynamic world of alarms.

Aaron Doxtator's Second Question

While most of my experience has involved using PLC/DCS (or just DCS) for plant control, some clients have expressed interest in shifting away from using a DCS altogether and utilizing exclusively PLC/SCADA. Aside from client preference, are there recommendations for when one solution (or one combination of solutions) may be preferred over the other?

Hunter Vegas' Answer

I wanted to address your question about DCS versus PLC/SCADA. Historically DCS and PLC systems were very different solutions. A DCS was generally very expensive, had slow processing/scan times, and was specifically designed to control large, continuous processes (i.e., refineries, petrochemical plants) with minimal downtime. PLCs boasted very high speed processing, were designed for digital I/O and sequencing, and were typically utilized for machine control and smaller processes. Over the years both DCS and PLC manufacturers have modified their products to expand into that "middle ground" between the two systems. DCSs were made more scalable (to make them competitive in small applications) and added extensive sequencing logic to make them better suited for digital control. At the same time PLCs added much more analog logic and the ability to program in function blocks and other languages, and began incorporating a graphical layer to make them look and feel more like a DCS. While the two systems are undeniably much more similar than they were in the past, there are definitely some significant differences between the two technologies that make one better suited than the other in a variety of situations. Generally we try to remain "vendor agnostic" in our answers, so I won't specifically name names, but I will say that the offerings of the DCS and PLC vendors vary widely and some systems have much more capability than others. That being said, I'll try to keep my answer fairly generic.

DCS systems were specifically designed to allow online changes because they were designed for plants that can run years without a shutdown.
In such a plant the ability to make programming changes, add cards and racks of I/O, and even upgrade software while continuing to run is paramount. PLCs generally have some ability to make online changes, but there can be extensive limitations to what can be changed while running. Unfortunately many PLC vendors will say "there are virtually no limits to the changes you can make while running" – and you typically find out the hard way that this is not true, even when running redundant processors. If you are looking at installing a control system on a process that must run continuously for long periods, spend a lot of time talking with users (not salespeople) to understand what you truly can (and cannot) do while running. Sometimes the solution can be as simple as creating dummy racks and I/O while you are down so you can add racks later.

DCS systems typically have much slower processing speeds/scan times than PLCs. While some very recent DCS processors boast high speeds, most controllers can only process a limited number of modules at high speeds, and even that speed (50 ms or so) is slow compared to a PLC. If the process is extremely fast, a PLC will likely outperform a DCS.

DCS systems are usually much better at handling various networks and fieldbuses, though PLCs have been improving in this regard and several third party manufacturers are now selling PLC compatible network cards. If you have existing bus systems (AS-i, Foundation Fieldbus, Profibus PA, DeviceNet, BACnet, Modbus, etc.), look at the system carefully and make sure it can communicate with your network. Fortunately the IoT buzz has driven both DCS and PLC manufacturers to communicate over an increasingly large array of networks, so most systems are getting better in this regard.

Batch capabilities vary significantly by manufacturer, so that capability is hard to define on a PLC vs DCS level. I can name DCS manufacturers that have great batch functionality and others that have minimal capability and are very difficult to program. Similarly some PLCs have good batch capabilities and others have virtually none. If you have batch and are looking at a new control system, take the time to dig deep and talk extensively with people who program those systems regularly. The better systems offer extensive aliasing capabilities, have few limits in executing logic in phases, have a good batch operator interface, have an integrated tag database, and allow changes to phases/operations even as a recipe is running. Weaker systems have limited ability to alias (you must create a copy of every phase even if they are identical other than tag names), have limitations in what logic can run in the phases, have poor interfaces, and limit online changes.

Probably the last major point is cost and what you are trying to do with the data. Historically DCSs have had a much better capability to handle classical analog control, advanced control algorithms, and batch processing. Because of that they typically utilize a tag count based pricing model. This pricing strategy can become very expensive if the system is mostly being utilized to bring in reams of data for display and historization but not using it specifically for control. If the process has very large tag counts but doesn't require extensive control capability, a PLC/SCADA system can be a cheaper alternative. I hope this helps. If you have any questions about a specific vendor, ask me directly and I can share my experience.
Greg McMillan's Answer

Most of the PID capability I find valuable in terms of advanced features – most notably external-reset feedback and enhancements to deal with large wireless update times and analyzer cycle times – is not available in PLCs. The preferred PID Standard (Ideal) Form is less common, and multiple PID structures set by setpoint weights for the proportional and derivative modes and the ability to bumplessly write to the gain setting may not exist as well. Some PLCs use the Parallel or Independent Form that negates conventional tuning practices. Even worse, computation of the PID modes in a few PLCs uses signals in engineering units rather than percent of scale, leading to bizarre tuning requirements.

A pneumatically actuated control valve in the loop is much slower than a DCS that can execute every 100 milliseconds. If the loop manipulates a variable frequency drive speed without deadband and rate limiting or speed to torque cascade control, the process deadtime is less than 100 milliseconds, and the sum of time constants from signal filtering and transmitter damping is less than 200 milliseconds, the DCS may not be fast enough, but this is a lot of "ifs" rarely seen in the process industry where fluids are flowing through a pipeline. It is a different story in parts and silicon wafer manufacturing.

For Additional Reference:
Bill R. Hollifield and Eddie Habibi, Alarm Management: A Comprehensive Guide.
Nicholas Sands, P.E., CAP and Ian Verhappen, P.Eng., CAP, A Guide to the Automation Body of Knowledge. To read a brief Q&A with the authors, plus download a free 116-page excerpt from the book, click this link.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author

Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis.
Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Lowdown on Turndown

    The post, Lowdown on Turndown, appeared first on the ControlGlobal.com Control Talk blog. There are a lot of misconceptions about the turndown capability of measurements and final control elements (e.g., control valves and variable frequency drives). Here is a very frank, concise discussion of what really determines turndown and things to watch for in terms of limiting factors. For flow measurements and final control elements, the term rangeability is often used.

The turndown of vortex meters and magmeters is determined by a minimum velocity. The actual turndown experienced is typically a lot less than stated in publications because the maximum velocity for the meter size is usually greater than the maximum velocity for the process. Much larger than needed meters are often chosen because of conservative factors built into the stated requirements by process and piping engineers and the desire to minimize pressure drops across the meters. Less than optimum straight runs of upstream and downstream piping can also reduce rangeability for vortex meters and particularly differential head meters (flow being the square root of pressure drop) because of the introduction of flow sensor noise that becomes large relative to the size of the sensor signal at low flows. Also, physical properties of the fluid, most notably fluid kinematic viscosity for vortex meters and fluid conductivity for magmeters, can significantly reduce turndown capability.

The turndown for valves, more commonly stated as rangeability, is severely limited by backlash and stiction, which are often a factor of two or more greater near the seat than at the mid-stroke range where response testing is normally done. Also, valve actuator sizes should provide at least 150% of the maximum torque or thrust requirement to deal with less than ideal conditions and tightening of stem packing. Valve rangeability is also greatly limited by the installed flow characteristic, particularly if the valve to system pressure drop ratio at max flow is less than 0.25 in a misguided attempt to reduce pressure drop and provide more flow capacity than what is actually needed.

The literature does not alert users to the fact that variable frequency drives can have a very nonlinear installed flow characteristic and poor turndown. To maximize the rangeability of variable frequency drives, use a pulse width modulated inverter with slip control, speed to torque cascade control in the field (not the control room), a pump head that is at least 4 times the maximum static head, a totally enclosed fan cooled inverter rated motor, a high resolution signal card, and a minimal dead band setting in the drive setup.

Transmitters and some sensors have an error that is expressed as a percent of span that reduces turndown. Transmitters selected with a range narrowed to be closer to the actual maximum consequently improve turndown. The use of thermocouple (TC) and resistance temperature detector (RTD) input cards instead of transmitters introduces a huge error and resolution limit and a reduction in real rangeability due to the large spans. The use of TCs instead of RTDs severely reduces rangeability due to larger sensitivity errors and drift, provided the temperature is in the recommended range for RTDs.

You can achieve greater rangeability by putting small and large flow meters and control valves in parallel. The process control loop manipulates the smaller valve using the smaller flow meter for cascade control for more precise control.
A valve position controller (VPC) manipulates the large valve to keep the small valve in a good throttle range. External-reset feedback is used to reduce interactions and provide a fast correction if the small valve is moving outside the best part of its installed flow characteristic toward the lower or upper end of its travel. Feedforward or flow ratio control can be used to provide quicker correction. See the Control article “Don’t Overlook PID in APC” for much more on the many uses of VPC. Note that the use of split range control is not as good because you are normally manipulating the large valve and using the large flow meter, whose error and resolution limitations are large because they are a percent of span. Going for more flow capacity, lower pressure drop or a cheaper installation generally hurts turndown. Remember bigger is not better and cheaper is not really cheaper in the long run. If you want more information on opportunities to learn what is really important, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts . You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new McGraw-Hill Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.
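As a rough illustration of the pressure drop ratio effect noted above (not from the original post), here is a minimal Python sketch that estimates the installed flow characteristic and installed gain of an equal-percentage valve, assuming a constant available system pressure drop, friction losses proportional to flow squared, and an assumed inherent rangeability of 50:

```python
import numpy as np

def installed_characteristic(x, dp_ratio, rangeability=50.0):
    """Normalized installed flow (0 to 1) for an equal-percentage valve.

    x            : fractional valve signal/travel (0 to 1)
    dp_ratio     : valve pressure drop / total system pressure drop at maximum flow
    rangeability : inherent rangeability of the equal-percentage trim (assumed)
    """
    c = rangeability ** (x - 1.0)  # inherent Cv fraction
    return c / np.sqrt(dp_ratio + (1.0 - dp_ratio) * c ** 2)

x = np.linspace(0.02, 1.0, 500)
for phi in (0.5, 0.25, 0.1):
    q = installed_characteristic(x, phi)
    gain = np.gradient(q, x)  # installed gain, fraction flow per fraction signal
    print(f"dp ratio {phi:4}: gain at 10% travel = {np.interp(0.10, x, gain):.2f}, "
          f"gain at 90% travel = {np.interp(0.90, x, gain):.2f}")
```

Running the sketch shows that dropping the pressure drop ratio from 0.5 to 0.1 inflates the gain near the seat and flattens it at high travel, which is the distortion that erodes the usable throttle range.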
  • How to Manage Control Valve Response Issues in the Field

    The post How to Manage Control Valve Response Issues in the Field first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Mohd Zhafran A. Hamid. Mohd Zhafran A. Hamid is a senior instrument engineer from Malaysia working in an EPC company, Toyo Engineering Corporation . He has worked in the field of control and instrumentation for about 10 years, mostly in both engineering design and involvement at the site/field. Mohd Zhafran’s First Question If you have selected a control valve whose installed flow characteristic significantly deviates from linear (either by mistake or because you were forced to select it due to certain circumstances), what is a practical way in the field after installation to linearize the installed flow characteristic? Greg McMillan’s Answer You need a sensitive flow measurement to identify the installed flow characteristic online. If you have a flow measurement and make changes in the manual controller output that are 5 times larger than the dead band or resolution limit, spaced out by a time interval greater than the response time, the slope of the installed flow characteristic is the change in percent flow divided by the change in percent signal. You need at least 20 points identified on the installed flow characteristic. A signal characterizer is then inserted on the controller output to convert percent flow of scale to percent valve signal, giving a piecewise linear fit that linearizes the characteristic as far as the controller is concerned (a minimal sketch of such a characterizer appears at the end of this post). The controller output and the linearized signal to the valve should both be displayed. This linearization can be done in a positioner, but I prefer it being done in the DCS or PLC for better visibility and maintainability. For much more on signal characterizers see my Control Talk blog Unexpected benefits of signal characterizers . ISA Mentor Program Posts & Webinars Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars. Mohd Zhafran’s Second Question I recently read the addendum “Valve Response Truth or Consequences” in Greg’s article How to specify valves and positioners that do not compromise control . I am curious about the fast loop case where the control valve is used with a volume booster but without a positioner: how can you move the stem/shaft by hand even though the valve is large? Would you mind sharing the overall schematic? Also, would you share the schematic of using a positioner with a booster and booster bypass? Greg McMillan’s Answer Positive feedback from a very sensitive booster outlet port is greatly assisting attempts to move the shaft either manually or due to fluid forces on a butterfly disk as described in item 5 of my Control Talk blog Missed opportunities in process control – Part 6 . There is a schematic of the proper installation in slide 18 of the ISA Mentor Program webinar How to Get the Most out of Control Valves .
I don’t have a schematic of the wrong thing to do, where the volume booster input is connected directly to the current-to-pneumatic transducer (I/P) output. For new high pressure diaphragm actuators or boosters with lower outlet port sensitivity, this may not happen, since the diaphragm flexure and the consequent change in pressure from the change in actuator volume may be less than the booster outlet port sensitivity, but it is not worth the risk in my book. The rule that positioners should not be used on fast loops is mostly bogus, as explained in my point 4 in the same Control Talk blog. If you need a response time faster than 0.5 seconds, you should use a variable frequency drive with a pulse width modulated inverter. Mohd Zhafran’s Third Question Greg highlighted the importance of specifying the valve gain requirement. Is there any publicly available modeling software that we design engineers can use to perform valve gain analysis? So far, I have encountered only one valve manufacturer that provides control valve sizing software (publicly available) with the feature of a valve gain graph. This manufacturer calculates the process model based on the principle that the pressure losses in a piping system are approximately proportional to flow squared. Greg McMillan’s Answer The Control Talk column Why and how to establish installed flow characteristic describes how one practitioner uses Excel to compute the installed flow characteristic. The analysis of all the friction losses in a piping system can be quite complicated because of the effect of process fluid properties and fouling determined by process conditions and operating history, and of the piping system including fittings, elbows, inline equipment (e.g., heat exchangers and filters), and valves. A dynamic model in a Digital Twin that includes system pressure drops and the effect of fouling, and the ability to enter the inherent flow characteristic, perhaps by a piecewise linear fit, can show how the valve gain changes for more complex and realistic scenarios. Ideally, there would be flow and pressure measurements to show key pressure drops, particularly where fouling is a concern, so that resistance coefficients can be back calculated. The fouling of heat transfer surfaces can be detected by an increase in the difference needed between the process and utility temperature to compensate for the decrease in heat transfer coefficient. A slow ramp of the valve signal followed by the resulting slow ramp in the flow measurement could reveal the installed flow characteristic by a plot of the flow ramp versus the signal ramp, assuming there are no pressure disturbances and the flow measurement has a sufficient signal-to-noise ratio and rangeability. Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project.
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
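As referenced in the answer above, here is a minimal sketch of a piecewise linear signal characterizer, assuming NumPy is available; the signal and flow values are hypothetical placeholders for points identified from a field test, not data from the post:

```python
import numpy as np

# (valve signal %, observed flow %) pairs identified from the field test;
# hypothetical placeholder values for illustration only
signal_pct = np.array([0, 5, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100])
flow_pct   = np.array([0, 1,  3,  6, 10, 22, 38, 55, 70, 81, 90, 96, 100])

def characterize(pid_output_pct):
    """Piecewise linear signal characterizer placed on the PID output.

    The PID output is treated as desired flow in percent of scale and the
    function returns the valve signal percent that produced that flow in the
    test, so the loop sees an approximately linear installed characteristic.
    """
    return np.interp(pid_output_pct, flow_pct, signal_pct)

print(characterize(50.0))  # valve signal needed for about 50% flow
```

Between the identified points the lookup is linear, so the more points identified near the seat and near full travel, the better the linearization where the characteristic changes fastest.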
  • How to Use Industrial Simulation to Increase Learning and Innovation

    The post How to Use Industrial Simulation to Increase Learning and Innovation first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Damien Hurley. Damien Hurley is a control and instrumentation (C&I) engineer for Fluor in the UK. He is currently involved in the detailed design phase of a project to build a new energy plant in an existing refinery in Scotland. His chief responsibility is C&I interface coordinator with construction, the existing site C&I contractor and the client. Damien Hurley’s Question How can I begin implementing process simulations in my learning? My background is in drone control, where all learning has a significant emphasis on simulation and testing, usually via programs such as MATLAB. Upon starting in the oil and gas engineering, procurement and construction (EPC) industry, I began getting to grips with the wide array of final elements, and my knowledge of process simulation has suffered as a result. I’m also not exposed to simulations on a daily basis, as I was previously in the unmanned aerial vehicle (UAV) industry. How can I get started with simulation again? Specifically, is the simulation of processes relevant to our industry? Can you point me in the direction of a good resource to begin getting to grips with this worthwhile subject? Greg McMillan’s Answer Dynamic simulation has been the key to most of the deep learning and significant innovation in my 50-year career. Simulation has played a big role in industrial processes, especially in refining and energy plants. There are a lot of basic and advanced modeling objects for the unit operations in these plants. You can learn a lot about what process inputs and parameters are important in the building of first principle models. Even if the simulations are built for you, the practice of changing process inputs and seeing the effect on process outputs is a great learning experience. You are free to experiment and see results where your desire to learn is the main limit. You can also learn a lot about what affects process control. Here it is critical to include all of the automation system dynamics that are often ignored in the literature despite most often being the biggest source of control loop dead time, and that also contribute significantly to the open loop gain and nonlinearity by way of the installed flow characteristic of control valves and variable frequency drives (VFDs). You need to add variable filter times to simulate sensor lags, particularly thermowell and electrode lags, transmitter damping, and signal filters. You need to add variable dead time blocks to simulate transportation delays associated with the injection of manipulated fluids into the unit operation and to the sensor for measurement of the controlled variables.
The variable dead time block is also needed for simulating the effect of positioners with poor sensitivity, where the response time increases by two orders of magnitude for changes in signal less than 0.25 percent. You need backlash-stiction blocks to simulate the deadband and resolution limits of control valves as detailed in the Control article How to specify control valves and positioners that don’t compromise control . VFDs can have a surprisingly large deadband introduced in the setup in a misguided attempt to reduce reaction to noise, and a resolution limit caused by an 8-bit signal input card. You also need to add rate of change limits to model the slewing rates of large control valves and the rate limits introduced in the VFD setup in a misguided attempt to reduce motor overload instead of properly sizing the motor. You need software that will provide PID tuning settings with proper identification of the total loop dead time. Finally, a performance metrics block that identifies the integrated and peak error for load disturbances and the rise time, overshoot, undershoot, and settling time for disturbances is a way of judging how well you are doing. A couple of years ago I helped develop a dynamic simulation of the control system and the many headers, boilers, and users at a large plant to optimize the cogeneration and minimize the disruption to the steam system from large changes in the steam use and generation in all the headers for the whole plant. ISA Mentor Program resource James Beall and protégé Syed Misbahuddin were part of the team. Over 30 feedforward and decouple signals were developed and thoroughly tested by dynamic simulation, resulting in a smooth implementation of a much more efficient and safe system. I learned via the simulation in one case that the feedforward I thought was needed for a boiler caused more harm than good, due to changes in header pressure preceding the supposedly proactive feedforward to a header letdown valve meant to compensate for the effect of a change in firing rate demand. First principle process models with material and energy balances of volumes in series can capture the many unanticipated changes. I was recently alerted to the fact that a bypass valve around a heat exchanger provides at first a fast response from the change in the flow bypassing and going through the exchanger, but this is followed by a delayed response in the opposite direction caused by the same utility flow rate heating or cooling a different flow rate through the exchanger. Unless a feedforward changes the utility flow, the tuning of the PID for the temperature of the blended stream must not overreact to the initial temperature change. Often there are leads besides lags in the temperature response associated with inline temperature control loops for jackets. For heat exchangers in a recirculation line for a volume, the self-regulating response of the exchanger outlet temperature controller is followed by a slow integrating response from recirculation of the changes in the volume temperature. Also, feedforward signals that arrive too soon can create an inverse response, and signals that arrive too late create a second disturbance that makes control worse than the original feedback control. Getting the dynamics right by including the automation system dynamics besides the process dynamics is critical (a minimal sketch of a variable dead time block and a backlash-stiction block follows below). ISA Mentor Program Posts & Webinars Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.
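Here is a minimal sketch (not from the original discussion) of two of the automation dynamics blocks mentioned above, a variable dead time block and a crude backlash-stiction block, written in Python for a fixed module execution period:

```python
import collections

class VariableDeadTime:
    """Transport delay block for a fixed module execution period (seconds)."""
    def __init__(self, dead_time_s, period_s, initial=0.0):
        n = max(1, int(round(dead_time_s / period_s)))
        self.buffer = collections.deque([initial] * n, maxlen=n)
    def update(self, value):
        delayed = self.buffer[0]   # value delayed by n execution periods
        self.buffer.append(value)
        return delayed

class BacklashStiction:
    """Crude deadband (backlash) plus resolution (stiction) block in % signal."""
    def __init__(self, deadband_pct=0.5, resolution_pct=0.25, initial_pct=50.0):
        self.position = initial_pct
        self.deadband = deadband_pct
        self.resolution = resolution_pct
    def update(self, demand_pct):
        error = demand_pct - self.position
        if abs(error) > self.deadband:
            # once the deadband is overcome, move in quantized resolution steps
            steps = int((abs(error) - self.deadband) / self.resolution)
            self.position += (1 if error > 0 else -1) * steps * self.resolution
        return self.position

# Example: a 0.3% step stays inside the deadband, so the valve does not move
valve = BacklashStiction()
print(valve.update(50.3), valve.update(51.5))
```

Commercial simulation packages implement these effects with more rigor; the point of the sketch is that both are simple to add and are often the dominant source of loop dead time and limit cycling.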
We learn the most from our mistakes. To avoid the price of making them in the field, we can use dynamic simulation as a safe, non-intrusive way of hands-on learning for exploration and prototyping of existing and new systems, finding good and bad effects with much more flexibility than experimenting on the process. Dynamic models using the digital twin enable a deeper process understanding to be gained and used to make much more intelligent automation. See the Control Talk blog Simulation breeds innovation for an insightful history and future of opportunities for a safe sandbox allowing creativity by the synergy of process and automation system knowledge. Often simulation fidelity is simply stated as low, medium or high. I prefer defining at least five levels as seen below in the chapter Tip #98: How to Achieve Process Simulation Fidelity in the ISA book 101 Tips for a Successful Automation Career . Note that the term “virtual plant” I have been using for decades should be replaced with the term “digital twin” in my books and articles prior to 2018 to be in tune with the terminology for digitalization and digital transformation.
Fidelity Level 1: measurements can match setpoints and respond in the proper direction to loop outputs; for operator training.
Fidelity Level 2: measurements can match setpoints and respond in the proper direction when control and block valves open and close and prime movers (e.g., pumps, fans, and compressors) start and stop; for operator training.
Fidelity Level 3: loop dynamics (e.g., process gain, time constant, and deadtime) are sufficiently accurate to tune loops, prototype process control improvements, and see process interactions; for basic process control demonstrations.
Fidelity Level 4: measurement dynamics (e.g., response to valves, prime movers, and disturbances) are sufficiently accurate to track down and analyze process variability and quantitatively assess control system capability and improvement opportunities; for rating control system capability, and conducting control system research and development.
Fidelity Level 5: process relationships and metrics (e.g., yield, raw material costs, energy costs, product quality, production rate, production revenue) and process optimums are sufficiently accurately modeled for the design and implementation of advanced control, such as model predictive control (MPC) and real time optimization (RTO), and in some cases virtual experimentation.
A lot of learning is possible by using Fidelity Level 3 models. Fidelity Level 4 and 5 simulations with advanced modeling objects are generally needed for complex unit operations where components are being separated or formed, such as biological and chemical reactors and distillation columns, or to match the dynamic response of trajectories in enough detail for advanced process control including PID control that involves feedforwards, decouplers, and state based control. Developing and testing inferential measurements, data analytics, performance metrics, and MPC and RTO applications generally requires Level 5. In all cases I recommend a digital twin that has blocks addressing nearly every type of automation system dynamics and the metrics often neglected in dynamic simulation packages. The digital twin should have the same PID Form, Structure and options used in the process industry and a tool like the Mimic Rough-n-Ready tuner to get started with reasonable PID tuning settings.
Many software packages that were not developed by automation professionals may unfortunately seriously mess you up by not having the many sources of dead time, lags, and nonlinearities, and by employing a PID with a Parallel (Independent) Form working in engineering units instead of percent signals. A fellow protégé also in the UK who is now an automation engineer at Phillips 66 can relate his experiences in using Mimic software. If you pursue this dynamic simulation opportunity, we can do articles and Control Talk blogs together to share the understanding gained to help advance our profession. For Additional Reference: McMillan, Gregory K., and Vegas, Hunter, 101 Tips for a Successful Automation Career . Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
  • Biggest Valve Sizing Mistake

    The post, Biggest Valve Sizing Mistake , appeared first on ControlGlobal.com's Control Talk blog. There is a common mistake made in the sizing of most control valves. The intentions that lead to this mistake may be good, but the results are insidiously bad. While you would think that the proliferation of improvements in technology and communications would lead to better awareness, the problem appears to be getting worse because of pervasive, persistent misconceptions fostered by missing fields on valve specification forms. Presently, valve specification forms have fields for maximum flow, available pressure drop and leakage. Most people filling out the form would think that a valve that can easily handle a greater flow with a lower pressure drop and less leakage would be better. This often leads to rotary valves with tight shutoff seals. These valves are cheaper than sliding stem valves, and the actuators often included are designed for a rotary stem and can handle greater shutoff pressures. The resulting ball and butterfly valves have piston actuators designed more for on-off action. These valves are usually already in the piping specification, used extensively for automated sequential actions and shutdown. While rangeability may not be on the valve specification, it is thought to be extraordinarily great for these valves due to a prevalent definition of rangeability as the maximum flow divided by a minimum flow whose Cv is within the specified inherent flow characteristic. Gosh, you don’t even need to be concerned with piping reducers. What seals the deal is the very attractive price. Not understood is that these on-off valves posing as throttling valves are a disaster. A low valve pressure drop to system pressure drop ratio causes distortion of the installed flow characteristic, making the characteristic much more nonlinear. The backlash of the actuator linkages and the key-lock connections of the actuator shaft to the stem to the ball or disk is excessive, the stiction from the ball or disk seal and shaft packing is terrible, and the resolution of the piston actuator is poor. The result is limit cycles and a real rangeability that is lousy to the point of being a disaster for any loop where control better than within 5% of setpoint is desired. Often the oscillations are blamed on other sources due to lack of understanding. The real rangeability is drastically reduced to perhaps 10% of the stated rangeability due to distortion of the nonlinear installed flow characteristic and the backlash and stiction that get worse near the closed position. Operating valve positions are much less than expected due to conservative factors built into the pump sizing and the specified maximum flow. Most valve suppliers will not do response testing, and if requested, the testing will not be done below 10% valve position because of the deterioration in response. The user is set up for a terrible scenario of limit cycling. So what can we do? Please add backlash and resolution (e.g., 0.5%), response time (e.g., 2 sec), and installed flow characteristic valve gain (e.g., 0.5 to 2.0% flow per % signal) requirements at 10%, 50% and 90% positions for step changes of 0.25%, 0.5%, 1%, and 2% for all valves, plus a 50% step change for surge valves and gas pressure valves (a sketch for checking step test results against such requirements appears at the end of this post).
To achieve these specification requirements, use splined shaft connections, an integrally cast stem and ball or disk, v-notch balls and contoured disks, and low friction seals for rotary valves, a valve pressure drop that is at least 25% of the system pressure drop at maximum flow, low friction packing (e.g., ultra low friction (ULF) packing), sensitive diaphragm actuators (now available for much higher actuator pressures), and digital positioners tuned with maximum gain and no integral action for all valves. The installed flow characteristic should be plotted with the help of process and piping engineers for the worst case. When asked why the valve cost is higher, tell them the cheaper valve will cause sloppy control, putting plant safety seriously at risk. Remember bigger is not better and cheaper is not really cheaper in the long run. For much more than you ever wanted to know about valve response, check out the Control article “ How to specify valves and positioners that don’t compromise control ” and the associated white paper “Valve Response - Truth or Consequences”. For much more on valve specification see the ISA Mentor Program Q&A post “ Basic Guidelines for Control Valve Selection and Sizing ”. If you want more information on opportunities to learn what is really important, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts . You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.
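As a rough illustration of checking step test results against the kind of requirements suggested above, here is a minimal Python sketch; the test records, limits, and use of an 86% response time metric are hypothetical examples, not values from the post:

```python
# Each record: valve position (%), signal step (%), resulting flow change (%),
# and time to reach 86% of the final flow change (s); hypothetical test data
tests = [
    {"position": 10, "step": 0.25, "dflow": 0.05, "t86": 3.5},
    {"position": 50, "step": 0.50, "dflow": 0.60, "t86": 1.2},
    {"position": 90, "step": 2.00, "dflow": 1.50, "t86": 1.8},
]

MAX_T86 = 2.0                  # s, example response time requirement
GAIN_LO, GAIN_HI = 0.5, 2.0    # % flow per % signal, example installed gain window

for t in tests:
    gain = t["dflow"] / t["step"]
    problems = []
    if t["dflow"] <= 0.0:
        problems.append("no movement: step is inside the backlash/stiction band")
    elif not GAIN_LO <= gain <= GAIN_HI:
        problems.append(f"installed gain {gain:.2f} outside {GAIN_LO}-{GAIN_HI}")
    if t["t86"] > MAX_T86:
        problems.append(f"86% response time {t['t86']} s exceeds {MAX_T86} s")
    print(f"{t['position']}% position, {t['step']}% step:",
          "; ".join(problems) if problems else "meets the example requirements")
```

The value of writing the requirement this way is that the response test near the seat, where backlash and stiction are worst, is evaluated on the same footing as the mid-stroke test that suppliers normally report.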
  • How to Avoid Using Multivariable Flow Transmitters

    The post How to Avoid Using Multivariable Flow Transmitters first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Jeff Downen. Jeff Downen is an I&C commissioning engineer with cross-training in DCS and high voltage electrical testing. His expertise is in start-up and commissioning of natural gas combined cycle power plants. Jeff Downen’s Question Our multivariable flow transmitters on new construction sites fail a lot. If the transmitter loses the RTD, the whole 4-20 mA loop goes bad quality along with the HART variables. I much prefer the three devices being separate with their signals joined in the DCS logic. I understand that it is more expensive. I want to see if there was any other reasoning behind it on the engineering side and how I can help get a better up front design. How can we avoid the increasing use of multivariable flow transmitters as an industry standard despite a significant loss in reliability, accuracy, and diagnostic and computational capability from not having individual separate pressure, temperature and flow sensors and transmitters? Greg Brietzke’s Answer I like Jeff’s question on multivariable flow transmitters, as it would be relevant to control engineers, maintenance/reliability engineers, as well as maintenance personnel. What is the application? What are the accuracy requirements? Can you bring the individual variables back to the DCS/PLC through additional variable assignment? Would the increased cost of infrastructure justify the increased expense of a true mass flowmeter? This could be addressed from so many different viewpoints it could be a great discussion topic. ISA Mentor Program Posts & Webinars Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars. Greg McMillan’s Answer I suggest you explain to plant and project personnel the advantages of separate measurements and true mass flowmeters. Separate flow, temperature and pressure measurements offer better diagnostics, reliability, sensors, and installation locations, which is particularly important for temperature (e.g., an RTD in a tapered thermowell with the tip centered in a pipe with a good velocity profile). They can provide faster and perhaps more accurate and maintainable measurements that could be used for personalized performance monitoring calculations and safety instrumented systems. Coriolis meters provide the only true mass flow measurements, offering an incredibly accurate density measurement as well. Most people don’t realize that pressure and temperature compensation of volumetric flow meters to get a mass flow measurement only works if the concentration is constant and known (see the sketch at the end of this post). The Coriolis mass flow is not affected by component concentrations or physical properties in the same phase. Density can provide an inferential measurement of concentration for a two component process fluid.
The Coriolis meter accuracy and rangeability is the best by far as noted in the Control Talk column Knowing the best is the best . David De Sousa’s Answer Using dedicated and separated measurements also allows for the use of hybrid virtual flowmeters in complex process applications where, for example, the technology for inline multiphase flow metering is not yet mature enough, or where physical units will greatly increase the cost of the associated facilities. With the digital transformation initiatives associated with Industry 4.0 , the use of distributed instrumentation, data-driven learning algorithms, and physical flow models are being tested and explored more and more in the process industries, especially in upstream oil & gas wellsite applications. Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).
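To make the compensation caveat concrete, here is a minimal Python sketch of pressure and temperature compensation for a gas DP flow meter, assuming ideal gas behavior and a lumped, hypothetical meter constant; the molecular weight must be known and constant, which is exactly the limitation noted above:

```python
import math

R = 8.314  # kJ/(kmol*K), universal gas constant

def gas_density(p_kpa_abs, t_c, mw_kg_per_kmol):
    """Ideal-gas density in kg/m3 (a simplifying assumption)."""
    return p_kpa_abs * mw_kg_per_kmol / (R * (t_c + 273.15))

def dp_meter_mass_flow(dp_kpa, p_kpa_abs, t_c, mw, k_meter):
    """Mass flow (kg/s) inferred from a DP meter: W = k * sqrt(dp * density).

    k_meter lumps the orifice coefficient and geometry (hypothetical value).
    The molecular weight mw must be known and constant for this to be a true
    mass flow, which is the weakness of pressure and temperature compensation.
    """
    return k_meter * math.sqrt(dp_kpa * gas_density(p_kpa_abs, t_c, mw))

# Same DP, pressure and temperature, but a composition change shifts MW from 28 to 30
print(dp_meter_mass_flow(25.0, 500.0, 40.0, mw=28.0, k_meter=0.05))
print(dp_meter_mass_flow(25.0, 500.0, 40.0, mw=30.0, k_meter=0.05))  # ~3.5% higher
```

A composition change that shifts the molecular weight from 28 to 30 changes the inferred mass flow by roughly 3.5% even though pressure, temperature, and differential pressure are unchanged, which is why a Coriolis meter is preferred when composition varies.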
  • Webinar Recording: Lessons Learned During the Migration to a New DCS

    The post Webinar Recording: Lessons Learned During the Migration to a New DCS first appeared on the ISA Interchange blog site. This educational ISA webinar is introduced by Greg McMillan , an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ), and presented by Hector Torres , in conjunction with the ISA Mentor Program . Hector is a recipient of ISA’s John McCarney Award for the article on opportunities and challenges for enabling new automation engineers . Hector has been a member of the ISA Mentor Program since its inception. In this webinar, he provides a detailed view of how to use key PID controller features that can greatly expand what you can achieve, including the setting of anti-reset windup (ARW) limits, the dynamic reset limit, eight different structures, integral dead band, and the setpoint filter. Feedforward and rate limiting are covered with some innovative application examples. Principal ISA Mentor Program mentee Hector Torres shares his extensive knowledge gained after migrating a plant from a 1980s vintage DCS to a state-of-the-art new DCS. The following important topics are covered: the proper setting of tuning parameters, controller output scales, anti-reset windup limits, and the many grounding, wiring, and configuration practices found to be essential in a migration project that exceeded expectations. ISA Mentor Program Posts & Webinars Did you find this information of value? Want more? Click this link to view other ISA Mentor Program blog posts, technical discussions and educational webinars.
  • Missed Opportunities in Process Control - Part 6

    The post, Missed Opportunities in Process Control - Part 6 , first appeared on the ControlGlobal.com Control Talk blog. Here is the sixth part of a point-blank, decisive, comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts . You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry. Add small amounts of dissolved carbon dioxide (DCO2) and conjugate salts to make computed titration curves match laboratory titration curves . The great disparity between theoretical and actual titration curves is due to conjugate salts and incredibly small amounts of DCO2 from simple exposure to air and the corresponding amount of carbonic acid created. Instead of titration curve slopes and thus process gains increasing by 6 orders of magnitude as you go from 0 to 7 pH for strong acids and strong bases, in reality the slope increases by 2 orders of magnitude, still a lot but 4 orders of magnitude off. Thus, control system analysis and supposed linearization by translation of the controlled variable from pH to hydrogen ion concentration by the use of theoretical equations for a strong acid and strong base is off by 4 orders of magnitude. I made this mistake early in my career (about 40 years ago) but learned at the start of the 1980s that DCO2 was the deal breaker. I have seen the theoretical linearization published by others about 20 years ago and most recently just last year. For all pH systems, the slope between 4 and 7 pH is greatly moderated due to the carbonic acid pKa = 6.35 at 25 degrees Centigrade. The titration curve is also flattened within two pH units of the logarithmic acid dissociation constant (pKa) of an acid or base that has a conjugate salt. To match computer generated titration curves to laboratory titration curves, add small amounts of DCO2 and conjugate salts as detailed in the Chemical Processing feature article “ Improve pH control ” (a minimal sketch of the DCO2 effect follows below).
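As a rough illustration of the DCO2 effect (not from the article), here is a minimal Python sketch that computes the pH of a strong acid and strong base mixture from the charge balance, with and without a trace of total dissolved carbonate, assuming SciPy is available, 25 degree C constants, and no other conjugate salts or dilution effects:

```python
import numpy as np
from scipy.optimize import brentq

KW  = 1.0e-14
KA1 = 10.0 ** -6.35    # carbonic acid pKa1 at 25 deg C
KA2 = 10.0 ** -10.33   # bicarbonate pKa2 at 25 deg C

def ph(ca, cb, ct):
    """pH of a strong acid (ca), strong base (cb) and total carbonate (ct)
    mixture, all in mol/L, solved from the charge balance (activities and
    dilution are neglected)."""
    def charge_balance(h):
        denom = h * h + KA1 * h + KA1 * KA2
        hco3 = ct * KA1 * h / denom
        co3 = ct * KA1 * KA2 / denom
        return cb + h - ca - KW / h - hco3 - 2.0 * co3
    return -np.log10(brentq(charge_balance, 1.0e-15, 10.0))

ca = 0.01                                          # mol/L strong acid
offsets = [-1e-4, -1e-5, 0.0, 1e-5, 3e-5, 1e-4]    # base excess near equivalence
for ct in (0.0, 1.0e-4):                           # no CO2 vs a trace of dissolved CO2
    curve = [round(ph(ca, ca + d, ct), 2) for d in offsets]
    print(f"total carbonate {ct:.0e} mol/L -> pH near equivalence: {curve}")
```

With zero carbonate the pH jumps several units across the equivalence point, while only 1e-4 mol/L of total carbonate greatly moderates the curve through the 5 to 7 pH region, which is why computed curves that ignore DCO2 overstate the process gain.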
Realize there is a multiplicative effect for biological process kinetics that creates restrictions on experimental methods to analyze or predict cell growth or product formation. While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has specific growth rate and product formation as the result of a multiplication of factors, each between 0 and 1, detailing the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die. It does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to 1 and as constant as possible except for the one of interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly 1. Take advantage of the great general applicability and ease of parameter adjustment in Michaelis-Menten equations for the effect of concentrations and Convenient Cardinal equations for the effect of temperature and pH on biological processes . The Mimic bioreactor model in a digital twin takes advantage of these breakthroughs in first principle modeling. For temperature and pH, Convenient Cardinal equations are used where the optimum temperature or pH for the growth and production phases is simply the temperature or pH setpoint, including any shifts for batch phases. The minimum and maximum temperatures complete the parameter settings. This is a tremendous advancement over traditional uses of Arrhenius equations for temperature and Villadsen-Nielsen equations for pH that required parameters that are not readily available and must be set with a precision of six or seven decimal places. Generalized Michaelis-Menten equations shown to be useful for modeling intracellular dynamics can model the extracellular limitation and inhibition effects of concentrations. The equations provide a link between macroscopic and microscopic kinetic pathways. If the limiting or inhibition effect is negligible or needs to be temporarily removed, the limitation and inhibition parameter is simply set to 0 g/L and 100 g/L, respectively. The biological significance and ease of setting parameters is particularly important since most kinetics are not completely known and what is defined can be quite subject to restrictions on operating conditions. These revolutionary equations enable the same generalized kinetic model to be used for all types of cells. Previously, yeast cells (e.g., ethanol production), fungal cells (e.g., antibiotic production), bacterial cells (e.g., simple proteins), and mammalian cells (e.g., complex proteins) had specialized equations developed that did not generally carry over to different cells and products (a minimal sketch of the multiplicative kinetic structure follows below).
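The multiplicative structure with limitation, inhibition, and cardinal temperature factors can be sketched in a few lines of Python; all of the parameter values below (cardinal temperatures, Ks and Ki values, mu_max) are hypothetical placeholders, and the cardinal form shown is the common Rosso-type equation rather than the specific equations in the Mimic model:

```python
def monod_limitation(s, ks):
    """Michaelis-Menten (Monod) limitation factor, 0 to 1; ks = 0 removes it."""
    return s / (ks + s) if ks > 0 else 1.0

def inhibition(i, ki):
    """Simple inhibition factor, 0 to 1; a large ki makes it negligible."""
    return ki / (ki + i)

def cardinal_temperature(t, t_min, t_opt, t_max):
    """Rosso-type cardinal temperature factor, 0 at t_min and t_max, 1 at t_opt."""
    if t <= t_min or t >= t_max:
        return 0.0
    num = (t - t_max) * (t - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (t - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * t))
    return num / den

def specific_growth_rate(mu_max, temp_c, glucose, glutamine, lactate, do_pct):
    """Multiplicative kinetic structure: every factor scales all the others."""
    return (mu_max
            * cardinal_temperature(temp_c, 30.0, 37.0, 42.0)  # hypothetical cardinal temps
            * monod_limitation(glucose, 0.5)                  # g/L, hypothetical Ks
            * monod_limitation(glutamine, 0.3)                # g/L, hypothetical Ks
            * monod_limitation(do_pct, 10.0)                  # % dissolved oxygen
            * inhibition(lactate, 20.0))                      # g/L, hypothetical Ki

# Plenty of glucose cannot make up for a temperature close to the maximum
print(specific_growth_rate(0.04, 41.8, glucose=5.0, glutamine=2.0, lactate=1.0, do_pct=60.0))
print(specific_growth_rate(0.04, 37.0, glucose=5.0, glutamine=2.0, lactate=1.0, do_pct=60.0))
```

Running it shows the growth rate dropping by more than an order of magnitude when the temperature approaches the maximum, no matter how favorable the glucose, glutamine, oxygen, and lactate factors are, which is the multiplicative effect that additive models miss.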
Always use smart sensitive valve positioners with good feedback of actual position, tuned with a high gain and no integral action, on true throttling valves (please read the following despite its length since misunderstandings are pervasive and increasing). A very big and potentially dangerous mistake persists today from a decades-old rule that positioners should not be used on fast loops. The omission of a sensitive and tuned valve positioner can increase limit cycle period and amplitude by an order of magnitude and severely jeopardize rangeability and controllability. Without a positioner, some valves may require a 25% change in signal to open, meaning that controlling below 25% signal is unrealistic. As a young lead I&E engineer for the world’s largest acrylonitrile plant back in the mid-1970s, I used the rule about fast loops. The results were disastrous. I had to hurriedly install positioners on all the loops during startup. A properly designed, installed and tuned positioner should have a response time of less than 0.5 seconds. A positioner gain greater than 10 is sought, with rate action added when offered. Positioners developed in the 1960s were proportional only with a gain of 50 or more and a high sensitivity (0.1%). Since then, spool positioners with extremely poor sensitivity (2%) have been offered, with integral action included and even recommended in some misguided documents by major suppliers. Do not use integral action in the positioner despite default settings to the contrary. A volume booster can be added to the positioner output to make the response time faster. Using a volume booster instead of a positioner is dangerous as explained in the next point. If you cannot achieve the 0.5 second response time, something is wrong with the type of valve, packing, positioner, installation and/or tuning of the positioner; this is not a reason to say that positioners should not be used on fast loops. An increasing threat ever since the 1970s has been on-off valves posing as throttling valves. They are much less expensive, are in the piping spec, and have much tighter shutoff. The actuator shaft feedback may change even though the actual ball or disk position has not changed for a signal change of 8% or more. In this case, even the best positioner is of little help since it is being lied to as to actual position. Valve specifications have an entry for leakage but typically have nothing on valve backlash, stiction, response time, and actuator sensitivity. I can’t seem to get even a discussion started as to how to get this changed and how rangeability and controllability are so adversely affected. If you need a faster response or are stuck with an on-off valve, then you need to consider a variable frequency drive with a pulse width modulated inverter (see point 35 in part 4 of this series). Also be aware that theoretical studies based solely on process dynamics are seriously flawed, since for most fast loops the sensor, transmitter, signal filter, scan, and execution times are a larger source of time constants and dead time than the actual process, making the loop much slower than what is shown in studies based on the process response, as noted in point 9 in part 1. For much more on how to deal with this increasing threat read the Control articles “ Is your control valve an imposter?” and “ How to specify valves and positioners that don’t compromise control ”. Put a volume booster on the output of the positioner with a bypass valve opened just enough to make the valve stroking time much faster, recognizing that replacing a positioner with a booster poses a major safety risk . Another decades-old rule said to replace a positioner with a booster on fast loops. For piston actuators, a positioner is required to work at all. For diaphragm actuators, a volume booster instead of a positioner creates positive feedback (flexure of the diaphragm changes the volume and consequently the pressure seen by the booster's highly sensitive outlet port), causing a fail-open butterfly valve to slam shut from fluid forces on the disk. This happened to me on a huge compressor and was subsequently avoided on the next project when I showed I could position 24-inch fail-open butterfly valves for furnace pressure control by simply grabbing the shaft, due to the positive feedback from the booster and diaphragm actuator combination. Since properly sized diaphragm actuators generally have an order of magnitude better sensitivity than piston actuators and the operating pressure of newer diaphragm actuators has been increased, diaphragm actuators are increasingly the preferred solution. Understand and address the reality that processes have either a dead time dominant, balanced self-regulating, near-integrating, true integrating, or runaway response.
Most of the literature studies balanced self-regulating processes where the process time constant is about the same size as the process dead time. Some studies address dead time dominant processes where the dead time is much greater than the process time constant. Dead time dominant processes are less frequent and mostly occur when there is a large dead time from a process transportation delay (e.g., plug flow volumes or conveyors) or from an analyzer sample and cycle time (see points 1 and 9 in part 3 on how to address these applications). The more important loops tend to be near-integrating, where the process time constant is more than 4 times larger than the process dead time; true integrating, where the process will continually ramp when the controller is in manual; and runaway, where the process deviation will accelerate when the controller is in manual. Continuous temperature and composition loops on volumes with some degree of mixing due to reflux, recycle or agitation have a near-integrating response. Batch composition and temperature have a true integrating response. The runaway response occurs in highly exothermic (typically polymerization) reactors but is never actually observed because it is too dangerous to put the controller in manual during a reaction long enough to see much acceleration. Most gas pressure loops and, of course, nearly all level loops have an integrating response. It is critical to tune PID controllers on near-integrating, true integrating and runaway processes with maximum gain, minimum reset action and maximum rate action so that the PID can provide the negative feedback action missing in these processes. As a practical matter, near-integrating, true integrating, and runaway processes are tuned with integrating process tuning rules where the initial ramp rate is used to estimate an integrating process gain, as in the sketch below.
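As a hedged illustration of these integrating process tuning rules, the sketch below estimates a near-integrating process gain from the change in PV ramp rate for a step in controller output and applies one common lambda-style tuning rule for integrating processes. The rule, variable names, and numbers are textbook-style assumptions for illustration rather than a prescription for any particular loop.

```python
# Sketch: a near-integrating process gain identified from the change in PV ramp
# rate for a step in controller output, then used in one common lambda-style
# tuning rule for integrating processes. All numbers are illustrative.

ramp_before = -0.2    # %/min, PV ramp rate before the output step
ramp_after = 0.6      # %/min, PV ramp rate after the output step
delta_output = 5.0    # %, step in controller output
dead_time = 0.5       # min, total loop dead time

ki = (ramp_after - ramp_before) / delta_output      # integrating process gain, %/min per %

lam = 3.0 * dead_time                               # arrest time; larger is more robust
kc = (2.0 * lam + dead_time) / (ki * (lam + dead_time) ** 2)   # PID gain
ti = 2.0 * lam + dead_time                          # reset time, min (high gain, slow reset)

print(f"Ki = {ki:.2f} %/min per %, Kc = {kc:.1f}, Ti = {ti:.1f} min")
```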
Maximize synergy between chemical engineering, biochemical engineering, electrical engineering, mechanical engineering and computer science. All of these degrees bring something to the party for a successful automation system implementation. The following simplification provides some perspective: chemical and biochemical engineers offer process knowledge, electrical engineers offer control, instrumentation and electrical system knowledge, mechanical engineers offer equipment and piping knowledge, and computer scientists offer data historian and industrial internet knowledge. All of these people plus operators should be involved in process control improvements and whatever is expected from the next big thing (e.g., Industrial Internet of Things, Digitalization, Big Data and Industry 4.0). The major technical societies, especially AIChE, IEEE, ISA, and ASME, should see the synergy of an exchange of knowledge rather than the current view of other societies as competition. Identify and document justifications to develop new skills, explore new opportunities and innovate. Increasing emphasis on reducing project costs is overloading practitioners to the point they don't have time to attend short courses, symposiums or even online presentations. Contributing factors are the loss of expertise from retirements, fear of making any changes, and present-day executives who have no industry experience and are focused on financials, such as reducing project expenditures and shortening project schedules. At this point practitioners must be proactive and investigate opportunities and process metrics on their own time. Developing skills with the digital twin can be a way of defining and showing associates and management the type and value of improvements, as noted in all points in part 5. The digital twin with demonstrated key performance indicators (KPIs) showing the value of increases in process capacity or efficiency, plus data analytics and Industry 4.0, can lead to people teaching people, eliminating silos, spurring creativity and deeper involvement, nurturing a sense of community and common objectives, and connecting the layers of automation and expertise so everybody knows everybody. To advance our profession, practitioners should seek to publish what is learned, which can be done generically without disclosing proprietary data. Use inferential measurements periodically corrected by at-line analyzers to provide fast analytical measurements of key process compositions. First principle models or experimental models identified by model predictive control or data analytics software can be used to provide immediate composition measurements with no delay associated with the process sample system and analyzer cycle time. The inferential measurement result is synchronized with an at-line analyzer result by the insertion of a dead time equal to the sample transportation delay plus 1.5 times the analyzer cycle time. A fraction (usually less than 0.5) of the difference between the inferential measurement and the at-line analyzer result, after elimination of outliers, is added to correct the inferential measurement whenever there is an updated at-line analyzer result (a minimal sketch of this correction appears at the end of this post). Use inline analyzers and at-line analyzers whose sensors are in the process or near the process, respectively. There are many inline sensors available today (e.g., conductivity, chlorine, density, dissolved carbon dioxide, dielectric spectroscopy, dissolved oxygen, focused beam reflectance, laser based measurements, pH, turbidity, and viscosity). The next best alternative is an at-line analyzer located as close as possible to the process connection to minimize sample transportation delay. The practice of locating all analyzers in one building creates horrendous dead time. An example of an innovative fast at-line analyzer capable of extensive sensitive measurements of components plus cell concentration and size for biological processes is the Nova Bioprofile Flex. Chromatographs, near infrared, mass spectrometers, nuclear magnetic resonance, and MLT gas analyzers using a combination of non-dispersive infrared, ultraviolet and visible spectroscopy with electrochemical and paramagnetic sensors have increased functionality and maintainability.
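Here is the minimal sketch promised in the inferential measurement point above, showing the synchronization dead time and the fractional bias correction it describes. The class name, execution interval, default correction fraction, and the omission of outlier screening are simplifying assumptions.

```python
from collections import deque

# Sketch of an inferential composition measurement corrected by an at-line
# analyzer. The execution interval, correction fraction, and class name are
# illustrative assumptions; outlier screening of the analyzer result is omitted.

class InferentialCorrection:
    def __init__(self, sample_delay_s, cycle_time_s, fraction=0.3, exec_interval_s=1.0):
        # Synchronize the model with the analyzer by delaying the model output by
        # the sample transportation delay plus 1.5 times the analyzer cycle time.
        delay_s = sample_delay_s + 1.5 * cycle_time_s
        self._history = deque(maxlen=max(1, round(delay_s / exec_interval_s)))
        self._fraction = fraction   # usually less than 0.5
        self._bias = 0.0

    def update(self, model_value, analyzer_value=None):
        """Call every execution; pass analyzer_value only when a new result arrives."""
        self._history.append(model_value)
        if analyzer_value is not None and len(self._history) == self._history.maxlen:
            # Bias updates begin once the delay history has filled.
            synced_model = self._history[0] + self._bias   # model value seen by the analyzer
            self._bias += self._fraction * (analyzer_value - synced_model)
        return model_value + self._bias                    # corrected inferential measurement

# Example: 60 s sample transport delay, 300 s analyzer cycle, correction fraction 0.3
corrector = InferentialCorrection(sample_delay_s=60.0, cycle_time_s=300.0)
fast_estimate = corrector.update(model_value=12.1)                      # most executions
corrected = corrector.update(model_value=12.2, analyzer_value=12.6)     # when analyzer updates
```

In practice the outlier elimination and the choice of correction fraction matter as much as the structure; a fraction well below 1 keeps analyzer noise from being passed straight through to the corrected measurement.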
  • How Often Do Measurements Need to Be Calibrated?

The post How Often Do Measurements Need to Be Calibrated? first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Greg Breitzke. Greg Breitzke is an E&I reliability specialist – instrumentation/electrical for Stepan. Greg has focused his career on project construction and commissioning as a technician, supervisor, or field engineer. This is his first in-house role, and he is tasked with reviewing and updating plant maintenance procedures for I&E equipment. Greg Breitzke's Question I am working through an issue that can be beneficial to other Mentor Program participants. NFPA 70B provides a detailed description of the prescribed maintenance and frequency based on equipment type, making the electrical portion fairly straightforward. The instrumentation is another matter. We are working to consolidate an abundance of current procedures based on make/model to a reduced list based on technology. The strategy is to "right size" frequencies for calibration and functional testing, decreasing non-value maintenance to free up capacity for value-added activities within the existing head count. My current plan for the instrumentation consists of: (1) sorting through the historical paper files of calibration records to determine how long a device has remained in tolerance before a correction was applied, (2) comparing that data against any work orders written against the asset that may reduce the frequency, and (3) applying safety factors relative to the device's impact on safety, regulatory compliance, quality, custody transfer, basic control, or indication only. I am trying to provide a reference baseline for review of these frequencies, but am having little luck in the industry standards I have access to. Is there a standard or RAGAGEP for calibration and functional testing frequency min/max by technology that I can reference for a baseline? Nick Sands' Answer The ISA recommended practice is not on the process of calibration but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems. While I contributed, Leo Staples would be a good person for more explanation. For SIS, there is a requirement to perform calibration, which is comparison against a standard device, within a documented frequency and with documented limits, and correction when outside of limits. This is also required by OSHA for critical equipment under the PSM regulation. EPA has similar requirements under PSRM of course. Correction when out of limits is considered a failed proof test of the instrument in some cases, potentially affecting the reliability of the safety function. Paul Gruhn would be a good person for more explanation. Paul Gruhn's Answer The ISA/IEC 61511 standard is performance based and does not mandate specific frequencies. Devices must be tested at some interval to make sure they perform as intended.
The frequency required will be based on many different factors (e.g., SIL (performance) target, failure rate of the device in that service, diagnostic coverage, redundancy used (if any), etc.). Leo Staples' Answer Section 5.6 of the ISA technical report ISA-RP105.00.01-2017 addresses calibration verification intervals or frequencies in detail. Users should establish calibration intervals for a loop/component based on the following: the criticality of the loop/component, the performance history of the loop/component, the ruggedness/stability of the component(s), and the operating environment. Exceptions include SIS related devices where calibration intervals are established to meet SIL requirements. Other factors that can drive calibration intervals include contracts and regulatory requirements. The idea for the technical report came about after years of frustration dealing with ambiguous gas measurement contracts and government regulations. In many cases these simply stated that users should follow good industry practices when addressing all aspects of calibration. Calibration intervals alone do not address the other major factors that affect measurement accuracy. These include the accuracy of the calibration equipment, the knowledge of the calibration personnel, adherence to defined calibration procedures, and the knowledge of the personnel responsible for the calibration program. I have lots of war stories if anyone is interested. One of the last things that I did at my company before I retired was develop a Calibration Program Standard Operating Procedure (SOP) based on ISA-RP105.00.01-2017. The SOP was designed for use in the Generation, Transmission & Distribution, and other Divisions of the Company. Some of you may find this funny, but it was even used to determine the calibration frequency for NERC CIP physical security entry control point devices. Initially, personnel from the Physical Security Department were testing these devices monthly only because that was what they had always done. While this was before the SOP was established, my team used its concepts in establishing the calibration intervals for these devices. This work was well received by the auditors. As a side note, the review of monthly calibration intervals for these devices found that the practice caused more problems than it prevented. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone's career. Click this link to learn more about the ISA Mentor Program. Greg McMillan's Answer The measurement drift can provide considerable guidance: when the number of months between calibrations multiplied by the drift per month approaches the allowable error, it is time for a calibration check. Most transmitters today have a low drift rate, but thermocouples and most electrodes have a drift rate much larger than the transmitter. The past records of calibration results will provide an update on the actual drift for an application. Also, fouling of sensors, particularly electrodes, is an issue revealed by the 86% response time during calibration tests (often overlooked). The sensing element is the most vulnerable component in nearly all measurements.
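As a back-of-the-envelope illustration of this drift rule, the short sketch below computes a maximum calibration interval from an assumed allowable error and drift rate; the numbers and the safety factor are hypothetical.

```python
# Back-of-the-envelope check of the rule above: when months between checks times
# the drift per month approaches the allowable error, a calibration check is due.
# The numbers and the safety factor are hypothetical.

allowable_error = 1.0     # engineering units (e.g., degC) the application can tolerate
drift_per_month = 0.05    # units/month, estimated from past calibration records
safety_factor = 0.5       # schedule the check before the full allowance is consumed

max_interval_months = safety_factor * allowable_error / drift_per_month
print(f"calibration check due within about {max_interval_months:.0f} months")   # ~10 months
```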
Calibration checks should be made more frequently at the beginning to establish a drift rate and near the end of the sensor life when drift and failure rates accelerate. Sensor life for pH electrodes can decrease from a year to a few weeks due to high temperature, solids, strong acids and bases (e.g., caustic) and poisonous ions (e.g., cyanide). For every 25°C increase in temperature, the electrode life is cut in half unless a high temperature glass is used. Accuracy is particularly important for primary loops (e.g., composition, pH, and temperature) to ensure you are at the right operating point. For secondary loops whose setpoint is corrected by a primary loop, accuracy is less of an issue. For all loops, the 5 Rs (reliability, resolution, repeatability, rangeability and response time) are important for measurements and valves. Drift in a primary loop sensor shows up as a different average controller output for a given production rate assuming no changes in raw materials, utilities, or equipment. Fouling of a sensor shows up as an increase in dead time and oscillation loop period. Middle signal selection using 3 separate sensors provides an incredible amount of additional intelligence and reliability, reducing unnecessary maintenance. Drift shows up as a sensor with a consistently increasing average deviation from the middle value. The resulting offset is obvious. Coating shows up as a sensor lagging changes in the middle value. A decrease in span shows up as a sensor falling short of the middle value for a change in setpoint. The installed accuracy greatly depends upon installation details and the process fluid, particularly taking into account sensor location in terms of seeing a representative indication of the process with minimal measurement noise. Changes in phase can be problematic for nearly all sensors. Impulse lines and capillary systems are a major source of poor measurement performance as detailed in the Control Talk columns Prevent pressure transmitter problems and Your DP problems could be a result of improper use of purges, fills, capillaries and seals. At the end of this post, I give a lot more details on how to minimize drift and maximize accuracy and repeatability by better temperature and pH sensors and through middle signal selection. Free Calibration Essentials eBook For an additional educational resource, download Calibration Essentials, an informative eBook produced by ISA and Beamex. The free eBook provides vital information about calibrating process instruments today. To download the eBook, click this link. Hunter Vegas' Answer There is no easy answer to this very complicated question. Unfortunately the answer is 'it depends' but I'll do my best to cover the main points in this short reply. 1) Yes, there are some instrument technologies that have a tendency to drift more than others. A partial list of 'drifters' might include:
- pH (drifts for all kinds of reasons – aging of probe, temperature, caustic/acid concentration, fouling, etc.)
- Thermocouples (tend to drift more than RTDs, especially at high temperature or in hydrogen service)
- Turbine meters in something other than very clean, lubricating service will tend to age and wear out so they will read low as they age. However, cavitation can make them intermittently read high.
- Vortex meters with piezo crystals can age over time and their low flow cutout increases.
- Any flow/pressure transmitter with a diaphragm seal can drift due to process temperature and/or ambient temperature.
- Most analyzers (oxygen, CO, chromatographs, LEL)
This list could go on and on. 2) Some instrument technologies don't drift as much. I've had good success with Coriolis and radar. (Radar doesn't usually drift as much as it just cuts out. Coriolis usually works or it doesn't. Obviously there are situations where either can drift but they are better than most.) DP in clean service with no diaphragm seals is usually pretty trouble free, especially the newer transmitters that are much more stable. 3) The criticality of the service obviously impacts how often one needs to calibrate. Any of these issues could dramatically impact the frequency: Is it a SIS instrument? The proof testing frequency will be decided by the SIS calculations. Is it an environmental instrument? The state/feds may require calibrations on a particular frequency. Is it a custody transfer meter? If you are selling millions of pounds of X a year you certainly want to make sure the meter is accurate or you could be giving away a lot of product! Is it a critical control instrument that directly affects product quality or throughput? 4) Obviously if a frequency is dictated by the service then that is the end of that. Once those are out of the way one can usually look at the service and come up with at least a reasonable calibration frequency as a starting point. Start calibrating at that frequency and then monitor history. If you are checking a meter every six months and have checked a meter 4 times in the last two years and the drift has remained less than 50% of the tolerance, then dropping back to a 12 month calibration cycle makes perfect sense. Similarly, if you calibrate every 6 months and find the meter drift is > 50% every calibration then you probably need to calibrate more often. However, if the meter is older it may be cheaper to replace it with a new transmitter which is more stable. 5) The last comment I'll make is to make sure you are actually calibrating something that matters. I could go on for pages about companies who are diligently calibrating their instrumentation but aren't actually calibrating their instrumentation. In other words they go through the motions, fill out the paperwork, and can point to reams of calibration logs, yet they aren't adequately testing the instrument loop and it could still be completely wrong. (For instance, shooting a temperature transmitter loop but not actually checking the RTD or thermocouple that feeds it, using a simulator to shoot a 4-20mA signal into the DCS to check the DCS reading but not actually testing the instrument itself, etc.) They often check one small part of the loop and, after a successful test, consider the whole loop 'calibrated'. Greg McMillan's Answer The Process/Industrial Instruments and Controls Handbook Sixth Edition (2019), edited by me and Hunter Vegas, provides insight on how to maximize accuracy and minimize drift for most types of measurements. The following excerpt written by me is for temperature: Temperature The repeatability, accuracy and signal strength are two orders of magnitude better for an RTD compared to a TC. The drift for an RTD below 400°C is also two orders of magnitude less than that of a TC. The 1 to 20°C drift per year of a TC is of particular concern for biological and chemical reactor and distillation control because of the profound effect on product quality from control at the wrong operating point.
The already exceptional accuracy for a Class A RTD of 0.1°C can be improved to 0.02°C by "sensor matching," where the four constants of the Callendar-Van Dusen (CVD) equation provided by the supplier for the sensor are entered into the transmitter. The main limit to the accuracy of an RTD is the wiring. The use of three extension lead wires between the sensor and the transmitter or input card can enable the measurement to be compensated for changes in lead wire resistance due to temperature, assuming the change is exactly the same for both lead wires. The use of four extension lead wires enables total compensation that accounts for the inevitable uncertainty in the resistance of the lead wires. Standard lead wires have a tolerance of 10% in resistance. For 500 feet of 20 gauge lead wire, the error could be as large as 26°C for a 2-wire RTD and 2.6°C for a 3-wire RTD. The "best practice" is to use a 4-wire RTD unless the transmitter is located close to the sensor, preferably on the sensor. The transmitter accuracy is about 0.1°C. A handheld signal generator of resistance and voltage can be used to simulate the sensor to check or change a transmitter calibration. The sensor connected to the transmitter with linearization needs to be inserted in a dry block simulator. A bath can be used for low temperatures to test thermowell response time, but a dry block is better for calibration. The reference temperature sensor in the block or bath should be 4 times more accurate than the sensor being tested. The block or bath readout resolution must be better than the best possible precision of the sensor. The block or bath calibration system should have accuracy traceable to the National Metrology Institute of the user's country (NIST in the USA). The accuracy at the normal setpoint, to ensure the proper process operating point, must be confirmed by a temperature test with a block. For a factory assembled and calibrated sensor and thermowell with an integral temperature transmitter, a single point temperature test in a dry block is usually sufficient with minimal zero or offset adjustment needed. For an RTD with "sensor matching," adjustment is often not needed. For field calibration, the temperature of the block must be varied to cover the calibration range to set the linearization, span and zero adjustments. For field assembly, it would be wise to check the 63% response time in a bath. Middle Signal Selection The best solution in terms of increasing reliability, maintainability, and accuracy for all sensors with different durations of process service is automatic selection of the middle value for the loop process variable (PV). A very large chemical intermediates plant extended middle signal selection to all measurements, which, in combination with triple redundant controllers, essentially eliminated the one or more spurious trips per year. Middle signal selection was a requirement for all pH loops in Monsanto and Solutia. The return on investment for the additional electrodes from improved process performance and reduced life cycle costs is typically more than enough to justify the additional capital costs for biological and chemical processes, provided the electrode life expectancy has been proven to be acceptable in lab tests for harsh conditions. The use of the middle signal inherently ignores a single failure of any type, including the most insidious failure that gives a pH value equal to the set point.
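A minimal sketch of middle signal selection and the simple deviation monitoring it enables follows; the readings and function name are invented for illustration.

```python
# Sketch of middle (median) signal selection for three sensors of different ages.
# A single failure of any kind, including a sensor frozen at the setpoint, cannot
# become the selected value. The readings are invented for illustration.

def middle_of_three(a, b, c):
    return sorted((a, b, c))[1]

readings = (6.8, 7.2, 9.9)          # pH from three electrodes; 9.9 is drifting or failed
pv = middle_of_three(*readings)     # 7.2 is used as the loop PV

# A persistent deviation from the middle value flags drift in that electrode
deviations = [round(r - pv, 2) for r in readings]
print(pv, deviations)               # 7.2 [-0.4, 0.0, 2.7]
```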
The middle value reduces noise without introducing the lag from a damping adjustment or signal filter and facilitates monitoring the relative speed of the response and drift, which are indicative of measurement and reference electrode coatings, respectively. The middle value used as the loop PV for well-tuned loops will reside near the set point regardless of drift. A drift in one of the other electrodes is indicative of a plugging or poisoning of its reference. If both of the other electrodes are drifting in the same direction, the middle value electrode probably has a reference problem. If the change in pH for a set point change is slower or smaller for one of the other electrodes, it indicates a coating or loss in efficiency, respectively, for the subject glass electrode. Loss of pH glass electrode efficiency results from deterioration of the glass surface due to chemical attack, dehydration, non-aqueous solvents, and aging accelerated by high process temperatures. Decreases in glass electrode shunt resistance caused by exposure of O-rings and seals to a harsh or hot process can also cause a loss in electrode efficiency. pH Electrodes Here is some detailed guidance on pH electrode calibration from the ISA book Essentials of Modern Measurements and Final Control Elements. Buffer Calibrations Buffer calibrations use two buffer solutions, usually at least 3 pH units apart, which allow the pH analyzer to calculate a new slope and zero value corresponding to the particular characteristics of the sensor to more accurately derive pH from the millivolt and temperature signals. The slope derived from a buffer calibration provides an indication of the condition of the glass electrode, while the zero value gives an indication of reference poisoning or asymmetry potential, which is an offset within the pH electrode itself. The slope of a pH electrode tends to decrease from an initial value relatively close to the theoretical value of 59.16 mV/pH, largely due in many cases to the development of a high impedance short within the sensor, which forms a shunt of the electrode potential. Zero offset values will generally lie within ±15 mV due to liquid junction potential; larger deviations are indications of poisoning. Buffer solutions have a stated pH value at 25°C, but the stated value changes with temperature, especially for stated values that are 7 pH or above. The buffer value at the calibration temperature should be used or errors will result. The values of a buffer at temperatures other than 25°C are usually listed on the bottle, or better, the temperature behavior of the buffer can be loaded into the pH transmitter, allowing it to use the correct buffer value at calibration. Calibration errors can also be caused by buffer calibrations done in haste, which may not allow the pH sensor to fully respond to the buffer solution, especially in the case of a warm pH sensor not being given enough time to cool down to the temperature of the buffer solution. pH transmitters employ a stabilization feature, which prevents the analyzer from accepting a buffer pH reading that has not reached a prescribed level of stabilization in terms of pH change per time.
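As a rough illustration of the slope and zero value computed from a two-buffer calibration, here is a short sketch; the millivolt readings are invented, and an actual transmitter performs this calculation (with buffer temperature compensation) internally.

```python
# Sketch of the slope and zero value computed from a two-buffer calibration.
# The millivolt readings are invented; a real transmitter does this internally
# with buffer temperature compensation.

THEORETICAL_SLOPE = 59.16   # mV/pH at 25 degC

def buffer_calibration(ph1, mv1, ph2, mv2):
    """Return (slope in mV/pH, zero offset in mV at 7 pH) from two buffer readings."""
    slope = (mv1 - mv2) / (ph2 - ph1)          # electrode millivolts fall as pH rises
    zero_offset = mv1 - slope * (7.0 - ph1)    # reading expected at 7 pH
    return slope, zero_offset

slope, zero = buffer_calibration(4.01, 165.0, 7.00, 3.0)
print(f"slope = {slope:.1f} mV/pH ({slope / THEORETICAL_SLOPE:.0%} efficiency), "
      f"zero offset = {zero:.1f} mV")          # about 54 mV/pH (92%) and +3 mV
```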
pH Standardization Standardization is a simple zero adjustment of a pH analyzer to match the reading of a sample of the process solution made using a laboratory or portable pH analyzer. Standardization eliminates the removal and handling of electrodes and the upset to the equilibrium of the reference electrode junction. Standardization also takes into account the liquid junction potential from high ionic strength solutions and non-aqueous solvents in chemical reactions that would not be seen in buffer solutions. For greatest accuracy, samples should be immediately measured at the sample point with a portable pH meter. If a lab sample measurement value is used, it must be time stamped and the lab value compared to a historical online value for a calibration adjustment. The middle signal selected value from three electrodes of different ages can be used instead of a sample pH provided that a dynamic response to load disturbances or setpoint changes of at least two electrodes is confirmed. If more than one electrode is severely coated, aged, broken or poisoned, the middle signal is no longer representative of the actual process pH. Standardization is most useful for zeroing out a liquid junction potential, but some caution should be used when using the zero adjustment. A simple standardization does not demonstrate that the pH sensor is responding to pH, as does a buffer calibration, and in some cases a broken pH electrode can result in a believable pH reading, which may be standardized to a grab sample value. A sample can be prone to contamination from the sample container or even exposure to air; high purity water is a prime example, where a referee measurement must be made on a flowing sample using a flowing reference electrode. A reaction occurring in the sample may not have reached completion when the sample was taken, but will have completed by the time it reaches the lab. Discrepancies between the laboratory measurement and an on-line measurement at an elevated temperature may be due to the solution pH being temperature dependent. Adjusting the analyzer's solution temperature compensation (not a simple zero adjustment) is the proper course of action. It must be remembered that the laboratory or portable analyzer used to adjust the on-line measurement is not a primary pH standard, as is a buffer solution, and while it is almost always assumed that the laboratory is right, this is not always the case. The calibration of pH electrodes for non-aqueous solutions is even more challenging, as discussed in the Control Talk column The wild side of pH measurement. Additional Mentor Program Resources See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project.
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Missed Opportunities in Process Control - Part 5

    Here is the fifth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts. You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry. Simple computation of a process variable rate of change with minimal noise and fast updates enables PID and MPC to optimize batch profiles. Many batch profiles have a key process variable that responds only in one direction. A one directional response occurs for temperature when there is only heating and no cooling or vice versa. Similarly, a one directional response occurs for pH when there is only a base and no acid flow or vice versa. Most batch composition responses are one directional in that the product concentration only increases with time. Integral action assumes that the direction of a process variable (PV) can be changed. Integral action can be turned off by choosing a PID structure of proportional-only (P-only) or proportional-derivative (PD). Integral action is an inherent aspect of MPC, so unless some special modifications are employed, MPC is not used. A solution that enables both MPC and PID control, opening up optimization opportunities, is to translate the controlled variable (CV) to a PV rate of change (ΔPV/Δt). The new CV can change in both directions and the integrating response of a batch PV is now a self-regulating response of a batch CV with a possible steady state (constant slope). Furthermore, the CV is now representative of the batch profile slope and the profile can be optimized. Typically, a steep slope for the start and a gradual slope for the finish of a batch is best. The rate of change calculation simply passes the PV through a dead time block; the output of the block (the old PV) is subtracted from the input of the block (the new PV). This change in PV is divided by the block dead time, which is chosen to maximize the signal to noise ratio (a minimal sketch of this computation is given below). For much more on feedback control opportunities for batch reactors see the Control feature article "Unlocking the Secret Profiles of Batch Reactors". Process variable rate of change computation identifies a compressor surge curve and actual occurrences of surge. A similar computation to the one detailed for batch control can be used to identify a surge point by realizing it occurs when the slope is zero, that is, when the change in discharge pressure (ΔP) divided by the change in suction flow (ΔF) becomes zero (ΔP/ΔF = 0). Thus, the operating point on a characteristic curve can be monitored when there is a significant rate of change of flow by dividing the change in discharge pressure by the change in suction flow, using dead time blocks to create an old PV that is subtracted from the new PV, with the dead time parameter again chosen to maximize the signal to noise ratio.
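As a rough illustration of the rate of change computation just described, here is a minimal sketch in Python of a dead time block and the resulting ΔPV/Δt calculation. The class and function names, execution interval and block dead time are illustrative choices, not from any particular DCS; in practice the standard dead time function block in your control system would be used.

```python
from collections import deque

class DeadTimeBlock:
    """Passes the PV through a dead time: output is the PV from dead_time seconds ago."""
    def __init__(self, dead_time, exec_interval, initial_pv=0.0):
        self.n = max(1, int(round(dead_time / exec_interval)))
        self.dead_time = self.n * exec_interval            # actual block dead time used
        self.buffer = deque([initial_pv] * self.n, maxlen=self.n)

    def update(self, new_pv):
        old_pv = self.buffer[0]      # old PV (block output) from dead_time seconds ago
        self.buffer.append(new_pv)   # new PV is the block input
        return old_pv


def pv_rate_of_change(block, new_pv):
    """New CV for batch profile control: (new PV - old PV) / block dead time."""
    old_pv = block.update(new_pv)
    return (new_pv - old_pv) / block.dead_time


# Example: 30 s block dead time with a 1 s module execution interval
# block = DeadTimeBlock(dead_time=30.0, exec_interval=1.0, initial_pv=pv0)
# slope = pv_rate_of_change(block, pv)   # e.g., deg C per second of batch temperature
```

The same two-block arrangement gives the surge curve slope by dividing the change in discharge pressure by the change in suction flow computed over the same block dead time.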
Approaches to the surge point as a result of a decrease in suction flow can be identified by a slope that becomes very small, realizing that waiting to see a slope of zero is too late. At a zero slope, the process is unstable and the suction flow will jump to a negative value in less than 0.06 seconds, indicating surge. A PV rate of change (ΔPV/Δt) calculation as described for batch control can be used to detect and count surge cycles, but the dead time parameter setting must be small (e.g., 0.2 seconds). The detection of surge can be used to trigger an open loop backup that will prevent additional surge cycles. See the Control feature article "Compressor surge control: Deeper understanding, simulation can eliminate instabilities" for much more enlightenment on the very detrimental and challenging dynamics of compressor surge response and control. Simple future value computation provides better operator understanding, batch end point prediction and full throttle setpoint response. The same calculation of PV rate of change multiplied by an intelligent time interval and added to the current PV can provide immediate updates as to the future value of the PV with a good signal to noise ratio. The time interval should be greater than the total loop dead time since any action taken at that moment by an operator or control system does not have an effect seen until after the total loop dead time. Humans don't intuitively account for this and expect to see the effect of changes made within a few seconds. This leads to successive actions that are counterproductive and PID tuning that tends to emphasize integral action, since integral action is always driving the output in the direction to correct any difference between the setpoint (SP) and PV even if overshoot is imminent. For more on the opportunities see the Control Talk Blog "Future Values are the Future" and the Control feature article "Full Throttle Batch and Startup Response". Process Analytical Technology (PAT) opportunities are greater than ever. The primary focus of the PAT initiative by the FDA is to reduce variability by gaining a better understanding of the process and to encourage pharmaceutical manufacturers to continuously improve processes. PAT is defined in Section IV of the "Guidance for Industry" as follows: "The Agency considers PAT to be a system for designing, analyzing, and controlling manufacturing through timely measurements (i.e., during processing) of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality. It is important to note that the term analytical in PAT is viewed broadly to include chemical, physical, microbiological, mathematical, and risk analysis conducted in an integrated manner. The goal of PAT is to enhance understanding and control the manufacturing process, which is consistent with our current drug quality system: quality cannot be tested into products; it should be built-in or should be by design." New and improved on-line analyzers include dissolved carbon dioxide to help ensure good cell conditions, and turbidity and dielectric spectroscopy to measure cell concentration and viability. At-line analyzers such as mass spectrometers provide off gas concentration for computing oxygen uptake rate (OUR), and the Nova Bioprofile Flex can provide fast and precise analysis of concentrations of medium components such as glucose, lactate, glutamine, ammonium, sodium, and potassium besides cell concentration, viability, size and osmolality.
The Aspectrics encoded photometric NIR can possibly be calibrated to measure the same components plus indicators of weakening cells. Batch end point and profile optimization can be done. The digital twin described in the Control feature article "Virtual plant virtuosity" can be a great asset in increasing process understanding and developing, testing and tuning process control improvements so that implementation is seamless, making the documentation for management of change proactive and efficient. The digital twin kinetics for cell growth and product formation have been greatly improved in the last five years, enabling high fidelity bioreactor models that are easy to fit, largely eliminating the need for proprietary research data. Thus, today bioprocess digital twins make these opportunities a reality, as described in the Bioprocess International, Process Design Supplement feature article "PAT Tools for Accelerated process Development and Improvement". Faster and more productive data analytics by use of digital twin. Nonintrusively, incredibly more relevant inputs can be found and an intelligent and extensive Design of Experiments (DOE) conducted by use of the digital twin without affecting the process. Process control depends on identification of the dynamics and changes in process variables for changes in other process variables, most notably flows, over a wide operating range due to nonlinearity and interactions. Most plant data used for developing principal component analysis (PCA) and predictions by partial least squares (PLS) does not show sufficient changes in the process inputs or process outputs and does not cover the complete possible operating range, especially startup and abnormal conditions when data analytics is most needed. First principle models are exceptional at identifying process gains and can be improved to enable the identification of dynamics by including valve backlash, stiction and response times, mixing and measurement lags, transportation delays, analyzer cycle times, and transmitter and controller update rates. The identification of dynamics is essential for dynamic compensation of inputs for continuous processes so that a process input change is synchronized with a corresponding process output change. Dynamic compensation is not needed for prediction of batch end points but could be useful for batch profile control where the translation of batch component concentration or temperature to a rate of change (batch slope) gives a steady state like a continuous process. Better pairing of controlled and manipulated variables by use of digital twin. The accurate steady state process gains of first principle models enable a comprehensive gain array analysis that is the key tool for finding the proper pairing of variables. Integrating process gains can be converted to steady state gains by computing and using the PV rate of change via the simple computation described in item 41. Online process performance metrics for practitioners and executives enabling justification and optimization of process control improvements by use of digital twin. Online process metrics computing a moving average metric of process capacity or efficiency for a shift, batch, month and quarter can provide the analysis and incentive for operators, process and automation engineers, maintenance technicians, managers and executives. The monthly and quarterly analysis periods are of greatest interest to people making business decisions.
The shift and batch analysis periods are of greatest use to operators and process engineers. All of this is best developed and tested with a digital twin. For more on the identification and value of online metrics, see the Control Talk column "Getting innovation back into process control". Nonintrusive adaptation of first principle models by MPC in digital twin can be readily done. It is not commonly recognized that the fidelity of a first principle model is seen in how well the manipulated flows in the digital twin match those in the plant. The digital twin has the same controllers, setpoints and tuning as in the actual plant. Differences in the manipulated flow trajectories for disturbances and operating point changes are indicative of a mismatch of dynamics. Differences in steady state flows are indicative of a mismatch of process parameters. In either case, an MPC whose setpoints are the plant steady state or rate of change flows and whose controlled variables are the digital twin steady state or rate of change flows can manipulate process or dynamic parameters to adapt the model in the digital twin. The models for the MPC can be identified by conventional methods using just the models in the digital twin. The MPC ability to adapt the model can be tested and then actually used via the digital twin inputs of actual plant flows. For an example of adaptation of a bioreactor model, see Advanced Process Control Chapter 18 in A Guide to Automation Body of Knowledge Third Edition. Realization that PID algorithms in industry seldom use the Ideal Form. I am not sure why, but most textbooks and professors show and teach the Ideal Form. In the Ideal Form, the proportional mode gain only affects the proportional mode, resulting in an integral gain and derivative gain for the other modes. Most industrial PIDs use a Form where the proportional mode tuning setting (e.g., gain or proportional band) affects all modes. The integral tuning setting is either repeats per unit time (e.g., repeats per minute) or time (e.g., seconds) and the derivative setting is a rate time (e.g., seconds). The behavior and tuning settings are quite different, severely reducing the possible synergy between industry and academia (a brief sketch contrasting the two forms is given below). Realization that PID algorithms in industry seldom use engineering units. Even stranger to me is the common misconception by professors that the PID algorithm works in engineering units. I have only seen this in one particular PLC and the MeasureX sheet thickness control system. In nearly all other PLC and DCS control systems, the algorithm works in percent of controlled variable and manipulated variable signals. The valve installed flow characteristic and measurement span have a straightforward effect on valve and measurement gains and consequently on the open loop gain and PID tuning. If a PID algorithm works in engineering units, these straightforward effects are lost and simply changing the type of engineering units (e.g., going from lbs per minute to cubic feet per hour) can have a huge effect on the tuning of a PID working in engineering units (and no effect on a PID working in percent). This disconnect is also severely reducing the possible synergy between industry and academia.
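To make the difference concrete, here is a minimal sketch contrasting the Ideal Form, with independent gains per mode, against the form typical of industrial PIDs, where the controller gain acts on all modes and the signals are in percent of span. It is a simplified positional calculation for illustration only (no filtering, anti-reset windup or other practical features), and the names and spans are hypothetical.

```python
def pid_ideal(e, e_int, e_deriv, Kp, Ki, Kd):
    """'Ideal' form often taught in school: independent proportional, integral
    and derivative gains, usually written in engineering units."""
    return Kp * e + Ki * e_int + Kd * e_deriv


def pid_standard(e_pct, e_int_pct, e_deriv_pct, Kc, Ti, Td):
    """Form typical of industrial PIDs: the controller gain Kc acts on all three
    modes, integral is a reset time Ti (s), derivative is a rate time Td (s),
    and the error is in percent of measurement span so the output is in percent."""
    return Kc * (e_pct + e_int_pct / Ti + Td * e_deriv_pct)


def to_percent(value, lo, hi):
    """Convert an engineering-unit signal to percent of span, which is what
    most DCS and PLC PID algorithms actually work with."""
    return 100.0 * (value - lo) / (hi - lo)


# Because e_pct is dimensionless, re-ranging the same transmitter from lb/min to an
# equivalent ft3/h span leaves the pid_standard() tuning untouched, whereas a PID
# working directly in engineering units would need completely different settings.
```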
  • Basic Guidelines for Control Valve Selection and Sizing

    The post Basic Guidelines for Control Valve Selection and Sizing first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hiten Dalal. Hiten Dalal, PE, PMP, is senior automation engineer for products pipeline at Kinder Morgan, Inc. Hiten has extensive experience in pipeline pressure and flow control. Hiten Dalal's Question Are there basic rule of thumb guidelines for control valve sizing outside of relying on the valve supplier and using the valve manufacturer's sizing program? Hunter Vegas' Answer Selecting and sizing control valves seems to have become a lost art. Most engineers toss it over the fence to the vendor along with a handful of (mostly wrong) process data values, and a salesperson plugs the values into a vendor program which spits out a result. Control valves often determine the capability of the control system, and a poorly sized and selected control valve will make tight control impossible regardless of the control strategy or tuning employed. Selecting the right valve matters! There are several aspects of sizing/selecting a control valve that must be addressed: Determine what the valve is supposed to do. Is this valve used for tight control or is 'loose' control acceptable? (For instance, are you trying to control a flow within a very tight margin across a broad range of process conditions or are you simply throttling a charge flow down as it approaches setpoint to avoid overshoot?) The requirements for one situation are quite different from the other. Is this valve supposed to provide control or tight shutoff? A valve can almost never do both. If you need both, then add a separate on/off shutoff valve. Understand the TRUE process conditions. What is the minimum flow that the valve must control? What is the maximum flow that the valve must pass? What are the TRUE upstream/downstream pressures and differential pressure across the valves in those conditions? (Note that the P1 and DP at low flow rates will usually be much higher than at full flow rates. If you see a valve spec showing the same DP value for high and low flow conditions it will be wrong 95%+ of the time.) What is the min/max temperature the valve might see? Don't forget about clean out/steam out conditions or abnormal conditions that might subject a valve to high steam temperatures. What is the process fluid? Is it always the same or could it be a mix of products? Note that gathering this data is probably the hardest to do. It often takes a sketch of the piping, an understanding of the process hydraulics, and examination of the system pump curves to determine the real pressure drops under various conditions. Note too that the DP may change when you select a valve since it might require pipe reducers/expanders to be installed in a pipe that is sized larger. Understand the installed flow characteristic of the valve. This can be another difficult task.
Ideally the control valve response should be linear (from the control system's perspective). If the PID output changes 5%, the process should respond in a similar fashion regardless of where the output is. (In other words, 15% to 20% or 85% to 90% should ideally generate the same process response.) If the valve response is non-linear, control becomes much more difficult. (You can tune for one process condition but if conditions change the dynamics change and now the tuning doesn't work nearly as well.) The valve response is determined by a number of items including: The characteristics of the valve itself. (It might be linear, equal percent, quick opening, or something else.) The DP of the process – the differential pressure across the valve is typically a function of the flow (the higher the flow, the lower the DP across the valve). This will generate a non-linear function. System pressure and pump curves – pumps often have non-linear characteristics as well, so the available pressure will vary with the flow. The user has to understand all of these conditions so he/she can pick the right valve plug. Ideally you pick a valve characteristic that will offset the non-linear effects of the process and make the overall response of the system linear. If the pressure drop is high, you may have a cavitation, flashing, or choked flow situation. That complicates matters still further because now you'll need to know a lot more about the process fluid itself. If you are faced with cavitation or flashing you may need to know the vapor pressure and critical pressure of the fluid. This information may be readily available or not if the fluid is a mix of products. Choked flow conditions are usually accompanied by noise problems and will also require additional fluid data to perform the calculations. Realize too that the selection of the valve internals will have a big impact on the flow rates, response, etc. (You'll be looking at anti-cav trim, diffusers, etc.) Armed with all of that information (and it is a lot of information) you can finally start sizing/selecting the valve. Usually the vendor's program is a good place to start, but some programs are much better than others because some have more process data 'built in' and have the advanced calculations required to handle cavitation, flashing, choked flow, and noise calculations. Others are very simplistic and may not handle the more advanced conditions. Theoretically you could use any vendor's program to do any valve, but obviously the vendor program will typically have only its valve data built in, so if you use a different program you'll have to enter that data (if you can find it!). One caution about this – some vendors have different valve constants which can be difficult to convert. The procedure for finally choosing the valve is (roughly) as follows: Run a down and dirty calc to just see what you have. What is the required Cv at min and max flows? Do I have cavitation/flashing/choking issues? Assuming no cavitation/flashing/choking, then you can take the result and start to select a particular valve. The selection process includes: Pick an acceptable valve body type. (Reciprocating control valves with a digital positioner and a good guide design will provide the tightest control. However other body styles might be acceptable depending on the requirements and budget.) Pick the right valve characteristic to provide an overall linear response. Now look at the offering of that valve and trim style and pick a valve with the proper range of Cvs.
Usually you want some room above the max flow and you want to make sure you are able to control at the minimum flow and not be bumping off the seat. Note that you may have to go to a different valve body (or even manufacturer) to meet your desired characteristic and Cv. Make sure the valve body/seals are compatible with your process fluid and the temperature. If there is cavitation/flashing/choking then things get a lot more complicated so I'll save that for another lesson. Hope this helped. It was probably a bit more than you were wanting, but control valve selection and sizing is a lot more complicated than most realize. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone's career. Click this link to learn more about the ISA Mentor Program. Greg McMillan's Answer Hunter did a great job of providing detailed concise advice. My offering here is to help avoid the common problems from an inappropriate focus on maximizing valve capacity, minimizing valve pressure drop, minimizing valve leakage and minimizing valve cost. All these things have resulted in "on-off valves" posing as "throttling valves", creating problems of poor actuator and positioner sensitivity, excessive backlash and stiction, unsuspected nonlinearity, poor rangeability, and smart positioners giving dumb diagnostics. While certain applications, such as pH control, are particularly sensitive to these valve problems, nearly all loops will suffer from backlash and stiction exceeding 5% (quite common with many "on-off valves") causing limit cycles that can spread through the process. These "on-off valves" are quite attractive because of the high capacity and low pressure drop, leakage and cost. To address leakage requirements, a separate tight shutoff valve should be used in series with a good throttling valve and coordinated to open and close to enable the throttling valve to smoothly do its job. Unfortunately, there is nothing on a valve specification sheet that requires the valve to have a reasonably precise and timely response to signals and not create oscillations from a loop simply being in automatic, making us extremely vulnerable to common misconceptions. The most threatening one that comes to mind in selection and sizing is that rangeability is determined by how well a minimum Cv matches the theoretical characteristic. In reality, the minimum Cv cannot be less than the backlash and stiction near the seat. Most valve suppliers will not provide backlash and stiction for positions less than 40% because of the great increase from the sliding stem valve plug riding the seat or the rotary disk or ball rubbing the seal. Also, tests by the supplier are for loose packing. Many think piston actuators are better than diaphragm actuators. Maybe the physical size and cost are less and the capability for thrust and torque higher, but the sensitivity is an order of magnitude less and the vulnerability to actuator seal problems much greater. Higher pressure diaphragm actuators are now available, enabling use on larger valves and pressure drops. One more major misconception is that boosters should be used instead of positioners on fast loops. This is downright dangerous due to positive feedback between flexure of the diaphragm slightly changing actuator pressure and the extremely high booster outlet port sensitivity.
To reduce response time, the booster should be put on the positioner output with a bypass valve opened just enough to stop high frequency oscillations by allowing the positioner to see the much greater actuator and booster volume. The following excerpt from the Control Talk blog Sizing up valve sizing opportunities provides some more detailed warnings: We are pretty diligent about making sure the valve can supply the maximum flow. In fact, we can become so diligent we choose a valve size much greater than needed, thinking bigger is better in case we ever need more. What we often do not realize is that the process engineer has already built in a factor to make sure there is more than enough flow in the given maximum (e.g., 25% more than needed). Since valve size and valve leakage are prominent requirements on the specification sheet if the materials of construction requirements are clear, we are set up for a bad scenario of buying a larger valve with higher friction. The valve supplier is happy to sell a larger valve and the piping designer is happier that little or no pipe reducer is needed for valve installation and the pump size may be smaller. The process is not happy. The operators are not happy looking at trend charts unless the trend chart time and process variable scales are so large the limit cycle looks like noise. Eventually everyone will be unhappy. The limit cycle amplitude is large because of greater friction near the seat and the higher valve gain. The amplitude in flow units is the percent resolution (e.g., % stick-slip) multiplied by the valve gain (e.g., delta pph per delta % signal). You get a double whammy from a larger resolution limit and a larger valve gain. If you further decide to reduce the pressure drop allocated to the valve as a fraction of total system pressure drop to less than 0.25, a linear characteristic becomes quick opening, greatly increasing the valve gain near the closed position. For a fraction much less than 0.25 and an equal percentage trim you may be literally and figuratively bottoming out for the given R factor that sets the rangeability for the inherent flow characteristic (e.g., R=50). What can you do to lead the way and become the "go to" resource for intelligent valve sizing? You need to compute the installed flow characteristic for various valve and trim sizes as discussed in the Jan 2016 Control Talk post Why and how to establish installed valve flow characteristics. You should take advantage of supplier software and your company's mechanical engineer's knowledge of the piping system design and details. You must choose the right inherent flow characteristic. If the pressure drop available to the control valve is relatively constant, then linear trim is best because the installed flow characteristic is then the inherent flow characteristic. The valve pressure drop can be relatively constant due to a variety of reasons, most notably pressure control loops or changes in pressure in the rest of the piping system being negligible (frictional losses in the system piping negligible). For more on this see the 5/06/2015 Control Talk blog Best Control Valve Flow Characteristic Tips. On the installed flow characteristic you need to make sure the valve gain in percent (% flow per % signal) from minimum to maximum flow does not change by more than a factor of 4 (e.g., 0.5 to 2.0), with the minimum gain greater than 0.25 and the maximum gain less than 4 (a minimal sketch of this check is given below).
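Here is a minimal sketch of that installed flow characteristic and valve gain check, assuming the common liquid-system simplification that the pressure drop not taken by the valve varies with the square of flow, so the installed flow fraction is f/sqrt(φ + (1 − φ)f²), where f is the inherent characteristic fraction and φ is the ratio of valve pressure drop to total system drop at maximum flow. The function names and the R factor are illustrative.

```python
import numpy as np

def inherent(x, characteristic="equal_percentage", R=50.0):
    """Inherent characteristic f(x) = Cv(x)/Cv_max for fractional travel x (0 to 1)."""
    if characteristic == "linear":
        return x
    return R ** (x - 1.0)          # equal percentage with rangeability factor R

def installed_flow(x, dp_ratio, characteristic="equal_percentage", R=50.0):
    """Fraction of maximum system flow at travel x, where dp_ratio is the valve
    pressure drop divided by the total system pressure drop at maximum flow."""
    f = inherent(x, characteristic, R)
    return f / np.sqrt(dp_ratio + (1.0 - dp_ratio) * f ** 2)

# Check the 4:1 valve gain criterion between the expected min and max positions
x = np.linspace(0.10, 0.90, 81)                  # travel as a fraction (10% to 90%)
q = installed_flow(x, dp_ratio=0.25)             # flow as a fraction of maximum
gain = np.gradient(q, x)                         # valve gain, % flow per % signal
print(gain.min(), gain.max(), gain.max() / gain.min())
```

Plotting q versus x for several dp_ratio values quickly shows the effect described above: with a ratio much less than 0.25, a linear trim starts to look quick opening and an equal percentage trim distorts toward linear.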
For sliding stem valves, this valve gain requirement corresponds to minimum and maximum valve positions of 10% and 90%. For many rotary valves, this requirement corresponds to minimum and maximum disk or ball rotations of 20 degrees and 50 degrees. Furthermore, the limit cycle amplitude, which is the resolution in percent multiplied by the valve gain in flow units (e.g., pph per %) and by the process gain in engineering units (e.g., pH per pph), must be less than the allowable process variability (e.g., pH). The amplitude and conditions for a limit cycle from backlash are a bit more complicated but still computable. For sliding stem valves, you have more flexibility in that you may be able to change out trim sizes as the process requirements change. Plus, sliding stem valves generally have a much better resolution if you have a sensitive diaphragm actuator with plenty of thrust or torque and a smart positioner. The books Tuning and Control Loop Performance Fourth Edition and Essentials of Modern Measurements and Final Elements have simple equations to compute the installed flow characteristic and the minimum possible Cv for controllability based on the theoretical inherent flow characteristic, the ratio of valve pressure drop to total system pressure drop, and the resolution limit. Here is some guidance from "Chapter 4 – Best Control Valves and Variable Frequency Drives" of the Process/Industrial Instruments and Controls Handbook Sixth Edition that Hunter and I just finished with the contributions of 50 experts in our profession to address nearly all aspects of achieving the best automation project performance. Use of ISA Standard for Valve Response Testing The effect of resolution limits from stiction and dead band from backlash is most noticeable for changes in controller output less than 0.4%, and the effect of rate limiting is greatest for changes greater than 40%. For PID output changes of 2%, a poor valve or VFD design and setup are not very noticeable. An increase in PID gain resulting in changes in PID output greater than 0.4% can reduce oscillations from poor positioner design and dead band. The requirements in terms of 86% response time and travel gain (change in valve position divided by change in signal) should be specified for small, medium and large signal changes. In general, the travel gain requirement is relaxed for small signal changes due to the effect of backlash and stiction, and the 86% response time requirement is relaxed for large signal changes due to the effect of rate limiting. The measurement of actual valve travel is problematic for on-off valves posing as throttling valves because the shaft movement is not disk or ball movement. The resulting difference between shaft position and actual ball or disk position has been observed in several applications to be as large as 8 percent. Best Practices
- Use sizing software with physical properties for worst case operating conditions.
- Make sure the minimum valve position is greater than the backlash and deadband.
- Based on a relatively good installed flow characteristic (valve drop to system pressure drop ratio greater than 0.25), use minimum and maximum positions during sizing that keep the nonlinearity in valve gain to less than 4:1. For sliding stem valves, the minimum and maximum valve positions are typically 10% and 90%, respectively. For many rotary valves, the minimum and maximum disk or ball rotations are typically 20 degrees and 50 degrees, respectively.
- The range between minimum and maximum positions or rotations can be extended by signal characterization to linearize the installed flow characteristic.
- Include the effect of the piping reducer factor on the effective flow coefficient.
- Select the valve location and type to eliminate or reduce damage from flashing.
- Preferably use a sliding stem valve (size permitting) to minimize backlash and stiction, unless crevices and trim cause concerns about erosion, plugging, sanitation, or accumulation of solids, particularly monomers that could polymerize. For single port valves install "flow to open" to eliminate the bathtub stopper swirling effect.
- If a rotary valve is used, select a valve with a splined shaft to stem connection, an integral casting of the stem with the ball or disk, and minimal seal friction to minimize backlash and stiction.
- Use Teflon packing, and for higher temperature ranges use Ultra Low Friction (ULF) packing.
- Compute the installed valve flow characteristic for worst case operating conditions.
- Size the actuator to deliver more than 150% of the maximum torque or thrust required.
- Select an actuator and positioner with threshold sensitivities of 0.1% or better.
- Ensure the total valve assembly dead band is less than 0.4% over the entire throttle range.
- Ensure the total valve assembly resolution is better than 0.2% over the entire throttle range.
- Choose an inherent flow characteristic and valve to system pressure drop ratio that do not cause the product of the valve and process gain divided by the process time constant to change more than 4:1 over the entire process operating point range and flow range.
- Tune the positioner aggressively for the application without integral action, with readback that indicates actual plug, disk or ball travel instead of just actuator shaft movement.
- Use volume boosters on the positioner output with the booster bypass valve opened enough to assure stability, to reduce the valve 86% response time for large signal changes.
- Use small (0.2%) as well as large (20%) step changes to test the valve 86% response time.
- Use the ISA standard and technical report, which relax expectations on travel gain and 86% response time for small and large signal changes, respectively.
For much more on valve response see the Control feature article How to specify valves and positioners that do not compromise control. The best book I have for understanding the many details of valve design is Control Valves for the Chemical Process Industries written by Bill Fitzgerald and published by McGraw-Hill. The book that is specifically focused on this Q&A topic is Control Valve Selection and Sizing written by Les Driskell and published by ISA. Most of my books in my office are old like me. Sometimes newer versions do not exist or are not as good. Additional Mentor Program Resources See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project.
Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg Image Credit: Wikipedia
  • What Are the Opportunities for Nonlinear Control in Process Industry Applications?

    The post What Are the Opportunities for Nonlinear Control in Process Industry Applications? first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. These questions come from Flavio Briguente and Syed Misbahuddin. Model predictive control (MPC) has a proven successful history of providing extensive multivariable control and optimization. The applications in refineries are extensive, forcing the PID in most cases to take a backseat. These processes tend to employ very large MPC matrices and extensive optimization by Linear Programs (LP). The models are linear and may be switched for different product mixtures. The plants tend to have more constant production rates and greater linearity than seen in specialty chemical and biological processes. MPC is also widely used in petrochemical plants. The applications in other parts of the process industry are increasing but tend to use much smaller MPC matrices focused on a unit operation. MPC offers dynamic decoupling, disturbance and constraint control. To do the same with PID requires dynamic compensation of decoupling and feedforward signals and override control. The software to accomplish dynamic compensation for the PID is not explained or widely used. Also, interactions and override control involving more than two process variables are more challenging than most practitioners can address. MPC is easier to tune and has an integrated LP for optimization. Flavio Briguente is an advanced process control consultant at Evonik in North America, and is one of the original protégés of the ISA Mentor Program. Flavio has expertise in model predictive control and advanced PID control. He has worked at Rohm and Haas Company and Monsanto Company. At Monsanto, he was appointed to the manufacturing technologist program, and served as the process control lead at the Sao Jose dos Campos plant in Brazil and a technical reference for the company's South American sites. During his career, Flavio focused on different manufacturing processes, and made major contributions in optimization, advanced control strategies, Six Sigma and capital projects. He earned a chemical engineering degree from the University of São Paulo, a post-graduate degree in environmental engineering from FAAP, a master's degree in automation and robotics from the University of Taubate, and a PhD in material and manufacturing processes from the Aeronautics Institute of Technology. Syed Misbahuddin is an advanced process control engineer for a major specialty chemicals company with experience in model predictive control and advanced PID control. Before joining industry, he received a master's degree in chemical engineering with a focus on neural network-based controls. Additionally, he is trained as a Six Sigma Black Belt, which focuses on utilizing statistical process controls for variability reduction. This combination helps him implement controls utilizing physics-based as well as data-driven methods.
The considerable experience and knowledge of Flavio and Syed blurs the line between protégé and resource, leading to exceptionally technical and insightful questions and answers. Flavio Briguente's Questions Can the existing MPC/APC techniques be applied for batch operation? Is there a non-linear MPC application available? Is there a known case in operation for the chemical industry? What are the pros and cons of linear versus nonlinear MPC? Mark Darby's Answers MPC was originally developed for continuous or semi-continuous processes. It is based on a receding horizon where the prediction and control horizons are fixed and shifted forward each execution of the controller. Most MPCs include an optimizer that optimizes the steady state at the end of the horizon, which the dynamic part of the MPC steers towards. Batch processes are by definition non-steady-state, typically have an end-point condition that must be met at batch end, and usually have a trajectory over time that controlled variables (CVs) are desired to follow. As a result, the standard MPC algorithm is not appropriate for batch processes and must be modified (note: there may be exceptions to this based on the application). I am aware of MPC batch products available in the market, but I have no experience with them. Due to the nonlinear nature of batch processes, especially those involving exothermic reactions, a nonlinear MPC may be necessary. By far, the majority of MPCs applied industrially utilize a linear model. Many of the commercial linear packages include provisions for managing nonlinearities, such as using linearizing transformations or changing the gain, dynamics, or the models themselves. A typical approach is to apply a nonlinear static transformation to a manipulated variable or a controlled variable, commonly called Hammerstein and Wiener transformations. An example is characterizing the valve-flow relationship or controlling the logarithm of a distillation composition. Transformations are performed before or after the MPC engine (optimization) so that a linear optimization problem is retained. Given the success of modeling chemical processes, it may be surprising that linear, empirically developed models are still the norm. The reason is that it is still quicker and cheaper to develop an empirical model, and linear models most often perform well for the majority of processes, especially with the nonlinear capabilities mentioned previously. Nonlinear MPC applications tend to be reserved for those applications where nonlinearities are present in both system gains and dynamic responses and the controller must operate at significantly different targets. Nonlinear MPC is routinely applied in polymer manufacturing. These applications typically have less than five manipulated variables (MVs). A range of models have been used in nonlinear MPC, including neural nets, first principles, and hybrid models that combine first principle and empirical models. A potential disadvantage of developing a nonlinear MPC application is the time necessary to develop and validate the model. If a first principle model is used, lower level PID loops must also be modeled if the dynamics are significant (i.e., cannot be ignored). With empirical modeling, the dynamics of the PID loops are embedded in the plant responses.
Compared to a linear model, a nonlinear model will also require more computation time, so one would need to ensure that the controller can meet the required execution period based on the dynamics of the process and disturbances. In addition, there may be decisions around how to update the model, i.e., which parameters or biases to adjust. For these reasons, nonlinear MPC is reserved for those applications that cannot be adequately controlled with linear MPC. My opinion is that we'll be seeing more nonlinear applications once it becomes easier to develop nonlinear models. I see hybrid models being critical to this. Known information would be incorporated and unknown parts would be described using empirical models built with a range of techniques that might include machine learning. Such an approach might actually reduce the time of model development compared to linear approaches. Greg McMillan's answers MPC for batch operations can be achieved by the translation of the controlled variable from a batch temperature or composition with a unidirectional response (e.g., increasing temperature or composition) to the slope of the batch profile (temperature or composition rate of change) as noted in my article Get the Most out of Your Batch. You then have a continuous type of process with a bi-directional response. There is still potentially a nonlinearity issue. For a perspective on the many challenges see my blog Why batch processes are difficult. I agree with Mark Darby that the use of hybrid systems where nonlinear models are integrated could be beneficial. My preference would be in the following order in terms of ability to understand and improve:
- first principle calculations
- simple signal characterizations
- principal component analysis (PCA) and partial least squares (PLS)
- neural networks (NN)
There is an opportunity to use principal components for neural network inputs to eliminate correlations between inputs and to reduce the number of inputs. You are much more vulnerable with black box approaches like neural networks to inadequacies in training data. More details about the use of NN and recent advances will be discussed in a subsequent question by Syed. There is some synergy to be gained by using the best of what each of the above has to offer. In the literature and in practice, experts in a particular technology often do not see the benefit of other technologies. There are exceptions as seen in papers referenced in my answer to the next question. I personally see benefits in running a first principle model (FPM) to understand causes and effects and to identify process gains. Not generally realized is that the FPM parameters in a virtual plant that uses a digital twin running in real time with the same setpoints as the actual plant can be adapted by use of an MPC. In the next section we will see how NN can be used to help an FPM. Signal characterization is a valuable tool to address nonlinearities in the valve and process as detailed in my blog Unexpected benefits of signal characterizers (a minimal sketch of a pH signal characterizer is given below). I tried using NN to predict pH for a mixture of weak acids and bases and found better results from the simple use of a signal characterizer. Part of the problem is that the process gain is inversely proportional to production rate as detailed in my blog Hidden factor in our most important control loops.
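Here is a minimal sketch of the kind of signal characterizer mentioned above, implemented as a piecewise-linear interpolation of a titration curve. The curve points and function names are hypothetical; the real curve must come from lab titrations or a first principle model spanning the actual operating range.

```python
import numpy as np

# Hypothetical titration curve points: reagent-to-feed ratio (X) versus measured pH (Y)
ratio_pts = np.array([0.000, 0.010, 0.020, 0.025, 0.030, 0.040, 0.050])
ph_pts    = np.array([2.0,   2.5,   3.5,   7.0,   10.5,  11.5,  12.0])

def characterize_pv(ph):
    """Translate measured pH to the (roughly linear) abscissa of the titration curve
    so the PID sees a controlled variable with a far more constant process gain."""
    return np.interp(ph, ph_pts, ratio_pts)

def characterize_sp(ph_setpoint):
    """The setpoint must be passed through the same piecewise-linear characterizer."""
    return np.interp(ph_setpoint, ph_pts, ratio_pts)
```

The PID then controls the characterized variable, with the setpoint passed through the same characterizer so that setpoint and measurement stay consistent.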
Since dead time mismatch has a big effect on MPC performance as detailed in the ISA Mentor Post How to Improve Loop Performance for Dead Time Dominant Systems, an intelligent update of the dead time simply based on production rate for a transportation delay can be beneficial. Syed Misbahuddin's follow-up question Recently, there has been an increased focus on the use of deep neural networks for artificial intelligence (AI) applications. Deep signifies many hidden layers. Recurrent neural networks have also been able in some cases to ensure relationships are cause and effect rather than just correlations. They use a rather black box approach with models built from training data. How successful are deep neural networks in process control? Greg McMillan's answers Pavilion Technologies in Austin has integrated neural networks with model predictive control. Successful applications in the optimization of ethanol processes were reported a decade ago. In the Pavilion 1996 white paper "The Process Perfector: The next step to Multivariable Control and Optimization" it appears that process gains, possibly from step testing of the FPM or bump testing of the actual process for an MPC, were used as the starting point. The NN was then able to provide a nonlinear model of the dynamics given the steady state gains. I am not sure what complexity of dynamics can be identified. The predictions of NN for continuous processes have the most notable successes in plug flow processes where there is no appreciable process time constant and the process dynamics simplify to a transportation delay. Examples of successes of NN for plug flow include dryer moisture, furnace CO, and kiln or catalytic reactor product composition prediction. Possible applications also exist for inline systems and sheets in pulp and paper processes and for extruders and static mixers. While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has the specific growth rate and product formation rate as the result of a multiplication of factors, each between 0 and 1, detailing the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor (a minimal sketch of this multiplicative structure is given below). You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die. It does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to one and as constant as possible except for the factor of greatest interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly one. Process control is about changes in process inputs and consequential changes in process outputs. If there is no change, you cannot identify the process gain or dynamics. We know this is necessary in the identification of models for MPC and PID tuning and feedforward control. We often forget this in the data sets used to develop data models.
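For the multiplicative limitation and inhibition structure described above, here is a minimal sketch of how a specific growth rate is typically built up from factors that are each between 0 and 1. The parameter values are purely illustrative and not taken from any particular cell culture model.

```python
import math

def monod(c, k):
    """Michaelis-Menten / Monod limitation factor between 0 and 1 for substrate c."""
    return c / (k + c)

def inhibition(c, k):
    """Simple inhibition factor between 0 and 1 (e.g., for lactic acid)."""
    return k / (k + c)

def temperature_factor(T, T_opt=37.0, sigma=3.0):
    """Bell-shaped temperature effect between 0 and 1 around an optimum."""
    return math.exp(-((T - T_opt) / sigma) ** 2)

def specific_growth_rate(mu_max, T, do2, glucose, glutamine, lactate):
    """Every factor multiplies every other factor: if any one is near zero
    (e.g., temperature too high), the others no longer matter."""
    return (mu_max
            * temperature_factor(T)
            * monod(do2, 0.02)            # dissolved oxygen limitation (illustrative k)
            * monod(glucose, 0.5)         # glucose limitation (illustrative k)
            * monod(glutamine, 0.3)       # glutamine limitation (illustrative k)
            * inhibition(lactate, 40.0))  # lactic acid inhibition (illustrative k)
```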
A smart Design of Experiments (DOE) is really the best way to get data sets that show changes in process outputs for changes in process inputs and that cover the range of interest. If setpoints are changed for different production rates and products, existing historical data may be rich enough if carefully pruned. Remember that neural network models, like statistical models, are correlations and not cause and effect. Review by people knowledgeable in the process and control system is essential. Time synchronization of process inputs with process outputs is needed for continuous but not necessarily for batch models, explaining the notable successes in predicting batch end points. Often delays are inserted on continuous process inputs. This is sufficient for plug flow volumes, such as dryers, where the dynamics are principally a transport delay. For back mixed volumes such as vessels and columns, a time lag and delay should be used that is dependent upon production rate. Neural network (NN) models are more difficult to troubleshoot than data analytic models and are vulnerable to correlated inputs (data analytics benefits from principal component analysis and drill down to contributors). NN models can introduce localized reversal of slope and bizarre extrapolation beyond training data not seen in data analytics. Data analytics' piecewise linear fit can successfully model nonlinear batch profiles. To me this is similar in principle to the use of signal characterizers to provide a piecewise fit of titration curves. Process inputs and outputs that are coincidental are an issue for process diagnostics and predictions by MVSPC and NN models. Coincidences can come and go and never even appear again. They can be caused by unmeasured disturbances (e.g., concentrations of unrealized inhibitors and contaminants), operator actions (e.g., largely unpredictable and unrepeatable), operating states (e.g., controllers not in highest mode or at output limits), weather (e.g., blue northers), poor installations (e.g., unsecured capillary blowing in wind), and just bad luck. I found a 1998 Hydrocarbon Processing article by Aspen Technology Inc., "Applying neural networks," that provides practical guidance and opportunities for hybrid models. The dynamics can be adapted and cause and effect relationships increased by advancements associated with recurrent neural networks as discussed in Chapter 2, Neural Networks with Feedback and Self-Organization, in The Fundamentals of Computational Intelligence: System Approach by Mikhail Z. Zgurovsky and Yuriy P. Zaychenko (Springer 2016). ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone's career. Click this link to learn more about the ISA Mentor Program. Mark Darby's answers The companies best known for neural net-based controllers are Pavilion (now Rockwell) and AspenTech. There have been multiple papers and presentations by these companies over the past 20 years with many successful applications in polymers. It's clear from reading these papers that their approaches have continued to evolve over time and standard approaches have been developed. Today both approaches incorporate first principles models and make extensive use of historical data.
For polymer reactor applications, the FPM involves dynamic reaction heat and mass balance equations, and historical data is used to develop steady-state property predictions. Process testing time is needed only to capture or confirm dynamic aspects of the models. Enhancements to the neural networks used in control applications have been reported. AspenTech addressed the extrapolation challenges of neural nets with bounded derivatives. Pavilion makes use of constrained neural nets in their fitting of models. Rockwell describes a different approach to the modeling and control of a fed-batch ethanol process in a presentation made at the 2009 American Control Conference, titled "Industrial Application of Nonlinear Model Predictive Control Technology for Fuel Ethanol Fermentation." The first step was the development of a kinetic model based on the structure of an FPM. Certain reaction parameters in the nonlinear state space model were modeled using a neural net. The online model is a more efficient nonlinear model, fit from the initial model, that handles nonlinear dynamics. Parameters are fit by a gain constrained neural net. The nonlinear model is described in a Hydrocarbon Processing article titled Model predictive control for nonlinear processes with varying dynamics. To Syed's follow-up question about deep neural networks: deep neural networks require more parameters, but techniques have been developed that help deal with this. I have not seen results in process control applications, but it will be interesting to see if these enhancements developed and used by the Google-types will be useful for our industries. In addition to Greg's citations, I wanted to mention a few other articles that describe approaches to nonlinear control. An FPM-based nonlinear controller was developed by ExxonMobil, primarily for polymer applications. It is described in a paper presented at the Chemical Process Control VI conference (2001) titled "Evolution of a Nonlinear Model Predictive Controller," and in a subsequent paper presented at another conference, Assessment and future directions of nonlinear model predictive control (2005), entitled NLMPC: A Platform for Optimal Control of Feed- or Product-Flexible Manufacturing. The motivation for a first principles model-based MPC for polymers included the nonlinearity associated with both gains and dynamics, constraint handling, control of new grades not previously produced, and the portability of the model/controller to other plants. In the modeling step, the estimation of model parameters in the FPM (parameter estimation) was cited as a challenge. State estimation of the CVs, in light of unmeasured disturbances, is considered essential for the model update (feedback step). Finally, the increased skills necessary to support and maintain the nonlinear controller were mentioned, in particular the skills needed to diagnose and correct convergence problems. A hybrid modeling approach to batch processes is described in a 2007 conference presentation at the 8th International IFAC Symposium on Dynamics and Control of Process Systems by IPCOS, titled "An Efficient Approach for Efficient Modeling and Advanced Control of Chemical Batch Processes." The motivation for the nonlinear controller is the nonlinear behavior of many batch processes. Here, fundamental relationships were used for mass and energy balances and an empirical model for the reaction energy (which includes the kinetics), which was fit from historical data.
The controller used the MPC structure, modified for the batch process. Future predictions of the CVs in the controller were made using the hybrid model, whereas the dynamic controller incorporated linearizations of the hybrid model. I think it is fair to say that there is a lack of nonlinear solvers tailored to hybrid modeling. An exception is the freely available APMonitor and GEKKO software environments developed by John Hedengren’s group at BYU. They solve dynamic optimization problems with first principles or hybrid models and have built-in functions for model building, updating, and control. Here is a link to the website that contains references and videos for a range of nonlinear applications, including a batch distillation application (a minimal sketch of a GEKKO dynamic optimization follows Hunter’s answer below). Hunter Vegas’ answers I worked with neural networks quite a bit when they first came out in the late 1990s. I have not tried working with them much since but I will pass on my findings, which I expect are as applicable now as they were then. Neural networks sound useful in principle. Give a neural network a pile of training data, let it ‘discover’ correlations between the inputs and the output data, then reverse those correlations in order to create a model which can be used for control. Unfortunately, actually creating such a neural network and using it for control is much harder than it looks. Some reasons for this are: Finding training data is hard. Most of the time the system is running fairly normally and the data tends to draw flat lines. Only during upsets does it actually move around and provide the neural network useful information. Therefore you only want to feed the network upset data to train it. Then you need to find more upset data to test it. Finding that much upset data is not so easy to do. (If you train it on normal data, the neural network learns to draw straight lines, which does not do much for control.) Finding the correlations is not so easy. The marketing literature suggests you just feed it the data and the network “figures it out.” In reality that doesn’t usually happen. It may be that the correlations involve the derivative of an input, or the correlation is shifted in time, or perhaps there is a correlation of a mathematical combination of inputs involving variables with different time shifts. Long story short – the system usually doesn’t ‘figure it out’ – YOU DO! After playing with it for a while and testing and re-testing data you will start to see the correlations yourself, which allows you to help the network focus on the information that matters. In many cases you actually figure out the correlation and the neural network just backs you up to confirm it. Implementing a multivariable controller is always a challenge. The more variables you add, the lower the reliability becomes, and you have to make the controller smart enough to know how to handle input data failures gracefully. So even when you have a model, turning it into a robust controller that can manipulate the process is not always such an easy thing. I am not saying neural networks do not work – I actually had very good success with them. However, when all was said and done I pretty much figured out the correlations myself through trial and error and was able to utilize that information to improve control. I wrote a paper on the topic and won an ISA award because neural networks were all the rage at that time, but the reality was I just used the software to reinforce what I learned during the ‘network training’ process.
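Returning to the APMonitor/GEKKO environment mentioned in Mark Darby’s answer, here is a minimal sketch of a GEKKO dynamic optimization. The first-order process model, its gain and time constant, and the setpoint are illustrative assumptions rather than any of the applications cited above; the options shown follow the library’s documented dynamic control mode, though defaults can vary by version.

```python
# Minimal GEKKO sketch: dynamic optimization (control mode) of a
# hypothetical first-order process with gain 1.5 and time constant 5.
import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)           # solve locally
m.time = np.linspace(0, 20, 41)   # prediction horizon

u = m.MV(value=0, lb=0, ub=100)   # manipulated variable (e.g., valve %)
u.STATUS = 1                      # let the optimizer move u
u.DCOST = 0.1                     # penalize moves (move suppression)

x = m.CV(value=0)                 # controlled variable (e.g., temperature)
x.STATUS = 1
x.SP = 50                         # setpoint
x.TAU = 3                         # desired closed-loop response time constant

# In a real application this section would hold the first principles
# or hybrid model equations; here it is a simple first-order lag.
m.Equation(5 * x.dt() == -x + 1.5 * u)

m.options.IMODE = 6               # simultaneous dynamic control/optimization
m.options.CV_TYPE = 2             # squared-error objective
m.solve(disp=False)

print(f"final CV: {x.value[-1]:.1f}, final MV: {u.value[-1]:.1f}")
```

Swapping the single equation for dynamic heat and mass balances, with a data-driven correction for quantities such as the reaction energy, gives the kind of hybrid formulation discussed in the IPCOS and Rockwell references above.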
Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Missed Opportunities in Process Control - Part 4

The post, Missed Opportunities in Process Control - Part 4 , first appeared on the ControlGlobal.com Control Talk blog. Here is the fourth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts . You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry. Eliminate the air gap in thermowells to make the temperature response much faster. Contrary to popular opinion, the type of sensor is not a significant factor in the speed of the temperature response in the process industry. While an RTD may be a few seconds slower than a TC, the annular clearance around the sheath can cause an order of magnitude larger measurement time lag. Additionally, a tip not touching the bottom of the thermowell can be even worse. Air is a great insulator, as seen in the design of more energy efficient windows. Spring-loaded, tight-fitting sheathed sensors in stepped metal thermowells of the proper insertion length are best. Ceramic protection tubes cause a large measurement lag due to poor thermal conductivity. Low fluid velocities can cause an increase in the lag as well. See the Control Talk column “ A meeting of minds ” on how to get the most precise and responsive temperature measurement. Use the best glass and sufficient velocity to keep pH measurement fast by reducing aging and coatings. An aged glass electrode, due to even moderately high temperature (e.g., > 30°C), chemical attack from strong acids or strong bases (e.g., caustic), or dehydration from not being wetted or from exposure to non-aqueous solvents, can increase the sensor lag time by orders of magnitude. High temperature glass and specific-ion-resistant glasses are incredibly beneficial for sustaining accuracy and a clean, healthy electrode sensor lag of just a few seconds. Velocities must be greater than 1 fps for fast response and greater than 5 fps to prevent fouling, since almost imperceptible coatings can also increase the sensor lag by orders of magnitude. This is helpful for thermowells as well, but the adverse effects in terms of slower response time are not as dramatic as seen for pH. Electrodes must be kept wetted, and exposure to non-aqueous solvents and harsh process conditions reduced by automatically retractable assemblies with the ability to soak in buffer solutions. See the Control Talk column “ Meeting of minds encore ” on how to get the most precise and responsive pH measurement. Avoid the measurement lag becoming the primary lag. If the measurement lag becomes larger than the largest process time constant, the trend charts may look better due to attenuation of the oscillation amplitude by the filtering effect. The PID gain may even be able to be increased because the PID does not know where the primary lag came from. The key is that the actual amplitude of the process oscillation and the peak error are larger (often unknown unless a special separate fast measurement is installed).
What is seen on the trend charts is that the period of oscillation is larger, possibly to the point of creating a sustained oscillation. Besides slow electrodes and thermowells, this situation can occur simply due to transmitter damping or signal filter time settings. For compressor surge and many gas pressure control systems, the filter time and transmitter damping settings must not exceed 0.2 sec. For a much greater understanding, see the Control Talk Blog “ Measurement Attenuation and Deception Tips ”. Real rangeability of a control valve depends upon the ratio of valve drop to system pressure drop, actuator and positioner sensitivity, backlash, and stiction. Often rangeability is based on the deviation from an inherent flow characteristic, leading to statements that a rotary valve, often designed for on-off control, has the greatest rangeability. The real definition should depend upon the minimum controllable flow, which is a function of the installed flow characteristic, sensitivity, backlash and stiction near the closed position, all of which are generally worse for these on-off valves that supposedly have the best rangeability. The best valve rangeability is achieved with a valve drop to system pressure drop ratio greater than 0.25, generously sized diaphragm actuators, a digital positioner tuned with high gain and no integral action, low friction packing (e.g., Enviro-Seal), and a sliding stem valve. If a rotary valve must be used, there should be a splined shaft-to-stem connection and a stem integrally cast with the ball or disk to minimize backlash, and a low friction seal or ideally no seal to minimize stiction. A graduated v-notch ball or contoured butterfly should be used to improve the flow characteristic. Equations to compute the actual valve rangeability based on pressure drop ratio and resolution are given in Tuning and Control Loop Performance Fourth Edition . Real rangeability of a variable frequency drive (VFD) depends upon the ratio of static pressure to system pressure drop, motor design, inverter type, input card resolution and dead band setting. The best VFD rangeability and response is achieved by a static pressure to system pressure drop ratio less than 0.25, a generously sized TEFC motor with 1.15 service factor and Class F insulation, a pulse width modulated inverter, speed to torque cascade control in the inverter, and no dead band or rate limiting in the inverter setup. Identify and minimize transportation delays. The delay for a temperature or composition change to propagate from the point of manipulated change to the process or sensor is simply the process volume divided by the process flow rate. Normal process design procedures do not recognize the detrimental effect of dead time. The biggest example is equipment design guidelines that have a dip tube designed to be large in diameter extending down toward the impeller. Missing is an understanding of the incredibly large dead time for pH control where the reagent flow is a gph or less and the dip tube volume is a gallon or more. When the reagent valve is closed, the dip tube is back filled with process fluid from migration of high to low concentrations. To get the reagent to displace the process fluid takes more than an hour. When the reagent valve shuts off, it may take hours before reagent stops dripping and migrating into the process. Going from acid to base in split range control may take hours to displace the acid in the dip tube. The same thing happens going from base to acid. The stiction is also highest at the closure position.
When you consider pH is so sensitive, it is no wonder that pH systems oscillate across the split range point. The real rangeability of flow meters depends upon the signal to noise ratio at low flows, minimum velocity, and whether accuracy is a percent of scale or reading. The best flow rangeability is achieved by meters with accuracy in percent of reading, minimal noise at low flows and the least effect of low velocities, including the possible transition to laminar flow. Consequently, Coriolis flow meters have the best rangeability (e.g., 200:1) and magmeters have the next best rangeability (e.g., 50:1). Most rangeability statements for other meters are based on a ratio of maximum to minimum meter velocity and turbulent flow, and do not take into account that the actual maximum flow experienced is much less than the meter capacity. Use Coriolis flow meters for stoichiometric control and heating value control. The Coriolis flowmeter has the greatest accuracy with a mass flow measurement independent of composition. This capability is key to keeping flows in the right ratio, particularly for reactants per the factors in the stoichiometric equation for the reaction (mole flow rate is simply mass flow rate divided by the molecular weight of the reactant). For waste fuels, the heat release rate upon combustion is a strong function of the mass flow, greatly facilitating optimization of supplemental fuel use. Nearly all ratio control systems could benefit from true mass flow measurements with great accuracy and rangeability. For more on what you need to know to achieve what the Coriolis meter is capable of, see the Control Talk Column “ Knowing the best is the best ”. Identify and minimize the total dead time. Dead time is easily identified on a properly scaled trend chart as simply the time delay between a manual output change or setpoint change and the start of the change in the process variable being controlled. The least disruptive approach is usually simply putting the PID momentarily in manual and making a small output change simulating a load disturbance. The test should be done at different production rates and run times. The dead time tends to be largest at low production rates due to larger transportation delays and slower heat transfer rates and sensor response. Dead time also tends to increase with production run time due to fouling or frosting of heat transfer surfaces. See the Control Talk Blog “ Deadtime, the Simple Easy Key to Better Control ” for a more extensive explanation of why I would be out of a job if the dead time were zero. Identify and minimize the ultimate period. This goes hand in hand with knowing and reducing the total loop dead time. The ultimate period in most loops is simply 4 times the dead time in a first order approximation where a secondary time constant is taken as creating additional dead time. Dead time dominant loops have a smaller ultimate period that approaches 2 times the dead time for a pure dead time loop (extremely rare). Input oscillations with a period between ½ and twice the ultimate period result in resonance, requiring less aggressive tuning. Input oscillations with a period less than ½ the ultimate period can be considered noise, requiring filtering and less aggressive tuning. Oscillation periods greater than twice the ultimate period are attenuated by more aggressive tuning. Note that input oscillations persist when the PID is in manual.
For damped oscillations that only appear when the PID is in auto, an oscillation period close to the ultimate period indicates too high a PID gain, and a period more than twice the ultimate period indicates too low a PID reset time. A damped oscillation period approaching or exceeding 10 times the ultimate period indicates a violation of the gain window for near-integrating, true integrating or runaway processes. Oscillations with a period greater than four times the ultimate period and a constant amplitude are limit cycles due to backlash (dead band) or stiction (resolution limit). See the Control Talk Blogs “ Controller Attenuation and Resonance Tips ” and “ Processes with no Steady State in PID Time Frame Tips ” for more guidance.
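These period-ratio rules lend themselves to a simple screening aid. Below is a minimal Python sketch that codifies them; the function name and structure are illustrative, the ultimate period is approximated as 4 times the total loop dead time per the guidance above, and the result is only a first screening, not a substitute for proper loop analysis.

```python
def screen_oscillation(period, dead_time, in_auto, damped, constant_amplitude):
    """Rough screening of an observed oscillation period against the
    ultimate period (approximated here as 4 x total loop dead time)."""
    tu = 4.0 * dead_time

    if not in_auto:
        # Oscillation persists in manual, so it is an input (load) oscillation
        if period < 0.5 * tu:
            return "Treat as noise: filter and tune less aggressively"
        if period <= 2.0 * tu:
            return "Resonance region: tune less aggressively"
        return "Slow input oscillation: attenuated by more aggressive tuning"

    # Oscillation only appears with the PID in auto
    if constant_amplitude and period > 4.0 * tu:
        return "Limit cycle: suspect backlash (dead band) or stiction (resolution)"
    if damped:
        if period >= 10.0 * tu:
            return "Gain window violation (near-integrating, integrating or runaway)"
        if period > 2.0 * tu:
            return "PID reset time likely too low (too much integral action)"
        return "PID gain likely too high (period near the ultimate period)"
    return "No clear match: check valve response and measurement dynamics"

# Example: a 90 second damped oscillation in auto on a loop with 20 seconds
# of total dead time points to too high a PID gain (ultimate period ~80 s).
print(screen_oscillation(90.0, 20.0, in_auto=True, damped=True,
                         constant_amplitude=False))
```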
  • How to Implement Effective Safety Instrumented Systems for Process Automation Applications

The post How to Implement Effective Safety Instrumented Systems for Process Automation Applications first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hariharan Ramachandran . Hariharan starts an enlightening conversation introducing platform-independent key concepts for an effective safety instrumented system with the Mentor Program resource Len Laskowski , a principal technical SIS consultant, and Hunter Vegas , co-founder of the Mentor Program. Hariharan Ramachandran , a recent resource added to the ISA Mentor Program, is a control and safety systems professional with various levels of experience in the field of industrial control, safety and automation. He has worked for various companies and executed global projects for the oil and gas and petrochemical industries, gaining experience in the entire life cycle of industrial automation and safety projects. Len Laskowski is a principal technical SIS consultant for Emerson Automation Solutions, and is a voting member of ISA84, Instrumented Systems to Achieve Functional Safety in the Process Industries. Hunter Vegas , P.E., has worked as an instrument engineer, production engineer, instrumentation group leader, principal automation engineer, and unit production manager. In 2001, he entered the systems integration industry and is currently working for Wunderlich-Malec as an engineering project manager in Kernersville, N.C. Hunter has executed thousands of instrumentation and control projects over his career, with budgets ranging from a few thousand to millions of dollars. He is proficient in field instrumentation sizing and selection, safety interlock design, electrical design, advanced control strategy, and numerous control system hardware and software platforms. Hunter earned a B.S.E.E. degree from Tulane University and an M.B.A. from Wake Forest University. Hariharan Ramachandran’s First Question How is the safety integrity level (SIL) of a critical safety system maintained throughout the lifecycle? Len Laskowski’s Answer The answer might sound a bit trite, but the simple answer is by diligently following the lifecycle steps from beginning to end. Perform the design correctly and verify that it has been executed correctly. The SIS team should not blindly accept HAZOP and LOPA results at face value. The design that the LOPAs drive is no better than the team that determined the LOPA and the information they were provided. Often the LOPA results are based on incomplete or possibly misleading information. I believe a good SIS design team should question the LOPA and seek to validate its assumptions. I have seen LOPAs declare that there is no hazard because XYZ equipment protects against it. But a walk in the field later discovered that the equipment had been taken out of service a year earlier and had not yet been replaced. Obviously getting the HAZOP/LOPA right is the first step.
The second step is to make sure one does a robust design and specifies good quality instruments that are a good fit for the application. For example, a vortex meter may be a great meter for some applications but a poor choice for others. Similarly, certain valve designs may have limited value as safety shutdown valves. Inexperienced engineers may specify Class VI shutoff for on-off valves thinking they are making the system safer, but Class V metal seat valves would stand up to the service much better in the long run since the soft elastomer seats can easily be destroyed in less than a month of operation. The third leg of this triangle is using the equipment by exercising it and routinely testing the loop. Partial stroke testing of the valves is a very good idea to keep valves from sticking. Also, for new units that do not have extensive experience with a process, the SIF components (valves and sensors) should be inspected at the first shutdown to assess their condition. This needs to be done until a history with the installation can be established. Diagnostics also fall into this category: deviation alarms, stroke times and any other diagnostics that can help determine the SIS health are important. Hariharan Ramachandran’s Feedback The safety instrumented function has to be monitored and managed throughout its lifecycle. Each layer in a safety protection system must have the ability to be audited. The SIS verification and validation process provides a high level of assurance that the SIS will operate in accordance with its safety requirements specification (SRS). The proof testing must be carried out periodically at the intervals specified in the safety requirements specification. There should be a mechanism for recording SIF life event data (proof test results, failures, and demands) for comparison of actual to expected performance. Continuous evaluation and improvement is the key concept here in maintaining the SIS efficiently. Hariharan Ramachandran’s Second Question What is the best approach to eliminate common cause failures in a safety critical system? Hunter Vegas’ Answer There are many ways that common cause failures can creep into a safety system design. Some of the more common ways include: Using a single orifice plate to feed redundant 2oo3 transmitters. Some make it even worse by using a single orifice tap to feed all three. (Ideally it is best to get as much separation as possible – as a minimum have 3 different taps and individual impulse lines. Better yet, have completely different flow meters and if possible utilize different technologies to measure flow so that a single failure or abnormal process condition won’t affect them all.) If the impulse lines of redundant transmitters require heat trace, it is best to use different sources of heat. (If they are fed with a single steam line, its failure might impact all three readings. This might apply to a boiler drum level or an orifice plate.) Having the same technician calibrate all three meters simultaneously. (Sometimes he’ll get the calibration wrong and set up all three meters incorrectly.) Some plants have the technician only calibrate one meter of the three each time. That way an incorrect calibration will stand out. Putting redundant transmitters (or valves) on the same I/O card. If it freezes or fails, all of the readings are lost. Implementing SIS trips in the same DCS that controls the plant. Just adding a SIS contact to the solenoid circuit of an existing on/off valve.
If the solenoid or actuator fails such that the valve fails open, neither the DCS nor the SIS can trip it. At least add a second solenoid, but it is far better to add a separate shutdown valve. (Some put a trip solenoid on a control valve. However, if the control valve fails open, the trip solenoid might not be able to close it either.) Having a single device generate a 4-20mA signal for control and also generate a contact for a trip circuit. A single fault within the instrument might take out both the 4-20mA signal and the trip signal. (Using a SIS transmitter for control is really the same thing.) Hariharan Ramachandran’s Feedback Both random and systematic events can induce common cause failure (CCF) in the form of single points of failure or the failure of redundant devices. Random hardware failures are addressed by design architecture, diagnostics, estimation (analysis) of probabilistic failures, and design techniques and measures (per IEC 61508-7). Systematic failures are best addressed through the implementation of a protective management system, which overlays a quality management system with a project development process. A rigorous system is required to decrease systematic errors and enhance safe and reliable operation. Each verification, functional assessment, audit, and validation is aimed at reducing the probability of systematic error to a sufficiently low level. The management system should define work processes, which seek to identify and correct human error. Internal guidelines and procedures should be developed to support the day-to-day work processes for project engineering and ongoing plant operation and maintenance. Procedures also serve as a training tool and ensure consistent execution of required activities. As errors or failures are detected, their occurrence should be investigated, so that lessons can be learned and communicated to potentially affected personnel. Hariharan Ramachandran’s Third Question An incident happened at a process plant; what are all the engineering aspects that need to be verified during the investigation? Len Laskowski’s Answer I would start at the beginning of the lifecycle and look at the HAZOPs and LOPAs to see that they were done properly. Look to see that documentation is correct: P&IDs, SRS, C&Es, MOC, and test logs and procedures. Look to see where the breakdown occurred. Were things specified correctly? Were the designs verified? Was the system correctly validated? Was proper training given? Look for test records once the system was commissioned. Hunter Vegas’ Answer Usually the first step is to determine exactly what happened, separating conjecture from facts. Gather alarm logs, historian data, etc. while they are available. Individually interview any personnel involved as soon as possible to lock in the details. With that information in hand, begin to work backwards, determining exactly what initiated the event and what subsequent failures occurred to allow it to happen. In most cases there will be a cascade of failures that actually enabled the event to happen. Then examine each failure to understand what happened and how it can be avoided in the future. Often there will be a number of changes implemented. If the SIS failed, then Len’s answer provides a good list of items to check. Hariharan Ramachandran’s Feedback Also verify that the device/equipment is being used appropriately within the design intent. Hariharan Ramachandran’s Fourth Question What are all the critical factors involved in decommissioning a control system?
Len Laskowski’s Answer The most critical factor is good documentation. You need to know what is going to happen to your unit and other units in the plant once an instrument, valve, loop or interlock is decommissioned. A proper risk and impact assessment has to be carried out prior to the decommissioning. One must ask very early on in a project’s development if all units controlled by the system are planning to shut down at the same time. This is needed for maintenance and upgrades. Power distribution and other utilities are critical. One may not be able to demo a system because it would affect other units. In many cases, a system cannot be totally decommissioned until the next shutdown of the operating unit and it may require simultaneous shutdowns of neighboring units as well. Waste management strategy, regulatory framework and environmental safety control are the other factors to be considered. Hariharan Ramachandran’s Feedback A proper risk and impact assessment has to be carried out prior to the decommissioning. Waste management strategy, regulatory framework and environmental safety control are the other factors to be considered. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program. Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk  column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.). About the Author Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. 
Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Webinar Recording: The Amazing World of ISA Standards

    The post Webinar Recording: The Amazing World of ISA Standards first appeared on the ISA Interchange blog site. This educational ISA webinar was presented by  Greg McMillan  in conjunction with the  ISA Mentor Program . Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now  Eastman Chemical ). Historically, predictive maintenance required very expensive technology and resources, like data scientists and domain experts, to be effective. Thanks to artificial intelligence (AI) methods such as machine learning making its way into the mainstream, predictive maintenance is now more achievable than ever. Our webinar will explore how machine learning is changing the game and greatly reducing the need for data scientists and domain experts. These technologies self-learn and autonomously monitor for data pattern anomalies. Not only does this make predictive maintenance far more practical than what was historically possible, but now predictions 30 days in advance are the norm. Don’t let the old way of doing predictive maintenance cause you loss in productivity any longer. This webinar covers: AI made real Leveraging existing technology for a higher ROI Learning from downtime event history How to never be blindsided by breakdown again About the Featured Presenter Nicholas P. Sands, P.E., CAP, serves as senior manufacturing technology fellow at DuPont , where he applies his expertise in automation and process control for the DuPont Safety and Construction business (Kevlar, Nomex, and Tyvek). During his career at DuPont, Sands has worked on or led the development of several corporate standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety. Nick is: an ISA Fellow; co-chair of the ISA18 committee on alarm management; a director of the ISA101 committee on human machine interface; a director of the ISA84 committee on safety instrumented systems; and secretary of the IEC (International Electrotechnical Commission) committee that published the alarm management standard IEC62682. He is a former ISA Vice President of Standards and Practices and former ISA Vice President of Professional Development, and was a significant contributor to the development of ISA’s Certified Automation Professional program. He has written more than 40 articles and papers on alarm management, safety instrumented systems, and professional development, and is co-author of the new edition of A Guide to the Automation Body of Knowledge . Nick is a licensed engineer in the state of Delaware. He earned a bachelor of science degree in chemical engineering at Virginia Tech. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program. About the Presenter Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. 
Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry . Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Missed Opportunities in Process Control - Part 3

Here is the third part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts . You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry. The following list reveals common misconceptions that need to be understood to seek real solutions that actually address the opportunities. Dead time dominant loops need Model Predictive Control or a Smith Predictor . There are many reasons for Model Predictive Control but dead time dominance is not really one of them. Dead time compensation can be simply done by inserting a dead time block in the external-reset feedback path, making a conventional PID an enhanced PID, and then tuning the PID much more aggressively. This enhanced PID is much easier to implement than a Smith Predictor because there is no need to identify and update an open loop gain or primary open loop time constant and there is no loss of the controlled variable seen on the PID faceplate. An additional pervasive misconception is that dead time dominant processes benefit the most from dead time compensation. It turns out that the reduction in integrated error for an unmeasured process input load disturbance is much greater for lag dominant processes, especially near-integrating processes. While the improvement is significant, the performance of a lag dominant process is often already impressive provided the PID is tuned aggressively (e.g., integrating process tuning rules with minimum arrest time). For more details see the ISA Mentor Program Q&A Post " How to Improve Loop Performance for Dead Time Dominant Systems ". The model dead time should not be smaller than the actual loop dead time . For Model Predictive Control, Smith Predictors and an enhanced PID, a model dead time larger than the actual dead time by just 40% can lead to fast oscillations. These controllers are less sensitive to a model dead time smaller than the actual dead time. For a conventional PID, tuning based on a model dead time larger than the actual dead time just causes a sluggish response, so in general conventional PID tuning is based on the largest possible loop dead time. Cascade control loops will oscillate if the cascade rule is violated . For small or slow setpoint changes or unmeasured load disturbances, the loop may not break out into oscillations. While it is not a good idea to violate the rule that the secondary loop be at least five times faster than the primary loop, there are simple fixes. The simplest and easiest fix is to turn on external-reset feedback, which will prevent the primary loop integral mode from changing faster than the secondary loop can respond. It is important that the external-reset feedback signal be the actual process variable of the secondary loop. There is no need to slow down the tuning of the primary loop, which is the most common quick fix if the secondary loop tuning cannot be made faster. Limit cycles are inevitable from resolution limits .
While one or more integrators anywhere in the system can cause a limit cycle from a resolution limit, turning on external-reset feedback can stop the limit cycle. The external-reset feedback signal must be a fast readback of the actual manipulated valve position or speed. Often readback signals are slow, and changes or lack of changes in the readback of actuator shaft position are not representative of the actual ball or disk movement for on-off valves posing as control valves. While external-reset feedback can stop the limit cycle, there is an offset from the desired valve position. For some exothermic reactors, it may be better to have a fast limit cycle in the manipulated coolant temperature than an offset, because tight temperature control is imperative and the oscillation is attenuated (averaged out) by the well-mixed reactor volume. Fast opening and slow closing surge valves will cause oscillations unless the PID is tuned for the slower valve response . It is desirable that the surge valve be fast in terms of increasing and slow in terms of decreasing vent or recycle flow for compressor surge control. Generally, this was done in the field by restricting the actuator fill rate or enhancing the exhaust rate with a quick exhaust valve since the surge valves are fail open. The controller had to be tuned to deal with the crude unknown rate limiting. Using different setpoint up and down rate limits on the analog output block and turning on external-reset feedback via a fast readback of the actual valve position make the adjustment much more exact and visible. The controller does not need to be tuned for the slow closing rate because the integral mode will not outrun the response of the valve. Prevention of oscillations at the split range point requires a deadband in the split range block . A dead band anywhere in the loop adds a dead time that is the dead band divided by the rate of change of the signal. Dead band will cause a limit cycle if there are two or more integrators anywhere in the control loop including the positioner, process, and cascade control. The best solution is a precise, properly sized control valve with minimal backlash and stiction and a linear installed flow characteristic. External-reset feedback with setpoint rate limits can be added in the direction of opening or closing a valve at the split range point to instill patience and eliminate unnecessary crossings of the split range point. For small and large valves, the better solution is a valve position controller that gradually and smoothly moves the big valve to ensure the small valve manipulated by the process controller is in a good throttle position. Valve position controller integral action must be ten times slower than process controller integral action to prevent oscillations . External-reset feedback in the valve position controller, with a fast readback of the actual big valve position and up and down setpoint rate limits on the analog output block for the large valve, can provide slow gradual optimization but a fast getaway for abnormal operation to prevent running out of the small valve’s range. This is called directional move suppression and is generally beneficial when valve position controllers are used to maximize feed or minimize compressor pressure or maximize cooling tower or refrigeration unit temperature setpoints. One of the advantages of Model Predictive Control is move suppression to slow down changes in the manipulated variable that would be disruptive.
Here we have the additional benefit of the move suppression being directional with no need to retune. High PID gains causing fast, large changes in PID output upset operators and other loops . The peak and integrated errors for unmeasured load disturbances are inversely proportional to the PID gain. A high PID gain is necessary to minimize these errors and to get to setpoint faster for setpoint changes. Too low a PID gain is unsafe for exothermic reactor temperature control and can cause slow, large amplitude oscillations in near-integrating and true integrating processes. Higher PID gains can be used to increase loop performance without upsetting operators or other loops by turning on external-reset feedback, putting setpoint rate limits on the analog output block or secondary loop, and providing an accurate fast feedback of the manipulated valve position or process variable. Large control valve actuators and VFD rate limiting to prevent motor overload require slowing down the PID tuning to prevent oscillations . Turning on external-reset feedback and using a fast accurate readback of valve position or VFD speed enables faster tuning to be used that makes the response to small changes in PID output much faster. Of course, the better solution is a faster valve or larger motor. Since there is always a slewing rate or speed rate limit in the VFD setup, using external-reset feedback with fast readback is a good idea in general. Large analyzer cycle times require PID detuning to prevent oscillations . While the additional dead time that is 1.5 times the cycle time is excessive in terms of the ability of the loop to deal with unmeasured load disturbances, when this additional dead time is greater than the 63% process response time an intelligent computation of integral action using external-reset feedback can enable the resulting enhanced PID gain to be as large as the inverse of the open loop gain for self-regulating processes, even if the cycle time increases. This means the enhanced PID could be used with an offline analyzer with a very large and variable time between reported results. While load disturbances are not corrected until an analytical result is available, the enhanced PID does not become unstable. The intelligent calculation of the proportional (P), integral (I) and derivative (D) modes is only done when there is a change in the measurement. The time interval between the current and last result is used in the I and D mode computations. The input to the I mode computation is the external-reset feedback signal. If there is no response of the manipulated variable, the I mode contribution does not change. An analyzer failure will not cause a PID response since there is no change in the P or D mode contribution unless there is a new result or setpoint. The same benefits apply to wireless loops (the additional dead time is ½ the update rate). For more details see the Control Talk Blog “ Batch and continuous control with at-line and offline analyzer tips ”.
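To illustrate the enhanced PID behavior just described, here is a simplified Python sketch of a PI-only version. It follows the description above (compute only on a new result, use the elapsed time since the last result, and build the integral contribution from the external-reset feedback), but the class name and the exponential filter form of the reset calculation are assumptions for illustration, not the vendor implementation.

```python
import math

class EnhancedPI:
    """Sketch of a PI controller for slow, irregular analyzer updates.
    Integral action is the positive-feedback (external-reset) form: a
    filter of the external-reset feedback with the reset time, advanced
    only by the elapsed time between new results."""

    def __init__(self, kp, reset_time, out_init=0.0):
        self.kp = kp                  # controller gain
        self.reset_time = reset_time  # reset (integral) time
        self.filter = out_init        # integral contribution
        self.out = out_init
        self.last_pv = None
        self.last_sp = None
        self.last_time = None

    def update(self, pv, sp, now, ext_reset_fb):
        # Hold the output unless a new analyzer result or setpoint arrives
        if pv == self.last_pv and sp == self.last_sp:
            return self.out
        if self.last_time is not None:
            dt = now - self.last_time
            # Advance the reset filter by the elapsed time between results.
            # If the manipulated variable did not respond (ext_reset_fb equal
            # to the filter value), this term does not move.
            self.filter += (ext_reset_fb - self.filter) * (1.0 - math.exp(-dt / self.reset_time))
        self.out = self.kp * (sp - pv) + self.filter
        self.last_pv, self.last_sp, self.last_time = pv, sp, now
        return self.out
```

Holding the output between results is what keeps the loop from breaking into oscillations even when the analyzer cycle time is many times the process response time.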
  • Solutions for Unstable Industrial Processes

The post Solutions for Unstable Industrial Processes first appeared on the ISA Interchange blog site. The following technical discussion is part of an occasional series showcasing the ISA Mentor Program , authored by Greg McMillan , industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical ). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants. In the ISA Mentor Program , I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Caroline Cisneros. Negative resistance, also known as positive feedback, can cause processes to jump, accelerate and oscillate, confusing the control system and the operator. These are characterized as open loop unstable processes. Not properly addressing these situations can result in equipment damage and plant shutdowns besides the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive. Caroline Cisneros , a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks a question about the dynamics that cause unstable processes. The deeper understanding gained as to the sources of instability can lead to process and control system solutions to minimize risk and to increase process performance. Caroline Cisneros’ Question What causes processes to be unstable when controllers are in manual? Greg McMillan’s Answer Fortunately, most processes are self-regulating by virtue of having negative feedback that provides a resistance to excursions (e.g., flow, liquid pressure, and continuous composition and temperature). These processes come to a steady state when the controller is in manual. Somewhat less common are processes that have no feedback, which results in a ramp (e.g., batch composition and temperature, gas pressure and level). Fortunately, the ramp rate is quite slow except for gas pressure, giving the operator time to intervene. There are a few processes where the deviation from setpoint can accelerate when in manual due to positive feedback. These processes should never be left in manual. We can appreciate how positive feedback causes problems in sound systems (e.g., microphones too close to speakers). We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control. The compressor characteristic curves from the compressor manufacturer, which are a plot of compressor pressure rise versus suction flow, show a curve of decreasing pressure rise for each speed or suction vane position whose slope magnitude increases as the suction flow increases in the normal operating region. The pressure rise consequently decreases more as the flow increases, opposing additional increases in compressor flow and creating a positive resistance to flow.
Not commonly seen is that the compressor characteristic curve slope to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve; then, as the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow. ISA Mentor Program The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program. When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign, creating negative resistance. The compressor flow jumps in about 0.03 seconds from the start of negative resistance to some point of positive resistance. The result is a jump in 0.03 seconds to negative flow across the negative resistance, a slower transition along positive resistance to zero flow, then a jump in 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds. If this seems confusing, don’t feel alone. The PID controller is confused as well. Once a compressor gets into surge, the very rapid jumps and oscillations are too much for a conventional PID loop. Even a very fast measurement, PID execution rate and control valve response can’t deal with it alone. Consequently, the oscillation persists until an open loop backup activates and holds open the surge valves until the operating point is sustained well to the right of the surge curve for about 10 seconds, at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID working bumplessly with an open loop backup that detects a zero slope indicating an approach to surge or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point. For much more on compressor surge control see the article Compressor surge control: Deeper understanding, simulation can eliminate instabilities. The same shape, but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users. A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured. Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat of reaction (a simple simulation of this acceleration is sketched below).
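A minimal simulation makes the acceleration from positive feedback easy to see. The sketch below takes a linearized view in which the temperature deviation grows at a rate set by a positive feedback time constant once heat generation outpaces heat removal; the time constant, step size and initial deviation are illustrative assumptions, not values for any real reactor.

```python
import numpy as np

# Linearized view of an exothermic reactor temperature deviation:
#   d(dev)/dt = dev / tau_p   once heat generation outpaces heat removal,
# where tau_p is the positive feedback time constant (illustrative value).
tau_p = 600.0                       # s, assumed positive feedback time constant
dt = 1.0                            # s, integration step
t = np.arange(0.0, 1801.0, dt)      # simulate 30 minutes
dev = np.zeros_like(t)
dev[0] = 0.5                        # degC, small initial deviation

for k in range(1, len(t)):
    # Positive feedback: the deviation feeds its own growth (exponential runaway)
    dev[k] = dev[k - 1] + dt * dev[k - 1] / tau_p

print(f"deviation after 10 min: {dev[t <= 600][-1]:.2f} degC")
print(f"deviation after 30 min: {dev[-1]:.2f} degC")
```

The deviation roughly doubles every 0.7 positive feedback time constants, which is why even a small excursion eventually outruns a fixed cooling capacity and why such loops should never be left in manual.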
However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential where the time constants and dead time associated with heat transfer in the cooling surface and thermowell and the loop response are much less than the positive feedback time constant. Derivative action must be maximized and integral action must be minimized. In some cases a proportional plus derivative controller is used. The runaway response of such reactors is characterized by a positive feedback time constant, as shown in Figure 1 for an open loop response. The positive feedback time constant is calculated from the ordinary differential equations for the energy balance as shown in Appendix F of 101 Tips for a Successful Automation Career . The point of acceleration cannot be measured in practice because it is unsafe to have the controller in manual. A PID gain that is too low will allow a reactor to run away since the PID controller is not adding enough negative feedback. There is a window of allowable PID gains that closes as the time constants from the heat transfer surface and thermowell and the total loop dead time approach the positive feedback time constant. Figure 1: Positive Feedback Process Open Loop Response Positive feedback can also occur when parallel heat exchangers have a common process fluid input, each with an outlet temperature controller whose setpoint is close to the boiling point or a temperature resulting in vaporization of a component in the process fluid. Each temperature controller is manipulating a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating, causing bubbles to form and expand back into the exchanger, increasing the back pressure and hence further decreasing the process flow through this hot exchanger. The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration and possibly damage to any discontinuity in its path from slugs of water. When nearly all of the water is pushed out of the hot exchanger, its temperature drops, drawing feed that was going to the cold heat exchanger, which causes the hot exchanger to overheat, repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in the flow to one exchanger do not affect another, and a lower temperature setpoint. To summarize, to eliminate oscillations, the best solution is a process and equipment design that eliminates negative resistance and positive feedback. When this cannot provide the total solution, operating points may need to be restricted, the loop dead time and thermowell time constant minimized, and the controller gain increased with integral action decreased or suspended. Additional Mentor Program Resources See the ISA book  101 Tips for a Successful Automation Career  that grew out of this Mentor Program to gain concise and practical advice. See the  InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants.
See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author

Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part-time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions, specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011. Connect with Greg
  • Missed Opportunities in Process Control - Part 2

The post, Missed Opportunities in Process Control - Part 2, first appeared on the ControlGlobal.com Control Talk blog.

Here is the second part of a point-blank, decisive, comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think, and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts. You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook, Sixth Edition, capturing the expertise of 50 leaders in industry.

Ratio control instead of feedforward control. Most of the literature focuses on feedforward control. This is like flying blind for the operator. In most cases there is a flow measurement, and the primary process loop dead time is large enough for cascade control using a secondary flow loop. A ratio controller is then set up whose input is the flow signal that would have been the feedforward signal. This could be a disturbance or wild flow, or a feed flow in the applications involving most vessels (e.g., crystallizers, evaporators, neutralizers, reactors …) and columns. To provide a shortcut in categorization, I simply call it the “leader flow”. The secondary loop is then the “follower” flow, whose setpoint is the “leader” flow multiplied by the ratio controller setpoint (a minimal sketch of this calculation appears at the end of this post). A bias is applied to the ratio controller output, similar to what is done by a feedforward summer, that is corrected by the primary loop. The operator can change the ratio setpoint and see the actual ratio after correction. Improvements can be made to the ratio setpoint based on recognizable persistent differences between the set and actual ratio. Many vessels and most columns are started up on ratio control until normal operating conditions are reached. When primary loops use an analyzer, ratio correction may be suspended when the analyzer misbehaves. If the flow measurement lacks sufficient rangeability, a flow can be computed from the installed flow characteristic and substituted for the flow measurement at low flows. A notable exception is the avoidance of ratio control for steam header pressures, since the dead time is too short for cascade control, consequently necessitating feedforward control.

Adaptation of feedforward gain and ratio control setpoint. A simple adaptive controller similar to a valve position controller (VPC) for optimization can be used. The adaptive controller setpoint is zero correction, its process variable is the current correction, and its output is the feedforward gain or ratio setpoint. Like a VPC, the traditional approach would be a slow integral-only controller where the integral action is more than 10 times slower than in the primary loop controller. However, the opportunity for directional move suppression described next month can provide more flexibility and opportunity to deal with undesirable conditions.

PID Form and Structure used in Industry. The literature often shows the “Independent” Form that computes the contribution of the P, I and D modes in parallel with the proportional gain not affecting the I and D modes.
The name “Independent” is appropriate not only because the contributions of the modes are independent of each other but also because this Form is independent of what is normally used in today's distributed control systems (DCSs). Often the tuning parameters for I and D are an integral gain and a derivative gain, respectively, rather than a time. The “Series” or “Real” Form necessarily used in pneumatic controllers was carried over to electronic controllers and is offered as an option in DCSs. The “Series” Form causes an interaction in the time domain that can be confusing but prevents the D mode contribution from exceeding the I mode contribution. Consequently, tuning where the D mode setting is larger than the I mode setting does not cause oscillations. If these settings are carried over to the “Ideal” Form used more extensively today, the user is surprised by unsuspected fast oscillations (see the conversion sketch at the end of this post). The different units for tuning settings also cause havoc. Some proportional modes still use proportional band in percent, and integral settings could be repeats per minute or repeats per second instead of minutes or seconds. Then you have the possibility of an integral gain and derivative gain in an Independent Form. Also, the names given by suppliers for Forms are not consistent. There are also 8 structures offering options to turn off the P and I modes or use setpoint weight factors for the P and D modes. The D mode is simply turned off by a zero setting (zero rate time or derivative gain). I am starting an ISA Standards Committee for PID algorithms and performance to address these issues and many more. For more on PID Forms see the ISA Mentor Q&A “How do you convert tuning settings of an independent PID?”

Sources of Deadband, Resolution, Sensitivity, and Velocity Limit. Deadband can originate from backlash in linkages or connections, deadband in a split range configuration, and deadband in the Variable Frequency Drive (VFD) setup. Resolution limitations can originate from stiction and analog-to-digital conversion or computation. Sensitivity limitations can originate from actuators, positioners, or sensors. Velocity limits can originate from the valve slewing rate set by the positioner or booster relay capacity and actuator volume, and from speed rate limits in the VFD setup.

Oscillations from Deadband, Resolution, Sensitivity, and Velocity Limit. Deadband can cause a limit cycle if there are two or more integrators in the process or control system, including the positioner. Thus, a positioner with integral action will create a limit cycle in any loop with integral action in the controller. Positioners should have high gain proportional action and possibly some form of derivative action. A resolution limit can cause a limit cycle if there is one or more integrators in the process or control system, including the positioner. Positioners with poor sensitivity have been observed to create essentially a limit cycle. A slow velocity limit causes oscillations that can be quite underdamped.

Noise, resonance and attenuation. The best thing to do is eliminate the source of oscillations, often due to the control system, as detailed in the Control Talk blog “The Most Disturbing Disturbances are Self-Inflicted”. Oscillation periods faster than the loop dead time are essentially noise. There is nothing the loop can do, so the best thing is to ignore it. Oscillation periods between one and ten dead times cause resonance; the controller tuning needs to be less aggressive to reduce amplification.
If the oscillation period is more than 10 times the dead time, the controller tuning needs to be more aggressive to provide attenuation.

Control Loops Transfer Variability. We would like to think a control loop makes variability completely disappear. What it actually does is transfer the variability from the controlled variable to the manipulated variable. For many level control loops, we want to minimize this transfer and give it the more positive terminology “maximization of absorption” of variability. This is done by less aggressive tuning that still prevents activation of alarms. The user must be careful that for near-integrating and true integrating processes, the controller gain is not decreased without increasing the integral time, so that the product of the controller gain and integral time stays greater than twice the inverse of the integrating process gain; otherwise, large, slow rolling, nearly undamped oscillations develop with a period forty or more times the dead time.

Overshoot of Controller Output. Some articles have advocated that the PID controller should be tuned so its output never overshoots the final resting value (FRV). While this may be beneficial for balanced self-regulating processes, particularly those seen in refineries, it is flat out wrong and potentially unsafe for near-integrating, true integrating and runaway processes. In order to get to a new setpoint or recover from a disturbance, the controller output must overshoot the FRV. This generally requires that the integral mode not dominate the proportional and derivative mode contributions. Integrating process tuning rules are used.

Hidden Factor in Temperature, Composition and pH Loops. The process gain in these loops for continuous and fed-batch operations is generally plotted versus a ratio of manipulated flow to feed flow. To provide a process gain with the proper units, you need to divide by the feed flow. Most people don't realize the process gain is inversely proportional to feed flow. This is particularly a problem at low production rates, resulting in a very large hidden factor. For much more on this, see the Control Talk blog “Hidden factor in Our Most Important Loops”.

Variable Jacket Flow. If the flow to a vessel jacket is manipulated for temperature control, you have a double whammy. The low flow for a low cooling or heating demand causes an increase in the process gain per the hidden factor and an increase in the process dead time due to the larger transportation delay. The result is often a burst of oscillations from tuning that would be fine at normal operating conditions. A constant jacket flow should be maintained by recirculation and the manipulation of the coolant or heating utility makeup flow (preferably steam flow to a steam injector) for high heat demands. The utility return flow is made equal to the makeup flow by a pressure controller on the jacket outlet manipulating the return flow.
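As a companion to the ratio control item above, here is a minimal sketch, with hypothetical tag names and numbers, of the ratio station arithmetic it describes: the follower flow setpoint is the leader flow multiplied by the ratio setpoint, plus a bias trimmed by the primary loop, and the operator sees the resulting actual ratio.

```python
# A minimal sketch (hypothetical names and values, not from the article) of the
# ratio station arithmetic: follower setpoint = leader flow * ratio setpoint + bias,
# where the bias is the correction trimmed by the primary (e.g., composition) loop.

def follower_flow_setpoint(leader_flow: float,
                           ratio_setpoint: float,
                           primary_bias: float = 0.0) -> float:
    """Setpoint sent to the secondary (follower) flow loop."""
    return leader_flow * ratio_setpoint + primary_bias

def actual_ratio(follower_flow: float, leader_flow: float) -> float:
    """What the operator sees: the actual ratio after correction."""
    return follower_flow / leader_flow if leader_flow else float("nan")

# Example: 100 kg/min feed (leader), desired ratio 0.50, primary loop adds a
# +2 kg/min correction, so the follower flow loop receives a 52 kg/min setpoint.
sp = follower_flow_setpoint(leader_flow=100.0, ratio_setpoint=0.50, primary_bias=2.0)
print(sp, actual_ratio(sp, 100.0))   # 52.0  0.52
```

Exposing the actual ratio is what keeps the operator from flying blind: a persistent difference between the set and actual ratio suggests an improvement to the ratio setpoint.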
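Finally, here is the conversion sketch referenced in the PID Form discussion above. It uses the standard textbook algebra for converting “Series” (interacting) settings to “Ideal” (non-interacting) equivalents; supplier equations and names vary, so treat this as an illustration rather than a drop-in for any particular DCS.

```python
# A minimal sketch of converting "Series" (interacting) PID settings to the
# "Ideal" (non-interacting) form, using the standard textbook expansion of the
# series transfer function. Check your own DCS documentation for the exact
# equations and names your supplier uses.

def series_to_ideal(kc: float, ti: float, td: float):
    """Convert Series gain, reset time, and rate time to Ideal equivalents."""
    factor = 1.0 + td / ti          # interaction factor (>= 1)
    kc_ideal = kc * factor          # effective gain is larger than the Series gain
    ti_ideal = ti * factor          # effective reset time is Ti + Td
    td_ideal = td / factor          # effective rate time is Ti*Td/(Ti+Td)
    return kc_ideal, ti_ideal, td_ideal

# Example: a Series tuning with rate time equal to reset time (Ti = Td = 2 min)
# behaves like an Ideal PID with twice the gain and only half the rate time.
print(series_to_ideal(kc=1.0, ti=2.0, td=2.0))   # (2.0, 4.0, 1.0)
```

The equivalent Ideal rate time is always less than the equivalent reset time, which is why a Series tuning with rate time equal to or larger than reset time behaves sensibly, while the same numbers typed directly into an Ideal PID can produce the fast oscillations mentioned above.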