Posts on this page are from the Control Talk blog, one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan’s contributions to the ISA Interchange blog.

Tips for New Process Automation Folks
    • 1 Aug 2019

    Missed Opportunities in Process Control - Part 6

    The post, Missed Opportunities in Process Control - Part 6, first appeared on the ControlGlobal.com Control Talk blog.

    Here is the sixth part of a point-blank, decisive, comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think, and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A posts.

    You can also get a comprehensive resource focused on what you really need to know for a successful automation project, including nearly a thousand best practices, in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition, capturing the expertise of 50 leaders in industry.

    1. Add small amounts of dissolved carbon dioxide (DCO2) and conjugate salts to make computed titration curves match laboratory titration curves (a computed-curve sketch appears after this list). The great disparity between theoretical and actual titration curves is due to conjugate salts and incredibly small amounts of DCO2 from simple exposure to air, with a corresponding amount of carbonic acid created. Instead of titration curve slopes and thus process gains increasing by 6 orders of magnitude as you go from 0 to 7 pH for strong acids and strong bases, in reality the slope increases by 2 orders of magnitude, still a lot but 4 orders of magnitude off. Thus, control system analysis and supposed linearization by translation of the controlled variable from pH to hydrogen ion concentration by the use of theoretical equations for a strong acid and strong base is off by 4 orders of magnitude. I made this mistake early in my career (about 40 years ago) but learned at the start of the 1980s that DCO2 was the deal breaker. I have seen the theoretical linearization published by others about 20 years ago and most recently just last year. For all pH systems, the slope between 4 and 7 pH is greatly moderated due to the carbonic acid pKa = 6.35 at 25°C. The titration curve is also flattened within two pH units of the logarithmic acid dissociation constant (pKa) of an acid or base that has a conjugate salt. To match computer generated titration curves to laboratory titration curves, add small amounts of DCO2 and conjugate salts as detailed in the Chemical Processing feature article “Improve pH control”.
    2. Realize there is a multiplicative effect for biological process kinetics that creates restrictions on experimental methods to analyze or predict cell growth or product formation. While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has specific growth rate and product formation as the result of a multiplication of factors, each between 0 and 1, to detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die. It does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to 1 and as constant as possible except for the one of interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly 1.
    3. Take advantage of the great general applicability and ease of parameter adjustment in Michaelis-Menten equations for the effect of concentrations and Convenient Cardinal equations for the effect of temperature and pH on biological processes (see the kinetics sketch after this list). The Mimic bioreactor model in a digital twin takes advantage of these breakthroughs in first principle modeling. For temperature and pH, Convenient Cardinal equations are used where the optimum temperature or pH for the growth and production phases is simply the temperature or pH setpoint, including any shifts for batch phases. The minimum and maximum temperatures complete the parameter settings. This is a tremendous advancement over the traditional use of Arrhenius equations for temperature and Villadsen-Nielsen equations for pH, which required parameters that are not readily available and must be set with a precision of six or seven decimal places. Generalized Michaelis-Menten equations shown to be useful for modeling intracellular dynamics can model the extracellular limitation and inhibition effects of concentrations. The equations provide a link between macroscopic and microscopic kinetic pathways. If the limiting or inhibition effect is negligible or needs to be temporarily removed, the limitation or inhibition parameter is simply set to 0 g/L or 100 g/L, respectively. The biological significance and ease of setting parameters is particularly important since most kinetics are not completely known and what is defined can be quite subject to restrictions on operating conditions. These revolutionary equations enable the same generalized kinetic model to be used for all types of cells. Previously, yeast cells (e.g., ethanol production), fungal cells (e.g., antibiotic production), bacterial cells (e.g., simple proteins), and mammalian cells (e.g., complex proteins) had specialized equations developed that did not generally carry over to different cells and products.
    4. Always use smart sensitive valve positioners with good feedback of actual position, tuned with a high gain and no integral action, on true throttling valves (please read the following despite its length since misunderstandings are pervasive and increasing). A very big and potentially dangerous mistake persists today from a decades old rule that positioners should not be used on fast loops. The omission of a sensitive and tuned valve positioner can increase limit cycle period and amplitude by an order of magnitude and severely jeopardize rangeability and controllability. Without a positioner, some valves may require a 25% change in signal to open, meaning that controlling below 25% signal is unrealistic. As a young lead I&E engineer for the world’s largest acrylonitrile plant back in the mid-1970s, I used the rule about fast loops. The results were disastrous. I had to hurriedly install positioners on all the loops during startup. A properly designed, installed and tuned positioner should have a response time of less than 0.5 seconds. A positioner gain greater than 10 is sought, with rate action added when offered. Positioners developed in the 1960s were proportional only with a gain of 50 or more and a high sensitivity (0.1%). Since then, spool positioners with extremely poor sensitivity (2%) have been offered, and integral action has been included and even recommended in some misguided documents by major suppliers. Do not use integral action in the positioner despite default settings to the contrary. A volume booster can be added to the positioner output to make the response time faster. Using a volume booster instead of a positioner is dangerous as explained in the next point. If you cannot achieve the 0.5 second response time, something is wrong with the type of valve, packing, positioner, installation and/or tuning of the positioner; this is not a reason for saying that you should not use positioners on fast loops. An increasing threat ever since the 1970s has been on-off valves posing as throttling valves. They are much less expensive, are in the piping spec, and have much tighter shutoff. The actuator shaft feedback may change even though the actual ball or disk has not moved for a signal change of 8% or more. In this case, even the best positioner is of little help since it is being lied to as to actual position. Valve specifications have an entry for leakage but typically have nothing on valve backlash, stiction, response time, or actuator sensitivity. I can’t seem to get even a discussion started as to how to get this changed and how rangeability and controllability are so adversely affected. If you need a faster response or are stuck with an on-off valve, then you need to consider a variable frequency drive with a pulse width modulated inverter (see point 35 in part 4 of this series). Also be aware that theoretical studies based solely on process dynamics are seriously flawed since in most fast loops the sensors, transmitters, signal filters, and scan and execution times are a larger source of time constants and dead time than the actual process, making the loop much slower than what is shown in studies based on the process response alone, as noted in point 9 in part 1. For much more on how to deal with this increasing threat, read the Control articles “Is your control valve an imposter?” and “How to specify valves and positioners that don’t compromise control”.
    5. Put a volume booster on the output of the positioner with a bypass valve opened just enough to make the valve stroking time much faster, recognizing that replacing a positioner with a booster poses a major safety risk. Another decades old rule said to replace a positioner with a booster on fast loops. For piston actuators, a positioner is required for the actuator to work at all. For diaphragm actuators, a volume booster instead of a positioner creates positive feedback (flexure of the diaphragm changes the volume and consequently the pressure seen by the booster’s highly sensitive outlet port), causing a fail open butterfly valve to slam shut from fluid forces on the disk. This happened to me on a huge compressor and was avoided on the next project after I showed that, due to the positive feedback from the booster and diaphragm actuator combination, I could position 24 inch fail open butterfly valves for furnace pressure control by simply grabbing the shaft. Since properly sized diaphragm actuators generally have an order of magnitude better sensitivity than piston actuators and the operating pressure of newer diaphragm actuators has been increased, diaphragm actuators are increasingly the preferred solution.
    6. Understand and address the reality that processes can have a dead time dominant, balanced self-regulating, near-integrating, true integrating, or runaway response (a tuning sketch using integrating process rules appears after this list). Most of the literature studies balanced self-regulating processes where the process time constant is about the same size as the process dead time. Some studies address dead time dominant processes where the dead time is much greater than the process time constant. Dead time dominant processes are less frequent and mostly occur when there is a large dead time from process transportation delay (e.g., plug flow volumes or conveyors) or analyzer sample and cycle time (see points 1 and 9 in part 3 on how to address these applications). The more important loops tend to be near-integrating, where the process time constant is more than 4 times larger than the process dead time; true integrating, where the process will continually ramp when the controller is in manual; and runaway, where the process deviation will accelerate when the controller is in manual. Continuous temperature and composition loops on volumes with some degree of mixing due to reflux, recycle or agitation have a near-integrating response. Batch composition and temperature have a true integrating response. The runaway response occurs in highly exothermic reactors, typically polymerization reactors, but is never actually observed because it is too dangerous to put the controller in manual during a reaction long enough to see much acceleration. Most gas pressure loops and, of course, nearly all level loops have an integrating response. It is critical to tune PID controllers on near-integrating, true integrating and runaway processes with maximum gain, minimum reset action and maximum rate action so that the PID can provide the negative feedback action missing in these processes. As a practical matter, near-integrating, true integrating, and runaway processes are tuned with integrating process tuning rules where the initial ramp rate is used to estimate an integrating process gain.
    7. Maximize synergy between chemical engineering, biochemical engineering, electrical engineering, mechanical engineering and computer science. All of these degrees bring something to the party for a successful automation system implementation. The following simplification provides some perspective: chemical and biochemical engineers offer process knowledge, electrical engineers offer control, instrumentation and electrical system knowledge, mechanical engineers offer equipment and piping knowledge, and computer scientists offer data historian and industrial internet knowledge. All of these people plus operators should be involved in process control improvements and whatever is expected from the next big thing (e.g., Industrial Internet of Things, Digitalization, Big Data and Industry 4.0). The major technical societies, especially AIChE, IEEE, ISA, and ASME, should see the synergy of exchange of knowledge rather than the current view of other societies as competition.
    8. Identify and document justifications to develop new skills, explore new opportunities and innovate. Increasing emphasis on reducing project costs is overloading practitioners to the point they don’t have time to attend short courses, symposiums or even online presentations. Contributing factors are loss of expertise from retirements, fear of making any changes, and present day executives who have no industry experience and are focused on financials, such as reducing project expenditures and shortening project schedules. At this point, practitioners must be proactive and investigate opportunities and process metrics on their own time. Developing skills with the digital twin can be a way of defining and showing associates and management the type and value of improvements as noted in all points in part 5. The digital twin with demonstrated key performance indicators (KPIs) showing the value of increases in process capacity or efficiency, plus data analytics and Industry 4.0, can lead to people teaching people, eliminating silos, spurring creativity and deeper involvement, nurturing a sense of community and common objectives, and connecting the layers of automation and expertise so everybody knows everybody. To advance our profession, practitioners should seek to publish what is learned, which can be done generically without disclosing proprietary data.
    9. Use inferential measurements periodically corrected by at-line analyzers to provide fast analytical measurements of key process compositions (see the correction sketch after this list). First principle models or experimental models identified by model predictive control or data analytics software can be used to provide immediate composition measurements with none of the delay associated with the process sample system and analyzer cycle time. The inferential measurement result is synchronized with an at-line analyzer result by the insertion of a dead time equal to the sample transportation delay plus 1.5 times the analyzer cycle time. A fraction (usually less than 0.5) of the difference between the inferential measurement and the at-line analyzer result, after elimination of outliers, is added to correct the inferential measurement whenever there is an updated at-line analyzer result.
    10. Use inline analyzers and at-line analyzers whose sensors are in the process or near the process, respectively. There are many inline sensors available today (e.g., conductivity, chlorine, density, dissolved carbon dioxide, dielectric spectroscopy, dissolved oxygen, focused beam reflectance, laser based measurements, pH, turbidity, and viscosity). The next best alternative is an at-line analyzer located as close as possible to the process connection to minimize sample transportation delay. The practice of locating all analyzers in one building creates horrendous dead time. An example of an innovative fast at-line analyzer capable of extensive sensitive measurements of components plus cell concentration and size for biological processes is the Nova Bioprofile Flex. Chromatographs, near infrared, mass spectrometers, nuclear magnetic resonance, and MLT gas analyzers using a combination of non-dispersive infrared, ultraviolet and visible spectroscopy with electrochemical and paramagnetic sensors have increased functionality and maintainability.
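
    For point 1, here is a minimal Python sketch of how a titration curve can be computed from a charge balance so that a small amount of DCO2 (entered as total carbonate with pKa1 = 6.35 and pKa2 = 10.33 at 25°C) flattens the curve between 4 and 7 pH. This is my illustration rather than the Chemical Processing article’s method, and the 0.01 N acid and 0.0001 mol/L carbonate values are assumptions for demonstration only.

        KW = 1.0e-14          # water ion product at 25°C
        KA1 = 10.0**-6.35     # carbonic acid first dissociation constant
        KA2 = 10.0**-10.33    # bicarbonate second dissociation constant

        def ph_from_charge_balance(ca, cb, ct):
            """Solve H+ + Na+ = OH- + Cl- + HCO3- + 2CO3-- for pH by bisection.
            ca = strong acid normality, cb = strong base normality,
            ct = total carbonate from DCO2 (all in mol/L)."""
            def imbalance(ph):
                h = 10.0**-ph
                denom = h*h + KA1*h + KA1*KA2
                carbonate = ct * (KA1*h + 2.0*KA1*KA2) / denom
                return h + cb - KW/h - ca - carbonate
            lo, hi = 0.0, 14.0
            for _ in range(60):              # imbalance falls as pH rises
                mid = 0.5*(lo + hi)
                if imbalance(mid) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5*(lo + hi)

        ca = 0.01                            # hypothetical 0.01 N strong acid
        for ct in (0.0, 1.0e-4):             # no DCO2, then ~4 ppm as CO2
            print("total carbonate = %g mol/L" % ct)
            for ratio in (0.9, 0.99, 0.999, 1.0):
                ph = ph_from_charge_balance(ca, ca*ratio, ct)
                print("  base/acid ratio %.3f -> pH %.2f" % (ratio, ph))

    With zero carbonate the computed pH jumps steeply toward 7 at the equivalence point; with even 0.0001 mol/L of carbonate the curve is noticeably moderated between 4 and 7 pH, consistent with the laboratory behavior described above.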
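
    For points 2 and 3, the sketch below shows the multiplicative kinetic structure, with a Rosso-type cardinal factor standing in for the Convenient Cardinal equations and a generalized Michaelis-Menten factor for limitation and inhibition. All parameter values are hypothetical, chosen only to show that one bad factor drives the specific growth rate to zero regardless of the others.

        def cardinal(x, x_min, x_opt, x_max):
            """Rosso-type cardinal factor (0 to 1) for temperature or pH."""
            if x <= x_min or x >= x_max:
                return 0.0
            num = (x - x_max) * (x - x_min)**2
            den = (x_opt - x_min) * ((x_opt - x_min) * (x - x_opt)
                                     - (x_opt - x_max) * (x_opt + x_min - 2.0*x))
            return num / den

        def limitation_inhibition(s, k_s, i, k_i):
            """Michaelis-Menten style factors; k_s = 0 g/L removes the
            limitation (for s > 0) and k_i = 100 g/L makes the inhibition
            negligible."""
            return (s / (k_s + s)) * (k_i / (k_i + i))

        def specific_growth_rate(mu_max, temp, ph, glucose, lactate, do2):
            f_temp = cardinal(temp, 30.0, 37.0, 42.0)                   # °C
            f_ph = cardinal(ph, 6.4, 7.0, 7.6)
            f_conc = limitation_inhibition(glucose, 0.2, lactate, 4.0)  # g/L
            f_do2 = do2 / (0.001 + do2)                  # dissolved O2, g/L
            return mu_max * f_temp * f_ph * f_conc * f_do2  # product of factors

        print(specific_growth_rate(0.04, 37.0, 7.0, 2.0, 0.5, 0.006))  # healthy
        print(specific_growth_rate(0.04, 43.0, 7.0, 2.0, 0.5, 0.006))  # too hot -> 0.0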
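
    For point 6, here is a minimal sketch of the integrating process tuning rules: the integrating process gain is estimated from the change in ramp rate for a step in controller output, and the controller gain and reset then follow a widely published lambda tuning rule for integrating processes. The loop numbers are hypothetical.

        def integrating_gain(ramp_before, ramp_after, d_output):
            """Integrating process gain (%/sec per %) from the change in PV
            ramp rate for a step change in controller output."""
            return (ramp_after - ramp_before) / d_output

        def lambda_tune_integrating(ki, dead_time, lam):
            """Lambda tuning for integrating and near-integrating processes;
            lam is the desired arrest time (larger is more conservative)."""
            reset_time = 2.0 * lam + dead_time                  # sec/repeat
            gain = reset_time / (ki * (lam + dead_time)**2)
            return gain, reset_time

        # hypothetical loop: a 10% output step changes the ramp by 0.2 %/sec
        ki = integrating_gain(0.0, 0.2, 10.0)                   # 0.02 1/sec
        dead_time = 20.0                                        # seconds
        kc, ti = lambda_tune_integrating(ki, dead_time, lam=3.0*dead_time)
        print("Kc = %.1f, Ti = %.0f sec" % (kc, ti))            # Kc ~1.1, Ti 140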
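
    For point 9, here is a minimal sketch (my illustration, not production code) of synchronizing the inferential measurement with the at-line analyzer result and applying the fractional correction:

        from collections import deque

        class InferentialCorrector:
            """Fractional bias correction of an inferential measurement
            against an at-line analyzer result."""
            def __init__(self, transport_delay, cycle_time, scan, fraction=0.3):
                delay = transport_delay + 1.5 * cycle_time   # synchronization
                self.history = deque(maxlen=max(1, int(delay / scan)))
                self.fraction = fraction                     # usually < 0.5
                self.bias = 0.0

            def update(self, inferential, analyzer=None):
                """Call every scan; pass analyzer only on a new result that
                has already passed outlier elimination."""
                self.history.append(inferential)
                if analyzer is not None and len(self.history) == self.history.maxlen:
                    delayed = self.history[0]   # inferential one dead time ago
                    self.bias += self.fraction * (analyzer - (delayed + self.bias))
                return inferential + self.bias

        # e.g., 60 sec transport delay, 240 sec analyzer cycle, 10 sec scan
        corrector = InferentialCorrector(60.0, 240.0, 10.0)
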
    • 17 Jul 2019

    How Often Do Measurements Need to Be Calibrated?

    The post How Often Do Measurements Need to Be Calibrated? first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Greg Breitzke.

    Greg Breitzke is an E&I reliability specialist – instrumentation/electrical for Stepan. Greg has focused his career on project construction and commissioning as a technician, supervisor, or field engineer. This is his first in-house role, and he is tasked with reviewing and updating plant maintenance procedures for I&E equipment.

    Greg Breitzke’s Question

    I am working through an issue that can be beneficial to other Mentor Program participants. NFPA 70B provides a detailed description of the prescribed maintenance and frequency based on equipment type, making the electrical portion fairly straightforward. The instrumentation is another matter. We are working to consolidate an abundance of current procedures based on make/model to a reduced list based on technology. The strategy is to “right size” frequencies for calibration and functional testing, decreasing non-value-added maintenance to increase value-added activities within the existing head count.

    My current plan for the instrumentation consists of: 

    1. Sort through the historical paper files with calibration records to determine how long a device has remained in tolerance before a correction was applied,
    2. Compare data against any work orders written against the asset that may reduce the frequency,
    3. Apply safety factors relative to the device impact on safety, regulatory compliance, quality, custody transfer, basic control, or indication only.

    I am trying to provide a reference baseline for review of these frequencies, but am having little luck with the industry standards I have access to. Is there a standard or RAGAGEP for calibration and functional testing frequency min/max by technology that I can reference for a baseline?

    Nick Sands’ Answer

    The ISA recommended practice is not on the process of calibration but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems. While I contributed, Leo Staples would be a good person for more explanation.

    For SIS, there is a requirement to perform calibration, which is comparison against a standard device, within a documented frequency and with documented limits, and correction when outside of limits. This is also required by OSHA for critical equipment under the PSM regulation. EPA has similar requirements under its RMP rule, of course. Correction when out of limits is considered a failed proof test of the instrument in some cases, potentially affecting the reliability of the safety function. Paul Gruhn would be a good person for more explanation.

    Paul Gruhn’s Answer

    ISA/IEC 61511 is performance based and does not mandate specific frequencies. Devices must be tested at some interval to make sure they perform as intended. The frequency required will be based on many different factors (e.g., SIL (performance) target, failure rate of the device in that service, diagnostic coverage, redundancy used (if any), etc.).

    Leo Staples’ Answer

    Section 5.6 of the ISA recommended practice ISA-RP105.00.01-2017 addresses calibration verification intervals or frequencies in detail. Users should establish calibration intervals for a loop/component based on the following:

    • criticality of the loop/component
    • the performance history of the loop/component
    • the ruggedness/stability of the component(s)
    • the operating environment.

    Exceptions include SIS related devices, where calibration intervals are established to meet SIL requirements. Other factors that can drive calibration intervals include contracts and regulatory requirements.

    The idea for the technical report came about after years of frustration dealing with ambiguous gas measurement contracts and government regulations. In many cases these simply stated users should follow good industry practices when addressing all aspects of calibrations.

    Calibration intervals alone do not address the other major factors that affect measurement accuracy. These include the accuracy of the calibration equipment, knowledge of the calibration personnel, adherence to defined calibration procedures, and knowledge of the personnel responsible for the calibration program. I have lots of war stories if anyone is interested.

    One of the last things that I did at my company before I retired was develop a Calibration Program Standard Operating Procedure (SOP) based on ISA-RP105.00.01-2017. The SOP was designed for use in the Generation, Transmission & Distribution, and other Divisions of the Company. Some of you may find this funny, but it was even used to determine the calibration frequency for NERC CIP physical security entry control point devices. Initially, personnel from the Physical Security Department were testing these devices monthly only because that was what they had always done. While this was before the SOP was established, my team used its concepts in establishing the calibration intervals for these devices. This work was well received by the auditors. As a side note, the review of monthly calibration intervals for these devices found the practice caused more problems than it prevented.


    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    Greg McMillan’s Answer

    The measurement drift can provide considerable guidance in that when the number of months between calibrations multiplied by the drift per month approaches the allowable error, it is time for a calibration check. Most transmitters today have a low drift rate, but thermocouples and most electrodes have a drift rate much larger than the transmitter. The past records of calibration results will provide an update on actual drift for an application. Also, fouling of sensors, particularly electrodes, is an issue revealed in the 86% response time during calibration tests (often overlooked). The sensing element is the most vulnerable component in nearly all measurements. Calibration checks should be made more frequently at the beginning to establish a drift rate and near the end of the sensor life when drift and failure rates accelerate. Sensor life for pH electrodes can decrease from a year to a few weeks due to high temperature, solids, strong acids and bases (e.g., caustic) and poisonous ions (e.g., cyanide). For every 25°C increase in temperature, the electrode life is cut in half unless a high temperature glass is used.
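
    As a rough illustration of this rule (my numbers, not from any standard), the time to the next calibration check is simply the allowable error divided by the drift per month, with some safety margin:

        def months_until_check(allowable_error, drift_per_month, margin=0.8):
            """Check before accumulated drift approaches the allowable error."""
            return margin * allowable_error / drift_per_month

        # hypothetical thermocouple: 2°C allowable error, 0.25°C/month drift
        print("%.1f months" % months_until_check(2.0, 0.25))   # 6.4 months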

    Accuracy is particularly important for primary loops (e.g., composition, pH, and temperature) to ensure you are at the right operating point. For secondary loops whose setpoint is corrected by a primary loop, accuracy is less of an issue. For all loops, the 5 Rs (reliability, resolution, repeatability, rangeability and response time) are important for measurements and valves.

    Drift in a primary loop sensor shows up as a different average controller output for a given production rate assuming no changes in raw materials, utilities, or equipment. Fouling of a sensor shows up as an increase in dead time and oscillation loop period.

    Middle signal selection using 3 separate sensors provides an incredible amount of additional intelligence and reliability, reducing unnecessary maintenance. Drift shows up as a sensor with a consistently increasing average deviation from the middle value. The resulting offset is obvious. Coating shows up as a sensor lagging changes in the middle value. A decrease in span shows up as a sensor falling short of the middle value for a change in setpoint.
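
    A minimal sketch of middle signal selection and the drift diagnostic just described (the window and threshold are illustrative assumptions):

        import statistics

        def middle_of_three(a, b, c):
            """The middle value inherently ignores a single failure of any type."""
            return sorted((a, b, c))[1]

        def drifting_sensors(history, window=50, threshold=0.2):
            """history is a list of (s1, s2, s3) readings; a sensor whose recent
            average deviation from the middle value exceeds the threshold is
            flagged as drifting."""
            deviations = ([], [], [])
            for trio in history:
                mid = middle_of_three(*trio)
                for i, value in enumerate(trio):
                    deviations[i].append(value - mid)
            return [abs(statistics.mean(d[-window:])) > threshold
                    for d in deviations]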

    The installed accuracy greatly depends upon installation details and the process fluid, particularly taking into account sensor location in terms of seeing a representative indication of the process with minimal measurement noise. Changes in phase can be problematic for nearly all sensors. Impulse lines and capillary systems are a major source of poor measurement performance as detailed in the Control Talk columns “Prevent pressure transmitter problems” and “Your DP problems could be a result of improper use of purges, fills, capillaries and seals”.

    At the end of this post, I give a lot more details on how to minimize drift and maximize accuracy and repeatability by better temperature and pH sensors and through middle signal selection.

    Free Calibration Essentials eBook

    For an additional educational resource, download Calibration Essentials, an informative eBook produced by ISA and Beamex. The free e-book provides vital information about calibrating process instruments today. To download the eBook, click this link.

    Hunter Vegas’ Answer

    There is no easy answer to this very complicated question. Unfortunately the answer is ‘it depends’ but I’ll do my best to cover the main points in this short reply.

    1) Yes there are some instrument technologies that have a tendency to drift more than others. A partial list of ‘drifters’ might include:

      • pH (drifts for all kinds of reasons – aging of probe, temperature, caustic/acid concentration, fouling, etc. etc.)
      • Thermocouples (tend to drift more than RTDs especially at high temperature or in hydrogen service)
      • Turbine meters in something other than very clean, lubricating service will tend to wear out and read low as they age. However, cavitation can make them intermittently read high.
      • Vortex meters with piezo crystals can age over time and their low flow cut out increases.
      • Any flow/pressure transmitter with a diaphragm seal can drift due to process temperature and/or ambient temperature.
      • Most analyzers (oxygen, CO, chromatographs, LEL)
      • This list could go on and on.

    2) Some instrument technologies don’t drift as much. I’ve had good success with Coriolis and radar. (Radar doesn’t usually drift as much as it just cuts out. Coriolis usually works or it doesn’t. Obviously there are situations where either can drift but they are better than most.) DP in clean service with no diaphragm seals is usually pretty trouble free, especially the newer transmitters that are much more stable.

    3) The criticality of the service obviously impacts how often one needs to calibrate. Any of these issues could dramatically impact the frequency:

      • Is it a SIS instrument? The proof testing frequency will be decided by the SIS calculations.
      • Is it an environmental instrument? The state/feds may require calibrations on a particular frequency.
      • Is it a custody transfer meter? If you are selling millions of pounds of X a year you certainly want to make sure the meter is accurate or you could be giving away a lot of product!
      • Is it a critical control instrument that directly affects product quality or throughput?

    4) Obviously if a frequency is dictated by the service then that is the end of that. Once those are out of the way, one can usually look at the service and come up with at least a reasonable calibration frequency as a starting point. Start calibrating at that frequency and then monitor history. If you are checking a meter every six months, have checked it 4 times in the last two years, and the drift has remained less than 50% of the tolerance, then dropping back to a 12 month calibration cycle makes perfect sense (a simple interval-adjustment sketch follows this list). Similarly, if you calibrate every 6 months and find the meter drift is > 50% at every calibration, then you probably need to calibrate more often. However, if the meter is older, it may be cheaper to replace it with a new transmitter which is more stable.

    5) The last comment I’ll make is to make sure you are actually calibrating something that matters. I could go on for pages about companies who are diligently calibrating their instrumentation but aren’t actually calibrating their instrumentation. In other words, they go through the motions, fill out the paperwork, and can point to reams of calibration logs, yet they aren’t adequately testing the instrument loop and it could still be completely wrong. (For instance, shooting a temperature transmitter loop but not actually checking the RTD or thermocouple that feeds it, using a simulator to shoot a 4-20mA signal into the DCS to check the DCS reading but not actually testing the instrument itself, etc.) They often check one small part of the loop and, after a successful test, consider the whole loop ‘calibrated’.
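
    A simple sketch of the history-based adjustment in point 4; the 50% of tolerance threshold is Hunter’s, while the doubling and halving of the interval is my illustrative reading of “dropping back” and “calibrating more often”:

        def adjust_interval(interval_months, drift_fractions):
            """drift_fractions = observed drift divided by tolerance for each
            recent calibration check."""
            if all(f < 0.5 for f in drift_fractions):
                return interval_months * 2            # e.g., 6 -> 12 months
            if any(f > 0.5 for f in drift_fractions):
                return max(1, interval_months // 2)   # calibrate more often
            return interval_months

        print(adjust_interval(6, [0.2, 0.3, 0.25, 0.1]))   # prints 12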

    Greg McMillan’s Answer

    The Process/Industrial Instruments and Controls Handbook Sixth Edition 2019, edited by me and Hunter Vegas, provides insight on how to maximize accuracy and minimize drift for most types of measurements. The following excerpt written by me is for temperature:

    Temperature

    The repeatability, accuracy and signal strength are two orders of magnitude better for an RTD compared to a TC. The drift for an RTD below 400°C is also two orders of magnitude less than for a TC. The 1 to 20°C drift per year of a TC is of particular concern for biological and chemical reactor and distillation control because of the profound effect on product quality from control at the wrong operating point. The already exceptional accuracy for a Class A RTD of 0.1°C can be improved to 0.02°C by “sensor matching” where the four constants of a Callendar-Van Dusen (CVD) equation provided by the supplier for the sensor are entered into the transmitter. The main limit to the accuracy of an RTD is the wiring.

    The use of three extension lead wires between the sensor and the transmitter or input card can enable the measurement to be compensated for changes in lead wire resistance due to temperature, assuming the change is exactly the same for both lead wires. The use of four extension lead wires enables total compensation that accounts for the inevitable uncertainty in the resistance of lead wires. Standard lead wires have a tolerance of 10% in resistance. For 500 feet of 20 gauge lead wire, the error could be as large as 26°C for a 2-wire RTD and 2.6°C for a 3-wire RTD. The “best practice” is to use a 4-wire RTD unless the transmitter is located close to the sensor, preferably on the sensor. The transmitter accuracy is about 0.1°C.
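
    The lead wire numbers above can be checked with a quick calculation using standard handbook values (about 10.15 ohms per 1000 feet for 20 gauge copper and about 0.385 ohms per °C for a Pt100):

        OHMS_PER_1000_FT_20AWG = 10.15   # copper at room temperature
        PT100_OHMS_PER_DEGC = 0.385      # nominal Pt100 sensitivity

        run_ft = 500.0
        loop_ohms = 2.0 * run_ft / 1000.0 * OHMS_PER_1000_FT_20AWG  # ~10.2 ohms

        # 2-wire: the entire lead loop resistance reads as temperature error
        print("2-wire error: %.0f degC" % (loop_ohms / PT100_OHMS_PER_DEGC))

        # 3-wire: compensation cancels all but the 10% tolerance mismatch
        print("3-wire error: %.1f degC" % (0.1 * loop_ohms / PT100_OHMS_PER_DEGC))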

    A handheld signal generator of resistance and voltage can be used to simulate the sensor to check or change a transmitter calibration. The sensor connected to the transmitter with linearization needs to be inserted in a dry block simulator. A bath can be used at low temperatures to test thermowell response time, but a dry block is better for calibration. The reference temperature sensor in the block or bath should be 4 times more accurate than the sensor being tested. The block or bath readout resolution must be better than the best possible precision of the sensor. The block or bath calibration system should have accuracy traceable to the National Metrology Institute of the user country (NIST in the USA).

    The accuracy at the normal setpoint to ensure the proper process operating point must be confirmed by a temperature test with a block. For a factory assembled and calibrated sensor and thermowell with an integral temperature transmitter, a single point temperature test in a dry block is usually sufficient, with minimal zero or offset adjustment needed. For an RTD with “sensor matching,” adjustment is often not needed. For field calibration, the temperature of the block must be varied to cover the calibration range to set the linearization, span and zero adjustments. For field assembly, it would be wise to check the 63% response time in a bath.

    Middle Signal Selection

    The best solution in terms of increasing reliability, maintainability, and accuracy for all sensors with different durations of process service is automatic selection of the middle value for the loop process variable (PV). A very large chemical intermediates plant extended middle signal selection to all measurements, which in combination with a triple redundant controller essentially eliminated the one or more spurious trips per year. Middle signal selection was a requirement for all pH loops in Monsanto and Solutia.

    The return on investment for the additional electrodes from improved process performance and reduced life cycle costs is typically more than enough to justify the additional capital costs for biological and chemical processes if the electrode life expectancy has been proven to be acceptable in lab tests for harsh conditions. The use of the middle signal inherently ignores a single failure of any type including the most insidious failure that gives a pH value equal to the set point. The middle value reduces noise without the introduction of the lag from damping adjustment or signal filter and facilitates monitoring the relative speed of the response and drift, which are indicative of measurement and reference electrode coatings, respectively. The middle value used as the loop PV for well-tuned loops will reside near the set point regardless of drift.

    A drift in one of the other electrodes is indicative of a plugging or poisoning of its reference. If both of the other electrodes are drifting in the same direction, the middle value electrode probably has a reference problem. If the change in pH for a setpoint change is slower or smaller for one of the other electrodes, it indicates a coating or a loss in efficiency, respectively, for the subject glass electrode. Loss of pH glass electrode efficiency results from deterioration of the glass surface due to chemical attack, dehydration, non-aqueous solvents, and aging accelerated by high process temperatures. Decreases in glass electrode shunt resistance caused by exposure of O-rings and seals to a harsh or hot process can also cause a loss in electrode efficiency.

    pH Electrodes

    Here is some detailed guidance on pH electrode calibration from the ISA book Essentials of Modern Measurements and Final Control Elements.

    Buffer Calibrations

    Buffer calibrations use two buffer solutions, usually at least 3 pH units apart, which allow the pH analyzer to calculate a new slope and zero value corresponding to the particular characteristics of the sensor to more accurately derive pH from the millivolt and temperature signals (a slope-and-zero calculation sketch follows the list below).

    • The slope derived from a buffer calibration provides an indication of the condition of the glass electrode, while the zero value gives an indication of reference poisoning or asymmetry potential, which is an offset within the pH electrode itself.
    • The slope of a pH electrode tends to decrease from an initial value relatively close to the theoretical value of 59.16 mV/pH, largely due in many cases to the development of a high impedance short within the sensor, which forms a shunt of the electrode potential.
    • Zero offset values will generally lie within ±15 mV due to liquid junction potential; larger deviations are indications of poisoning.
    • Buffer solutions have a stated pH value at 25°C, but the stated value changes with temperature especially for stated values that are 7 pH or above. The buffer value at the calibration temperature should be used or errors will result.
    • The values of a buffer at temperatures other than 25°C are usually listed on the bottle, or better, the temperature behavior of the buffer can be loaded into the pH transmitter allowing it to use the correct buffer value at calibration.
    • Calibration errors can also be caused by buffer calibrations done in haste, which may not allow the pH sensor to fully respond to the buffer solution.
    • This will cause errors, especially in the case of a warm pH sensor not being given enough time to cool down to the temperature of the buffer solution.
    • pH transmitters employ a stabilization feature, which prevents the analyzer from accepting a buffer pH reading that has not reached a prescribed level of stabilization, in terms of pH change per time.
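
    A hedged sketch of the slope and zero arithmetic behind a two-buffer calibration (pH transmitters do this internally; the readings and the convention of reporting slope as a positive magnitude are illustrative):

        THEORETICAL_SLOPE = 59.16   # mV per pH unit at 25°C

        def buffer_calibration(ph1, mv1, ph2, mv2):
            """Two-point calibration from buffer readings: returns the slope
            (mV/pH) and the zero offset (mV at 7 pH). Electrode millivolts fall
            as pH rises, so the slope is reported as a positive magnitude."""
            slope = (mv1 - mv2) / (ph2 - ph1)
            zero_offset = mv1 - slope * (7.0 - ph1)
            return slope, zero_offset

        # e.g., 4.01 buffer reads +171 mV and 7.00 buffer reads +4 mV
        slope, zero = buffer_calibration(4.01, 171.0, 7.00, 4.0)
        print("slope %.1f mV/pH (%.0f%% of theoretical), zero %+.0f mV"
              % (slope, 100.0*slope/THEORETICAL_SLOPE, zero))

    Here the roughly 94% slope would indicate a healthy glass electrode, and the +4 mV zero is well inside the ±15 mV liquid junction band noted above.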

    pH Standardization

    Standardization is a simple zero adjustment of a pH analyzer to match the reading of a sample of the process solution made using a laboratory or portable pH analyzer. Standardization eliminates the removal and handling of electrodes and the upset to the equilibrium of the reference electrode junction. Standardization also takes into account the liquid junction potential from high ionic strength solutions and non-aqueous solvents in chemical reactions that would not be seen in buffer solutions. For greatest accuracy, samples should be immediately measured at the sample point with a portable pH meter.

    If a lab sample measurement value is used, it must be time stamped and the lab value compared to a historical online value for a calibration adjustment. The middle signal selected value from three electrodes of different ages can be used instead of a sample pH provided that a dynamic response to load disturbances or setpoint changes of at least two electrodes is confirmed. If more than one electrode is severely coated, aged, broken or poisoned, the middle signal is no longer representative of the actual process pH.

    • Standardization is most useful for zeroing out a liquid junction potential, but some caution should be used when using the zero adjustment.
    • A simple standardization does not demonstrate that the pH sensor is responding to pH, as does a buffer calibration, and in some cases, a broken pH electrode can result in a believable pH reading, which may be standardized to a grab sample value.
    • A sample can be prone to contamination from the sample container or even exposure to air; high purity water is a prime example, where a referee measurement must be made on a flowing sample using a flowing reference electrode.
    • A reaction occurring in the sample may not have reached completion when the sample was taken, but will have completed by the time it reaches the lab.
    • Discrepancies between the laboratory measurement and an on-line measurement at an elevated temperature may be due to the solution pH being temperature dependent. Adjusting the analyzer’s solution temperature compensation (not a simple zero adjustment) is the proper course of action.
    • It must be remembered that the laboratory or portable analyzer used to adjust the on-line measurement is not a primary pH standard, as is a buffer solution, and while it is almost always assumed that the laboratory is right, this is not always the case.

    The calibration of pH electrodes for non-aqueous solutions is even more challenging as discussed in the Control Talk column “The wild side of pH measurement”.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 17 Jul 2019

    How Often Do Measurements Need to Be Calibrated?

    The post How Often Do Measurements Need to Be Calibrated? first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Greg Breitzke.

    Greg Breitzke is an E&I reliability specialist – instrumentation/electrical for Stepan. Greg has focused his career on project construction and commissioning as a technician, supervisor, or field engineer. This is his first in-house role, and he is tasked with reviewing and updating plant maintenance procedures for I&E equipment.

    Greg Breitzke’s Question

    I am working through an issue that can be beneficial to other Mentor Program Participants. NFPA 70B provides a detailed description for the prescribed maintenance and frequency based on equipment type, making the electrical portion fairly straight forward.  The instrumentation is another matter.  We are working to consolidate an abundance of current procedures based on make/model, to a reduced list based on technology.  The strategy is to “right size” frequencies for calibration and functional testing; decreasing non-value maintenance to have the ability to increase value added activities, within the existing head count.  

    My current plan for the instrumentation consists of: 

    1. Sort through the historical paper files with calibration records to determine how long a device has remained in tolerance before a correction was applied,
    2. Compare data against any work orders written against the asset that may reduce the frequency,
    3. Apply safety factors relative to the device impact on safety, regulatory compliance, quality, custody transfer, basic control, or indication only.

    I am trying to provide a reference baseline for review of these frequencies, but having little luck in the industry standards I have access to.  Is there a standard or RAGAGEP for calibration and functional testing frequency min/max by technology, that I can reference for a baseline?

    Nick Sands’ Answer

    The ISA recommended practice is not on the process of calibration but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems. While I contributed, Leo Staples would be a good person for more explanation.

    For SIS, there is a requirement to perform calibration, which is comparison against a standard device, within a documented frequency and with documented limits, and correction when outside of limits. This is also required by OSHA for critical equipment under the PSM regulation. EPA has similar requirements under PSRM of course. Correction when out of limits is considered a failed proof test of the instrument in some cases, potentially affecting the reliability of the safety function. Paul Gruhn would be a good person for more explanation.

    Paul Gruhn’s Answer

    The ISA/IEC 61511 is performance based and does not mandate specific frequencies. Devices must be tested as some interval to make sure they perform as intended. The frequency required will be based on many different factors (e.g., SIL (performance) target, failure rate of the device in that service, diagnostic coverage, redundancy used (if any), etc.).

    Leo Staples’ Answer

    Section 5.6 ISA of the ISA technical report ISA-RP105.00.01-2017 addresses in detail calibration verification intervals or frequencies.  Users should establish calibration intervals for a loop/component based on the following:

    • criticality of the loop/component
    • the performance history of the loop/component
    • the ruggedness/stability of the component(s)
    • the operating environment.

    Exceptions include SIS related devices where calibration intervals are established to meet SIL requirements. Other factors that can drive calibration intervals include contracts regulatory requirements.

    The idea for the technical report came about after years of frustration dealing with ambiguous gas measurement contracts and government regulations. In many cases these simply stated users should follow good industry practices when addressing all aspects of calibrations.

    Calibration intervals alone do not address the other major factors that affect measurement accuracy. These include the accuracy of the calibration equipment, knowledge of the calibration personnel, adherence to defined calibration procedures, and knowledge of the personnel responsible for the calibration program. I have lots of war stories if anyone is interested.

    One of the last things that I did at my company before I retired was develop a Calibration Program Standard Operating Procedure (SOP) based on ISA-RP105.00.01-2017. The SOP was designed for use in the Generation, Transmission & Distribution, and other Division of the Company. Some of you may find this funny, but it was even used to determine the calibration frequency for NERC CIP physical security entry control point devices. Initially personnel from the Physical Security Department were testing these devices monthly only because that was what they had always done. While this was before the SOP was established my team used the concepts in establishing the calibration intervals for these devices. This work was well received by the auditors. As a side note, the review of monthly calibration intervals for these devices found the practices caused more problems than it prevented.

    The ISA recommended practice is not on the process of calibration, but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    Greg McMillan’s Answer

    The measurement drift can provide considerable guidance in that when the number of months between calibrations multiplied by drift per month approaches the allowable error, it is time for a calibration check. Most transmitters today have a low drift rate but thermocouples and most electrodes have a drift rate much larger than the transmitter. The past records of calibration results will provide an update on actual drift for an application. Also, fouling of sensors, particularly electrodes, is an issue revealed in 86% response time during calibration tests (often overlooked). The sensing element is the most vulnerable component in nearly all measurements. Calibration checks should be made more frequently at the beginning to establish a drift rate and near the end of the sensor life when drift and failure rates accelerate. Sensor life for pH electrodes can decrease from a year to a few weeks due to high temperature, solids, strong acids and bases (e.g., caustic) and poisonous ions (e.g., cyanide). For every 25 oC increase in temperature, the electrode life is cut in half unless a high temperature glass is used.

    Accuracy is particularly important for primary loops (e.g., composition, pH, and temperature) to ensure you are at the right operating point. For secondary loops whose setpoint is corrected by a primary loop, accuracy is less of an issue. For all loops, the 5 Rs (reliability, resolution, repeatability, rangeability and response time) are important for measurements and valves.

    Drift in a primary loop sensor shows up as a different average controller output for a given production rate assuming no changes in raw materials, utilities, or equipment. Fouling of a sensor shows up as an increase in dead time and oscillation loop period.

    Middle signal selection using 3 separate sensors provides an incredible amount of additional intelligence and reliability reducing unnecessary maintenance. Drift shows up as a sensor with a consistently increasing average deviation from the middle value. The resulting offset is obvious. Coating shows up as a sensor lagging changes in the middle value. A decrease in span shows up as a sensor falling short of middle value for a change in setpoint.

    The installed accuracy greatly depends upon installation details and process fluid particularly taking into account sensor location in terms of seeing a representative indication of the process with minimal measurement noise. Changes in phase can be problematic for nearly all sensors. Impulse lines and capillary systems are a major source of poor measurement performance as detailed in the Control Talk columns Prevent pressure transmitter problems and Your DP problems could be a result of improper use of purges, fills, capillaries and seals.

    At the end of this post, I give a lot more details on how to minimize drift and maximize accuracy and repeatability by better temperature and pH sensors and through middle signal selection.

    For an additional educational resource, download Calibration Essentials, an informative e-book produced by ISA and Beamex. The free e-book provides vital information about calibrating process instruments today. To download the e-book, click this link.

    Hunter Vegas’ Answer

    There is no easy answer to this very complicated question. Unfortunately the answer is ‘it depends’ but I’ll do my best to cover the main points in this short reply.

    1) Yes there are some instrument technologies that have a tendency to drift more than others. A partial list of ‘drifters’ might include:

      • pH (drifts for all kinds of reasons – aging of probe, temperature, caustic/acid concentration, fouling, etc. etc.)
      • Thermocouples (tend to drift more than RTDs especially at high temperature or in hydrogen service)
      • Turbine meters in something other than very clean, lubricating service will tend to age and wear out so they will read low as they age. However cavitation can make them intermittently read high.
      • Vortex meters with piezo crystals can age over time and their low flow cut out increases.
      • Any flow/pressure transmitter with a diaphragm seal can drift due to process temperature and/or ambient temperature.
      • Most analyzers (oxygen, CO, chromatographs, LEL)
      • This list could go on and on.

    2) Some instrument technologies don’t drift as much. I’ve had good success with Coriolis and radar. (Radar doesn’t usually drift as much as it just cuts out. Coriolis usually works or it doesn’t. Obviously there are situations where either can drift but they are better than most.) DP in clean service with no diaphragm seals is usually pretty trouble free, especially the newer transmitters that are much more stable.

    3) The criticality of the service obviously impacts how often one needs to calibrate. Any of these issues could dramatically impact the frequency:

      • Is it a SIS instrument? The proof testing frequency will be decided by the SIS calculations.
      • Is it an environmental instrument? The state/feds may require calibrations on a particular frequency.
      • Is it a custody transfer meter? If you are selling millions of pounds of X a year you certainly want to make sure the meter is accurate or you could be giving away a lot of product!
      • Is it a critical control instrument that directly affects product quality or throughput?

    4) Obviously if a frequency is dictated by the service then that is the end of that. Once those are out of the way one can usually look at the service and come up with at least a reasonable calibration frequency as a starting point. Start calibrating at that frequency and then monitor history. If you are checking a meter every six months and have checked a meter 4 times in the last two years and the drift has remained less than 50% of the tolerance, then dropping back to a 12 month calibration cycle make perfect sense. Similarly if you calibrate every 6 months and find the meter drift is > 50% every calibration then you probably need to calibrate more often. However if the meter is older it may be cheaper to replace the meter with a new transmitter which is more stable.

    5) The last comment I’ll make is to make sure you are actually calibrating something that matters. I could go on for pages about companies who are diligently calibrating their instrumentation but aren’t actually calibrating their instrumentation. In other words they go through the motions, fill out the paperwork, and can point to reams of calibration logs yet they aren’t adequately testing the instrument loop and it could still be completely wrong. (For instance, shooting a temperature transmitter loop but not actually checking the RTD or thermocouple that feeds it, using a simulator to shoot a 4-20mA signal into the DCS to check the DCS reading but not actually testing the instrument itself, etc. They often check one small part of the loop and after a successful test, consider the whole loop ‘calibrated’.

    Greg McMillan’s Answer

    The Process/Industrial Instruments and Controls Handbook Sixth Edition 2019, edited by me and Hunter Vegas, provides insight on how to maximize accuracy and minimize drift for most types of measurements. The following excerpt, written by me, is for temperature:

    Temperature

    The repeatability, accuracy and signal strength are two orders of magnitude better for an RTD compared to a TC. The drift for an RTD below 400°C is also two orders of magnitude less than for a TC. The 1 to 20°C per year drift of a TC is of particular concern for biological and chemical reactor and distillation control because of the profound effect on product quality from control at the wrong operating point. The already exceptional accuracy for a Class A RTD of 0.1°C can be improved to 0.02°C by “sensor matching,” where the four constants of the Callendar-Van Dusen (CVD) equation provided by the supplier for the sensor are entered into the transmitter. The main limit to the accuracy of an RTD is the wiring.
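
    As an illustration of sensor matching, here is a minimal sketch of the CVD equation using the nominal IEC 60751 Pt100 constants; sensor matching simply replaces these nominal constants with the supplier’s values for the individual sensor:

        # Callendar-Van Dusen resistance of a Pt100; nominal IEC 60751 constants.
        R0, A, B, C = 100.0, 3.9083e-3, -5.775e-7, -4.183e-12

        def pt_resistance(t_degc):
            if t_degc >= 0.0:
                return R0 * (1.0 + A * t_degc + B * t_degc**2)
            return R0 * (1.0 + A * t_degc + B * t_degc**2 + C * (t_degc - 100.0) * t_degc**3)

        print(round(pt_resistance(100.0), 2))   # 138.51 ohm at 100 degC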

    The use of three extension lead wires between the sensor and the transmitter or input card enables the measurement to be compensated for changes in lead wire resistance due to temperature, assuming the change is exactly the same for both lead wires. The use of four extension lead wires enables total compensation that accounts for the inevitable uncertainty in resistance of the lead wires. Standard lead wires have a tolerance of 10% in resistance. For 500 feet of 20 gauge lead wire, the error could be as large as 26°C for a 2-wire RTD and 2.6°C for a 3-wire RTD. The “best practice” is to use a 4-wire RTD unless the transmitter is located close to the sensor, preferably on the sensor. The transmitter accuracy is about 0.1°C.
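
    The quoted lead wire errors can be checked with simple arithmetic, assuming roughly 10 ohm per 1000 feet for 20 gauge copper and a Pt100 sensitivity of about 0.385 ohm/°C:

        ohm_per_ft = 10.15 / 1000.0         # ~20 gauge copper
        run_ft = 500.0
        pt100_sens = 0.385                  # ohm per degC for a Pt100

        r_leads = 2 * run_ft * ohm_per_ft   # both conductors add for a 2-wire RTD
        print(round(r_leads / pt100_sens, 1))       # ~26 degC error, 2-wire

        r_residual = 0.10 * r_leads         # 10% lead tolerance left after 3-wire compensation
        print(round(r_residual / pt100_sens, 1))    # ~2.6 degC error, 3-wire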

    A handheld signal generator of resistance and voltage can be used to simulate the sensor to check or change a transmitter calibration. To calibrate the sensor together with the transmitter linearization, the sensor needs to be inserted in a dry block simulator. A bath can be used at low temperatures to test thermowell response time, but a dry block is better for calibration. The reference temperature sensor in the block or bath should be 4 times more accurate than the sensor being tested. The block or bath readout resolution must be better than the best possible precision of the sensor. The block or bath calibration system should have accuracy traceable to the National Metrology Institute of the user’s country (NIST in the USA).

    The accuracy at the normal setpoint to ensure the proper process operating point must be confirmed by a temperature test with a block. For factory assembled and calibrated sensor and thermowell with integral temperature transmitter, a single point temperature test in a dry block is usually sufficient with minimal zero or offset adjustment needed. For an RTD with “sensor matching,” adjustment is often not needed. For field calibration, the temperature of the block must be varied to cover the calibration range to set the linearization, span and zero adjustments. For field assembly, it would be wise to check the 63% response time in a bath.

    Middle Signal Selection

    The best solution in terms of increasing reliability, maintainability, and accuracy for all sensors with different durations of process service is automatic selection of the middle value for the loop process variable (PV). A very large chemical intermediates plant extended middle signal selection to all measurements, which in combination with triple-redundant controllers essentially eliminated the one or more spurious trips per year it had been experiencing. Middle signal selection was a requirement for all pH loops in Monsanto and Solutia.

    The return on investment for the additional electrodes from improved process performance and reduced life cycle costs is typically more than enough to justify the additional capital costs for biological and chemical processes if the electrode life expectancy has been proven to be acceptable in lab tests for harsh conditions. The use of the middle signal inherently ignores a single failure of any type including the most insidious failure that gives a pH value equal to the set point. The middle value reduces noise without the introduction of the lag from damping adjustment or signal filter and facilitates monitoring the relative speed of the response and drift, which are indicative of measurement and reference electrode coatings, respectively. The middle value used as the loop PV for well-tuned loops will reside near the set point regardless of drift.

    A drift in one of the other electrodes is indicative of plugging or poisoning of its reference. If both of the other electrodes are drifting in the same direction, the middle value electrode probably has a reference problem. If the change in pH for a setpoint change is slower or smaller for one of the other electrodes, it indicates a coating or a loss in efficiency, respectively, for the subject glass electrode. Loss of pH glass electrode efficiency results from deterioration of the glass surface due to chemical attack, dehydration, non-aqueous solvents, and aging accelerated by high process temperatures. Decreases in glass electrode shunt resistance caused by exposure of O-rings and seals to a harsh or hot process can also cause a loss in electrode efficiency.
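
    The selection itself is simply the median of the three signals; a minimal sketch:

        # Middle (median) signal selection: a single failure of any kind, including
        # an electrode frozen at a value equal to the setpoint, is inherently ignored.
        def middle_signal(ph_a, ph_b, ph_c):
            return sorted((ph_a, ph_b, ph_c))[1]

        print(middle_signal(6.9, 7.1, 12.4))   # -> 7.1; the failed high reading is ignored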

    pH Electrodes

    Here is some detailed guidance on pH electrode calibration from the ISA book Essentials of Modern Measurements and Final Control Elements.

    Buffer Calibrations

    Buffer calibrations use two buffer solutions, usually at least 3 pH units apart, which allow the pH analyzer to calculate a new slope and zero value corresponding to the particular characteristics of the sensor, so that pH is more accurately derived from the millivolt and temperature signals. A sketch of the computation follows this list.

    • The slope derived from a buffer calibration provides an indication of the condition of the glass electrode, while the zero value gives an indication of reference poisoning or asymmetry potential, which is an offset within the pH electrode itself.
    • The slope of a pH electrode tends to decrease from an initial value relatively close to the theoretical value of 59.16 mV/pH, in many cases due to the development of a high impedance short within the sensor, which forms a shunt of the electrode potential.
    • Zero offset values will generally lie within ±15 mV due to liquid junction potential; larger deviations are indications of poisoning.
    • Buffer solutions have a stated pH value at 25°C, but the stated value changes with temperature especially for stated values that are 7 pH or above. The buffer value at the calibration temperature should be used or errors will result.
    • The values of a buffer at temperatures other than 25°C are usually listed on the bottle, or better, the temperature behavior of the buffer can be loaded into the pH transmitter allowing it to use the correct buffer value at calibration.
    • Calibration errors can also be caused by buffer calibrations done in haste, which may not allow the pH sensor to fully respond to the buffer solution.
    • This will cause errors, especially in the case of a warm pH sensor not being given enough time to cool down to the temperature of the buffer solution.
    • Many pH transmitters employ a stabilization feature, which prevents the analyzer from accepting a buffer pH reading that has not reached a prescribed level of stabilization in terms of pH change per unit time.
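
    Here is a minimal sketch of the slope and zero computation from a two-buffer calibration; the millivolt readings are illustrative, and the buffer values are assumed to be already corrected to the calibration temperature:

        # Two-point buffer calibration: derive slope (mV/pH) and zero (offset at 7 pH).
        def buffer_cal(ph1, mv1, ph2, mv2):
            slope = (mv1 - mv2) / (ph2 - ph1)   # theoretical slope is 59.16 mV/pH at 25 degC
            zero = mv1 + slope * (ph1 - 7.0)    # expect within about +/-15 mV
            return slope, zero

        slope, zero = buffer_cal(4.01, 165.0, 7.00, -12.0)
        print(round(slope, 1), round(zero, 1))  # 59.2 mV/pH slope, -12.0 mV zero

    A slope well below the theoretical value points to an aging or shunted glass electrode, while a large zero value points to reference poisoning, as noted in the list above.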

    pH Standardization

    Standardization is a simple zero adjustment of a pH analyzer to match the reading of a sample of the process solution made using a laboratory or portable pH analyzer. Standardization eliminates the removal and handling of electrodes and the upset to the equilibrium of the reference electrode junction. Standardization also takes into account the liquid junction potential from high ionic strength solutions and non-aqueous solvents in chemical reactions that would not be seen in buffer solutions. For greatest accuracy, samples should be immediately measured at the sample point with a portable pH meter.

    If a lab sample measurement value is used, it must be time stamped and the lab value compared to the historical online value at that time for a calibration adjustment. The middle signal selected value from three electrodes of different ages can be used instead of a sample pH provided that a dynamic response to load disturbances or setpoint changes is confirmed for at least two electrodes. If more than one electrode is severely coated, aged, broken or poisoned, the middle signal is no longer representative of the actual process pH.

    • Standardization is most useful for zeroing out a liquid junction potential, but some caution should be used when using the zero adjustment.
    • A simple standardization does not demonstrate that the pH sensor is responding to pH, as does a buffer calibration, and in some cases, a broken pH electrode can result in a believable pH reading, which may be standardized to a grab sample value.
    • A sample can be prone to contamination from the sample container or even exposure to air; high purity water is a prime example, where a referee measurement must be made on a flowing sample using a flowing reference electrode.
    • A reaction occurring in the sample may not have reached completion when the sample was taken, but will have completed by the time it reaches the lab.
    • Discrepancies between the laboratory measurement and an on-line measurement at an elevated temperature may be due to the solution pH being temperature dependent. Adjusting the analyzer’s solution temperature compensation (not a simple zero adjustment) is the proper course of action.
    • It must be remembered that the laboratory or portable analyzer used to adjust the on-line measurement is not a primary pH standard, as is a buffer solution, and while it is almost always assumed that the laboratory is right, this is not always the case.

    The calibration of pH electrodes for non-aqueous solutions is even more challenging as discussed in the Control Talk column The wild side of pH measurement.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 25 Jun 2019

    Missed Opportunities in Process Control - Part 5

    Here is the Fifth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.

    You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.

    1. Simple computation of a process variable rate of change with minimal noise and fast updates enables PID and MPC to optimize batch profiles. Many batch profiles have a key process variable that responds only in one direction. A one-directional response occurs for temperature when there is only heating and no cooling, or vice versa. Similarly, a one-directional response occurs for pH when there is only a base and no acid flow, or vice versa. Most batch composition responses are one-directional in that the product concentration only increases with time. Integral action assumes that the direction of a process variable (PV) can be changed. Integral action can be turned off by choosing a PID structure of proportional-only (P-only) or proportional-derivative (PD). Integral action is an inherent aspect of MPC, so unless some special modifications are employed, MPC is not used. A solution that enables both MPC and PID control, opening up optimization opportunities, is to translate the controlled variable (CV) to a PV rate of change (ΔPV/Δt). The new CV can change in both directions, and the integrating response of a batch PV becomes a self-regulating response of a batch CV with a possible steady state (constant slope). Furthermore, the CV is now representative of the batch profile slope, and the profile can be optimized. Typically, a steep slope for the start and a gradual slope for the finish of a batch is best. The rate of change calculation simply passes the PV through a dead time block; the output of the block, which is the old PV, is subtracted from the input of the block, which is the new PV. This change in PV is divided by the block dead time, chosen to maximize the signal-to-noise ratio (a sketch of this computation follows this list). For much more on feedback control opportunities for batch reactors see the Control feature article “Unlocking the Secret Profiles of Batch Reactors”.
    2. Process variable rate of change computation identifies a compressor surge curve and actual occurrences of surge. A computation similar to that detailed for batch control can be used to identify a surge point by realizing it occurs where the slope of the characteristic curve is zero, that is, where the change in discharge pressure (ΔP) divided by the change in suction flow (ΔF) becomes zero (ΔP/ΔF = 0). Thus, the operating point on a characteristic curve can be monitored whenever there is a significant rate of change of flow by dividing the change in discharge pressure by the change in suction flow, using dead time blocks to create an old PV that is subtracted from the new PV, with the dead time parameter again chosen to maximize the signal-to-noise ratio. Approaches to the surge point as a result of a decrease in suction flow can be identified by a slope that becomes very small; waiting to see a slope that is zero is too late. At a zero slope, the process is unstable and the suction flow will jump to a negative value in less than 0.06 seconds, indicating surge. The PV rate of change (ΔPV/Δt) calculation described for batch control can be used to detect and count surge cycles, but the dead time parameter setting must be small (e.g., 0.2 seconds). The detection of surge can be used to trigger an open loop backup that will prevent additional surge cycles. See the Control feature article “Compressor surge control: Deeper understanding, simulation can eliminate instabilities” for much more enlightenment on the very detrimental and challenging dynamics of compressor surge response and control.
    3. Simple future value computation provides better operator understanding, batch end point prediction and full throttle setpoint response. The same calculation of PV rate of change, multiplied by an intelligent time interval and added to the current PV, can provide immediate updates of the future value of the PV with a good signal-to-noise ratio (see the sketch after this list). The time interval should be greater than the total loop dead time, since any action taken at that moment by an operator or control system does not have an effect seen until after the total loop dead time. Humans do not appreciate this and expect to see the effect of changes within a few seconds. This leads to successive actions that are counterproductive and to PID tuning that tends to emphasize integral action, since integral action is always driving the output in the direction to correct any difference between the setpoint (SP) and PV even if overshoot is imminent. For more on the opportunities see the Control Talk Blog “Future Values are the Future” and the Control feature article “Full Throttle Batch and Startup Response”.
    4. Process Analytical Technology (PAT) opportunities are greater than ever. The primary focus of the PAT initiative by the FDA is to reduce variability by gaining a better understanding of the process and to encourage pharmaceutical manufacturers to continuously improve processes. PAT is defined in Section IV of the “Guidance for Industry” as follows: “The Agency considers PAT to be a system for designing, analyzing, and controlling manufacturing through timely measurements (i.e., during processing) of critical quality and performance attributes of raw and in-process materials and processes, with the goal of ensuring final product quality. It is important to note that the term analytical in PAT is viewed broadly to include chemical, physical, microbiological, mathematical, and risk analysis conducted in an integrated manner. The goal of PAT is to enhance understanding and control the manufacturing process, which is consistent with our current drug quality system: quality cannot be tested into products; it should be built-in or should be by design.” New and improved on-line analyzers include dissolved carbon dioxide to help ensure good cell conditions, and turbidity and dielectric spectroscopy to measure cell concentration and viability. At-line analyzers such as mass spectrometers provide off-gas concentrations for computing the oxygen uptake rate (OUR), and the Nova Bioprofile Flex can provide fast and precise analysis of concentrations of medium components such as glucose, lactate, glutamine, ammonium, sodium, and potassium besides cell concentration, viability, size and osmolality. The Aspectrics encoded photometric NIR can possibly be calibrated to measure the same components plus possibly indicators of weakening cells. Batch end point and profile optimization can be done. The digital twin described in the Control feature article “Virtual plant virtuosity“ can be a great asset in increasing process understanding and in developing, testing and tuning process control improvements so that implementation is seamless, making the documentation for management of change proactive and efficient. The digital twin kinetics for cell growth and product formation have been greatly improved in the last five years, enabling high fidelity bioreactor models that are easy to fit, largely eliminating the need for proprietary research data. Thus, today bioprocess digital twins make opportunities a reality, as described in the Bioprocess International, Process Design Supplement feature article “PAT Tools for Accelerated Process Development and Improvement”.
    5. Faster and more productive data analytics by use of the digital twin. Considerably more relevant inputs can be found nonintrusively, and an intelligent and extensive Design of Experiments (DOE) can be conducted, by use of the digital twin without affecting the process. Process control depends on identification of the dynamics and of changes in process variables for changes in other process variables, most notably flows, over a wide operating range due to nonlinearity and interactions. Most plant data used for developing principal component analysis (PCA) and predictions by partial least squares (PLS) does not show sufficient changes in the process inputs or process outputs and does not cover the complete possible operating range, especially startup and abnormal conditions when data analytics is most needed. First principle models are exceptional at identifying process gains and can be improved to enable the identification of dynamics by including valve backlash, stiction and response times, mixing and measurement lags, transportation delays, analyzer cycle times, and transmitter and controller update rates. The identification of dynamics is essential for dynamic compensation of inputs for continuous processes so that a process input change is synchronized with a corresponding process output change. Dynamic compensation is not needed for prediction of batch end points but could be useful for batch profile control, where the translation of batch component concentration or temperature to a rate of change (batch slope) gives a steady state like a continuous process.
    6. Better pairing of controlled and manipulated variables by use of the digital twin. The accurate steady state process gains of first principle models enable a comprehensive gain array analysis that is the key tool for finding the proper pairing of variables. Integrating process gains can be converted to steady state gains by computing and using the PV rate of change via the simple computation described in item 1.
    7. Online process performance metrics for practitioners and executives enabling justification and optimization of process control improvements by use of digital twin.  Online process metrics computing  a moving average metric of process capacity or efficiency for a shift, batch, month and quarter can provide the analysis and incentive for operators, process and automation engineers, maintenance technicians, managers and executives. The monthly and quarterly analysis periods are of greatest interest to people making business decisions. The shift and batch analysis periods are of greatest use to operators and process engineers. All of this is best developed and tested with a digital twin. For more on the identification and value of online metrics, see the Control Talk column “Getting innovation back into process control”.
    8. Nonintrusive adaptation of first principle models by MPC in the digital twin can be readily done. It is not commonly recognized that the fidelity of a first principle model is seen in how well the manipulated flows in the digital twin match those in the plant. The digital twin has the same controllers, setpoints and tuning as the actual plant. Differences in the manipulated flow trajectories for disturbances and operating point changes are indicative of a mismatch of dynamics. Differences in steady state flows are indicative of a mismatch of process parameters. In either case, an MPC whose setpoints are the plant steady state or rate-of-change flows, and whose controlled variables are the digital twin steady state or rate-of-change flows, can manipulate process or dynamic parameters to adapt the model in the digital twin. The models for the MPC can be identified by conventional methods using just the models in the digital twin. The MPC’s ability to adapt the model can be tested and then actually used via digital twin inputs of actual plant flows. For an example of adapting a bioreactor model, see Advanced Process Control, Chapter 18 in A Guide to the Automation Body of Knowledge Third Edition.
    9. Realization that PID algorithms in industry seldom use the Ideal Form. I am not sure why, but most textbooks and professors show and teach the Ideal Form. In the Ideal Form, the proportional mode gain only affects the proportional mode, resulting in an integral gain and derivative gain for the other modes. Most industrial PIDs use a Form where the proportional mode tuning setting (e.g., gain or proportional band) affects all modes. The integral tuning setting is either repeats per unit time (e.g., repeats per minute) or time (e.g., seconds), and the derivative setting is a rate time (e.g., seconds). The behavior and tuning settings are quite different, severely reducing the possible synergy between industry and academia.
    10. Realization that PID algorithms in industry seldom use engineering units. Even stranger to me is the common misconception by professors that the PID algorithm works in engineering units. I have only seen this in one particular PLC and the MeasureX sheet thickness control system. In nearly all other PLC and DCS control systems, the algorithm works in percent of the controlled variable and manipulated variable signals. The installed valve flow characteristic and the measurement span have a straightforward effect on valve and measurement gains and consequently on the open loop gain and PID tuning. If a PID algorithm works in engineering units, these straightforward effects are lost, and simply changing the engineering units (e.g., going from lbs per minute to cubic feet per hour) can have a huge effect on the tuning of a PID working in engineering units (and no effect on a PID working in percent). This disconnect also severely reduces the possible synergy between industry and academia.
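
    Here is a minimal sketch of the dead time block computation from items 1 and 3; the class and parameter names are assumptions, and the dead time parameter would be chosen to maximize the signal-to-noise ratio as described above:

        from collections import deque

        # Dead time block rate of change: the block output is the old PV, and the
        # difference between new and old PV divided by the block dead time is the
        # new CV (the batch profile slope).
        class RateOfChange:
            def __init__(self, deadtime_s, exec_s):
                self.n = max(1, int(deadtime_s / exec_s))
                self.buf = deque(maxlen=self.n)   # holds one dead time of PV history
                self.deadtime_s = self.n * exec_s
            def update(self, pv):
                old_pv = self.buf[0] if len(self.buf) == self.n else pv
                self.buf.append(pv)
                return (pv - old_pv) / self.deadtime_s

        # Future value from item 3: project the PV ahead by at least one total
        # loop dead time so the effect of recent actions becomes visible.
        def future_pv(pv, rate_per_s, total_loop_deadtime_s):
            return pv + rate_per_s * total_loop_deadtime_s

        roc = RateOfChange(deadtime_s=60.0, exec_s=1.0)   # 60 s block for a batch slope
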
    • 12 Jun 2019

    Basic Guidelines for Control Valve Selection and Sizing

    The post Basic Guidelines for Control Valve Selection and Sizing first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hiten Dalal.

    Hiten Dalal, PE, PMP, is senior automation engineer for products pipeline at Kinder Morgan, Inc. Hiten has extensive experience in pipeline pressure and flow control.

    Hiten Dalal’s Question

    Are there basic rule-of-thumb guidelines for control valve sizing, outside of relying on the valve supplier and using the valve manufacturer’s sizing program?

    Hunter Vegas’ Answer

    Selecting and sizing control valves seems to have become a lost art. Most engineers toss it over the fence to the vendor along with a handful of (mostly wrong) process data values, and a salesperson plugs the values into a vendor program which spits out a result.  Control valves often determine the capability of the control system, and a poorly sized and selected control valve will make tight control impossible regardless of the control strategy or tuning employed. Selecting the right valve matters!

    There are several aspects of sizing/selecting a control valve that must be addressed:

    Determine what the valve is supposed to do

    • Is this valve used for tight control, or is ‘loose’ control acceptable? (For instance, are you trying to control a flow within a very tight margin across a broad range of process conditions, or are you simply throttling a charge flow down as it approaches setpoint to avoid overshoot?) The requirements for one situation are quite different from the other.
    • Is this valve supposed to provide control or tight shutoff? A valve can almost never do both. If you need both, then add a separate on/off shutoff valve. 

    Understand the TRUE process conditions

    • What is the minimum flow that the valve must control?
    • What is the maximum flow that the valve must pass?
      • What are the TRUE upstream/downstream pressures and differential pressure across the valve in those conditions? (Note that the P1 and DP at low flow rates will usually be much higher than at full flow rates. If you see a valve spec showing the same DP value for high and low flow conditions, it will be wrong 95%+ of the time.)
    • What is the min/max temperature the valve might see? Don’t forget about clean out/steam out conditions or abnormal conditions that might subject a valve to high steam temperatures.
    • What is the process fluid? Is it always the same or could it be a mix of products?

    Note that gathering this data is probably the hardest part of the job. It often takes a sketch of the piping, an understanding of the process hydraulics, and an examination of the system pump curves to determine the real pressure drops under various conditions. Note too that the DP may change when you select a valve, since it might require pipe reducers/expanders to be installed in a pipe that is sized larger.

    Understand the installed flow characteristic of the valve 

    This can be another difficult task.  Ideally the control valve response should be linear (from the control system’s perspective).  If the PID output changes 5%, the process should respond in a similar fashion regardless of where the output is.  (In other words 15% to 20% or 85% to 90% should ideally generate the same process response). If the valve response is non-linear, control becomes much more difficult.  (You can tune for one process condition but if conditions change the dynamics change and now the tuning doesn’t work nearly as well.)  The valve response is determined by a number of items including:

    • The characteristics of the valve itself. (It might be linear, equal percent, quick opening, or something else.)
    • The DP of the process – The differential pressure across the valve is typically a function of the flow (the higher the flow, the lower the DP across the valve). This will generate a non-linear function.
    • System pressure and pump curves – pumps often have non-linear characteristics as well, so the available pressure will vary with the flow.

    The user has to understand all of these conditions so he/she can pick the right valve plug. Ideally you pick a valve characteristic that will offset the non-linear effects of the process and make the overall response of the system linear. 
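
    To make this concrete, here is a minimal sketch of an installed characteristic calculation, assuming an equal percentage inherent characteristic with rangeability R, a constant total system pressure drop, and a valve drop fraction dpr at full open; all numbers are illustrative:

        import math

        def installed_flow(x, dpr, R=50.0):
            phi = R ** (x - 1.0)   # inherent equal percentage Cv fraction at stroke x (0..1)
            return phi / math.sqrt(dpr + (1.0 - dpr) * phi**2)

        # Valve gain (fraction of flow per fraction of stroke) by finite difference;
        # a low valve drop ratio visibly distorts the inherent characteristic.
        for x in (0.1, 0.5, 0.9):
            gain = (installed_flow(x + 0.005, 0.1) - installed_flow(x - 0.005, 0.1)) / 0.01
            print(f"stroke {x:.0%}: installed gain {gain:.2f}")

    Sweeping the stroke for candidate valve and trim sizes is a quick way to see whether the installed gain stays within a reasonable window for your pressure drop ratio.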

    If the pressure drop is high, you may have a cavitation, flashing, or choked flow situation 

    That complicates matters still further because now you’ll need to know a lot more about the process fluid itself. If you are faced with cavitation or flashing you may need to know the vapor pressure and critical pressure of the fluid. This information may be readily available, or not if the fluid is a mix of products. Choked flow conditions are usually accompanied by noise problems and will also require additional fluid data to perform the calculations. Realize too that the selection of the valve internals will have a big impact on the flow rates, response, etc. (You’ll be looking at anti-cav trim, diffusers, etc.)

    Armed with all of that information (and it is a lot of information) you can finally start sizing/selecting the valve 

    Usually the vendor’s program is a good place to start, but some programs are much better than others because some have more process data ‘built in’ and have the advanced calculations required to handle cavitation, flashing, choked flow, and noise. Others are very simplistic and may not handle the more advanced conditions. Theoretically you could use any vendor’s program to size any valve, but a vendor’s program will typically have only its own valve data built in, so if you use a different program you’ll have to enter that data (if you can find it!). One caution about this – some vendors have different valve constants which can be difficult to convert.

    The procedure for finally choosing the valve is (roughly) as follows:

    • Run a down and dirty calc to just see what you have (a sketch follows this list). What is the required Cv at min and max flows? Do I have cavitation/flashing/choking issues?
    • Assuming no cavitation/flashing/choking then you can take the result and start to select a particular valve. The selection process includes:
      1. Pick an acceptable valve body type. (Reciprocating control valves with a digital positioner and a good guide design will provide the tightest control.  However other body styles might be acceptable depending on the requirements and budget.)
      2. Pick the right valve characteristic to provide an overall linear response.
      3. Now look at the offering of that valve and trim style and pick a valve with the proper range of Cvs. Usually you want some room above the max flow, and you want to make sure you are able to control at the minimum flow without bumping off the seat. Note that you may have to go to a different valve body (or even manufacturer) to meet your desired characteristic and Cv. 
      4. Make sure the valve body/seals are compatible with your process fluid and the temperature.
    • If there is cavitation/flashing/choking then things get a lot more complicated so I’ll save that for another lesson.
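
    A down and dirty liquid sizing check can be as simple as the basic liquid flow equation with flow in gpm, pressure drop in psi, and specific gravity; viscosity, flashing, and choked flow corrections are deliberately omitted, and the numbers are illustrative:

        import math

        # Basic liquid sizing: Cv = Q * sqrt(SG / dP), Q in US gpm, dP in psi.
        def required_cv(q_gpm, dp_psi, sg=1.0):
            return q_gpm * math.sqrt(sg / dp_psi)

        print(round(required_cv(100.0, 25.0), 1))   # min flow with its higher DP -> Cv 20.0
        print(round(required_cv(400.0, 9.0), 1))    # max flow with its lower DP  -> Cv 133.3

    Note how the higher DP at minimum flow and the lower DP at maximum flow (as cautioned earlier) stretch the range of Cv the valve must cover.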

    Hope this helped. It was probably a bit more than you were wanting, but control valve selection and sizing is a lot more complicated than most realize.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    Greg McMillan’s Answer

    Hunter did a great job of providing detailed, concise advice. My offering here is to help avoid the common problems from an inappropriate focus on maximizing valve capacity and minimizing valve pressure drop, valve leakage and valve cost. All these things have resulted in “on-off valves” posing as “throttling valves,” creating problems of poor actuator and positioner sensitivity, excessive backlash and stiction, unsuspected nonlinearity, poor rangeability, and smart positioners giving dumb diagnostics.

    While certain applications, such as pH control, are particularly sensitive to these valve problems, nearly all loops will suffer from backlash and stiction exceeding 5% (quite common with many “on-off valves”) causing limit cycles that can spread through the process. These “on-off valves” are quite attractive because of the high capacity and low pressure drop, leakage and cost. To address leakage requirements, a separate tight shutoff valve should be used in series with a good throttling valve and coordinated to open and close to enable a good throttling valve to smoothly do its job.

    Unfortunately, there is nothing on a valve specification sheet that requires the valve to have a reasonably precise and timely response to signals and to not create oscillations from a loop simply being in automatic, which makes us extremely vulnerable to common misconceptions. The most threatening one that comes to mind in selection and sizing is that rangeability is determined by how well a minimum Cv matches the theoretical characteristic. In reality, the minimum Cv cannot be less than the backlash and stiction near the seat. Most valve suppliers will not provide backlash and stiction for positions less than 40% because of the great increase from the sliding stem valve plug riding the seat or the rotary disk or ball rubbing the seal. Also, tests by the supplier are for loose packing. Many think piston actuators are better than diaphragm actuators.

    Maybe the physical size and cost are less and the capability for thrust and torque higher, but the sensitivity is an order of magnitude less and the vulnerability to actuator seal problems much greater. Higher pressure diaphragm actuators are now available, enabling use on larger valves and pressure drops. One more major misconception is that boosters should be used instead of positioners on fast loops. This is downright dangerous due to positive feedback between flexure of the diaphragm slightly changing actuator pressure and the extremely high booster outlet port sensitivity. To reduce the response time, the booster should instead be put on the positioner output with a bypass valve opened just enough to stop high frequency oscillations by allowing the positioner to see the much greater actuator and booster volume.

    The following excerpt from the Control Talk blog Sizing up valve sizing opportunities provides some more detailed warnings:

    We are pretty diligent about making sure the valve can supply the maximum flow. In fact, we can become so diligent we choose a valve size much greater than needed, thinking bigger is better in case we ever need more. What we often do not realize is that the process engineer has already built a factor into the given maximum to make sure there is more than enough flow (e.g., 25% more than needed). Since valve size and valve leakage are the prominent requirements on the specification sheet once the materials of construction are clear, we are set up for a bad scenario of buying a larger valve with higher friction.

    The valve supplier is happy to sell a larger valve, and the piping designer is happier that little or no pipe reducer is needed for valve installation and the pump size may be smaller. The process is not happy. The operators are not happy looking at trend charts unless the trend chart time and process variable scales are so large the limit cycle looks like noise. Eventually everyone will be unhappy.

    The limit cycle amplitude is large because of the greater friction near the seat and the higher valve gain. The amplitude in flow units is the percent resolution (e.g., % stick-slip) multiplied by the valve gain (e.g., delta pph per delta % signal). You get a double whammy from a larger resolution limit and a larger valve gain. If you further decide to reduce the pressure drop allocated to the valve to less than 0.25 as a fraction of total system pressure drop, a linear characteristic becomes quick opening, greatly increasing the valve gain near the closed position. For a fraction much less than 0.25 and equal percentage trim, you may be literally and figuratively bottoming out for the given R factor that sets the rangeability of the inherent flow characteristic (e.g., R=50).

    What can you do to lead the way and become the “go to” resource for intelligent valve sizing?

    You need to compute the installed flow characteristic for various valve and trim sizes as discussed in the Jan 2016 Control Talk post Why and how to establish installed valve flow characteristics. You should take advantage of supplier software and your company’s mechanical engineer’s knowledge of the piping system design and details.

    You must choose the right inherent flow characteristic. If the pressure drop available to the control valve is relatively constant, then linear trim is best because the installed flow characteristic is then the inherent flow characteristic. The valve pressure drop can be relatively constant for a variety of reasons, most notably pressure control loops or changes in pressure in the rest of the piping system being negligible (frictional losses in the system piping negligible). For more on this see the 5/06/2015 Control Talk blog Best Control Valve Flow Characteristic Tips.

    On the installed flow characteristic you need to make sure the valve gain in percent (% flow per % signal) from minimum to maximum flow does not change by more than a factor of 4 (e.g., 0.5 to 2.0) with the minimum gain greater than 0.25 and the maximum gain less than 4. For sliding stem valves, this valve gain requirement corresponds to minimum and maximum valve positions of 10% and 90%. For many rotary valves, this requirement corresponds to minimum and maximum disk or ball rotations of 20 degrees and 50 degrees.

    Furthermore, the limit cycle amplitude, which is the resolution in percent multiplied by the valve gain in flow units (e.g., pph per %) and by the process gain in engineering units (e.g., pH per pph), must be less than the allowable process variability (e.g., pH). The amplitude and conditions for a limit cycle from backlash are a bit more complicated but still computable. For sliding stem valves, you have more flexibility in that you may be able to change out trim sizes as the process requirements change. Plus, sliding stem valves generally have a much better resolution if you have a sensitive diaphragm actuator with plenty of thrust or torque and a smart positioner.
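
    The amplitude arithmetic is simple enough to check directly; the resolution, valve gain, and process gain values below are assumptions for illustration:

        # Limit cycle amplitude from stiction: resolution (%) x valve gain (pph per %)
        # x process gain (pH per pph); compare against the allowable pH variability.
        resolution_pct = 0.4   # % stick-slip near the operating position
        valve_gain = 50.0      # pph per % signal
        process_gain = 0.02    # pH per pph

        print(round(resolution_pct * valve_gain * process_gain, 2))   # 0.4 pH amplitude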

    The books Tuning and Control Loop Performance Fourth Edition and Essentials of Modern Measurements and Final Elements have simple equations to compute the installed flow characteristic and the minimum possible Cv for controllability based on the theoretical inherent flow characteristic, valve drop to total system drop pressure ratio and the resolution limit.

    Here is some guidance from “Chapter 4 – Best Control Valves and Variable Frequency Drives” of Process/Industrial Instruments and Controls Handbook Sixth Edition that Hunter and I just finished with the contributions of 50 experts in our profession to address nearly all aspects of achieving the best automation project performance.

    Use of ISA Standard for Valve Response Testing

    The effects of resolution limits from stiction and of dead band from backlash are most noticeable for changes in controller output less than 0.4%, and the effect of rate limiting is greatest for changes greater than 40%. For PID output changes of 2%, a poor valve or VFD design and setup are not very noticeable. An increase in PID gain resulting in changes in PID output greater than 0.4% can reduce oscillations from poor positioner design and dead band.

    The requirements in terms of 86% response time and travel gain (change in valve position divided by change in signal) should be specified for small, medium and large signal changes. In general, the travel gain requirement is relaxed for small signal changes due to effect of backlash and stiction, and the 86% response time requirement is relaxed for large signal changes due to the effect of rate limiting. The measurement of actual valve travel is problematic for on-off valves posing as throttling valves because the shaft movement is not disk or ball movement. The resulting difference between shaft position and actual ball or disk position has been observed in several applications to be as large as 8 percent.

    Best Practices

    Use sizing software with physical properties for worst case operating conditions. The minimum valve position must be greater than the backlash and deadband. For a relatively good installed flow characteristic (valve drop to system pressure drop ratio greater than 0.25), there are minimum and maximum positions during sizing that keep the gain nonlinearity to less than 4:1. For sliding stem valves, the minimum and maximum valve positions are typically 10% and 90%, respectively. For many rotary valves, the minimum and maximum disk or ball rotations are typically 20 degrees and 50 degrees, respectively. The range between minimum and maximum positions or rotations can be extended by signal characterization to linearize the installed flow characteristic.

    1. Include effect of piping reducer factor on effective flow coefficient
    2. Select valve location and type to eliminate or reduce damage from flashing
    3. Preferably use a sliding stem valve (size permitting) to minimize backlash and stiction, unless crevices and trim cause concerns about erosion, plugging, sanitation, or accumulation of solids (particularly monomers that could polymerize); for single port valves install “flow to open” to eliminate the bathtub stopper swirling effect
    4. If a rotary valve is used, select valve with splined shaft to stem connection, integral cast of stem with ball or disk, and minimal seal friction to minimize backlash and stiction
    5. Use Teflon packing and, for higher temperature ranges, Ultra Low Friction (ULF) packing
    6. Compute the installed valve flow characteristic for worst case operating conditions
    7. Size actuator to deliver more than 150% of the maximum torque or thrust required
    8. Select actuator and positioner with threshold sensitivities of 0.1% or better
    9. Ensure total valve assembly dead band is less than 0.4% over the entire throttle range
    10. Ensure total valve assembly resolution is better than 0.2% over the entire throttle range
    11. Choose inherent flow characteristic and valve to system pressure drop ratio that does not cause the product of valve and process gain divided by process time constant to change more than 4:1 over entire process operating point range and flow range
    12. Tune the positioner aggressively for the application, without integral action, and with readback that indicates actual plug, disk or ball travel instead of just actuator shaft movement
    13. Use volume boosters on positioner output with booster bypass valve opened enough to assure stability to reduce valve 86% response time for large signal changes
    14. Use small (0.2%) as well as large step changes (20%) to test valve 86% response time
    15. Use ISA standard and technical report relaxing expectations on travel gain and 86% response time for small and large signal changes, respectively

    For much more on valve response see the Control feature article How to specify valves and positioners that do not compromise control.

    The best book I have for understanding the many details of valve design is Control Valves for the Chemical Process Industries written by Bill Fitzgerald and published by McGraw-Hill. The book that is specifically focused on this Q&A topic is Control Valve Selection and Sizing written by Les Driskell and published by ISA.  Most of my books in my office are old like me. Sometimes newer versions do not exist or are not as good.


    Image Credit: Wikipedia

    • 15 May 2019

    What Are the Opportunities for Nonlinear Control in Process Industry Applications?

    The post What Are the Opportunities for Nonlinear Control in Process Industry Applications? first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. These questions come from Flavio Briquente and Syed Misbahuddin.

    Model predictive control (MPC) has a proven, successful history of providing extensive multivariable control and optimization. The applications in refineries are extensive, forcing the PID in most cases to take a back seat. These processes tend to employ very large MPC matrices and extensive optimization by linear programs (LP). The models are linear and may be switched for different product mixtures. The plants tend to have more constant production rates and greater linearity than seen in specialty chemical and biological processes.

    MPC is also widely used in petrochemical plants. The applications in other parts of the process industry are increasing but tend to use much smaller MPC matrices focused on a unit operation. MPC offers dynamic decoupling, disturbance and constraint control. To do the same with PID requires dynamic compensation of decoupling and feedforward signals plus override control. The software to accomplish dynamic compensation for the PID is not well explained or widely used. Also, interactions and override control involving more than two process variables are more challenging than most practitioners can readily address. MPC is easier to tune and has an integrated LP for optimization.

    Flavio Briguente is an advanced process control consultant at Evonik in North America, and is one of the original protégés of the ISA Mentor Program. Flavio has expertise in model predictive control and advanced PID control. He has worked at Rohm and Haas Company and Monsanto Company. At Monsanto, he was appointed to the manufacturing technologist program, and served as the process control lead at the Sao Jose dos Campos plant in Brazil and a technical reference for the company’s South American sites. During his career, Flavio focused on different manufacturing processes, and made major contributions in optimization, advanced control strategies, Six Sigma and capital projects. He earned a chemical engineering degree from the University of São Paulo, a post-graduate degree in environmental engineering from FAAP, a master’s degree in automation and robotics from the University of Taubate, and a PhD in material and manufacturing processes from Aeronautics Institute of Technology. 

    Syed Misbahuddin is an advanced process control engineer for a major specialty chemicals company with experience in model predictive control and advanced PID control. Before joining industry, he received a master’s degree in chemical engineering with a focus on neural network-based controls. Additionally, he is trained as a Six Sigma Black Belt, which focuses on utilizing statistical process control for variability reduction. This combination helps him implement controls utilizing physics-based as well as data-driven methods.

    The considerable experience and knowledge of Flavio and Syed blurs the line between protégé and resource leading to exceptionally technical and insightful questions and answers.

    Flavio Briguente’s Questions

    Can the existing MPC/APC techniques be applied for batch operation? Is there a non-linear MPC application available? Is there a known case in operation for chemical industry? What are the pros and cons of linear versus nonlinear MPC?

    Mark Darby’s Answers

    MPC was originally developed for continuous or semi-continuous processes. It is based on a receding horizon where the prediction and control horizons are fixed and shifted forward at each execution of the controller. Most MPCs include an optimizer that optimizes the steady state at the end of the horizon, which the dynamic part of the MPC steers towards. 

    Batch processes are by definition non-steady-state and typically have an end-point condition that must be met at batch end and usually have a trajectory over time that controlled variables (CVs) are desired to follow. As a result, the standard MPC algorithm is not appropriate for batch processes and must be modified (note: there may be exceptions to this based on the application).  I am aware of MPC batch products available in the market, but I have no experience with them. Due to the nonlinear nature of batch processes, especially those involving exothermic reaction, a nonlinear MPC may be necessary.

    By far, the majority of MPCs applied industrially utilize a linear model. Many of the commercial linear packages include provisions for managing nonlinearities, such as using linearizing transformations or changing the gain, the dynamics, or the models themselves. A typical approach is to apply a nonlinear static transformation to a manipulated variable or a controlled variable, commonly called Hammerstein and Wiener transformations, respectively. An example is characterizing the valve-flow relationship or controlling the logarithm of a distillation composition. Transformations are performed before or after the MPC engine (optimization) so that a linear optimization problem is retained. 
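
    As a simple illustration of a Wiener-type output transformation, the sketch below controls the logarithm of an impurity composition so that the linear MPC sees a more nearly linear response; the function names and values are illustrative:

        import math

        # Wiener-type output transformation: the MPC engine works on the transformed
        # CV and setpoint, and the inverse transform recovers engineering units.
        def to_cv(impurity_ppm):
            return math.log10(max(impurity_ppm, 1e-6))   # floor avoids log of zero

        def from_cv(cv):
            return 10.0 ** cv

        sp_cv = to_cv(100.0)   # a 100 ppm impurity target becomes 2.0 in CV space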

    Given the success in modeling chemical processes, it may be surprising that linear, empirically developed models are still the norm. The reason is that it is still quicker and cheaper to develop an empirical model, and linear models most often perform well for the majority of processes, especially with the nonlinear capabilities mentioned previously.

    Nonlinear MPC applications tend to be reserved for those applications where nonlinearities are present in both system gains and dynamic responses and the controller must operate at significantly different targets. Nonlinear MPC is routinely applied in polymer manufacturing.  These applications typically have less than five manipulated variables (MVs). A range of models have been used in nonlinear MPC, including neural nets, first principles, and hybrid models that combine first principle and empirical models.

    A potential disadvantage of developing a nonlinear MPC application is the time necessary to develop and validate the model. If a first principle model is used, the lower level PID loops must also be modeled if their dynamics are significant (i.e., cannot be ignored). With empirical modeling, the dynamics of the PID loops are embedded in the plant responses. Compared to a linear model, a nonlinear model will also require more computation time, so one would need to ensure that the controller can meet the required execution period based on the dynamics of the process and disturbances. In addition, there may be decisions around how to update the model, i.e., which parameters or biases to adjust. For these reasons, nonlinear MPC is reserved for those applications that cannot be adequately controlled with linear MPC.

    My opinion is that we will see more nonlinear applications once it becomes easier to develop nonlinear models. I see hybrid models as critical to this. Known information would be incorporated, and the unknown parts would be described by empirical models using a range of techniques that might include machine learning. Such an approach might actually reduce the time of model development compared to linear approaches.

    Greg McMillan’s Answers

    MPC for batch operations can be achieved by translating the controlled variable from a batch temperature or composition with a unidirectional response (e.g., increasing temperature or composition) to the slope of the batch profile (temperature or composition rate of change), as noted in my article Get the Most out of Your Batch. You then have a continuous type of process with a bidirectional response. There is still potentially a nonlinearity issue. For a perspective on the many challenges, see my blog Why batch processes are difficult.
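
    A minimal sketch of the translation (Python; the window size is a hypothetical choice) that turns a unidirectional batch profile into a bidirectional slope CV:

        import numpy as np

        def batch_profile_slope(pv_history, dt_min, window=10):
            # Least squares slope of the last `window` samples of the batch
            # temperature or composition profile (units per minute). The
            # line fit also filters noise that raw differencing amplifies.
            y = np.asarray(pv_history[-window:], dtype=float)
            t = np.arange(y.size) * dt_min
            slope, _intercept = np.polyfit(t, y, 1)
            return slope  # bidirectional CV for a continuous-type controller

    The controller setpoint then becomes a desired rate of change along the batch profile, which the slope can overshoot or undershoot in either direction even though the raw temperature or composition only rises.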

    I agree with Mark Darby that the use of hybrid systems where nonlinear models are integrated could be beneficial. My preference, in order of ability to understand and improve, would be:

    1. first principle calculations
    2. simple signal characterizations
    3. principal component analysis (PCA) and partial least squares (PLS)
    4. neural networks (NN)

    There is an opportunity to use principal components as neural network inputs to eliminate correlations between inputs and to reduce the number of inputs. Black box approaches like neural networks are much more vulnerable to inadequacies in the training data. More details about the use of NN and recent advances will be discussed in a subsequent question by Syed.
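
    A minimal scikit-learn sketch of this idea (illustrative layer sizes and variance threshold; assumes training arrays X and y exist):

        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Scale the inputs, replace them with enough principal components
        # to explain 95% of the variance (decorrelated and fewer in
        # number), then fit a small neural network on the component scores.
        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=0.95),
            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
        )
        # model.fit(X, y)
        # y_hat = model.predict(X_new)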

    There is some synergy to be gained by using the best of what each of the above has to offer. In the literature and in practice, experts in a particular technology often do not see the benefit of other technologies. There are exceptions, as seen in papers referenced in my answer to the next question. I personally see benefits in running a first principle model (FPM) to understand causes and effects and to identify process gains. What is often not realized is that the FPM parameters in a virtual plant (a digital twin running in real time with the same setpoints as the actual plant) can be adapted by use of an MPC. In the next section we will see how a NN can be used to help a FPM.

    Signal characterization is a valuable tool to address nonlinearities in the valve and the process, as detailed in my blog Unexpected benefits of signal characterizers. I tried using a NN to predict pH for a mixture of weak acids and bases and found better results from the simple use of a signal characterizer. Part of the problem is that the process gain is inversely proportional to production rate, as detailed in my blog Hidden factor in our most important control loops.
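
    A minimal sketch of a signal characterizer (Python; the titration curve points are purely illustrative and would come from laboratory data that includes the DCO2 and conjugate salt effects):

        import numpy as np

        # Piecewise-linear titration curve: reagent-to-feed ratio versus pH.
        ratio_pts = np.array([0.00, 0.02, 0.05, 0.10, 0.15, 0.20])
        ph_pts    = np.array([2.0,  3.0,  4.5,  7.0,  9.5, 11.0])

        def characterized_pv(ph_measured):
            # Translate the measured pH to the abscissa of the titration
            # curve so the controller sees a PV nearly linear in reagent
            # demand; the setpoint must be translated the same way.
            return np.interp(ph_measured, ph_pts, ratio_pts)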

    Since dead time mismatch has a big effect on MPC performance, as detailed in the ISA Mentor post How to Improve Loop Performance for Dead Time Dominant Systems, an intelligent update of the dead time based simply on production rate for a transportation delay can be beneficial.
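
    For a pipeline transportation delay, the update can be as simple as the following sketch (the `model.dead_time` destination is hypothetical):

        def transport_delay_sec(plug_flow_volume_m3, flow_m3_per_h):
            # Transportation delay = plug flow volume / volumetric flow rate.
            return 3600.0 * plug_flow_volume_m3 / max(flow_m3_per_h, 1e-6)

        # e.g., schedule the MPC (or PID tuning) dead time from measured flow:
        # model.dead_time = transport_delay_sec(pipe_volume, feed_flow)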

    Syed Misbahuddin’s Follow-up Question

    Recently, there has been an increased focus on the use of deep neural networks for artificial intelligence (AI) applications (“deep” signifies many hidden layers). Recurrent neural networks have also been able in some cases to ensure relationships are cause and effect rather than just correlations. They use a rather black box approach, with models built from training data. How successful are deep neural networks in process control?

    Greg McMillan’s Answers

    Pavilion Technologies in Austin has integrated neural networks with model predictive control. Successful applications in the optimization of ethanol processes were reported a decade ago. In the 1996 Pavilion white paper “The Process Perfector: The next step to Multivariable Control and Optimization,” it appears that process gains, possibly from step testing of a FPM or bump testing of the actual process for an MPC, were used as the starting point. The NN was then able to provide a nonlinear model of the dynamics given the steady-state gains. I am not sure what complexity of dynamics can be identified. The predictions of NN for continuous processes have the most notable successes in plug flow processes where there is no appreciable process time constant and the process dynamics simplify to a transportation delay. Examples of successes of NN for plug flow include dryer moisture, furnace CO, and kiln or catalytic reactor product composition prediction. Possible applications also exist for inline systems and sheets in pulp and paper processes and for extruders and static mixers.

    While the incentive is greater for high value biologic products, there are challenges with models of biological processes due to multiplicative effects (neural networks and data analytic models assume additive effects). Almost every first principle model (FPM) has specific growth rate and product formation rate as the result of a multiplication of factors, each between 0 and 1, that detail the effect of temperature, pH, dissolved oxygen, glucose, amino acid (e.g., glutamine), and inhibitors (e.g., lactic acid). Thus, each factor changes the effect of every other factor. You can understand this by realizing that if the temperature is too high, cells are not going to grow and may in fact die; it does not matter if there is enough oxygen or glucose. Similarly, if there is not enough oxygen, it does not matter if all the other conditions are fine. One way to address this problem is to make all factors as close to one and as constant as possible except for the factor of greatest interest. It has been shown that data analytics can be used to identify the limitation and/or inhibition FPM parameter for one condition, such as the effect of glucose concentration via the Michaelis-Menten equation, if all other factors are constant and nearly one.
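
    A minimal sketch of such a multiplicative kinetic expression (Python; the Gaussian temperature and pH terms and all parameter values are hypothetical placeholders for the forms found in FPMs):

        import math

        def specific_growth_rate(mu_max, T, pH, dO2, glucose,
                                 T_opt=37.0, sigma_T=3.0,
                                 pH_opt=7.0, sigma_pH=0.5,
                                 K_O2=0.2, K_S=0.5):
            # Each factor lies between 0 and 1, so any factor near zero
            # (e.g., temperature far from optimum) suppresses growth no
            # matter how favorable the other conditions are.
            f_T  = math.exp(-((T - T_opt) / sigma_T) ** 2)
            f_pH = math.exp(-((pH - pH_opt) / sigma_pH) ** 2)
            f_O2 = dO2 / (K_O2 + dO2)          # Monod saturation form
            f_S  = glucose / (K_S + glucose)   # Michaelis-Menten form
            return mu_max * f_T * f_pH * f_O2 * f_S

    Holding all but one factor constant and near one, as suggested above, reduces the product to a single recognizable form (here Michaelis-Menten in glucose) that data analytics can then identify.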

    Process control is about changes in process inputs and consequential changes in process outputs. If there is no change, you cannot identify the process gain or dynamics. We know this is necessary in the identification of models for MPC and for PID tuning and feedforward control. We often forget this in the data sets used to develop data models. A smart design of experiments (DOE) is really the best way to get data sets that show changes in process outputs for changes in process inputs and that cover the range of interest. If setpoints are changed for different production rates and products, existing historical data may be rich enough if carefully pruned. Remember that neural network models, like statistical models, are correlations and not cause and effect. Review by people knowledgeable in the process and control system is essential.

    Time synchronization of process inputs with process outputs is needed for continuous but not necessarily for batch models, explaining the notable successes in predicting batch end points. Often, delays are inserted on continuous process inputs. This is sufficient for plug flow volumes, such as dryers, where the dynamics are principally a transport delay. For back mixed volumes such as vessels and columns, a time lag and delay should be used that are dependent upon production rate. Neural network (NN) models are more difficult to troubleshoot than data analytic models and are vulnerable to correlated inputs (data analytics benefits from principal component analysis and drill down to contributors). NN models can introduce localized reversals of slope and bizarre extrapolation beyond the training data not seen in data analytics. Data analytics’ piecewise linear fit can successfully model nonlinear batch profiles. To me this is similar in principle to the use of signal characterizers to provide a piecewise fit of titration curves.
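
    A minimal sketch of that synchronization (Python; sample-based dead time and lag, which would be rescheduled as production rate changes):

        import numpy as np

        def delay_and_lag(u, dead_time_samples, lag_samples):
            # Shift a continuous process input by its transport delay, then
            # pass it through a discrete first-order lag so it lines up in
            # time with the back mixed process output before model fitting.
            u = np.asarray(u, dtype=float)
            shifted = np.roll(u, dead_time_samples)
            shifted[:dead_time_samples] = u[0]   # pad the start of the record
            alpha = 1.0 / (1.0 + lag_samples)    # exponential filter constant
            y = np.empty_like(shifted)
            y[0] = shifted[0]
            for k in range(1, shifted.size):
                y[k] = y[k - 1] + alpha * (shifted[k] - y[k - 1])
            return y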

    Process inputs and outputs that are coincidental are an issue for process diagnostics and predictions by MVSPC and NN models. Coincidences can come and go and may never appear again. They can be caused by unmeasured disturbances (e.g., concentrations of unrecognized inhibitors and contaminants), operator actions (largely unpredictable and unrepeatable), operating states (e.g., controllers not in the highest mode or at output limits), weather (e.g., blue northers), poor installations (e.g., an unsecured capillary blowing in the wind), and just bad luck.

    I found a 1998 Hydrocarbon Processing article by Aspen Technology Inc., “Applying neural networks,” that provides practical guidance and opportunities for hybrid models.

    The dynamics can be adapted and cause-and-effect relationships strengthened by advancements associated with recurrent neural networks, as discussed in Chapter 2, Neural Networks with Feedback and Self-Organization, in The Fundamentals of Computational Intelligence: System Approach by Mikhail Z. Zgurovsky and Yuriy P. Zaychenko (Springer, 2016).

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    Mark Darby’s Answers

    The companies best known for neural net-based controllers are Pavilion (now Rockwell) and AspenTech. There have been multiple papers and presentations by these companies over the past 20 years, with many successful applications in polymers. It is clear from reading these papers that their approaches have continued to evolve over time and that standard approaches have been developed. Today both approaches incorporate first principles models and make extensive use of historical data. For polymer reactor applications, the FPM involves dynamic reaction heat and mass balance equations, and historical data is used to develop steady-state property predictions. Process testing time is needed only to capture or confirm dynamic aspects of the models.

    Enhancements to the neural networks used in control applications have been reported. AspenTech addressed the extrapolation challenges of neural nets with bounded derivatives. Pavilion makes use of constrained neural nets in its fitting of models.

    Rockwell describes a different approach to the modeling and control of a fed-batch ethanol process in a presentation made at the 2009 American Control Conference, titled “Industrial Application of Nonlinear Model Predictive Control Technology for Fuel Ethanol Fermentation.” The first step was the development of a kinetic model based on the structure of a FPM, with certain reaction parameters in the nonlinear state space model modeled using a neural net. The online model is a more efficient nonlinear model, fit from the initial model, that handles nonlinear dynamics. Parameters are fit by a gain-constrained neural net. The nonlinear model is described in a Hydrocarbon Processing article titled “Model predictive control for nonlinear processes with varying dynamics.”

    Regarding Syed’s follow-up question about deep neural networks: deep neural networks require more parameters, but techniques have been developed that help deal with this. I have not seen results in process control applications, but it will be interesting to see if these enhancements developed and used by the Google types will be useful for our industries.

    In addition to Greg’s citations, I want to mention a few other articles that describe approaches to nonlinear control. A FPM-based nonlinear controller was developed by ExxonMobil, primarily for polymer applications. It is described in a paper presented at the Chemical Process Control VI conference (2001) titled “Evolution of a Nonlinear Model Predictive Controller,” and in a subsequent paper presented at another conference, Assessment and Future Directions of Nonlinear Model Predictive Control (2005), entitled “NLMPC: A Platform for Optimal Control of Feed- or Product-Flexible Manufacturing.” The motivation for a first principles model-based MPC for polymers included the nonlinearity associated with both gains and dynamics, constraint handling, control of new grades not previously produced, and the portability of the model/controller to other plants. In the modeling step, the estimation of model parameters in the FPM (parameter estimation) was cited as a challenge. State estimation of the CVs, in light of unmeasured disturbances, is considered essential for the model update (feedback step). Finally, the increased skills necessary to support and maintain the nonlinear controller were mentioned, in particular to diagnose and correct convergence problems.

    A hybrid modeling approach to batch processes is described in a 2007 presentation at the 8th International IFAC Symposium on Dynamics and Control of Process Systems by IPCOS, titled “An Efficient Approach for Efficient Modeling and Advanced Control of Chemical Batch Processes.” The motivation for the nonlinear controller is the nonlinear behavior of many batch processes. Here, fundamental relationships were used for the mass and energy balances, and an empirical model was used for the reaction energy (which includes the kinetics), fit from historical data. The controller used the MPC structure, modified for the batch process. Future predictions of the CVs in the controller were made using the hybrid model, whereas the dynamic controller incorporated linearizations of the hybrid model.

    I think it is fair to say that there is a lack of nonlinear solvers tailored to hybrid modeling. An exception is the freely available software environments APMonitor and GEKKO developed by John Hedengren’s group at BYU. They solve dynamic optimization problems with first principle or hybrid models and have built-in functions for model building, updating, and control. Here is a link to the website that contains references and videos for a range of nonlinear applications, including a batch distillation application.
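
    To give a flavor of what GEKKO looks like, here is a minimal sketch of a first-order process under GEKKO’s dynamic control mode, adapted from the style of the package documentation; the gains, horizon, and limits are arbitrary:

        import numpy as np
        from gekko import GEKKO

        m = GEKKO(remote=False)            # solve locally
        m.time = np.linspace(0, 20, 41)    # prediction horizon

        u = m.MV(value=0, lb=0, ub=100)    # manipulated variable
        u.STATUS = 1                       # optimizer may move it
        u.DCOST = 0.1                      # move suppression

        x = m.CV(value=0)                  # controlled variable
        x.STATUS = 1
        x.SP = 50.0                        # target

        K, tau = 2.0, 5.0                  # first-order gain and time constant
        m.Equation(tau * x.dt() == -x + K * u)

        m.options.IMODE = 6                # dynamic control (MPC) mode
        m.options.CV_TYPE = 2              # squared-error objective
        m.solve(disp=False)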

    Hunter Vegas’ Answers

    I worked with neural networks quite a bit when they first came out in the late 1990s. I have not worked with them much since, but I will pass on my findings, which I expect are as applicable now as they were then.

    Neural networks sound useful in principle: give a neural network a pile of training data, let it ‘discover’ correlations between the inputs and the output data, then reverse those correlations to create a model that can be used for control. Unfortunately, actually creating such a neural network and using it for control is much harder than it looks. Some reasons for this are:

    1. Finding training data is hard. Most of the time the system is running fairly normally and tends to draw flat lines. Only during upsets does it actually move around and provide the neural network useful information. Therefore you only want to feed the network upset data to train it. Then you need to find more upset data to test it. Finding that much upset data is not so easy to do. (If you train it on normal data, the neural network learns to draw straight lines, which does not do much for control.)
    2. Finding the correlations is not so easy. The marketing literature suggests you just feed it the data and the network “figures it out.” In reality that doesn’t usually happen. It may be that the correlations involve the derivative of an input, or the correlation is shifted in time, or perhaps there is a correlation with a mathematical combination of inputs involving variables with different time shifts (see the sketch after this list). Long story short – the system usually doesn’t ‘figure it out’ – YOU DO! After playing with it for a while and testing and re-testing data you will start to see the correlations yourself, which allows you to help the network focus on information that matters. In many cases you actually figure out the correlation and the neural network just backs you up to confirm it.
    3. Implementing a multivariable controller is always a challenge. The more variables you add, the lower the reliability becomes. Implementing any multivariable controller is a challenge because you have to make it smart enough to handle input data failures gracefully. So even when you have a model, turning it into a robust controller that can manipulate the process is not always such an easy thing.
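
    A minimal sketch of the kind of manual correlation hunting described in item 2 (Python; the shift range is arbitrary, and u and y are assumed input/output records of equal length):

        import numpy as np

        def shifted_correlations(u, y, max_shift=60):
            # Scan correlation of an input and its derivative against the
            # output over a range of time shifts to surface candidate
            # dynamic relationships before any neural network training.
            u = np.asarray(u, dtype=float)
            y = np.asarray(y, dtype=float)
            du = np.gradient(u)
            rows = []
            for s in range(max_shift + 1):
                ys = y[s:]
                n = ys.size
                rows.append((s,
                             np.corrcoef(u[:n], ys)[0, 1],
                             np.corrcoef(du[:n], ys)[0, 1]))
            return rows  # (shift, corr(u, y), corr(du/dt, y)) per shift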

    I am not saying neural networks do not work – I actually had very good success with them. However, when all was said and done, I pretty much figured out the correlations myself through trial and error and was able to utilize that information to improve control. I wrote a paper on the topic and won an ISA award because neural networks were all the rage at that time, but the reality was that I just used the software to reinforce what I learned during the ‘network training’ process.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 1 May 2019

    Missed Opportunities in Process Control - Part 4

    The post, Missed Opportunities in Process Control - Part 4, first appeared on the ControlGlobal.com Control Talk blog.

    Here is the Fourth part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.

    You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.

    1. Eliminate the air gap in thermowells to make the temperature response much faster. Contrary to popular opinion, the type of sensor is not a significant factor in the speed of the temperature response in the process industry. While an RTD may be a few seconds slower than a TC, the annular clearance around the sheath can cause an order of magnitude larger measurement time lag. Additionally, a tip not touching the bottom of the thermowell can be even worse. Air is a great insulator, as seen in the design of more energy efficient windows. Spring loaded, tight fitting sheathed sensors in stepped metal thermowells of the proper insertion length are best. Ceramic protection tubes cause a large measurement lag due to poor thermal conductivity. Low fluid velocities can increase the lag as well. See the Control Talk column “A meeting of minds” on how to get the most precise and responsive temperature measurement.

    2. Use the best glass and sufficient velocity to keep the pH measurement fast by reducing aging and coatings. An aged glass electrode, due to even moderately high temperature (e.g., > 30 °C), chemical attack from strong acids or strong bases (e.g., caustic), or dehydration from not being kept wetted or from exposure to non-aqueous solvents, can increase the sensor lag time by orders of magnitude. High temperature glass and specific ion resistant glasses are incredibly beneficial to sustain accuracy and a clean, healthy electrode sensor lag of just a few seconds. Velocities must be greater than 1 fps for a fast response and greater than 5 fps to prevent fouling, which can also increase the sensor lag by orders of magnitude through almost imperceptible coatings. This is helpful for thermowells as well, but the adverse effects in terms of slower response time are not as dramatic as seen for pH. Electrodes must be kept wetted, and exposure to non-aqueous solvents and harsh process conditions reduced by automatically retractable assemblies with the ability to soak in buffer solutions. See the Control Talk column “Meeting of minds encore” on how to get the most precise and responsive pH measurement.

    3. Avoid the measurement lag becoming the primary lag. If the measurement lag becomes larger than the largest process time constant, the trend charts may look better due to attenuation of the oscillation amplitude by the filtering effect. The PID gain may even be able to be increased because the PID does not know where the primary lag came from. The key point is that the actual amplitude of the process oscillation and the peak error are larger (often unknown unless a special separate fast measurement is installed). What is seen on the trend charts is that the period of oscillation is larger, possibly to the point of creating a sustained oscillation. Besides slow electrodes and thermowells, this situation can occur simply due to transmitter damping or signal filter time settings. For compressor surge and many gas pressure control systems, the filter time and transmitter damping settings must not exceed 0.2 sec. For a much greater understanding, see the Control Talk blog “Measurement Attenuation and Deception Tips”.

    4. The real rangeability of a control valve depends upon the ratio of valve drop to system pressure drop, actuator and positioner sensitivity, backlash, and stiction. Often rangeability is based on the deviation from an inherent flow characteristic, leading to statements that a rotary valve, often designed for on-off control, has the greatest rangeability. The real definition should depend upon the minimum controllable flow, which is a function of the installed flow characteristic, sensitivity, backlash, and stiction near the closed position, all of which are generally worse for these on-off valves that supposedly have the best rangeability. The best valve rangeability is achieved with a valve drop to system pressure drop ratio greater than 0.25, generously sized diaphragm actuators, a digital positioner tuned with high gain and no integral action, low friction packing (e.g., Enviro-Seal), and a sliding stem valve. If a rotary valve must be used, there should be a splined shaft-to-stem connection and a stem integrally cast with the ball or disk to minimize backlash, and a low friction seal or ideally no seal to minimize stiction. A graduated v-notch ball or contoured butterfly should be used to improve the flow characteristic. Equations to compute the actual valve rangeability based on pressure drop ratio and resolution are given in Tuning and Control Loop Performance Fourth Edition.

    5. The real rangeability of a variable frequency drive (VFD) depends upon the ratio of static pressure to system pressure drop, motor design, inverter type, input card resolution, and dead band setting. The best VFD rangeability and response is achieved by a static pressure to system pressure drop ratio less than 0.25, a generously sized TEFC motor with 1.15 service factor and Class F insulation, a pulse width modulated inverter, speed-to-torque cascade control in the inverter, and no dead band or rate limiting in the inverter setup.

    6. Identify and minimize transportation delays. The delay for a temperature or composition change to propagate from the point of the manipulated change to the process or sensor is simply the process volume divided by the process flow rate. Normal process design procedures do not recognize the detrimental effect of dead time. The biggest example is equipment design guidelines that have a dip tube designed to be large in diameter extending down toward the impeller. Missing is an understanding of the incredibly large dead time for pH control where the reagent flow is a gph or less and the dip tube volume is a gallon or more. When the reagent valve is closed, the dip tube is back filled with process fluid by migration from high to low concentrations. Getting the reagent to displace the process fluid can take more than an hour. When the reagent valve shuts off, it may take hours before reagent stops dripping and migrating into the process. To go from acid to base in split range control may take hours to displace the acid in the dip tube, and the same thing happens going from base to acid. The stiction is also highest at the closure position. When you consider how sensitive pH is, it is no wonder that pH systems oscillate across the split range point.

    7. The real rangeability of flow meters depends upon the signal-to-noise ratio at low flows, the minimum velocity, and whether accuracy is a percent of scale or of reading. The best flow rangeability is achieved by meters with accuracy in percent of reading, minimal noise at low flows, and the least effect of low velocities, including the possible transition to laminar flow. Consequently, Coriolis flow meters have the best rangeability (e.g., 200:1) and magmeters the next best (e.g., 50:1). Most rangeability statements for other meters are based on a ratio of maximum to minimum meter velocity and turbulent flow, and do not take into account that the actual maximum flow experienced is much less than the meter capacity.

    8. Use Coriolis flow meters for stoichiometric control and heating value control. The Coriolis flowmeter has the greatest accuracy, with a mass flow measurement independent of composition. This capability is key to keeping flows in the right ratio, particularly for reactants per the factors in the stoichiometric equation for the reaction (mole flow rate is simply mass flow rate divided by the molecular weight of the reactant). For waste fuels, the heat release rate upon combustion is a strong function of the mass flow, greatly facilitating optimization of supplemental fuel use. Nearly all ratio control systems could benefit from true mass flow measurements with great accuracy and rangeability. For more on what you need to know to achieve what the Coriolis meter is capable of, see the Control Talk column “Knowing the best is the best”.

    9. Identify and minimize the total dead time. Dead time is easily identified on a properly scaled trend chart as simply the time delay between a manual output change or setpoint change and the start of the change in the process variable being controlled. The least disruptive test is usually made by putting the PID momentarily in manual and making a small output change simulating a load disturbance. The test should be done at different production rates and run times. The dead time tends to be largest at low production rates due to larger transportation delays and slower heat transfer rates and sensor response. Dead time also tends to increase with production run time due to fouling or frosting of heat transfer surfaces. See the Control Talk blog “Deadtime, the Simple Easy Key to Better Control” for a more extensive explanation of why I would be out of a job if the dead time was zero.

    10. Identify and minimize the ultimate period. This goes hand in hand with knowing and reducing the total loop dead time. The ultimate period in most loops is simply 4 times the dead time in a first order approximation where a secondary time constant is taken as creating additional dead time. Dead time dominant loops have a smaller ultimate period that approaches 2 times the dead time for a pure dead time loop (extremely rare). Input oscillations with a period between ½ and twice the ultimate period result in resonance, requiring less aggressive tuning. Input oscillations with a period less than ½ the ultimate period can be considered noise, requiring filtering and less aggressive tuning. Oscillation periods greater than twice the ultimate period are attenuated by more aggressive tuning. Note that input oscillations persist when the PID is in manual. For damped oscillations that only appear when the PID is in auto, an oscillation period close to the ultimate period indicates too high a PID gain, and more than twice the ultimate period indicates too low a PID reset time. A damped oscillation period approaching or exceeding 10 times the ultimate period indicates a violation of the gain window for near-integrating, true integrating or runaway processes. Oscillations greater than four times the ultimate period with constant amplitude are limit cycles due to backlash (dead band) or stiction (resolution limit). A rough screening sketch of these rules follows this list. See the Control Talk blogs “Controller Attenuation and Resonance Tips” and “Processes with no Steady State in PID Time Frame Tips” for more guidance.
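
    As promised in item 10, a rough screening sketch of these period-based rules (Python; the thresholds follow the rules of thumb above, with the ultimate period taken as 4 times the total loop dead time):

        def diagnose_oscillation(period, dead_time,
                                 persists_in_manual, constant_amplitude):
            tu = 4.0 * dead_time                # approximate ultimate period
            if persists_in_manual:              # input (load) oscillation
                if period < 0.5 * tu:
                    return "noise: filter and use less aggressive tuning"
                if period <= 2.0 * tu:
                    return "resonance: use less aggressive tuning"
                return "slow load upset: more aggressive tuning attenuates it"
            if constant_amplitude and period > 4.0 * tu:
                return "limit cycle: suspect backlash or stiction"
            if period >= 10.0 * tu:
                return "gain window violation (integrating or runaway process)"
            if period > 2.0 * tu:
                return "PID reset time likely too small"
            return "PID gain likely too high (period near the ultimate period)"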

    • 15 Apr 2019

    How to Implement Effective Safety Instrumented Systems for Process Automation Applications

    The post How to Implement Effective Safety Instrumented Systems for Process Automation Applications first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Hariharan Ramachandran.

    Hariharan starts an enlightening conversation introducing platform-independent key concepts for an effective safety instrumented system with the Mentor Program resource Len Laskowski, a principal technical SIS consultant, and Hunter Vegas, co-founder of the Mentor Program.

    Hariharan Ramachandran, a recent resource added to the ISA Mentor Program, is a control and safety systems professional with various levels of experience in the field of industrial control, safety, and automation. He has worked for various companies and executed global projects for the oil and gas and petrochemical industries, gaining experience in the entire life cycle of industrial automation and safety projects.

    Len Laskowski is a principal technical SIS consultant for Emerson Automation Solutions, and is a voting member of ISA84, Instrumented Systems to Achieve Functional Safety in the Process Industries.

    Hunter Vegas, P.E., has worked as an instrument engineer, production engineer, instrumentation group leader, principal automation engineer, and unit production manager. In 2001, he entered the systems integration industry and is currently working for Wunderlich-Malec as an engineering project manager in Kernersville, N.C. Hunter has executed thousands of instrumentation and control projects over his career, with budgets ranging from a few thousand to millions of dollars. He is proficient in field instrumentation sizing and selection, safety interlock design, electrical design, advanced control strategy, and numerous control system hardware and software platforms. Hunter earned a B.S.E.E. degree from Tulane University and an M.B.A. from Wake Forest University.

    Hariharan Ramachandran’s First Question

    How is the safety integrity level (SIL) of a critical safety system maintained throughout the lifecycle?

    Len Laskowski’s Answer

    The answer might sound a bit trite, but the simple answer is to diligently follow the lifecycle steps from beginning to end. Perform the design correctly and verify that it has been executed correctly. The SIS team should not blindly accept HAZOP and LOPA results at face value. The design that the LOPAs drive is no better than the team that determined the LOPA and the information they were provided. Often the LOPA results are based on incomplete or possibly misleading information. I believe a good SIS design team should question the LOPA and seek to validate its assumptions. I have seen LOPAs declare that there is no hazard because XYZ equipment protects against it, but a walk in the field later discovered that the equipment had been taken out of service a year ago and not yet replaced. Obviously, getting the HAZOP/LOPA right is the first step.

    The second step is to make sure one does a robust design and specifies good quality instruments that are a good fit for the application. For example, a vortex meter may be a great meter for some applications but a poor choice for others. Similarly, certain valve designs may have limited value as safety shutdown valves. Inexperienced engineers may specify Class VI shutoff for on-off valves thinking they are making the system safer, but Class V metal seat valves would stand up to the service much better in the long run, since the soft elastomer seats can easily be destroyed in less than a month of operation. The third leg of this triangle is exercising the equipment and routinely testing the loop. Partial stroke testing of the valves is a very good idea to keep valves from sticking. Also, for new units that do not have extensive experience with a process, the SIF components (valves and sensors) should be inspected at the first shutdown to assess their condition. This needs to be done until a history with the installation can be established. Diagnostics also fall into this category: deviation alarms, stroke times, and any other diagnostics that can help determine SIS health are important.

    Hariharan Ramachandran’s Feedback

    The safety instrumented function has to be monitored and managed throughout its lifecycle. Each layer in a safety protection system must have the ability to be audited. The SIS verification and validation process provides a high level of assurance that the SIS will operate in accordance with its safety requirements specification (SRS). Proof testing must be carried out periodically at the intervals specified in the safety requirements specification. There should be a mechanism for recording SIF life event data (proof test results, failures, and demands) for comparison of actual to expected performance. Continuous evaluation and improvement is the key concept in maintaining the SIS efficiently.

    Hariharan Ramachandran’s Second Question

    What is the best approach to eliminate the common cause failures in a safety critical system?

    Hunter Vegas’ Answer

    There are many ways that common cause failures can creep into a safety system design. Some of the more common ways include:

    • Using a single orifice plate to feed redundant 2oo3 transmitters. Some make it even worse by using a single orifice tap to feed all three. (Ideally it is best to get as much separation as possible – as a minimum, have three different taps and individual impulse lines. Better yet, have completely different flow meters and, if possible, utilize different technologies to measure flow so that a single failure or abnormal process condition won’t affect them all.)
    • If the impulse lines of redundant transmitters require heat trace, it is best to use different sources of heat. (If they are fed with a single steam line its failure might impact all three readings. This might apply to a boiler drum level or an orifice plate.)
    • Having the same technician calibrate all three meters simultaneously. (Sometimes he’ll get the calibration wrong and set up all three meters incorrectly.) Some plants have the technician calibrate only one meter of the three each time; that way an incorrect calibration will stand out.
    • Putting redundant transmitters (or valves) on the same I/O card. If it freezes or fails, all of the readings are lost.
    • Implementing SIS trips in the same DCS that controls the plant.
    • Just adding a SIS contact to the solenoid circuit of an existing on/off valve. If the solenoid or actuator fails such that the valve fails open, neither the DCS nor the SIS can trip it. At least add a second solenoid, but it is far better to add a separate shutdown valve. (Some put a trip solenoid on a control valve. However, if the control valve fails open, the trip solenoid might not be able to close it either.)
    • Having a single device generate a 4-20 mA signal for control and also generate a contact for a trip circuit. A single fault within the instrument might take out both the 4-20 mA signal and the trip signal. (Using a SIS transmitter for control is really the same thing.)

    Hariharan Ramachandran’s Feedback

    Both random and systematic events can induce common cause failure (CCF) in the form of single points of failure or the failure of redundant devices.

    Random hardware failures are addressed by design architecture, diagnostics, estimation (analysis) of probabilistic failures, and design techniques and measures (per IEC 61508-7).

    Systematic failures are best addressed through the implementation of a protective management system, which overlays a quality management system with a project development process. A rigorous system is required to decrease systematic errors and enhance safe and reliable operation. Each verification, functional assessment, audit, and validation is aimed at reducing the probability of systematic error to a sufficiently low level.

    The management system should define work processes, which seek to identify and correct human error. Internal guidelines and procedures should be developed to support the day-to-day work processes for project engineering and on-going plant operation and maintenance. Procedures also serve as a training tool and ensure consistent execution of required activities. As errors or failures are detected, their occurrence should be investigated, so that lessons can be learned and communicated to potentially affected personnel.

    Hariharan Ramachandran’s Third Question

    When an incident happens at a process plant, what are the engineering aspects that need to be verified during the investigation?

    Len Laskowski’s Answer

    I would start at the beginning of the lifecycle and look at the HAZOPs and LOPAs to see that they were done properly. Look to see that documentation is correct: P&IDs, SRS, C&Es, MOC records, and test logs and procedures. Look to see where the breakdown occurred. Were things specified correctly? Were the designs verified? Was the system correctly validated? Was proper training given? Look for test records once the system was commissioned.

    Hunter Vegas’ Answer

    Usually the first step is to determine exactly what happened, separating conjecture from facts. Gather alarm logs, historian data, etc. while they are available. Individually interview any personnel involved as soon as possible to lock in the details. With that information in hand, begin to work backwards, determining exactly what initiated the event and what subsequent failures occurred to allow it to happen. In most cases there will be a cascade of failures that enabled the event. Then examine each failure to understand what happened and how it can be avoided in the future. Often a number of changes will be implemented. If the SIS failed, then Len’s answer provides a good list of items to check.

    Hariharan Ramachandran’s Feedback

    Also verify whether the device/equipment was being used appropriately, within the design intent.

    Hariharan Ramachandran’s Fourth Question

    What are the critical factors involved in decommissioning a control system?

    Len Laskowski’s Answer

    The most critical factor is good documentation. You need to know what is going to happen to your unit and other units in the plant once an instrument, valve, loop, or interlock is decommissioned. One must ask very early in a project’s development whether all units controlled by the system are planned to shut down at the same time. This is needed for maintenance and upgrades. Power distribution and other utilities are critical. One may not be able to demolish a system because it would affect other units. In many cases, a system cannot be totally decommissioned until the next shutdown of the operating unit, and it may require simultaneous shutdowns of neighboring units as well.

    Hariharan Ramachandran’s Feedback

    A proper risk and impact assessment has to be carried out prior to the decommissioning. Waste management strategy, regulatory framework and environmental safety control are the other factors to be considered.


    • 8 Apr 2019

    Webinar Recording: The Amazing World of ISA Standards

    The post Webinar Recording: The Amazing World of ISA Standards first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Historically, predictive maintenance required very expensive technology and resources, like data scientists and domain experts, to be effective. Thanks to artificial intelligence (AI) methods such as machine learning making their way into the mainstream, predictive maintenance is now more achievable than ever. Our webinar will explore how machine learning is changing the game and greatly reducing the need for data scientists and domain experts. These technologies self-learn and autonomously monitor for data pattern anomalies. Not only does this make predictive maintenance far more practical than what was historically possible, but predictions 30 days in advance are now the norm. Don’t let the old way of doing predictive maintenance cost you productivity any longer.

    This webinar covers:

    • AI made real
    • Leveraging existing technology for a higher ROI
    • Learning from downtime event history
    • How to never be blindsided by breakdown again

    About the Featured Presenter
    Nicholas P. Sands, P.E., CAP, serves as senior manufacturing technology fellow at DuPont, where he applies his expertise in automation and process control for the DuPont Safety and Construction business (Kevlar, Nomex, and Tyvek). During his career at DuPont, Sands has worked on or led the development of several corporate standards and best practices in the areas of automation competency, safety instrumented systems, alarm management, and process safety. Nick is: an ISA Fellow; co-chair of the ISA18 committee on alarm management; a director of the ISA101 committee on human machine interface; a director of the ISA84 committee on safety instrumented systems; and secretary of the IEC (International Electrotechnical Commission) committee that published the alarm management standard IEC62682. He is a former ISA Vice President of Standards and Practices and former ISA Vice President of Professional Development, and was a significant contributor to the development of ISA’s Certified Automation Professional program. He has written more than 40 articles and papers on alarm management, safety instrumented systems, and professional development, and is co-author of the new edition of A Guide to the Automation Body of Knowledge. Nick is a licensed engineer in the state of Delaware. He earned a bachelor of science degree in chemical engineering at Virginia Tech.


    • 27 Mar 2019

    Missed Opportunities in Process Control - Part 3

    Here is the third part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.

    You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.

    The following list reveals common misconceptions that need to be understood to seek real solutions that actually address the opportunities.

    1. Dead time dominant loops need Model Predictive Control or a Smith Predictor. There are many reasons for Model Predictive Control, but dead time dominance is not really one of them. Dead time compensation can be done simply by inserting a dead time block in the external-reset feedback path, making a conventional PID an enhanced PID, and then tuning the PID much more aggressively. This enhanced PID is much easier to implement than a Smith Predictor because there is no need to identify and update an open loop gain or primary open loop time constant, and there is no loss of the controlled variable seen on the PID faceplate. An additional pervasive misconception is that dead time dominant processes benefit the most from dead time compensation. It turns out that the reduction in integrated error for an unmeasured process input load disturbance is much greater for lag dominant processes, especially near-integrating processes. While the improvement is significant, the performance of a lag dominant process is often already impressive provided the PID is tuned aggressively (e.g., integrating process tuning rules with minimum arrest time). For more details see the ISA Mentor Program Q&A post “How to Improve Loop Performance for Dead Time Dominant Systems”.
    2. The model dead time should not be smaller than the actual loop dead time.  For Model Predictive Control, Smith Predictors and an enhanced PID, a model dead time larger than the actual dead time by just 40% can lead to fast oscillations. These controllers are less sensitive to a model dead time smaller than the actual dead time. For a conventional PID, tuning based on a model dead time larger than the actual dead time just causes a sluggish response so in general conventional PID tuning is based on largest possible loop dead time.
    3. Cascade control loops will oscillate if the cascade rule is violated. For small or slow setpoint changes or unmeasured load disturbances, the loop may not break out into oscillations. While it is not a good idea to violate the rule that the secondary loop be at least five times faster than the primary loop, there are simple fixes. The simplest and easiest fix is to turn on external-reset feedback, which will prevent the primary loop integral mode from changing faster than the secondary loop can respond. It is important that the external-reset feedback signal be the actual process variable of the secondary loop. There is no need to slow down the tuning of the primary loop, which is the most common quick fix if the secondary loop tuning cannot be made faster.
    4. Limit cycles are inevitable from resolution limits. While one or more integrators anywhere in the system can cause a limit cycle from a resolution limit, turning on external-reset feedback can stop the limit cycle. The feedback for the external reset feedback must be a fast readback of the actual manipulated valve position or speed. Often readback signals are slow and changes or lack of changes in the readback of actuator shaft position are not representative of the actual ball or disk movement for on-off valves posing as control valves. While external-reset feedback can stop the limit cycle, there is an offset from the desired valve position. For some exothermic reactors, it may be better to have a fast limit cycle in the manipulated coolant temperature than an offset because tight temperature control is imperative and the oscillation is attenuated (averaged out) by the well-mixed reactor volume.
    5. Fast opening and slow closing surge valves will cause oscillations unless the PID is tuned for the slower valve response. It is desirable that a surge valve be fast in increasing and slow in decreasing vent or recycle flow for compressor surge control. Generally, this was done in the field by restricting the actuator fill rate or enhancing the exhaust rate with a quick exhaust valve, since the surge valves are fail open. The controller had to be tuned to deal with the crude, unknown rate limiting. Using different up and down setpoint rate limits on the analog output block and turning on external-reset feedback via a fast readback of the actual valve position make the adjustment much more exact and visible. The controller does not need to be tuned for the slow closing rate because the integral mode will not outrun the response of the valve.
    6. Prevention of oscillations at the split range point requires a dead band in the split range block. A dead band anywhere in the loop adds a dead time that is the dead band divided by the rate of change of the signal. Dead band will cause a limit cycle if there are two or more integrators anywhere in the control loop, including the positioner, process, and cascade control. The best solution is a precise, properly sized control valve with minimal backlash and stiction and a linear installed flow characteristic. External-reset feedback with setpoint rate limits can be added in the direction of opening or closing a valve at the split range point to instill patience and eliminate unnecessary crossings of the split range point. For small and large valves, the better solution is a valve position controller that gradually and smoothly moves the big valve to ensure the small valve manipulated by the process controller stays in a good throttle position.
    7. Valve position controller integral action must be ten times slower than the process controller integral action to prevent oscillations. External-reset feedback in the valve position controller, with a fast readback of the actual big valve position and up and down setpoint rate limits on the analog output block for the large valve, can provide slow, gradual optimization but a fast getaway for abnormal operation to prevent running out of the small valve. This is called directional move suppression and is generally beneficial when valve position controllers are used to maximize feed, minimize compressor pressure, or maximize cooling tower or refrigeration unit temperature setpoints. One of the advantages of Model Predictive Control is move suppression to slow down changes in the manipulated variable that would be disruptive. Here we have the additional benefit of the move suppression being directional with no need to retune.
    8. High PID gains causing fast, large changes in PID output upset operators and other loops. The peak and integrated errors for unmeasured load disturbances are inversely proportional to the PID gain, so a high PID gain is necessary to minimize these errors and to get to setpoint faster for setpoint changes. Too low a PID gain is unsafe for exothermic reactor temperature control and can cause slow, large amplitude oscillations in near-integrating and true integrating processes. Higher PID gains can be used to increase loop performance without upsetting operators or other loops by turning on external-reset feedback, putting setpoint rate limits on the analog output block or secondary loop, and providing an accurate, fast feedback of the manipulated valve position or process variable.
    9. Large control valve actuators and VFD rate limiting to prevent motor overload require slowing down the PID tuning to prevent oscillations. Turning on external-reset feedback and using a fast, accurate readback of valve position or VFD speed enables faster tuning that makes the response to small changes in PID output much faster. Of course, the better solution is a faster valve or a larger motor. Since there is always a slewing rate or a speed rate limit in the VFD setup, using external-reset feedback with fast readback is a good idea in general.
    10. Large analyzer cycle times require PID detuning to prevent oscillations. The additional dead time of 1.5 times the cycle time is excessive in terms of the loop's ability to deal with unmeasured load disturbances. However, when this additional dead time is greater than the 63% process response time, an intelligent computation of integral action using external-reset feedback can enable the resulting enhanced PID gain to be as large as the inverse of the open loop gain for self-regulating processes even if the cycle time increases. This means the enhanced PID could be used with an offline analyzer with a very large and variable time between reported results. While load disturbances are not corrected until an analytical result is available, the enhanced PID does not become unstable. The intelligent calculation of the proportional (P), integral (I) and derivative (D) mode contributions is only done when there is a change in the measurement. The time interval between the current and last result is used in the I and D mode computations. The input to the I mode computation is the external-reset feedback signal, so if there is no response of the manipulated variable, the I mode contribution does not change. An analyzer failure will not cause a PID response since there is no change in the P or D mode contribution unless there is a new result or setpoint. The same benefits apply to wireless loops (additional dead time is ½ the update rate). A minimal sketch of this logic is given after this list. For more details see the Control Talk Blog “Batch and continuous control with at-line and offline analyzer tips”.
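    Below is a minimal sketch (in Python, with assumed rates and values, not any supplier's function block) of the directional setpoint rate limiting mentioned in items 5 and 7: the output can move quickly in the safe direction and only slowly in the disruptive direction, while external-reset feedback keeps the integral mode from outrunning the valve.

        # Directional setpoint rate limiting as done in an analog output
        # block: the commanded signal can rise quickly (fast opening for
        # surge protection) but is only allowed to fall slowly. All rates
        # and values here are illustrative assumptions.

        def rate_limited_target(target_pct, last_out_pct, dt_s,
                                up_rate_pct_per_s=25.0, down_rate_pct_per_s=0.5):
            """Return the new output after applying directional rate limits."""
            delta = target_pct - last_out_pct
            if delta > up_rate_pct_per_s * dt_s:
                return last_out_pct + up_rate_pct_per_s * dt_s
            if delta < -down_rate_pct_per_s * dt_s:
                return last_out_pct - down_rate_pct_per_s * dt_s
            return target_pct

        # With external-reset feedback, the PID integral mode uses the fast
        # readback of actual valve position instead of assuming the valve
        # is already at the PID output, so the PID does not have to be
        # detuned for the slow closing direction.
        out = 80.0
        for _ in range(5):
            out = rate_limited_target(target_pct=20.0, last_out_pct=out, dt_s=0.1)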
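    And here is a minimal sketch of the enhanced PID logic of item 10. This is an assumed discrete form for illustration, not any supplier's exact algorithm: the mode contributions are recomputed only when a new result arrives, the elapsed time between results is used in the I and D computations, and the I contribution is a positive-feedback filter of the external-reset feedback (ERF) signal.

        import math

        class EnhancedPID:
            """Sketch of an enhanced PID for slow or irregular measurement updates."""
            def __init__(self, kc, reset_s, rate_s=0.0):
                self.kc, self.reset_s, self.rate_s = kc, reset_s, rate_s
                self.last_pv = None
                self.last_t = None
                self.i_contrib = 0.0
                self.out = 0.0

            def update(self, t_s, pv, sp, erf):
                """Call every scan; acts only when the measurement changes."""
                if self.last_pv is None:           # first result initializes state
                    self.last_pv, self.last_t = pv, t_s
                    self.i_contrib = erf
                    return self.out
                if pv == self.last_pv:             # no new analyzer result:
                    return self.out                # hold output, no reset windup
                dt = max(t_s - self.last_t, 1e-6)  # elapsed time between results
                e = sp - pv
                # Integral contribution: positive-feedback filter of the ERF
                # signal using the elapsed time. If the manipulated variable
                # did not respond, erf is unchanged and I does not change.
                self.i_contrib += (erf - self.i_contrib) * (1.0 - math.exp(-dt / self.reset_s))
                d_contrib = -self.kc * self.rate_s * (pv - self.last_pv) / dt
                self.out = self.kc * e + self.i_contrib + d_contrib
                self.last_pv, self.last_t = pv, t_s
                return self.out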
    • 13 Mar 2019

    Solutions for Unstable Industrial Processes

    The post Solutions for Unstable Industrial Processes first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Caroline Cisneros.

    Negative resistance, also known as positive feedback, can cause processes to jump, accelerate and oscillate, confusing the control system and the operator. These are characterized as open loop unstable processes. Not properly addressing these situations can result in equipment damage and plant shutdowns besides the loss of process efficiency. Here we first develop a fundamental understanding of the causes and then quickly move on to the solutions to keep the process safe and productive.

    Caroline Cisneros, a recent graduate of the University of Texas who became a protégé about a year ago, is gaining significant experience working with some of the best process control engineers in an advanced control applications group. Caroline asks a question about the dynamics that cause unstable processes. The deeper understanding gained as to the sources of instability can lead to process and control system solutions to minimize risk and to increase process performance.

    Caroline Cisneros’ Question

    What causes processes to be unstable when controllers are in manual?

    Greg McMillan’s Answer

    Fortunately, most processes are self-regulating by virtue of having negative feedback that provides a resistance to excursions (e.g., flow, liquid pressure, and continuous composition and temperature). These processes come to a steady state when the controller is in manual. Somewhat less common are integrating processes that have no such feedback, resulting in a ramp (e.g., batch composition and temperature, gas pressure and level). Fortunately, the ramp rate is quite slow except for gas pressure, giving the operator time to intervene.

    There are a few processes where the deviation from setpoint can accelerate when in manual due to positive feedback. These processes should never be left in manual. We can appreciate how positive feedback causes problems in sound systems (e.g., microphones too close to speakers). We can also appreciate from circuit theory how negative resistance and positive feedback would cause an acceleration of a change in current flow. We can turn this insight into an understanding of how a similar situation develops for compressor, steam-jet ejector, exothermic reactor and parallel heat exchanger control.

    The compressor characteristic curves from the compressor manufacturer, a plot of compressor pressure rise versus suction flow, show a curve of decreasing pressure rise for each speed or suction vane position whose slope magnitude increases as the suction flow increases in the normal operating region. The pressure rise consequently decreases more as the flow increases, opposing additional increases in compressor flow and creating a positive resistance to flow. Not commonly seen is that the characteristic curve slope to the left of the surge point becomes zero as you decrease flow, which denotes a point on the surge curve; as the flow decreases further, the pressure rise decreases, causing a further decrease in compressor flow and creating a negative resistance to a decrease in flow.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    When the flow becomes negative, the slope reverses sign, creating a positive resistance with a shape similar to that seen in the normal operating region to the right of the surge point. The compressor flow then increases to a positive flow, at which point the slope reverses sign, creating negative resistance. The result is a jump in about 0.03 seconds to negative flow across the negative resistance, a slower transition along the positive resistance to zero flow, then a jump in about 0.03 seconds across the negative resistance to a positive flow well to the right of the surge curve. If the surge valve is not open far enough, the operating point walks for about 0.5 to 0.75 seconds along the positive resistance back to the surge point. The whole cycle repeats itself with an oscillation period of 1 to 2 seconds. If this seems confusing, don’t feel alone. The PID controller is confused as well.

    Once a compressor gets into surge, the very rapid jumps and oscillations are too much for a conventional PID loop. Even a very fast measurement, PID execution rate and control valve response can’t deal with it alone. Consequently, the oscillation persists until an open loop backup activates and holds open the surge valves till the operating point is sustained well to the right of the surge curve for about 10 seconds at which point there is a bumpless transfer back to PID control. The solution is a very fast valve and PID working bumplessly with an open loop backup that detects a zero slope indicating an approach to surge or a rapid dip in flow indicating an actual surge. The operating point should always be kept well to the right of the surge point.

    For much more on compressor surge control see the article Compressor surge control: Deeper understanding, simulation can eliminate instabilities.

    The same shape, but with much less of a dip in the compressor curve, sometimes occurs just to the right of the surge point. This local dip causes a jumping back and forth called buzzing. While the oscillation is much less severe than surge, the continual buzzing is disruptive to users.

    A similar sort of dip in a curve occurs in a plot of pumping rate versus absolute pressure for a steam-jet ejector. The result is a jumping across the path of negative resistance. The solution here is a different operating pressure or nozzle design, or multiple jets to reduce the operating range so that operation to one side or the other of the dip can be assured.

    Positive feedback occurs in exothermic reactors when the heat of reaction exceeds the cooling rate, causing an accelerating rise in temperature that further increases the heat of reaction. The solution is to always ensure the cooling rate is larger than the heat of reaction. However, in polymerization reactions the rate of reaction can accelerate so fast that the cooling rate cannot be increased fast enough, causing a shutdown or a severe oscillation. For safety and process performance, an aggressively tuned PID is essential where the time constants and dead time associated with heat transfer in the cooling surface and thermowell and the loop response are much less than the positive feedback time constant.

    Derivative action must be maximized and integral action must be minimized. In some cases a proportional plus derivative controller is used. The runaway response of such reactors is characterized by a positive feedback time constant as shown in Figure 1 for an open loop response. The positive feedback time constant is calculated from the ordinary differential equations for the energy balance as shown in Appendix F of 101 Tips for a Successful Automation Career. The point of acceleration cannot be measured in practice because it is unsafe to have the controller in manual. A PID gain too low will allow a reactor to run away since the PID controller is not adding enough negative feedback. There is a window of allowable PID gains that closes as the time constants from the heat transfer surface and thermowell and the total loop dead time approach the positive feedback time constant.

    Figure 1: Positive Feedback Process Open Loop Response
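    To make the runaway response of Figure 1 concrete, here is a minimal simulation sketch (in Python, with assumed parameter values) showing how a positive feedback time constant makes the deviation grow exponentially instead of settling:

        import math

        tau_pf = 600.0   # positive feedback time constant, s (assumed)
        kp = 1.0         # open loop gain, degC/% (assumed)
        dco = 2.0        # step in controller output, %
        dt = 1.0         # integration step, s

        pv_dev, t = 0.0, 0.0
        while t < 1800.0:
            # d(dev)/dt = (dev + kp*dco) / tau_pf  ->  exponential growth,
            # unlike a self-regulating process where the sign on dev is negative.
            pv_dev += dt * (pv_dev + kp * dco) / tau_pf
            t += dt

        # Analytical open loop response: dev(t) = kp*dco*(exp(t/tau_pf) - 1)
        assert abs(pv_dev - kp * dco * (math.exp(t / tau_pf) - 1.0)) < 0.5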

    Positive feedback can also occur when parallel heat exchangers with a common process fluid input each have an outlet temperature controller with a setpoint close to the boiling point or a temperature resulting in vaporization of a component in the process fluid. Each temperature controller manipulates a utility stream providing heat input. The control system is stable if the process flow is exactly the same to all exchangers. However, a sudden reduction in one process flow causes overheating, causing bubbles to form and expand back into the exchanger, increasing the back pressure and hence further decreasing the process flow through this hot exchanger.

    The increasing back pressure eventually forces all of the process flow into the colder heat exchanger, making it colder. The high velocity in the hot exchanger from boiling and vaporization causes vibration and possibly damage at any discontinuity in its path from slugs of water. When nearly all of the water is pushed out of the hot exchanger, its temperature drops, drawing feed that was going to the cold heat exchanger, which causes the hot exchanger to overheat again, repeating the whole cycle. The solution is separate flow controllers and pumps for all streams, so that changes in the flow to one exchanger do not affect another, and a lower temperature setpoint.

    To summarize, to eliminate oscillations, the best solution is a process and equipment design that eliminates negative resistance and positive feedback. When this cannot provide the total solution, operating points may need to be restricted, loop dead time and thermowell time constant minimized and the controller gain increased with integral action decreased or suspended.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 24 Feb 2019

    Missed Opportunities in Process Control - Part 2

    The post, Missed Opportunities in Process Control - Part 2, first appeared on the ControlGlobal.com Control Talk blog.

    Here is the second part of a point blank decisive comprehensive list of what we really need to know in a detailed attempt to reduce the disparity between theory and practice. Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail.

    If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.

    You can also get a comprehensive resource focused on what you really need to know for a successful automation project including nearly a thousand best practices in the 98% new 2019 Process/Industrial Instruments and Controls Handbook Sixth Edition capturing the expertise of 50 leaders in industry.

    1. Ratio control instead of feedforward control. Most of the literature focuses on feedforward control. This is like flying blind for the operator. In most cases there is a flow measurement and the primary process loop dead time is large enough for there to be cascade control using a secondary flow loop. A ratio controller is then set up whose input is the flow signal that would have been the feedforward signal. This could be a disturbance or wild flow or a feed flow in the applications involving most vessels (e.g., crystallizers, evaporators, neutralizers, reactors …) and columns. To provide a shortcut in categorization, I simply call it the “leader” flow. The secondary loop is then the “follower” flow whose setpoint is the “leader” flow multiplied by the ratio controller setpoint. A bias is applied to the ratio controller output, similar to what is done by a feedforward summer, that is corrected by the primary loop (see the sketch after this list). The operator can change the ratio setpoint and see the actual ratio after correction. Improvements can be made to the ratio setpoint based on recognizable persistent differences between the set and actual ratio. Many vessels and most columns are started up on ratio control until normal operating conditions are reached. When primary loops use an analyzer, ratio correction may be suspended when the analyzer misbehaves. If the flow measurement lacks sufficient rangeability, a flow can be computed from the installed flow characteristic and substituted for the flow measurement at low flows. A notable exception is avoidance of ratio control for steam header pressures since the dead time is too short for cascade control, consequently necessitating feedforward control.
    2. Adaptation of feedforward gain and ratio control setpoint. A simple adaptive controller similar to a valve position controller (VPC) for optimization can be used. The adaptive controller setpoint is zero correction, its process variable is the current correction, and its output is the feedforward gain or ratio setpoint. Like a VPC, the traditional approach would be a slow integral-only controller where the integral action is more than ten times slower than in the primary loop controller. However, the directional move suppression described next month can provide more flexibility to deal with undesirable conditions.
    3. PID Form and Structure used in Industry. The literature often shows the “Independent” Form that computes the contribution of the P, I and D modes in parallel with the proportional gain not affecting the I and D modes. The name “Independent” is appropriate not only because the contributions of the modes are independent from each other but also because this Form is independent of what is normally used in today’s distributed control systems (DCSs). Often the tuning parameters for I and D are an integral and derivative gain, respectively, rather than a time. The “Series” or “Real” Form necessarily used in pneumatic controllers was carried over to electronic controllers and is offered as an option in DCSs. The “Series” Form causes an interaction in the time domain that can be confusing but prevents the D mode contribution from exceeding the I mode contribution. Consequently, tuning where the D mode setting is larger than the I mode setting does not cause oscillations. If these settings are carried over to the “Ideal” Form used more extensively today, the user is surprised by unsuspected fast oscillations (see the conversion sketch after this list). The different units for tuning settings also cause havoc. Some proportional modes still use proportional band in percent, and integral settings could be repeats per minute or repeats per second instead of minutes or seconds. Then you have the possibility of an integral gain and derivative gain in an Independent Form. Also, the names given by suppliers for Forms are not consistent. There are also eight Structures offering options to turn off the P and I modes or use setpoint weight factors for the P and D modes. The D mode is simply turned off by a zero setting (zero rate time or derivative gain). I am starting an ISA Standards Committee for PID algorithms and performance to address these issues and many more. For more on PID Forms see the ISA Mentor Q&A “How do you convert tuning settings of an independent PID?”
    4. Sources of Deadband, Resolution, Sensitivity, and Velocity Limit. Deadband can originate from backlash in linkages or connections, deadband in split range configuration, and deadband in Variable Frequency Drive (VFD) setup. Resolution limitation can originate from stiction and analog to digital conversion or computation. Sensitivity limitations can originate from actuators, positioners, or sensors.  Velocity Limits can originate from valve slewing rate set by positioner or booster relay capacity and actuator volume and from speed rate limits in VFD setup.
    5. Oscillations from Deadband, Resolution, Sensitivity, and Velocity Limit. Deadband can cause a limit cycle if there are two or more integrators in the process or control system including the positioner. Thus, a positioner with integral action will create a limit cycle in any loop with integral action in the controller. Positioners should have high gain proportional action and possibly some form of derivative action. A resolution limit can cause a limit cycle if there are one or more integrators in the process or control system including the positioner. Positioners with poor sensitivity have been observed to create essentially a limit cycle. A slow velocity limit causes oscillations that can be quite underdamped.
    6. Noise, resonance and attenuation. The best thing to do is eliminate the source of oscillations, often due to the control system, as detailed in the Control Talk Blog “The Most Disturbing Disturbances are Self-Inflicted”. An oscillation period faster than the loop dead time is essentially noise. There is nothing the loop can do, so the best thing is to ignore it. An oscillation period between one and ten dead times causes resonance; the controller tuning needs to be less aggressive to reduce amplification. If the oscillation period is more than ten times the dead time, the controller tuning needs to be more aggressive to provide attenuation (see the classification sketch after this list).
    7. Control Loops Transfer Variability. We would like to think a control loop makes variability completely disappear. What it actually does is transfer the variability from the controlled variable to the manipulated variable. For many level control loops, we want to minimize this transfer and give it the more positive terminology “maximization of absorption” of variability. This is done by less aggressive tuning that still prevents activation of alarms. The user must be careful that for near-integrating and true integrating processes, the controller gain must not be decreased without increasing the integral time, so that the product of the controller gain and integral time is greater than twice the inverse of the integrating process gain, to prevent large, slow rolling, slowly decaying oscillations with a period forty or more times the dead time (a check is included in the sketch after this list).
    8. Overshoot of Controller Output. Some articles have advocated that the PID controller should be tuned so its output never overshoots the final resting value (FRV). While this may be beneficial for balanced self-regulating processes particularly seen in refineries, it is flat out wrong and potentially unsafe for near-integrating, true integrating and runaway processes. In order to get to a new setpoint or recover from a disturbance, the controller output must overshoot the FRV. This generally requires that the integral mode not dominate the proportional and derivative mode contributions. Integrating process tuning rules are used.
    9. Hidden Factor in Temperature, Composition and pH Loops. The process gain in these loops for continuous and fed-batch operations is generally plotted versus a ratio of manipulated flow to feed flow. To provide a process gain with the proper units, you need to divide by the feed flow. Most people don’t realize the process gain is inversely proportional to feed flow. This is particularly a problem at low production rates resulting in a very large hidden factor. For much more on this see the Control Talk Blog “Hidden factor in Our Most Important Loops”.     
    10. Variable Jacket Flow. If the flow to a vessel jacket is manipulated for temperature control, you have a double whammy. The low flow for a low cooling or heating demand causes an increase in the process gain per the hidden factor and an increase in the process dead time due to the larger transportation delay. The result is often a burst of oscillations from tuning that would be fine at normal operating conditions. A constant jacket flow should be maintained by recirculation and the manipulation of coolant or heating utility makeup flow (preferably steam flow to a steam injector) for high heat demands. The utility return flow is made equal to the makeup flow by a pressure controller on the jacket outlet manipulating return flow.
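    Here is a minimal sketch (in Python, with hypothetical flows) of the ratio station arithmetic in item 1: the follower flow setpoint is the leader flow multiplied by the operator's ratio setpoint plus a bias corrected by the primary loop, and the actual ratio is displayed for comparison.

        def follower_flow_sp(leader_flow, ratio_sp, primary_bias):
            """Setpoint for the follower (secondary) flow loop."""
            return leader_flow * ratio_sp + primary_bias

        def actual_ratio(follower_flow, leader_flow):
            """Displayed to the operator for comparison with the ratio setpoint."""
            return follower_flow / max(leader_flow, 1e-6)

        # Example: a reagent-to-feed ratio setpoint of 0.5 with a small
        # bias correction trimmed by a pH (primary) controller.
        sp = follower_flow_sp(leader_flow=120.0, ratio_sp=0.5, primary_bias=1.8)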
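    The following sketch shows the standard conversion of Series Form tuning settings to the ISA Standard (Ideal) Form noted in item 3, so that settings migrated from older controllers do not produce unsuspected fast oscillations. The example numbers are assumptions for illustration.

        def series_to_standard(kc_s, reset_s, rate_s):
            """Convert Series Form gain, reset time and rate time to the
            ISA Standard (Ideal) Form via the interaction factor."""
            f = 1.0 + rate_s / reset_s      # interaction factor
            return kc_s * f, reset_s * f, rate_s / f

        kc, ti, td = series_to_standard(kc_s=2.0, reset_s=10.0, rate_s=2.5)
        # f = 1.25 -> kc = 2.5, ti = 12.5, td = 2.0
        # Note the converted rate time can never exceed 1/4 the converted
        # reset time, mirroring the Series Form's built-in limit.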
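    Items 6 and 7 can be reduced to two quick checks, sketched below with assumed example values: classify an observed oscillation period against the loop dead time, and verify the integrating-process constraint that the product of controller gain and integral time exceeds twice the inverse of the integrating process gain.

        def classify_oscillation(period_s, dead_time_s):
            if period_s < dead_time_s:
                return "noise: ignore it; the loop cannot act on it"
            if period_s <= 10.0 * dead_time_s:
                return "resonance: tune less aggressively"
            return "slow oscillation: tune more aggressively for attenuation"

        def level_tuning_ok(kc, reset_s, ki_integrating):
            """ki_integrating is the integrating process gain in 1/s
            (%PV per second per %CO)."""
            return kc * reset_s > 2.0 / ki_integrating

        print(classify_oscillation(period_s=400.0, dead_time_s=20.0))
        print(level_tuning_ok(kc=0.5, reset_s=600.0, ki_integrating=0.002))
        # 0.5 * 600 = 300 < 2 / 0.002 = 1000 -> False: expect slow rolling
        # oscillations unless the gain or reset time is increased.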
    • 20 Feb 2019

    What Skill Sets Do You Need to Excel at IIoT Applications in an Automation Industry Career?

    The post What Skill Sets Do You Need to Excel at IIoT Applications in an Automation Industry Career? first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Angela Valdes.

    The Industrial Internet of Things (IIoT) is a hot topic, as seen in the many feature articles published. It is hoped that the much greater availability of data will provide the knowledge needed to sustain and improve plant safety, reliability and performance. Here we look at some of the practical issues and resources in achieving the expected IIoT benefits.

    Angela Valdes is a recently added resource in the ISA Mentor Program. Angela is the automation manager of the Toronto office for SNC-Lavalin. She has over 12 years of experience in project leadership and execution, framed under PMI, lean, agile and stage-gate methodologies. Angela seeks to apply her knowledge in process control and automation in different industries such as pharmaceutical, food and beverage, consumer packaged products and chemicals.

    Angela’s question

    What skill sets and ISA standards shall I start building/referencing in order to grow in the IIoT space and work field?

    Nick Sands’ answer

    The ISA communication division is forming a technical interest group in IIoT. The division has had presentations on the topic for several years at conferences. The leader will be announced in InTech magazine. The ISA95 standard committee is working on updating the enterprise – control system communication to better support IIoT concepts.

    Jim Cahill’s answer

    One tremendous resource would be to read most of Jonas Berge’s LinkedIn blog posts. He writes about IIoT and digital communications and the impact they can have on reliability, safety, efficiency and production. I recommend you send him a connection request to see when he has new things to post. One other person to connect with includes Terrance O’Hanlon of ReliabilityWeb.com. Searching on the #IIoT hashtag in Twitter and LinkedIn is also a very good way to discover new articles and influencers in these areas.

    Greg McMillan’s answer

    One of the things we need to be careful about is to make sure there are people with the expertise to use the data and associated software, such as data analytics. There was a misrepresentation in a feature article that IIoT would make the automation engineer obsolete when in fact the opposite is true. We need more process control engineers besides process analytical technology and IIoT experts to make the most out of the data. The data by itself can be overwhelming as seen in the series of articles “Drowning in Data; Starving for Information”: Part 1, Part 2, Part 3, and Part 4.

    Process control engineers with a fundamental knowledge of the process and the automation system need to intelligently analyze and make the associated improvements in instrumentation, valves, setpoints, tuning, control strategies, and use of controller features whether PID or MPC. Often lacking is the recognition of the importance of dynamics in the process and particularly the automation system. The process inputs must be synchronized with the process outputs for continuous processes before true correlations can be identified.

    Knowledge of process first principles is also needed to determine whether correlations are really cause and effect. While the solution would seem to be employing expert rules on the IIoT results, a word of caution here is that the attempts to develop and use real time expert systems in the 1980s and 1990s were largely failures, wasting an incredible amount of time and money. Deficiencies in the conditions, interrelationships and knowledge in the rules implemented, plus the lack of visibility of the interplay between rules and the inability to troubleshoot rules, led to a lot of false alerts, resulting in the systems being turned off and eventually abandoned.

    Hunter Vegas’ answer

    There have been multiple “data revolutions” over the years, and I consider IIoT to be just another wave where new information is made available that wasn’t available before. Unfortunately the problem that bedeviled the previous data revolutions still remains today. More data is not necessarily useful unless the right information is delivered at the right time to a person who can act on it.  In many cases the operators have too much information now – when something goes wrong they get 1000 alarms and have to wade through the noise to try to figure out what went wrong and how to fix it.  

    IIoT data can undoubtedly be useful, but it takes a huge amount of time and effort to create an interface that can effectively present that information, and still more time and effort to keep it up. All too often management reads a few trendy articles and thinks IIoT is something you buy or install and savings should just appear. Unfortunately most fail to appreciate the effort required to implement such a system and keep it working and adding value. Usually money is spent, people celebrate the glorious new system, then it falls out of favor and use and gets eliminated a short time later.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    As far as I know there aren’t any specific standards associated with IIoT. I do think that there are several skill sets that can help you implement it:

    • Knowledge of the latest alarm standards will help you understand how to identify alarm information/data that IS useful and how to make sure the operators get the important information in a timely fashion and are not buried with useless alarm data that doesn’t matter.
    • Knowledge of some of the new HMI design standards is useful to learn how to present the information in a meaningful way that lets the operator quickly understand a situation and correctly react to it.
    • Knowledge of getting the information into the system. That particular topic will depend upon your particular control system and how data flows into it.  It might come in via OPC, wireless, Hart, Modbus, Ethernet, or any number of other paths.  Each communication type will have its own challenges and security issues that must be addressed.
    • Knowledge of what matters to your plant. In an aging acid plant, corrosion can be a big issue. If you can add a handful of small wireless pipe thickness gauges in a few key spots, that might have significant value. If you have environmental problems and sumps located all over your facility, it might be possible to add wireless analyzers to detect solvent spills and quickly react to them rather than having a spill hit the river outfall before you detect it. The key to all of this is to understand the plant’s ‘pain points’ and then determine a way to address them. IIoT may offer an answer, or it may be as simple as retuning a controller or replacing a poorly specified control valve with a better one. Regardless, if calling it an “IIoT Project” gets you funding and you solve a problem, then you are a hero.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 11 Feb 2019

    Webinar Recording: Practical Limits to Control Loop Performance

    The post Webinar Recording: Practical Limits to Control Loop Performance first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Part 2 provides a quick review of Part 1 and then discusses the contribution of each PID mode, why reset time is orders of magnitude too small for most composition and temperature loops, the ultimate and practical limits to control loop performance, the critical role of dead time, and when PID gain that is too high or too low causes more oscillation.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 11 Feb 2019

    Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations

    The post Webinar Recording: Simple Loop Tuning Methods and PID Features to Prevent Oscillations first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Part 3 (the final part) describes simple tuning methods and the PID features that can be used to prevent the oscillations that plague our most important loops and to achieve the desired degree of tightness or looseness in level control. A general procedure is offered and a block diagram of the most effective PID structure, not shown anywhere else, is given followed by questions and answers.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 18 Jan 2019

    Missed Opportunities in Process Control - Part 1

    The post, Missed Opportunities in Process Control - Part 1, first appeared on the ControlGlobal.com Control Talk blog.

    I had an awakening as to the much greater than realized disconnect between what is said in the literature and courses and what we need to know as practitioners as I was giving guest lectures and labs to chemical engineering students on PID control. We are increasingly messed up. The disparity between theory and practice is growing exponentially because leaders in process control are leaving the stage and users today are not given the time to explore and innovate and the freedom to publish. Much of what is out there is a distraction at best. I decided to make a decisive pitch not holding back for the sake of diplomacy. Here is the start of a point blank decisive comprehensive list in a six part series.

    Please read, think and take to heart the opportunities to increase the performance and recognized value of our profession. The list is necessarily concise in detail. If you want more information on these opportunities, please join the ISA Mentor Program and ask the questions whose answers can be shared via Mentor Q&A Posts.

    1. Recognizing and addressing actual load disturbance location. Most of the literature unfortunately shows disturbances entering at the process output, when in reality disturbances enter mostly as process inputs (e.g., feed flow, composition and temperature changes), passing through the primary process time constant. Thinking of disturbances on the process output leads to many wrong conclusions and mistakes, such as: large primary time constants are bad; tuning can be done primarily for setpoint changes; feedforward and ratio control are not important; and algorithms like Internal Model Control are good alternatives to PID control.
    2. Tuning and tests to first achieve good load disturbance rejection and then good setpoint response. While most of the literature focuses on setpoint response tuning and testing, the first objective should be good load disturbance rejection, particularly in chemical processes. Such tuning generally requires more aggressive proportional action. Testing is simply done by momentarily putting the PID in manual, changing the PID output and putting the PID back in auto. Tuning should minimize peak and integrated error from load disturbances, taking into account the need to minimize resonance. To prevent overshoot in the setpoint response, a setpoint lead-lag can be used with the lag time equal to the reset time, or a PID structure of proportional and derivative action on PV and integral action on error (PD on PV and I on E) can be used. If a faster setpoint response is needed, the setpoint lead can be increased to ¼ the lag time or a 2 Degrees of Freedom (2DOF) PID structure used with setpoint weight factors for the proportional and derivative modes equal to 0.5 and 0.25, respectively. Rapid changes in signals to valves or secondary loops upsetting other loops from higher PID gain settings can be smoothed by setpoint rate limits on analog output blocks and secondary PIDs and turning on external-reset feedback (ERF). We will note the many other advantages of ERF and its facilitation of directional move suppression to intelligently slow down changes of manipulated flows in a disruptive direction in subsequent months (hope you can wait). In Model Predictive Control move suppression plays a key role. Here we can enable it with the additional intelligence of direction without retuning the PID.
    3. Minimum possible peak error is proportional to dead time and actual peak error is inversely proportional to PID gain. Peak error is important to prevent relief, alarm and SIS activation and environmental violation. The ultimate limit to what you can achieve in minimizing peak error is proportional to the total loop dead time. The practical limit as to what you actually achieve is inversely proportional to the product of the PID gain and open loop process gain. The maximum PID gain is inversely proportional to the total loop dead time. These relationships hold best for near-integrating, true integrating and runaway processes.
    4. Minimum possible integrated error is proportional to dead time squared and actual integrated error is proportional to reset time and inversely proportional to PID gain. The integrated absolute error is the most common criterion cited in the literature. It does provide a measure of the amount of process material that is off-spec. The ultimate limit to what you can achieve in minimizing integrated error is proportional to the total loop dead time squared. The practical limit as to what you actually achieve is proportional to the reset time and inversely proportional to the product of the PID gain and open loop process gain. The minimum reset time is proportional and the maximum PID gain is inversely proportional to the total loop dead time. These relationships hold best for near-integrating, true integrating and runaway processes (see the sketch after this list).
    5. Detuning a PID can be evaluated as an increase in implied dead time. The relationships cited in items 3 and 4 above can be understood by realizing that the effect on loop performance of a smaller PID gain and larger reset time setting than needed to prevent oscillations is that of a larger than actual total loop dead time. This implied dead time is basically ½ and ¼ of the summation of Lambda plus the actual dead time, for self-regulating and integrating processes, respectively (see the sketch after this list).
    6. The effect of analyzer cycle time and wireless update rate depends on implied dead time and consequently tuning. You can prove almost any point you want to make about whether the effect of a discontinuous update is important or not by how you tune the PID. The dead time from an analyzer cycle time is 1½ times the cycle time. The dead time from a wireless device update or PID execution rate or sample rate is ½ the time interval between updates, assuming no latency. How important this additional dead time is can be seen in how big it is relative to the implied dead time. The conventional rule of thumb is that the dead time from discontinuous updates should be less than 10% of the total loop dead time (wireless update rates and PID execution rates less than 20% of the dead time). This is only really true if you are pursuing aggressive control where the implied dead time is near the actual dead time. A better recommendation would be a wireless update rate or PID execution rate less than 20% of the “original” implied dead time. I use the word “original” to remind us not to spiral into slowing down update and execution rates by increasing implied dead time and then further slowing down update and execution rates.
    7. The product of the PID gain and reset time must be greater than the inverse of the integrating process gain. Violation of this rule causes very large and very slow oscillations that are slightly damped, taking hours to days to die out for vessels and columns, respectively. This is a common problem because in control theory courses we learned that high controller gain causes oscillations, and the actual PID gain permitted for near-integrating, true integrating and runaway processes is quite large (e.g., > 100). Most don’t think such a high PID gain is possible and don’t like sudden large movements in valves. Furthermore, integral action provides the gradual action that will always be in a direction consistent with the error sign and will seek to exactly match up the PV and SP, meeting common expectations. The result is a reset time frequently set orders of magnitude too small, making the product of PID gain and reset time less than the inverse of the integrating process gain and causing confusing slow oscillations.
    8. The effective rate time should be less than ¼ the effective reset time. While PID controllers with a Series Form effectively prevented this due to interaction factors in the time domain, this is not the case for the other PID Forms. Not enforcing this limit is a common problem in migration projects since older controllers had the Series Form and most modern controllers use the ISA Standard Form. The result is erratic fast oscillations.
    9. Automation system dynamics affect the performance of most loops. This should be good news for us since this is much more under the control of the automation engineer and easier and cheaper to fix than process or equipment dynamics. Flow, pressure, inline temperature and composition (e.g., static mixer), and fluidized bed reactors are affected by sensor response time and final control element (e.g., valve and VFD) response  time. Pressure and surge control loops are also affected by PID execution rate.
    10. Reserve the feedforward multiplier and ratio controller ratio correction for sheet lines and plug flow systems. The conventional rule that on a plot of manipulated variable versus feedforward variable, a change in slope demands a feedforward multiplier and a change in intercept demands a feedforward summer is not really relevant. A feedforward multiplier introduces a change in controller gain that counteracts the change in process gain. However, this is only useful for sheet lines and plug flow (e.g., static mixers and extruders) because for vessels and columns, the effect of back mixing from agitation and reflux or recirculation creates a process time constant that is proportional to the residence time. For decreases in feed flow, the increase in process time constant from an increase in residence time negates the increase in process gain. Also, the most important error is often a bias error in the measurements. Span errors are diminished by a large span, showing up mostly as a change in process gain much smaller than the other sources of changes in process gain. Also, the scaling and filtering of a feedforward summer signal and its correction is much easier.
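    Here is a minimal sketch (with assumed example numbers) of the practical limits in items 3 and 4: the integrated error for a load upset that requires an output change dCO is dCO times the reset time divided by the PID gain, and the practical peak error is roughly the open loop error divided by the product of the PID gain and open loop gain, which simplifies to dCO divided by the PID gain.

        def integrated_error(d_co_pct, reset_s, kc):
            """Integrated error (%PV * s) over the load recovery: follows
            from the integral mode alone rezeroing the error while the
            output must change by d_co_pct."""
            return d_co_pct * reset_s / kc

        def peak_error(d_co_pct, kc):
            """Approximate peak %PV deviation: open loop error Ko*dCO
            divided by Kc*Ko, which simplifies to dCO/Kc."""
            return d_co_pct / kc

        # A 10% load upset with Kc = 2 and a 60 s reset time:
        ie = integrated_error(d_co_pct=10.0, reset_s=60.0, kc=2.0)   # 300 %*s
        pk = peak_error(d_co_pct=10.0, kc=2.0)                       # 5 %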
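    And a companion sketch for items 5 and 6 (again with assumed numbers): the implied dead time from Lambda tuning and the added dead time from an analyzer cycle time or wireless update rate, with the suggested 20 percent check against the original implied dead time.

        def implied_dead_time(lam_s, dead_time_s, integrating=False):
            """1/2 (self-regulating) or 1/4 (integrating) of Lambda plus
            the actual loop dead time."""
            return (0.25 if integrating else 0.5) * (lam_s + dead_time_s)

        def added_dead_time(analyzer_cycle_s=0.0, update_interval_s=0.0):
            # 1.5 x analyzer cycle time; 0.5 x wireless update interval
            return 1.5 * analyzer_cycle_s + 0.5 * update_interval_s

        theta_i = implied_dead_time(lam_s=120.0, dead_time_s=30.0)   # 75 s
        extra = added_dead_time(update_interval_s=16.0)              # 8 s
        ok = extra < 0.2 * theta_i   # True: update rate is acceptable here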
    • 14 Jan 2019

    How to Get Started with Effective Use of OPC

    The post How to Get Started with Effective Use of OPC first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Encouraged to ask general questions that would help share knowledge, Nikki Escamillas provided several questions on OPC. Initially, the OPC standard was restricted to the Windows operating system, with the acronym originally designating OLE (object linking and embedding) for process control. OPC is now the acronym for open platform communications, which is much more widely used, playing a key role in automation systems. We are fortunate to have answers to Nikki’s questions from a knowledgeable expert in higher level automation system communications, Tom Freiberger, product manager for industrial Ethernet in R&D engineering for Emerson Automation Solutions.

    Nikki Escamillas is a recently added protégé in the ISA Mentor Program. Nikki is an Automation Process Engineer for Republic Cement and Building Materials – Batangas Plant. Nikki specializes in process optimization and automation control, committed to minimizing cost and improving product quality through effective time management and efficient use of resources and data analytics. Nikki has excellent knowledge and experience of advanced process control principles and their application to different plant processes, more specifically cement and building materials manufacturing.

    Nikki Escamillas’ First Question

    How does OPC work?

    Tom Freiberger’s Answer

    OPC is a client/server protocol. The server has a list of data points (normally in a tree structure) that it provides. A client can connect to a server and pick a set of data points it wishes to use. The client can then read or write to those data points.  OPC is meant to be a common language for integrating products from multiple vendors. The OPC Foundation has a good introduction of OPC DA and UA at their website.
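    As a concrete illustration of this client/server pattern, here is a minimal sketch using the open source python-opcua package; the endpoint URL and node id below are purely hypothetical, and a real application would browse the server's tree to pick its data points.

        from opcua import Client   # pip install opcua (python-opcua project)

        client = Client("opc.tcp://localhost:4840/freeopcua/server/")
        client.connect()
        try:
            # Node ids are normally browsed from the server's tree of data
            # points; "ns=2;i=2" here is purely illustrative.
            node = client.get_node("ns=2;i=2")
            value = node.get_value()        # read a data point
            node.set_value(value + 1.0)     # write it back (if writable)
        finally:
            client.disconnect()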

    Nikki Escamillas’ Second Question

    Does configuration of OPC DA differ from OPC UA?

    Tom Freiberger’s Answer

    Yes and no. The core concept of client/server and working with a set of data points remains consistent between the two, but the details of how to configure them differ. The security configuration is the primary difference. OPC DA is based on Microsoft’s DCOM technology, which means the security settings in the operating system are used. OPC UA runs on many operating systems, and therefore the security settings are embedded into the configuration of the OPC application. OPC UA applications should use common terminology in their configuration to ease integration between multiple vendors.

    Nikki Escamillas’ Third Question

    Do we have any guidelines to follow when installing and configuring one OPC based upon its type?

    Tom Freiberger’s Answer

    Installation and configuration guidelines are going to be specific to the products being used. Some products are going to be limited on the number of data points that can be exchanged by a license or other application limitation. Some products may have performance limits. All of these details should be supplied in the documentation of the product.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    Nikki Escamillas’ Fourth Question

    Could I directly make one computer become OPC capable?

    Tom Freiberger’s Answer

    An OPC server or client by itself is just a means to transfer data. OPC is not very interesting without another application behind it to supply information. The computer you are attempting to add OPC to would need some other application to provide data. The vendor of that application would need to build OPC into their product. If the application with the data supports some other protocol to exchange data (like Modbus TCP, Ethernet/IP, or PROFINET) an OPC protocol converter could be used to interface with other OPC applications. If the application with the data has no means of extracting the information, there is nothing an OPC server or client can do.

    Nikki Escamillas’ Fifth Question

    Is it also possible to create server-to-server communication between two OPC applications?

    Tom Freiberger’s Answer

    I believe there are options for this in the OPC protocol specification, but the details would be specific to the product being used. If a product allows server-to-server connections, that should be listed in its documentation.

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 19 Dec 2018

    Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications

    The post Webinar Recording: PID and Loop Tuning Options and Solutions for Industrial Applications first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    This is Part 1 of a series on the benefits of knowing your process and PID capability. Part 1 focuses on process behavior, the many loop objectives and different worlds of industrial applications, and the loop component’s contribution to the dynamic response.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 17 Dec 2018

    How to Improve Loop Performance for Dead Time Dominant Systems

    The post How to Improve Loop Performance for Dead Time Dominant Systems first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Dead time is the source of the ultimate limit to control loop performance. The peak error is proportional to the dead time and the integrated error is proportional to the dead time squared for load disturbances. If there were no dead time and no noise or interaction, perfect control would be theoretically possible. When the total loop dead time is larger than the open loop time constant, the loop is said to be dead time dominant and solutions are sought to deal with the problem.
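
    A short sketch of the standard reasoning behind these limits, for a PI controller rejecting a step load disturbance: since integral action drives the final error to zero, integrating the PI equation shows the integrated error $E_i$ depends on the required shift in controller output $\Delta u$, the reset time $T_i$, and the controller gain $K_c$:

        $$E_i = \frac{\Delta u \, T_i}{K_c}$$

    Because the loop cannot begin to respond until the dead time $\theta_o$ has elapsed, the peak error $E_x$ grows with $\theta_o$; and since common tuning rules set $T_i$ proportional to $\theta_o$ and $K_c$ inversely proportional to $\theta_o$, the integrated error scales with the square of the dead time, subject to the ultimate limit noted further below:

        $$E_x \propto \theta_o \qquad E_i \propto \theta_o^{\,2} \qquad E_i \ge E_x \, \theta_o$$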

    Anuj Narang is an advanced process control engineer at Spartan Controls Limited. He has more than 11 years of experience in academia and industry, with a PhD in process control. He has designed and implemented large scale industrial control and optimization solutions to achieve sustainable and profitable process and control performance improvements for customers in the oil and gas, oil sands, power and mining industries. He is a registered Professional Engineer with the Association of Professional Engineers and Geoscientists of Alberta, Canada.

    Anuj’s Question

    Is there any other control algorithm available to improve loop performance for dead time dominant systems other than using a Smith predictor or model predictive control (MPC), both of which require identification of a process model?

    Greg McMillan’s Answer

    The solution cited for deadtime dominant loops is often a Smith predictor deadtime compensator (DTC) or model predictive control. There are many counter-intuitive aspects to these solutions. What is often not realized is that the improvement from the DTC or MPC is less for deadtime dominant systems than for lag dominant systems. Much more problematic is that both DTC and MPC are extremely sensitive to a mismatch between the compensator or model deadtime and the actual total loop deadtime, for a decrease as well as an increase in the deadtime. Surprisingly, the consequences for the DTC and MPC are much greater for a decrease in plant dead time. For a conventional PID, a decrease in the deadtime just results in more robustness and slower control. For a DTC and MPC, a decrease in plant deadtime by as little as 25 percent can cause a big increase in integrated error and an erratic response.

    Of course, the best solution is to decrease the many sources of dead time in the process and automation system (e.g., reduce transportation and mixing delays and use online analyzers with probes in the process rather than at-line analyzers with a sample transportation delay and an analysis delay that is 1.5 times the cycle time). An algorithmic mitigation of the consequences of dead time, first advocated by Shinskey and now particularly by me, is to simply insert a deadtime block in the PID external-reset feedback path (BKCAL) with the deadtime updated to always be slightly less than the actual total loop deadtime. Turning external-reset feedback (e.g., dynamic reset limit) on and off enables and disables the deadtime compensation. Note that for transportation delays, this means updating the deadtime as the total feed rate or volume changes. This PID+TD implementation does not require the identification of the open loop gain and open loop time constant as is required for a DTC or MPC. Please note that the external-reset feedback should be the result of a positive feedback implementation of integral action as described in the ISA Mentor Program webinar PID Options and Solutions – Part 3.
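
    To show the structure being described, here is a minimal behavioral sketch (illustrative Python, not vendor function block code) of a PI controller with the positive feedback implementation of integral action and a deadtime block in the external-reset path. The class and parameter names are hypothetical; td would be kept slightly below the identified total loop deadtime:

        import collections

        class PIPlusTD:
            # PI with positive-feedback (external-reset) integral action and a
            # dead-time block of td seconds inserted in the reset path (PID+TD).
            def __init__(self, kc, ti, dt, td=0.0):
                self.kc, self.ti, self.dt = kc, ti, dt
                n = int(round(td / dt))
                self.delay = collections.deque([0.0] * n, maxlen=n) if n > 0 else None
                self.filt = 0.0  # filter state = integral contribution
                self.out = 0.0

            def update(self, sp, pv):
                e = sp - pv
                # External-reset signal: the output from td seconds ago.
                if self.delay is not None:
                    erf = self.delay[0]
                    self.delay.append(self.out)
                else:
                    erf = self.out  # td = 0: conventional reset action
                # Positive feedback implementation of integral action: a
                # first-order filter of the reset signal with time constant ti.
                self.filt += (self.dt / self.ti) * (erf - self.filt)
                self.out = self.kc * e + self.filt
                return self.out

    Setting td to zero recovers a conventional external-reset PI, which is the sense in which turning the deadtime block on and off enables and disables the compensation.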

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    There will be no improvement from deadtime compensation, whether by a DTC or by a deadtime block in external-reset feedback (PID+TD), if the PID tuning settings are left the same as they were before. In fact, the performance can be slightly worse even for an accurate deadtime. You need to greatly decrease the PID integral time toward a limit of the execution time plus any error in the deadtime. The PID gain should also be increased. The equation for predicting integrated error as a function of PID gain and reset time settings is no longer applicable because it predicts an error less than the ultimate limit, which is not possible. The integrated error cannot be less than the peak error multiplied by the deadtime. The ultimate limit is still present because we are not making the deadtime disappear.

    If the deadtime is due to analyzer cycle time or wireless update rate, we can use an enhanced PID (e.g., PIDPlus) to effectively prevent the PID from responding between updates. If the open loop response is deadtime dominant mostly due to the analyzer or wireless device, the effect of a new error upon update results in a correction proportional to the PID gain multiplied by the open loop error. If the PID gain is set equal to the inverse of the open loop gain for a self-regulating process, the correction is perfect and takes care of the step disturbance in a single execution after an update in the PID process variable.

    The integral time should be set smaller than expected (about equal to the total loop deadtime, which ends up being the PID execution time interval) and the positive feedback implementation of integral action must be used with external-reset feedback enabled. The enhanced PID greatly simplifies tuning besides putting the integrated error close to its ultimate limit. Note that you do not see the true error that could have started at any time in between updates but only see the error measured after the update.
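
    A minimal behavioral sketch of the enhanced PID idea as described above (again illustrative Python rather than vendor code): the controller only moves when a fresh measurement arrives, and the reset filter is advanced over the full elapsed time since the last update so integral action does not ramp between updates:

        import math

        class EnhancedPI:
            # PI for slow analyzer or wireless updates (PIDPlus-style behavior).
            def __init__(self, kc, ti):
                self.kc, self.ti = kc, ti
                self.filt = 0.0        # positive-feedback reset filter state
                self.out = 0.0
                self.last_pv = None
                self.last_t = None

            def update(self, sp, pv, t):
                if pv == self.last_pv:
                    return self.out    # no new measurement: hold the output
                if self.last_t is not None:
                    elapsed = t - self.last_t
                    # Advance the reset filter over the whole elapsed time.
                    a = 1.0 - math.exp(-elapsed / self.ti)
                    self.filt += a * (self.out - self.filt)
                self.last_pv, self.last_t = pv, t
                self.out = self.kc * (sp - pv) + self.filt
                return self.out

    With the PID gain near the inverse of the open loop gain, the single move made at an update largely takes care of a step disturbance, as noted above.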

    For more on the sensitivity to both increases and decreases in the total loop deadtime and open loop time constant, see the ISA books Models Unleashed: A Virtual Plant and Predictive Control Applications (pages 56-70 for MPC) and Good Tuning: A Pocket Guide 4th Edition (pages 118-122 for DTC). For more on the enhanced PID, see the ISA blog post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements and the Control Talk blog post Batch and Continuous Control with At-Line and Offline Analyzers Tips.

    The following figures from Models Unleashed show how an MPC with two controlled variables (CV1 and CV2) and two manipulated variables for a matrix with condition number three (CN = 3) responds to a doubling and a halving of the plant dead time (delay) when the total loop dead time is greater than the open loop time constant.

    Figure 1: Dead Time Dominant MPC Test for Doubled Plant Delay

     

    Figure 2: Dead Time Dominant MPC Test for Halved Plant Delay

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 21 Nov 2018

    How to Setup and Identify Process Models for Model Predictive Control

    The post How to Setup and Identify Process Models for Model Predictive Control first appeared on the ISA Interchange blog site.

    The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

    Luis Navas is an ISA Certified Automation Professional and electronic engineer with more than 11 years of experience in process control systems, industrial instrumentation and safety instrumented systems. Luis’ questions on evaporator control are important to improve evaporator concentration control and minimize steam consumption.

    Luis Navas’ Introduction

    The process depicted in Figure 1 shows a concentrator with its process inputs and outputs. I have the following questions regarding the process testing needed to generate process models for an MPC in the correct way. I know that MPC process inputs must be perturbed to allow identification and modeling of each process input and output relationship.

    Figure 1: Variables for model predictive control of a concentrator

     

    Luis Navas’ First Question

    Before I start perturbing the feed flow or steam flow, should the disturbance be avoided or at least minimized? Or simply let it be as usual in the process since this disturbance is always present?

    Mark Darby’s Answer

    If it is not difficult, you can try to suppress the disturbance. That can help the model identification for the feed and steam. To get a model to the disturbance, you will want movement of the disturbance outside the noise level (best is four to five times). If possible, this may require making changes upstream (for example, LIC.SP or FIC.SP).

    Luis Navas’ Second Question

    What about the steam flow? Should it be maintained at a fixed flow (FIC in MAN with a fixed % open FCV) while perturbing the feed flow, and likewise, when perturbing the steam flow, should the feed flow be fixed? I know some MPC software packages excite their outputs in a PRBS (pseudo random binary sequence) practically at the same time while the process testing is being executed, and through mathematics capture the input and output relationships, finally generating the model.

    ISA Mentor Program

    The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career.  Click this link to learn more about the ISA Mentor Program.

    Mark Darby’s Answer

    Because the steam and feed setpoints are manipulated variables, it is best to keep them both in auto for the entire test. PRBS is an option, but it will take more setup effort to get the magnitudes and the average switching interval right. An option is to start with a manual test and switch to PRBS after you’ve got a feel for the process and the right step sizes. Note: a pretest should have already been conducted to identify instrument issues, control issues, tuning, etc. Much more detail is offered in my Section 9.3 of the McGraw-Hill Process/Industrial Instruments and Controls Handbook Sixth Edition.
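
    The two knobs mentioned, magnitude and average switching interval, are easy to see in code. Here is a generic sketch of a random binary test sequence (strictly a random telegraph signal rather than a shift-register PRBS, but it illustrates the same tuning choices; all numbers are hypothetical and would come from the pretest):

        import random

        def random_binary_sequence(n_steps, amplitude, avg_switch_interval, dt, seed=0):
            # Signal toggling between +amplitude and -amplitude. The switching
            # probability per execution is chosen so the average time between
            # switches is roughly avg_switch_interval (same units as dt).
            rng = random.Random(seed)
            p_switch = dt / avg_switch_interval
            level, signal = amplitude, []
            for _ in range(n_steps):
                if rng.random() < p_switch:
                    level = -level
                signal.append(level)
            return signal

        # Example: 2 hours at a 10 s execution period, +/- 2 t/h moves around
        # the feed setpoint, switching on average every ~5 minutes.
        moves = random_binary_sequence(720, 2.0, 300.0, 10.0)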

    Luis Navas’s Last Questions

    What are the pros and cons for process testing if the manipulated variables are perturbed through FIC setpoints (closed loop) or through FIC outputs (open loop)? Or simply, should it be done in accordance with the MPC design? What are the pros and cons if in the final design the FCVs are directly manipulated by the MPC block or through FICs as the MPC’s downstream blocks? I know in this case the FICs will be faster than the MPC, so I expect a good approach is to retain them.

    Mark Darby’s Answers

    Correct – do it according to the MPC design. Note that sometimes the design will need to change during a step test as you learn more about the process. Flow controllers should normally be retained unless they often saturate. This is the same idea used to justify a cascade – having the inner loop manage the higher frequency disturbances (so the slower executing MPC doesn’t have to). The faster executing inner loop also helps with linearization (for example, valve position to flow).

    Additional Mentor Program Resources

    See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

    About the Author
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn

    • 12 Nov 2018

    Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality

    The post Webinar Recording: How to Use Modern Process Control to Maintain Batch-To-Batch Quality first appeared on the ISA Interchange blog site.

    This educational ISA webinar was presented by Greg McMillan. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).

    Understanding the difficulties of batch processing and the new technologies and techniques available can lead to better automation and control solutions that offer much greater increases in efficiency and capacity than are usually obtained for continuous processes. Industry veteran and author Greg McMillan discusses analyzing batch data, elevating the role of the operator, tuning key control loops, and setting up simple control strategies to optimize batch operations. The presentation concludes with an extensive list of best practices.

    About the Presenter
    Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

    Connect with Greg
    LinkedIn
