How Often Do Measurements Need to Be Calibrated?


The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.

In the ISA Mentor Program, I am providing guidance for extremely talented individuals from countries such as Argentina, Brazil, Malaysia, Mexico, Saudi Arabia, and the USA. This question comes from Greg Breitzke.

Greg Breitzke is an E&I reliability specialist – instrumentation/electrical for Stepan. Greg has focused his career on project construction and commissioning as a technician, supervisor, or field engineer. This is his first in-house role, and he is tasked with reviewing and updating plant maintenance procedures for I&E equipment.

Greg Breitzke’s Question

I am working through an issue that may benefit other Mentor Program participants. NFPA 70B provides a detailed description of the prescribed maintenance and frequency based on equipment type, making the electrical portion fairly straightforward. The instrumentation is another matter. We are working to consolidate an abundance of current procedures based on make/model into a reduced list based on technology. The strategy is to "right size" frequencies for calibration and functional testing, reducing non-value-added maintenance so we can increase value-added activities within the existing head count.

My current plan for the instrumentation consists of: 

  1. Sort through the historical paper files with calibration records to determine how long a device has remained in tolerance before a correction was applied,
  2. Compare data against any work orders written against the asset that may reduce the frequency,
  3. Apply safety factors relative to the device impact on safety, regulatory compliance, quality, custody transfer, basic control, or indication only.

I am trying to provide a reference baseline for review of these frequencies, but I am having little luck with the industry standards I have access to. Is there a standard or RAGAGEP for minimum/maximum calibration and functional testing frequency by technology that I can reference for a baseline?

Nick Sands’ Answer

The ISA recommended practice is not on the process of calibration but on a calibration management system: ISA-RP105.00.01-2017, Management of a Calibration Program for Industrial Automation and Control Systems. While I contributed, Leo Staples would be a good person for more explanation.

For SIS, there is a requirement to perform calibration, which is comparison against a standard device, at a documented frequency and with documented limits, with correction when outside of limits. This is also required by OSHA for critical equipment under the PSM regulation, and the EPA has similar requirements under its Risk Management Program (RMP) rule. Correction when out of limits is considered a failed proof test of the instrument in some cases, potentially affecting the reliability of the safety function. Paul Gruhn would be a good person for more explanation.

Paul Gruhn’s Answer

ISA/IEC 61511 is performance-based and does not mandate specific frequencies. Devices must be tested at some interval to make sure they perform as intended. The required frequency depends on many factors: the SIL (performance) target, the failure rate of the device in that service, diagnostic coverage, any redundancy used, and so on.
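To make the interaction of these factors concrete, here is a minimal sketch (with made-up numbers) of the widely used 1oo1 approximation PFDavg ≈ λDU × TI / 2, solved for the longest proof test interval that stays within a SIL band. It deliberately ignores diagnostics, redundancy, and common cause, all of which a real ISA/IEC 61511 verification must include.

```python
# Sketch: how the proof test interval relates to a SIL target for a
# simple 1oo1 (single, non-redundant) device, using the common
# approximation PFDavg ~= lambda_DU * TI / 2. Illustrative only.

SIL_PFD_LIMITS = {1: 1e-1, 2: 1e-2, 3: 1e-3}  # upper PFDavg bound per SIL

def max_proof_test_interval_years(lambda_du_per_hr: float, sil: int) -> float:
    """Largest proof test interval (years) keeping PFDavg within the SIL band."""
    pfd_target = SIL_PFD_LIMITS[sil]
    ti_hours = 2.0 * pfd_target / lambda_du_per_hr
    return ti_hours / 8760.0

# Hypothetical transmitter: dangerous undetected failure rate of 2e-7 per hour
print(max_proof_test_interval_years(2e-7, sil=2))  # ~11.4 years
print(max_proof_test_interval_years(2e-7, sil=3))  # ~1.1 years
```

The same device needs roughly ten times more frequent proof testing for each step up in SIL, which is why the service, not the technology alone, ends up dictating the frequency.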

Leo Staples’ Answer

Section 5.6 of the ISA recommended practice ISA-RP105.00.01-2017 addresses calibration verification intervals (frequencies) in detail. Users should establish calibration intervals for a loop/component based on the following:

  • criticality of the loop/component
  • the performance history of the loop/component
  • the ruggedness/stability of the component(s)
  • the operating environment.

Exceptions include SIS-related devices, where calibration intervals are established to meet SIL requirements. Other factors that can drive calibration intervals include contracts and regulatory requirements.

The idea for the recommended practice came about after years of frustration dealing with ambiguous gas measurement contracts and government regulations. In many cases these simply stated that users should follow good industry practices when addressing all aspects of calibration.

Calibration intervals alone do not address the other major factors that affect measurement accuracy. These include the accuracy of the calibration equipment, knowledge of the calibration personnel, adherence to defined calibration procedures, and knowledge of the personnel responsible for the calibration program. I have lots of war stories if anyone is interested.

One of the last things I did at my company before I retired was develop a Calibration Program Standard Operating Procedure (SOP) based on ISA-RP105.00.01-2017. The SOP was designed for use in the Generation, Transmission & Distribution, and other divisions of the company. Some of you may find this funny, but it was even used to determine the calibration frequency for NERC CIP physical security entry control point devices. Initially, personnel from the Physical Security Department were testing these devices monthly, only because that was what they had always done. Although this was before the SOP was established, my team used its concepts in establishing the calibration intervals for these devices. This work was well received by the auditors. As a side note, the review of monthly calibration intervals for these devices found that the practice caused more problems than it prevented.


ISA Mentor Program

The ISA Mentor Program enables young professionals to access the wisdom and expertise of seasoned ISA members, and offers veteran ISA professionals the chance to share their wisdom and make a difference in someone’s career. Click this link to learn more about the ISA Mentor Program.

Greg McMillan’s Answer

Measurement drift can provide considerable guidance: when the number of months between calibrations multiplied by the drift per month approaches the allowable error, it is time for a calibration check. Most transmitters today have a low drift rate, but thermocouples and most electrodes drift much faster than the transmitter. Past calibration records will provide an update on the actual drift for an application. Also, fouling of sensors, particularly electrodes, is revealed by the 86% response time during calibration tests (often overlooked). The sensing element is the most vulnerable component in nearly all measurements. Calibration checks should be made more frequently at the beginning of sensor life, to establish a drift rate, and near the end of sensor life, when drift and failure rates accelerate. Sensor life for pH electrodes can decrease from a year to a few weeks due to high temperature, solids, strong acids and bases (e.g., caustic), and poisonous ions (e.g., cyanide). For every 25°C increase in temperature, the electrode life is cut in half unless a high-temperature glass is used.
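As a rough illustration of the drift rule and the temperature rule of thumb above, here is a minimal sketch; the numbers are purely illustrative, and actual drift rates and allowable errors must come from your own calibration records and application.

```python
# Sketch of the drift rule: schedule a calibration check before the
# accumulated drift (drift rate x elapsed months) approaches the
# allowable error. Values below are illustrative only.

def months_until_check(drift_per_month: float, allowable_error: float,
                       safety_factor: float = 0.8) -> float:
    """Months until accumulated drift reaches a fraction of allowable error."""
    return safety_factor * allowable_error / drift_per_month

print(months_until_check(drift_per_month=0.05, allowable_error=0.5))  # 8.0 months

# Rule of thumb from the text: pH glass electrode life halves for each
# 25 degC rise unless a high-temperature glass is used.
def electrode_life_weeks(base_life_weeks: float, temp_rise_c: float) -> float:
    return base_life_weeks * 0.5 ** (temp_rise_c / 25.0)

print(electrode_life_weeks(52, temp_rise_c=50))  # 13.0 weeks
```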

Accuracy is particularly important for primary loops (e.g., composition, pH, and temperature) to ensure you are at the right operating point. For secondary loops whose setpoint is corrected by a primary loop, accuracy is less of an issue. For all loops, the 5 Rs (reliability, resolution, repeatability, rangeability and response time) are important for measurements and valves.

Drift in a primary loop sensor shows up as a different average controller output for a given production rate, assuming no changes in raw materials, utilities, or equipment. Fouling of a sensor shows up as an increase in dead time and in the loop oscillation period.

Middle signal selection using three separate sensors provides a great deal of additional intelligence and reliability, reducing unnecessary maintenance. Drift shows up as a sensor with a consistently increasing average deviation from the middle value; the resulting offset is obvious. Coating shows up as a sensor lagging changes in the middle value. A decrease in span shows up as a sensor falling short of the middle value for a change in setpoint.
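A minimal sketch of middle signal selection and the drift signature described above; the threshold and readings are made up for illustration.

```python
# Sketch: middle (median-of-three) signal selection with a simple
# drift flag. A sensor whose average deviation from the middle value
# keeps growing is a drift / reference-problem candidate.

from statistics import median, mean

def middle_select(pv_a: float, pv_b: float, pv_c: float) -> float:
    """The middle value inherently rejects a single failure of any type."""
    return median([pv_a, pv_b, pv_c])

def drift_suspects(history: list[tuple[float, float, float]],
                   limit: float = 0.2) -> list[int]:
    """Indices of sensors whose mean deviation from the middle exceeds limit."""
    suspects = []
    for i in range(3):
        devs = [abs(sample[i] - median(sample)) for sample in history]
        if mean(devs) > limit:
            suspects.append(i)
    return suspects

history = [(7.0, 7.1, 7.0), (7.0, 7.3, 7.1), (7.1, 7.5, 7.1)]
print(middle_select(*history[-1]))   # 7.1
print(drift_suspects(history))       # [1] -- second sensor drifting high
```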

The installed accuracy greatly depends upon installation details and the process fluid, particularly the sensor location in terms of seeing a representative indication of the process with minimal measurement noise. Changes in phase can be problematic for nearly all sensors. Impulse lines and capillary systems are a major source of poor measurement performance, as detailed in the Control Talk columns Prevent pressure transmitter problems and Your DP problems could be a result of improper use of purges, fills, capillaries and seals.

At the end of this post, I give a lot more details on how to minimize drift and maximize accuracy and repeatability by better temperature and pH sensors and through middle signal selection.

Free Calibration Essentials eBook

For an additional educational resource, download Calibration Essentials, an informative eBook produced by ISA and Beamex. The free e-book provides vital information about calibrating process instruments today. To download the eBook, click this link.

Hunter Vegas’ Answer

There is no easy answer to this very complicated question. Unfortunately, the answer is "it depends," but I'll do my best to cover the main points in this short reply.

1) Yes, there are some instrument technologies that have a tendency to drift more than others. A partial list of "drifters" might include:

    • pH (drifts for all kinds of reasons: probe aging, temperature, caustic/acid concentration, fouling, etc.)
    • Thermocouples (tend to drift more than RTDs, especially at high temperature or in hydrogen service)
    • Turbine meters in anything other than very clean, lubricating service tend to wear out, so they read low as they age. However, cavitation can make them intermittently read high.
    • Vortex meters with piezo crystals can age over time, and their low-flow cutout increases.
    • Any flow/pressure transmitter with a diaphragm seal can drift due to process temperature and/or ambient temperature.
    • Most analyzers (oxygen, CO, chromatographs, LEL)
    • This list could go on and on.

2) Some instrument technologies don't drift as much. I've had good success with Coriolis and radar. (Radar doesn't usually drift so much as it just cuts out. Coriolis usually works or it doesn't. Obviously there are situations where either can drift, but they are better than most.) DP in clean service with no diaphragm seals is usually pretty trouble-free, especially the newer transmitters, which are much more stable.

3) The criticality of the service obviously affects how often one needs to calibrate. Any of these issues could dramatically impact the frequency:

    • Is it a SIS instrument? The proof testing frequency will be decided by the SIS calculations.
    • Is it an environmental instrument? The state/feds may require calibrations on a particular frequency.
    • Is it a custody transfer meter? If you are selling millions of pounds of X a year you certainly want to make sure the meter is accurate or you could be giving away a lot of product!
    • Is it a critical control instrument that directly affects product quality or throughput?

4) Obviously, if a frequency is dictated by the service, then that is the end of that. Once those are out of the way, one can usually look at the service and come up with at least a reasonable calibration frequency as a starting point. Start calibrating at that frequency and then monitor history. If you are checking a meter every six months, have checked it four times in the last two years, and the drift has remained less than 50% of the tolerance, then dropping back to a 12-month calibration cycle makes perfect sense. Similarly, if you calibrate every six months and find the meter drift is greater than 50% of tolerance at every calibration, then you probably need to calibrate more often. However, if the meter is older, it may be cheaper to replace it with a new, more stable transmitter.
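Here is a minimal sketch of that adjustment logic, assuming "drift" is recorded at each calibration as the as-found error divided by the tolerance; the doubling/halving steps are one reasonable choice, not a standard.

```python
# Sketch of the adjustment rule described above: lengthen the interval
# when observed drift stays under 50% of tolerance across the recent
# history, shorten it when drift exceeds 50% at every calibration.

def next_interval_months(current_months: int,
                         drift_fractions: list[float]) -> int:
    """drift_fractions: |as-found error| / tolerance at each past calibration."""
    if len(drift_fractions) >= 4 and max(drift_fractions) < 0.5:
        return current_months * 2           # e.g., 6 -> 12 months
    if drift_fractions and min(drift_fractions) > 0.5:
        return max(1, current_months // 2)  # drifting badly: check sooner
    return current_months

print(next_interval_months(6, [0.2, 0.3, 0.1, 0.4]))  # 12
print(next_interval_months(6, [0.7, 0.8, 0.6]))       # 3
```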

5) The last comment I'll make is to make sure you are actually calibrating something that matters. I could go on for pages about companies who are diligently filling out calibration paperwork but aren't actually calibrating their instrumentation. In other words, they go through the motions, fill out the paperwork, and can point to reams of calibration logs, yet they aren't adequately testing the instrument loop, and it could still be completely wrong. (For instance, shooting a temperature transmitter loop but not actually checking the RTD or thermocouple that feeds it, or using a simulator to shoot a 4-20 mA signal into the DCS to check the DCS reading but not actually testing the instrument itself.) They often check one small part of the loop and, after a successful test, consider the whole loop "calibrated."

Greg McMillan’s Answer

The Process/Industrial Instruments and Controls Handbook, Sixth Edition (2019), edited by me and Hunter Vegas, provides insight on how to maximize accuracy and minimize drift for most types of measurements. The following excerpt, written by me, is for temperature:

Temperature

The repeatability, accuracy, and signal strength are two orders of magnitude better for an RTD than for a TC. The drift of an RTD below 400°C is also two orders of magnitude less than that of a TC. The 1 to 20°C per year drift of a TC is of particular concern for biological and chemical reactor and distillation control because of the profound effect on product quality from control at the wrong operating point. The already exceptional accuracy of a Class A RTD (0.1°C) can be improved to 0.02°C by "sensor matching," where the four constants of the Callendar-Van Dusen (CVD) equation provided by the supplier for the sensor are entered into the transmitter. The main limit to the accuracy of an RTD is the wiring.
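As an illustration of what "sensor matching" replaces, here is a sketch of the CVD equation using the generic IEC 60751 Pt100 coefficients; a matched sensor would use the supplier's certificate values instead of these.

```python
# Sketch of Callendar-Van Dusen "sensor matching": the transmitter
# converts resistance to temperature with the sensor's own CVD
# constants instead of the generic IEC 60751 values shown here.

R0 = 100.0       # ohms at 0 degC (Pt100)
A = 3.9083e-3    # generic IEC 60751 coefficients; a matched sensor
B = -5.775e-7    # ships with its own certificate values
C = -4.183e-12   # C is used only below 0 degC

def cvd_resistance(t_c: float) -> float:
    """Resistance of a Pt100 at temperature t_c per the CVD equation."""
    r = R0 * (1 + A * t_c + B * t_c**2)
    if t_c < 0:
        r += R0 * C * (t_c - 100.0) * t_c**3
    return r

def cvd_temperature(r_ohms: float) -> float:
    """Invert the CVD equation for t >= 0 degC via the quadratic formula."""
    disc = A**2 - 4 * B * (1 - r_ohms / R0)
    return (-A + disc**0.5) / (2 * B)

print(cvd_resistance(100.0))      # ~138.51 ohms
print(cvd_temperature(138.5055))  # ~100.0 degC
```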

The use of three extension lead wires between the sensor and the transmitter or input card can compensate the measurement for changes in lead wire resistance due to temperature, assuming the change is exactly the same for both current-carrying lead wires. The use of four extension lead wires enables total compensation that accounts for the inevitable uncertainty in lead wire resistance. Standard lead wires have a resistance tolerance of 10%. For 500 feet of 20-gauge lead wire, the error could be as large as 26°C for a 2-wire RTD and 2.6°C for a 3-wire RTD. The best practice is to use a 4-wire RTD unless the transmitter is located close to the sensor, preferably on the sensor. The transmitter accuracy is about 0.1°C.
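The arithmetic behind those error figures can be reproduced as follows, assuming roughly 10.15 Ω per 1,000 ft for 20-gauge copper and the nominal Pt100 sensitivity of about 0.385 Ω/°C:

```python
# Arithmetic behind the 26 degC / 2.6 degC figures, assuming
# ~10.15 ohm per 1000 ft for 20 AWG copper and a nominal Pt100
# sensitivity of ~0.385 ohm/degC.

OHMS_PER_1000FT_20AWG = 10.15
PT100_OHMS_PER_DEGC = 0.385

length_ft = 500.0
lead_r = OHMS_PER_1000FT_20AWG * length_ft / 1000.0  # ~5.1 ohm per conductor

# 2-wire RTD: both lead resistances add directly to the element.
err_2wire = 2 * lead_r / PT100_OHMS_PER_DEGC         # ~26 degC

# 3-wire RTD: compensation cancels the leads only if they match;
# a 10% resistance tolerance leaves ~10% of the 2-wire error.
err_3wire = 0.10 * err_2wire                         # ~2.6 degC

print(round(err_2wire, 1), round(err_3wire, 1))      # 26.4 2.6
```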

A handheld signal generator of resistance and voltage can be used to simulate the sensor to check or change a transmitter calibration. The sensor connected to the transmitter with linearization needs to be inserted in a dry block simulator. A bath can be used at low temperatures to test thermowell response time, but a dry block is better for calibration. The reference temperature sensor in the block or bath should be four times more accurate than the sensor being tested. The block or bath readout resolution must be better than the best possible precision of the sensor. The block or bath calibration system should have accuracy traceable to the National Metrology Institute of the user's country (NIST in the USA).

The accuracy at the normal setpoint, which ensures the proper process operating point, must be confirmed by a temperature test with a block. For a factory-assembled and factory-calibrated sensor and thermowell with an integral temperature transmitter, a single-point temperature test in a dry block is usually sufficient, with minimal zero or offset adjustment needed. For an RTD with "sensor matching," adjustment is often not needed. For field calibration, the temperature of the block must be varied to cover the calibration range to set the linearization, span, and zero adjustments. For field assembly, it would be wise to check the 63% response time in a bath.

Middle Signal Selection

The best solution in terms of increasing reliability, maintainability, and accuracy for all sensors with different durations of process service is automatic selection of the middle value for the loop process variable (PV). A very large chemical intermediates plant extended middle signal selection to all measurements, which, in combination with a triple-redundant controller, essentially eliminated the one or more spurious trips per year. Middle signal selection was a requirement for all pH loops in Monsanto and Solutia.

The return on investment for the additional electrodes, from improved process performance and reduced life cycle costs, is typically more than enough to justify the additional capital costs for biological and chemical processes if the electrode life expectancy has been proven acceptable in lab tests for harsh conditions. The use of the middle signal inherently ignores a single failure of any type, including the most insidious failure: one that gives a pH value equal to the setpoint. The middle value reduces noise without introducing the lag from a damping adjustment or signal filter, and facilitates monitoring the relative speed of the response and the drift, which are indicative of measurement and reference electrode coatings, respectively. The middle value used as the loop PV for well-tuned loops will reside near the setpoint regardless of drift.

A drift in one of the other electrodes is indicative of plugging or poisoning of its reference. If both of the other electrodes are drifting in the same direction, the middle-value electrode probably has a reference problem. If the change in pH for a setpoint change is slower or smaller for one of the other electrodes, it indicates a coating or a loss in efficiency, respectively, for that glass electrode. Loss of pH glass electrode efficiency results from deterioration of the glass surface due to chemical attack, dehydration, non-aqueous solvents, and aging accelerated by high process temperatures. Decreases in glass electrode shunt resistance caused by exposure of O-rings and seals to a harsh or hot process can also cause a loss in electrode efficiency.

pH Electrodes

Here is some detailed guidance on pH electrode calibration from the ISA book Essentials of Modern Measurements and Final Control Elements.

Buffer Calibrations

Buffer calibrations use two buffer solutions, usually at least 3 pH units apart, which allow the pH analyzer to calculate a new slope and zero value corresponding to the particular characteristics of the sensor, so that pH is more accurately derived from the millivolt and temperature signals.

  • The slope derived from a buffer calibration provides an indication of the condition of the glass electrode, while the zero value gives an indication of reference poisoning or asymmetry potential, which is an offset within the pH electrode itself.
  • The slope of a pH electrode tends to decrease from an initial value relatively close to the theoretical value of 59.16 mV/pH, in many cases largely due to the development of a high-impedance short within the sensor, which forms a shunt of the electrode potential.
  • Zero offset values will generally lie within ±15 mV due to liquid junction potential; larger deviations are indications of poisoning.
  • Buffer solutions have a stated pH value at 25°C, but the actual value changes with temperature, especially for stated values of 7 pH or above. The buffer value at the calibration temperature should be used, or errors will result.
  • The values of a buffer at temperatures other than 25°C are usually listed on the bottle; better yet, the temperature behavior of the buffer can be loaded into the pH transmitter, allowing it to use the correct buffer value at calibration.
  • Calibration errors can also be caused by buffer calibrations done in haste, which may not allow the pH sensor to fully respond to the buffer solution.
  • This causes errors especially when a warm pH sensor is not given enough time to cool down to the temperature of the buffer solution.
  • pH transmitters employ a stabilization feature, which prevents the analyzer from accepting a buffer pH reading that has not reached a prescribed level of stabilization in terms of pH change per unit time.
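A minimal sketch of the slope/zero math behind a two-point buffer calibration follows; the buffer readings are made up, and a real analyzer also applies the temperature corrections noted above.

```python
# Sketch of the math inside a two-point buffer calibration: derive the
# slope (mV/pH) and zero (mV offset at 7 pH), then judge electrode
# health against the theoretical 59.16 mV/pH. Readings are made up.

THEORETICAL_SLOPE_MV_PER_PH = 59.16  # at 25 degC

def buffer_calibration(ph1, mv1, ph2, mv2):
    slope = (mv1 - mv2) / (ph2 - ph1)    # mV per pH, positive
    zero_mv = mv1 + slope * (ph1 - 7.0)  # offset at 7 pH
    return slope, zero_mv

def ph_from_mv(mv, slope, zero_mv):
    return 7.0 + (zero_mv - mv) / slope

slope, zero_mv = buffer_calibration(4.01, 165.0, 7.00, -5.0)
print(round(slope, 2))    # 56.86 (about 96% of theoretical: glass still healthy)
print(round(zero_mv, 1))  # -5.0 (within +/-15 mV: no sign of poisoning)
print(round(ph_from_mv(60.0, slope, zero_mv), 2))  # ~5.86
```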

pH Standardization

Standardization is a simple zero adjustment of a pH analyzer to match the reading of a sample of the process solution measured with a laboratory or portable pH analyzer. Standardization eliminates the removal and handling of electrodes and the upset to the equilibrium of the reference electrode junction. Standardization also takes into account the liquid junction potential from high ionic strength solutions and from non-aqueous solvents in chemical reactions, which would not be seen in buffer solutions. For greatest accuracy, samples should be measured immediately at the sample point with a portable pH meter.

If a lab sample measurement value is used, it must be time-stamped, and the lab value compared to the historical online value at that time for a calibration adjustment. The middle signal selected value from three electrodes of different ages can be used instead of a sample pH, provided that a dynamic response to load disturbances or setpoint changes is confirmed for at least two electrodes. If more than one electrode is severely coated, aged, broken, or poisoned, the middle signal is no longer representative of the actual process pH.
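Standardization then reduces to a one-point offset, as in this sketch (values hypothetical):

```python
# Sketch of standardization as a one-point zero adjustment: shift the
# online reading to match a grab-sample value taken at the same
# (time-stamped) moment. No slope change is involved.

def standardize(online_ph_at_sample_time: float, lab_ph: float) -> float:
    """Return the zero offset to add to subsequent online readings."""
    return lab_ph - online_ph_at_sample_time

offset = standardize(online_ph_at_sample_time=6.72, lab_ph=6.90)
print(round(offset, 2))         # 0.18 pH added to future online readings
print(round(6.75 + offset, 2))  # corrected reading: 6.93
```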

  • Standardization is most useful for zeroing out a liquid junction potential, but some caution should be used when using the zero adjustment.
  • A simple standardization does not demonstrate that the pH sensor is responding to pH, as does a buffer calibration, and in some cases, a broken pH electrode can result in a believable pH reading, which may be standardized to a grab sample value.
  • A sample can be prone to contamination from the sample container or even exposure to air; high purity water is a prime example, where a referee measurement must be made on a flowing sample using a flowing-junction reference electrode.
  • A reaction occurring in the sample may not have reached completion when the sample was taken, but will have completed by the time it reaches the lab.
  • Discrepancies between the laboratory measurement and an on-line measurement at an elevated temperature may be due to the solution pH being temperature dependent. Adjusting the analyzer’s solution temperature compensation (not a simple zero adjustment) is the proper course of action.
  • It must be remembered that the laboratory or portable analyzer used to adjust the on-line measurement is not a primary pH standard, as a buffer solution is, and while it is almost always assumed that the laboratory is right, this is not always the case.

The calibration of pH electrodes for non-aqueous solutions is even more challenging as discussed in the Control Talk column The wild side of pH measurement.

Additional Mentor Program Resources

See the ISA book 101 Tips for a Successful Automation Career that grew out of this Mentor Program to gain concise and practical advice. See the InTech magazine feature article Enabling new automation engineers for candid comments from some of the original program participants. See the Control Talk column How to effectively get engineering knowledge with the ISA Mentor Program protégée Keneisha Williams on the challenges faced by young engineers today, and the column How to succeed at career and project migration with protégé Bill Thomas on how to make the most out of yourself and your project. Providing discussion and answers besides Greg McMillan and co-founder of the program Hunter Vegas (project engineering manager at Wunderlich-Malec) are resources Mark Darby (principal consultant at CMiD Solutions), Brian Hrankowsky (consultant engineer at a major pharmaceutical company), Michel Ruel (executive director, engineering practice at BBA Inc.), Leah Ruder (director of global project engineering at the Midwest Engineering Center of Emerson Automation Solutions), Nick Sands (ISA Fellow and Manufacturing Technology Fellow at DuPont), Bart Propst (process control leader for the Ascend Performance Materials Chocolate Bayou plant), Angela Valdes (automation manager of the Toronto office for SNC-Lavalin), and Daniel Warren (senior instrumentation/electrical specialist at D.M.W. Instrumentation Consulting Services, Ltd.).

About the Author
Gregory K. McMillan, CAP, is a retired Senior Fellow from Solutia/Monsanto where he worked in engineering technology on process control improvement. Greg was also an affiliate professor for Washington University in Saint Louis. Greg is an ISA Fellow and received the ISA Kermit Fischer Environmental Award for pH control in 1991, the Control magazine Engineer of the Year award for the process industry in 1994, was inducted into the Control magazine Process Automation Hall of Fame in 2001, was honored by InTech magazine in 2003 as one of the most influential innovators in automation, and received the ISA Life Achievement Award in 2010. Greg is the author of numerous books on process control, including Advances in Reactor Measurement and Control and Essentials of Modern Measurements and Final Elements in the Process Industry. Greg has been the monthly "Control Talk" columnist for Control magazine since 2002. Presently, Greg is a part time modeling and control consultant in Technology for Process Simulation for Emerson Automation Solutions specializing in the use of the virtual plant for exploring new opportunities. He spends most of his time writing, teaching and leading the ISA Mentor Program he founded in 2011.

Connect with Greg
LinkedIn