Posts on this page are from the Control Talk blog, which is one of the ControlGlobal.com blogs for process automation and instrumentation professionals, and from Greg McMillan's contributions to the ISA Interchange blog.
The post, Maximizing Synergy between Engineering and Operations, first appeared on ControlGlobal.com's Control Talk blog.
The operator is by far the most important person in the control room, having the most intimate knowledge of and "hands-on" experience with the process. Engineers who are most successful with process improvements realize they need to sit with operators and observe how they deal with a variety of situations. Process engineers tend to recognize this need more than automation engineers do. Improvements in operator interfaces, alarms, measurements, valves, and control systems are best accomplished through a synergy of knowledge gained in meetings between research, design, support, maintenance, and operations, where each group talks about what it sees as problems and opportunities. ISA standards and virtual plants can provide a mutual understanding in these discussions.
The most successful process control improvement (PCI) initiative at Monsanto and Solutia used such discussions, with some preparatory work on what the process is actually doing and is capable of doing. An opportunity sizing detailed the gaps between current and potential performance, estimated by identifying the best performance found from cost sheets and from a design of experiments (DOE), most often done in a virtual plant due to the increasingly severe limitations on such experimentation in an actual plant. After completion of the opportunity sizing, a one- or two-day opportunity assessment was held, led by a process engineer, with input sought and given by operations, accounting and management, marketing, maintenance, field and lab analyzer specialists, instrument and electrical engineers, and process control engineers. Marketing provided the perspective and details on how the demand for and value of different products was expected to change. This knowledge was crucial for effectively estimating the benefits from increases in process flexibility and capacity. Opportunities for procedure automation and plantwide ratio control, making the transition from one product to another faster and more efficient, were consequently identified. Agreement was sought and often reached on the percentage of each gap that could be eliminated by the potential PCIs proposed and discussed during the meeting. A rough estimate of the cost and time required for each PCI implementation was also listed. The ones with the least cost and time requirements were noted as "Quick Hits." To take advantage of the knowledge, enthusiasm, and momentum, the "Quick Hits" were usually started immediately after the meeting or the following week.
Synergy can be maximized by exploring a wide spectrum of scenarios in a virtual plant that can run faster than real time, with the results discussed in training sessions. Every engineer, scientist, technician, and operator should be involved. If necessary, this can be done at luncheons. Any resulting webinar should be recorded, including the discussions. See the Control article "Virtual Plant Virtuosity" and the ISA Mentor Program webinar recordings for this and much more in terms of gaining and using operational, process, and automation system knowledge.
Webinar recordings should focus on the level of understanding needed and achievable in the plant, not on what a supplier would like to promote. The ability of operators to learn the essential aspects and principles of process, equipment, and automation system performance should not be underestimated. We want to ensure the operator knows exactly and quickly what is happening and is able to get at the root cause of a problem, preemptively preventing poor process performance and SIS activation. Operators need to be aware of the severe adverse effect of deadtime. Fortunately, operators want to learn!
Finding the real causes of potential abnormal situations is critical for improving the HMI, alarm systems, engineering, maintenance, and operations. Ideally, there should be a single alarm of elevated importance identifying the root cause (e.g., a state-based alarm), and the operator should be able to readily investigate, in the HMI, the conditions associated with the root cause. Maintenance should be able to know what mechanical or automation component to repair or replace. Engineering should design procedure automation (state-based control) to automatically deal with the abnormal situation.
Often the very first abnormal measurement is an indication of the root cause. For this to hold, however, the abnormal condition must be upstream, and the measurement of the abnormal condition must be faster than the measurements of other problems that occur as a consequence or coincidence. This is a particular concern for temperature, because thermowell lags can be 10 to 100 seconds depending upon fit and velocity. For pH, the electrode lags can range from 5 to 200 seconds depending upon glass age, dehydration, fouling, and velocity. There is also the deadtime associated with any transportation delay to the sensor. Finally, an output correlated with an input is not necessarily a cause-and-effect relationship. I find process analysis, some form of fault tree diagram, and the investigation of relevant scenarios in a virtual plant to be most useful.
Sharing useful knowledge is the biggest obstacle to success. The biggest obstacle can become the biggest achievement.
The post Webinar Recording: Strange but True Process Control Stories first appeared on the ISA Interchange blog site.
This educational ISA webinar was presented by Greg McMillan in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient, and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Greg McMillan presents lessons learned the hard way during his 40-year career, through concise “War Stories” of mistakes made in the field. Many of these mistakes are still being made today with some posing a safety risk, as well as potentially reducing process efficiency or capacity.
The post Webinar Recording: Temperature Measurement and Control first appeared on the ISA Interchange blog site.
Temperature is the most important common measurement that is critical for process efficiency and capacity because it not only affects energy use but also production rate and quality. Temperature plays a critical role in the formation, separation, and purification of product. Here we see how to get the most accurate and responsive measurements and the best control for key unit operations.
The usual concern is whether an automation system is too slow, but there are some applications where an automation system is disruptive by being too fast. Here we look at what determines whether a system should be faster or slower, what the limiting factors are, and thus the solution to meeting a speed-of-response objective. In the process, we will find there are a lot of misconceptions. The good news is that most of the corrections needed are within the realm of the automation engineer's responsibility.
The more general case, with possible safety and process performance consequences, is when the final control element (e.g., control valve or variable frequency drive), transportation delay, sensor lag(s), transmitter damping, signal filtering, wireless update rate, or PID execution rate is too slow. The question is what the criteria and priorities are in terms of increasing the speed of response.
The key to understanding the impact of slowness is to realize that the minimum peak error and minimum integrated absolute error are proportional to the deadtime and the deadtime squared, respectively. The exception is deadtime-dominant loops, which basically have a peak error equal to the open loop error (the error if the PID were in manual) and thus an integrated error that is proportional to deadtime. It is important to realize that this deadtime is not just the process deadtime but the total loop deadtime: the summation of all the pure delays and the equivalent deadtime from lags in the control loop, whether in the process, valve, measurement, or controller.
These minimum errors are only achieved by the aggressive tuning seen in the literature but not used in practice, because of the inevitable changes and unknowns concerning gains, deadtimes, and lags. There is always a tradeoff between minimization of errors and robustness. Less aggressive and more robust tuning, while necessary, results in a greater impact of deadtime, in that the gain margin (ratio of ultimate gain to PID gain) and the phase margin (the additional phase lag the loop can tolerate before sustained oscillation) are achieved by setting the tuning to be a greater factor of deadtime. For example, to achieve a gain margin of 6 and a phase margin of 76 degrees, lambda is set to 3 times the deadtime.
The actual errors get larger as the tuning becomes less aggressive. The actual peak error is inversely proportional to the PID gain. The actual integrated error is proportional to the ratio of the integral (reset) time to the PID gain. Consider the use of lambda integrating-process tuning rules for a near-integrating process, where lambda is an arrest time. If you triple the deadtime used in setting the PID gain and reset time to maintain a gain margin of about six and a phase margin of 76 degrees, you decrease the PID gain and increase the reset time, each by a factor of about two times the new deadtime, increasing the actual integrated error by a factor of thirty-six when the new deadtime is 3 times the original deadtime.
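To make this arithmetic concrete, here is a small Python sketch (my own illustration, not from the original post) using one common form of the lambda integrating-process tuning rules. The rule formulas, the integrating process gain, and the deadtime values are assumptions for illustration; with these rules the integrated error, proportional to reset time divided by PID gain, grows with the square of the deadtime when lambda is kept at 3 times the deadtime for robustness. The exact multiplier depends on the tuning rule and the reference tuning chosen.

```python
# Lambda tuning for a near-integrating process (assumed rule forms):
#   Kc = (2*lam + dt) / (Ki * (lam + dt)**2),  Ti = 2*lam + dt
# with lambda (the arrest time) set to 3 times the total loop deadtime.

def lambda_integrating_tuning(Ki, deadtime, lam_factor=3.0):
    """Return (Kc, Ti) for integrating process gain Ki (%/sec per %)
    and total loop deadtime (sec)."""
    lam = lam_factor * deadtime
    Kc = (2 * lam + deadtime) / (Ki * (lam + deadtime) ** 2)
    Ti = 2 * lam + deadtime
    return Kc, Ti

Ki = 0.001  # illustrative integrating process gain
Kc1, Ti1 = lambda_integrating_tuning(Ki, deadtime=10.0)
Kc2, Ti2 = lambda_integrating_tuning(Ki, deadtime=30.0)  # deadtime tripled

# Integrated error for a load upset is proportional to Ti/Kc,
# so this ratio is the penalty for the larger deadtime.
ie_ratio = (Ti2 / Kc2) / (Ti1 / Kc1)
print(round(ie_ratio, 1))  # deadtime-squared scaling: prints 9.0
```

Under these assumed rules Ti/Kc reduces to Ki*(lam + dt)**2, so tripling the deadtime multiplies the integrated error by nine; more aggressive reference tunings give correspondingly larger penalties.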
Consequently, how fast automation system components need to be depends on how much they increase the total loop deadtime. The components to make the loop faster are first chosen based on ease, such as decreasing the PID execution interval, wireless update interval, signal filtering, and transmitter damping, assuming these are more than ten percent of the total loop deadtime. Next, you need to decrease the largest source of deadtime, which may take more time and money, such as a better thermowell or electrode design, location, and installation, or a more precise and faster valve. The deadtime from PID execution and wireless update rates is about 1/2 the time between updates. The deadtimes from transmitter damping or sensor lags increase logarithmically from about 0.28 to 0.88 times the lag as the ratio of the lag to the largest open loop time constant decreases from 1 to 0.01. The deadtime from backlash, stiction, and poor sensitivity is the deadband or resolution limit divided by the rate of change of the controller output. Fortunately, deadtime is generally easier and quicker to identify than the open loop time constant and open loop gain. See the Control Talk blog "Deadtime, the Simple Easy Key to Better Control."
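As a rough illustration of this bookkeeping, the Python sketch below (my own example, not from the original post) tallies the total loop deadtime from the component sources using the approximations just cited. The logarithmic interpolation between the 0.28 and 0.88 factors is my own assumption about the shape of that relationship, and all component values are invented for illustration.

```python
# Tally the total loop deadtime from the component sources described above.
import math

def lag_to_deadtime(lag, largest_time_constant):
    """Equivalent deadtime of a small lag: roughly 0.28 to 0.88 times the
    lag as the ratio of lag to largest open loop time constant falls from
    1 to 0.01. Assumed: log-linear interpolation between those endpoints."""
    ratio = max(min(lag / largest_time_constant, 1.0), 0.01)
    frac = 0.28 + (0.88 - 0.28) * (math.log10(1.0 / ratio) / 2.0)
    return frac * lag

pid_exec = 0.5    # sec between PID executions
wireless = 8.0    # sec between wireless updates
damping = 2.0     # sec of transmitter damping
tau_open = 100.0  # sec, largest open loop time constant
deadband = 0.005  # valve deadband, fraction of span
co_rate = 0.01    # controller output rate of change, fraction of span/sec

total_deadtime = (
    0.5 * pid_exec                        # ~1/2 the PID execution interval
    + 0.5 * wireless                      # ~1/2 the wireless update interval
    + lag_to_deadtime(damping, tau_open)  # equivalent deadtime of the lag
    + deadband / co_rate                  # deadtime from backlash/deadband
)
print(round(total_deadtime, 2))
```

For these invented values the wireless update interval dominates, which is exactly the "fix the largest source first" priority the paragraph describes.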
For flow and pressure processes, the process deadtime is often less than one second, making the control system components by far the largest source of deadtime. For compressor, liquid pressure, and furnace pressure control, the control valve is the largest source of deadtime even when a booster is added. Transmitter damping is generally the next largest source, followed by the PID execution rate.
There is a common misconception that the wireless update time only needs to be less than a fraction (e.g., 1/6) of the response time. For the more interesting processes, such as temperature and pH, the time constant is much larger than the deadtime. A well-mixed vessel could have a process time constant that is more than 40 times the process deadtime. If you use the criterion of 1/6 the response time, assuming the best-case scenario of a 63% response time, the increase in loop deadtime from the wireless update rate can be as large as 3 times the process deadtime. Fortunately, wireless update rates are never that slow. Another reason not to focus on response time is that in integrating processes, where there is no steady state, a response time is irrelevant.
The remaining question is: when is the automation system too fast? The example that most comes to mind is when the faster system causes greater resonance or interaction. You want the most important loops to only see oscillations from less important loops whose period is at least four times the important loop's ultimate period, to reduce resonance and interaction. Hopefully, this is done by making the more important loop faster, but if necessary it is done by making the less important loops slower. A less recognized but very common case of needing to slow down an automation loop is when it creates a load disturbance to other loops (e.g., a feed rate change). While step changes are what are analyzed in the literature as disturbances, in real applications there are seldom any step changes, due to the tuning of the PID and the response of the valve. This effect can be approximated by applying a time constant to the load disturbance and realizing that the resulting errors are reduced, compared to the step disturbance, by a factor of one minus e raised to the negative power of lambda divided by the disturbance time constant.
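The reduction factor at the end of this paragraph can be sketched in a few lines of Python (the lambda and time constant values are my own illustrative choices):

```python
# Attenuation of the error when a load disturbance is not a step but is
# smoothed by a time constant (e.g., from an upstream loop's tuning).
# Factor from the text: 1 - exp(-lambda / disturbance_time_constant).
import math

def disturbance_attenuation(lam, tau_disturbance):
    """Fraction of the step-disturbance error that remains when the load
    change is filtered by tau_disturbance; lam is the loop's lambda."""
    return 1.0 - math.exp(-lam / tau_disturbance)

# Slowing the upstream loop (larger tau) shrinks the error seen downstream:
print(round(disturbance_attenuation(lam=10.0, tau_disturbance=10.0), 3))   # 0.632
print(round(disturbance_attenuation(lam=10.0, tau_disturbance=100.0), 3))  # 0.095
```

A disturbance smoothed by a time constant ten times the loop's lambda leaves less than a tenth of the step-disturbance error, which is why deliberately slowing a disturbing loop can pay off.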
Overshoot of a temperature or pH setpoint is extremely detrimental to bioreactor cell life and productivity. Making the loop response much slower, by much less aggressive tuning settings and a PID structure of Integral on Error and Proportional-Derivative on Process Variable (I on E and PD on PV), is greatly needed and permitted because the load disturbances from cell growth rate or production rate are incredibly slow (effective process time constant in days). In fact, fast disturbances are the result of one loop affecting another (e.g., pH and dissolved oxygen control).
In dryer control, the difference between inlet and outlet temperatures that is used as the inferential measurement of dryer moisture is filtered by a large time constant that is greater than the moisture controller’s reset time. This is necessary to prevent a spiraling oscillation from positive feedback.
Filters on setpoints are used in loops whose setpoint is set by an operator or a valve position controller to change the process operating point or production rate. This filter can provide synchronization in ratio control of reactant flow maintaining the ability of each flow loop to be tuned to deal with supply pressure disturbances and positioner sensitivity limits. However, a filter on a secondary lower loop setpoint in cascade control is generally detrimental because it slows down the ability of the primary loop to react to disturbances.
Finally, more controversial but potentially useful is a filter on the pH at the outlet of a static mixer for a strong acid and base controlled in the neutral region. Here the filter acts to average the inevitable extremely large oscillations due to the nearly nonexistent back mixing and the steep titration curve. The result is a happier valve and operator. The averaged pH setpoint should be corrected by a downstream pH loop on a well-mixed vessel that sees a much smoother pH on a much narrower region of the titration curve. A better solution is signal characterization: the static mixer controlled variable becomes the abscissa of the titration curve (reagent demand) rather than the ordinate (pH). This linearization greatly reduces the oscillations from the steep portion of the titration curve and enables a larger PID gain to be used. The titration curve need not be very accurate, but it must include the effect of absorption of carbon dioxide from exposure to air and the change in dissociation constants, and consequently actual solution pH, with temperature, which is not addressed by a standard temperature compensator that simply addresses the temperature effect in the Nernst equation. You also need to be aware that the pH of process samples, and consequently the shape of the titration curve, can change due to changes in sample liquid phase composition from reaction, evaporation, absorption, and dissolution. The longer the time between the sample being taken and being titrated, the more problematic these changes are.
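To illustrate the signal characterization idea, here is a small Python sketch with a purely hypothetical titration curve. The curve points, spans, and function names are my inventions; a real curve must be fit from lab titrations of actual process samples, with the carbon dioxide and temperature effects noted above.

```python
# Signal characterization for pH control: invert the titration curve so the
# controlled variable is reagent demand (abscissa) rather than pH (ordinate).
import bisect

# (reagent demand as % of span, resulting pH) - hypothetical, monotonic curve
curve = [(0.0, 2.0), (40.0, 3.0), (48.0, 4.0), (49.5, 5.0),
         (50.0, 7.0), (50.5, 9.0), (52.0, 10.0), (60.0, 11.0), (100.0, 12.0)]

def ph_to_demand(ph):
    """Piecewise-linear inverse of the titration curve: measured pH in,
    equivalent reagent demand out. Linearizes the steep neutral region."""
    phs = [p for _, p in curve]
    i = bisect.bisect_left(phs, ph)
    if i == 0:
        return curve[0][0]
    if i >= len(curve):
        return curve[-1][0]
    (x0, y0), (x1, y1) = curve[i - 1], curve[i]
    return x0 + (x1 - x0) * (ph - y0) / (y1 - y0)

# A swing from pH 5 to pH 9 looks huge on the pH scale but is only a 1%
# change in reagent demand, so the PID sees a nearly linear, gentle signal.
print(round(ph_to_demand(9.0) - ph_to_demand(5.0), 2))  # prints 1.0
```

Because the characterized signal moves by percent of reagent demand rather than pH units, a much larger PID gain can be used without the oscillation driven by the steep portion of the curve.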
The post, Solutions to Prevent Harmful Feedforwards, originally appeared on the ControlGlobal.com Control Talk blog.
Here we look at applications where feedforward can do more harm than good and what to do to prevent this situation. The problem is more common than one might think. In the literature we mostly hear how beneficial feedforward can be for measured load disturbances. Statements are made that the limitation is the accuracy of the feedforward and that consequently an error of 2% can still result in a 50:1 improvement in control. This optimistic view does not take into account process, load, and valve dynamics. The feedforward correction needs to arrive in the process at the same point and at the same time as the load disturbance. This is traditionally achieved by passing the feedforward (FF) signal through a deadtime block and a lead-lag block. The FF deadtime is set equal to the load path deadtime minus the correction path deadtime. The FF lead time is set equal to the correction path lag time. The FF lag time is set equal to the load path lag time. If the FF arrives too soon, we create inverse response; if the FF arrives too late, we create a second disturbance. Setting up tuning software to identify and compute the FF dynamics can be challenging. Even more problematic are the following feedforward applications that do more harm than good despite dynamic compensation.
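As a sketch of the traditional dynamic compensation just described, the following Python class (my own minimal discretization, not production DCS code) passes the FF signal through a deadtime buffer and a backward-Euler lead-lag. The class name, block settings, and scan interval are illustrative assumptions.

```python
# Feedforward dynamic compensation: deadtime block followed by a lead-lag.
from collections import deque

class FFDynamicCompensator:
    """ff_deadtime = load path deadtime minus correction path deadtime (>= 0),
    lead = correction path lag, lag = load path lag; dt is the scan interval."""
    def __init__(self, ff_deadtime, lead, lag, dt):
        self.buf = deque([0.0] * max(int(round(ff_deadtime / dt)), 0))
        self.lead, self.lag, self.dt = lead, lag, dt
        self.y = 0.0      # lead-lag output state
        self.u_prev = 0.0  # previous delayed input

    def update(self, ff_in):
        # deadtime: delay the feedforward by a whole number of scans
        self.buf.append(ff_in)
        u = self.buf.popleft()
        # lead-lag (lead*s + 1)/(lag*s + 1), backward-Euler discretization
        self.y = (self.lag * self.y + self.lead * (u - self.u_prev)
                  + self.dt * u) / (self.lag + self.dt)
        self.u_prev = u
        return self.y

# Unit step in the measured load at time zero, 1 sec scans:
comp = FFDynamicCompensator(ff_deadtime=2.0, lead=5.0, lag=10.0, dt=1.0)
out = [comp.update(1.0) for _ in range(12)]
print([round(v, 3) for v in out[:4]])  # delayed two scans, then the lead kick
```

The output stays at zero for the two deadtime scans, jumps partway on the lead action, then settles toward the step value at the lag rate, which is exactly the timing alignment the block settings are meant to buy.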
(1) Inverse response from the manipulated flow causes an excessive reaction in the opposite direction of the load. The inverse response from a feedwater change can be so large as to cause a boiler drum high or low level trip, a situation that particularly occurs for undersized drums and missing feedwater heaters resulting from misguided attempts to save on capital costs. The solution here is to use traditional three-element drum level control, but added to the traditional feedforward is an unconventional feedforward with the opposite sign that is decayed out over the period of the inverse response. In other words, for a step increase in steam flow, there would initially be a step decrease in the boiler feedwater feedforward added to the three-element drum level controller output that is trying to increase feedwater flow. This prevents shrink and a low level trip from bubbles collapsing in the downcomers due to an increase in cold feedwater. For a step decrease in steam flow, there would be a step increase in the boiler feedwater feedforward added to the three-element drum level controller output that is trying to decrease feedwater flow. This prevents swell and a high level trip from bubbles forming in the downcomers due to a decrease in cold feedwater. A severe problem of inverse response can occur in furnace pressure control when the scale is a few inches of water column and the manipulated incoming air is not sufficiently heated. The inverse response from the ideal gas law can cause a pressure trip. An increase in cold air flow causes a decrease in gas temperature and consequently a relatively large decrease in gas pressure at the furnace pressure sensor. A decrease in cold air flow causes an increase in gas temperature and consequently a relatively large increase in gas pressure at the furnace pressure sensor.
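A minimal sketch of the decayed opposite-sign feedforward for shrink and swell follows (my own illustration; the exponential decay shape, the helper name, and the 30-second inverse-response period are assumptions, not the original post's implementation):

```python
# Opposite-sign feedforward kick that decays out over the inverse-response
# period, added to the conventional three-element feedforward.
import math

def shrink_swell_ff(steam_step, t, decay_period):
    """Opposite-sign feedforward contribution at time t (sec) after a step
    change in steam flow. Assumed exponential decay, essentially gone
    (under 5%) after the decay_period (three time constants)."""
    tau = decay_period / 3.0
    return -steam_step * math.exp(-t / tau)

# Step increase in steam flow of +10% with a 30 sec inverse-response period:
# the kick starts at -10% (briefly holding back feedwater) and dies away.
for t in (0.0, 10.0, 30.0):
    print(round(shrink_swell_ff(10.0, t, decay_period=30.0), 2))
```

For a step decrease in steam flow the same function yields a positive kick, matching the swell case described above.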
(2) The deadtime in the correction path is greater than the deadtime in the load path. The result is a feedforward that arrives too late, creating a second disturbance and worse control than if there were no feedforward. This occurs whenever the correction path is longer than the load path. An example is distillation column control where the feed load upset stream enters closer to the temperature control tray than the corrective change in reflux flow. The solution is to generate the feedforward signal for ratio control based on a setpoint change that is then delayed before being used by the feed flow controller. The delay is equal to the correction path deadtime minus the load path deadtime. The same problem can occur for a reagent injection delay, which often arises from conventionally sized dip tubes and small reagent flows. The same solution applies in terms of using an influent flow controller setpoint for feedforward ratio control of reagent and delaying the setpoint used by the influent flow controller.
(3) The feedforward correction makes the response to an unmeasured disturbance worse. This occurs in unit operations such as distillation columns and neutralizers where the unmeasured disturbance from a feed composition change is made worse by a feedforward correction based on feed flow. Often feed composition is not measured, and the disturbance is large due to parallel unit operations and a combination of flows that become the feed flow. For pH, the nonlinearity of the titration curve increases the sensitivity to feed composition. Even if the influent pH is measured, the pH electrode error or the uncertainty of the titration curve makes a feedforward correction for feed pH do more harm than good for setpoints on the steep part of the curve. If the feed composition change requires a decrease in manipulated flow and there is a coincidental increase in feed flow that corresponds to an increase in manipulated flow, or vice versa, the feedforward does more harm than good. The solution is to compute the required rate of change of the manipulated flow from the unmeasured disturbance and combine it with the computed rate of change for the feedforward correction, paying attention to the signs of the rates of change. If the unmeasured disturbance's required rate of change of manipulated flow opposes and exceeds the computed feedforward correction rate of change in the manipulated flow, the feedforward rate of change is clamped at zero to prevent making control worse. If the rates of change for the manipulated flow are in the same direction, the magnitude of the feedforward rate of change is correspondingly increased.
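One sensible reading of this sign logic is sketched below (a hypothetical helper of my own, illustrating the clamping rule rather than any particular DCS implementation): opposing rates subtract, clamped at zero when the unmeasured requirement dominates, and aligned rates add.

```python
# Combine the feedforward's computed rate of change of manipulated flow
# with the rate implied by the unmeasured disturbance, clamping at zero
# rather than letting feedforward push the manipulated flow the wrong way.

def net_ff_rate(ff_rate, unmeasured_rate):
    """Both arguments are signed rates of change of the manipulated flow.
    Same direction: magnitudes add. Opposite direction: subtract, and
    clamp at zero if the unmeasured requirement dominates."""
    if ff_rate * unmeasured_rate >= 0:
        return ff_rate + unmeasured_rate
    combined = ff_rate + unmeasured_rate
    # opposite directions: never let feedforward fight the real need
    return combined if combined * ff_rate > 0 else 0.0

print(net_ff_rate(2.0, 1.0))    # same direction: 3.0
print(net_ff_rate(2.0, -3.0))   # opposing and dominant: clamped to 0.0
print(net_ff_rate(2.0, -1.0))   # opposing but smaller: 1.0
```

The clamp is the key safeguard: a coincidental feed flow change can no longer drive the manipulated flow against what the unmeasured composition change actually requires.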
I am trying to see how all this applies in my responses to known and unknown upsets to my spouse.
I have seen engineers and technologists thrown into the world of process instrumentation and control (PIC) with little or no knowledge of this engineering specialty—and they were expected to perform immediately. At best, they may have taken a course in control theory, which is very rarely (if ever) used in a plant environment.
PIC typically represents a substantial cost to an average industrial project. It’s a high-tech discipline critical to the success and survival of a plant, and yet it is typically learned “on the job.” Many people working in the discipline lack the proper training needed to make appropriate decisions. An error could result in a very expensive or hazardous situation.
Many of them don't know the basics. Over the years, PIC personnel have come to me with questions such as, “How does an orifice plate work? With a square root output? Why?” and “How can I describe all this logic? In a logic diagram? What’s that?”
Worse, I have seen so-called experienced PIC personnel facing a ground loop problem because both the transmitter and receiver had their signals grounded. The solution they took? They went back to the vendor of the receiver and asked to have the equipment isolated from ground. In other words, a modification to an electrically-approved off-the-shelf product. The cost of modifying the circuit boards on these fancy receivers and obtaining the required approvals—and there were 20 of them—was astronomical. The experienced personnel and their supervisor had never heard of a loop isolator. Unbelievable, but true.
My examples could go on, filling a few pages. However, I will stop here as the topic of this article is not about listing my complaints. But you can understand what is typically encountered due to lack of knowledge, which is due to the lack of good training.
This is not the fault of the people doing the work. They were never properly trained. The result of this lack of training is poor performance and longer times to correctly implement control systems in a competitive environment squeezed by tight budgets and stiff competition.
There are two main problems facing the need for training: time and money. Time is a problem because organizations operate with a skeleton staff and therefore, it is very hard for a manager to let an employee take time off for training. Money is another problem as budgets are tight, and global competition does not leave much room for “extra” spending on training. In addition to course fees, there is also the cost of traveling and accommodation expenses to a location where face-to-face training is provided.
In addition, many engineering associations have now implemented a requirement for continuing professional development (CPD) for their members. Under such a requirement, members must provide a declaration of competence combined with a report of how they are maintaining competence in their discipline. So, on top of the time and money issues, we now have CPD requirements. What can be done?
Training is available in different formats, each with its advantages and disadvantages. It can be provided in the classical form of face-to-face instruction in regular classrooms. However, face-to-face teaching is relatively expensive because the student must travel to the remote location where the class is conducted. In addition, the employee is absent from his/her workplace.
A multitude of face-to-face courses are available from equipment vendors and manufacturers, training companies, universities, and technical colleges. The majority of them are not in a sequential format that allows a person to start with the basics and move on to more complex topics. In addition, quite often these courses are either too theoretical or geared toward someone who already has a reasonably good basic knowledge of PIC.
Self-teach programs are another training format, available either as self-teach books or as software loaded onto personal computers, some of it interactive. This solution is probably the lowest in cost. However, without an instructor available to answer questions, it is up to the student to understand the information at hand and, more important, to have the self-discipline to proceed and complete the learning process independently. In the end, self-teach programs do not typically provide proof of successful completion and understanding by the student.
How about those who want to learn about PIC in an organized fashion, in a condensed time frame, from a practical point of view, with limited training funds and without the (almost impossible) absence from work? The solution is instructor-led quality online learning. This approach provides training without the student having to travel, keeping the personnel on site and costs reduced to a relatively affordable minimum.
Online education allows a student to progress at a relatively convenient pace. With good instructional material fit to the course, students learn and complete quizzes and exams to confirm their acquired knowledge. This approach, with an instructor to answer questions, provides an incentive to finish the study program. It is followed by a certificate obtained on successful completion of quizzes and exams, and is relatively low in cost while keeping the student available at work since the online sessions are typically held in the evening.
I have been teaching university-based online PIC courses for about eight years. I have learned through trial and error as well as through students’ comments and suggestions that the most effective approach for a quality PIC online course is to present it in three modules spread over a year. Such a course would cover the different facets of PIC from a non-mathematical and practical point of view. The spread over one year allows the students to gradually apply and practice some of the information learned. It also avoids students’ information overload.
Including theory such as Laplace Transform, Bode Plots and the like in a PIC practical course has little value in day-to-day plant operation. And speaking from personal experience, this type of theoretical information would be forgotten shortly after the course is completed.
To the best of my knowledge, such online, instructor-led, university-based PIC training programs are presently taught in North America by three institutions. All three use the same award-winning reference book published by ISA, titled The Condensed Handbook of Measurement and Control.
In the United States, the course is offered by: Penn State University - Berks (phone# 610-396-6221) and University of Kansas Continuing Education (phone# 1-877-404-5823 or 785-864-5823).
In Canada, the course is offered by: Dalhousie University Continuing Education (phone # 800-565-1179 or 902-494-6079).
These three organizations offer a university certificate that is awarded after the successful completion of the three modules, including all quizzes and final exams. The three modules of these certificate programs amount to approximately 150 classroom hours. The universities recommend that participants attend Modules 1, 2, and 3 in sequential order; however, some students, due to their prior knowledge of PIC, take the modules in a different order and have successfully passed all quizzes and exams.
I successfully instructed face-to-face PIC courses for more than 10 years in many industrial plants, at ISA functions, and at several North American universities. Then, due to a substantial drop in student enrollment following the financial problems of 2018-2019, I started online training at two universities. At the beginning, I was hesitant about the potential effectiveness and success of online training. I have now changed my mind. In addition to avoiding the costs and time lost away from the workplace, online training has proven to be effective and practical for students. A five-fold increase in the number of students occurred when the online course replaced the face-to-face course, proving its success and benefits.
Online courses have their limitations. They can replace many face-to-face courses, but not all. For example, online learning can’t provide hands-on training such as control equipment maintenance. Dedicated training facilities provide such training, often at a vendor’s facility.
The main benefits of an online, university-based, and instructor-led certificate program come down to this: online PIC training, when accompanied by a good reference book, quality course notes, quizzes, and exams, provides students with the knowledge and confidence needed to grasp this field of technology.
As a final note, if you think education is expensive, try ignorance. You'll find it more expensive.
N.E. (Bill) Battikha, PE, president, Bergotech / firstname.lastname@example.org
About the Author
N.E. (Bill) Battikha, P.E., has more than 40 years of experience in PIC, working mainly in the USA and Canada. He holds a Bachelor of Science in Engineering and is a member of the Delaware Association of Professional Engineers. Throughout his career, Bill has gained a lot of experience in management, engineering and training. Bill has generated and conducted training courses for many universities in the USA and Canada, including Penn State University, the University of Wisconsin, Kansas State University, the University of Toronto and Dalhousie University. He co-authored a patent and a commercial software package. He also wrote four books on PIC, all published by the ISA, with the third one (The Condensed Handbook of Measurement and Control) twice awarded the Raymond D. Molloy Award as an ISA best-seller. Bill is the president of Bergotech Inc., a firm specializing in teaching online engineering courses in a variety of disciplines as well as implementing university-based online programs. For more info, or to contact the author, please visit www.bergotech.com
The post What are New Technologies and Approaches for Batch and Continuous Process Control? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
What is the technical basis and ability of technologies other than PID and model predictive control (MPC)? These technologies seem fascinating and I would like to know more, particularly as I study for the ISA Certified Automation Professional (CAP) exam.
Michel Ruel has achieved considerable success in the use of fuzzy logic control (FLC) in mineral processing as documented in “Ruel’s Rules for Use of PID, MPC and FLC.” The process interrelationships and dynamics in the processing of ores are not defined due to the predominance of missing measurements and unknown effects. Mineral processing PID loops are often in manual, not only for the usual reasons of valve and measurement problems, but also because process dynamics between a controlled and manipulated variable radically change, including even the sign of the process action (reverse or direct) based on complex multivariable effects that can’t be quantified.
If the FLC configuration and interface is set up properly for visibility, understandability and adjustability of the rules, the plant can change the rules as needed, enabling sustainable benefits. In the application cited by Michel Ruel, every week metallurgists validate the rules and work with control engineers to make slight adjustments. A production record was achieved in the first week. The average use of energy per ton decreased by 8 percent, and the tonnage per day increased by 14 percent.
There have been successful applications of PID and MPC in the mining industry as detailed in the Control Talk columns “Process control challenges and solutions in mineral processing” and “Smart measurement and control in mineral processing.”
I have successfully used FLC on a waste treatment pH system to prevent RCRA violations at a Pensacola, Fla. plant because of my initial excitement about the technology. It did very well for decades but the plant was afraid to touch it. The 2007 Control magazine article “Virtual Control of Real pH” with Mark Sowell showed how you could replace the FLC with an MPC and PID strategy that could be better maintained, tuned and optimized.
We used FLC integrated into the software of a major supplier of expert systems in the 1980s and 1990s, but there were no real success stories for FLC. There was one successful application of an expert system for a smart level alarm, but it did not use FLC, and a simple material balance could have done as well. There were several applications for smart alarms that were turned off. After nearly 100 man-years of effort, we have little to show for these expert systems. You could add a lot of rules for FLC and logic based on the expertise of the application developer, but how these rules played together, and telling which rule needed to be changed, was a major problem. When the developer left the production unit, operators and process engineers were not able to make the changes inevitably needed.
The standalone field FLC advertised for better temperature setpoint response cannot do better than a well-tuned PID if you use all of the PID options summarized in the Control magazine article “The greatest source of process control knowledge,” including a PID structure such as 2 Degrees of Freedom (2DOF) or a setpoint lead-lag. You can also use gain scheduling in the PID if necessary. The problem with FLC is how you tune it and update it for changing process conditions. I wrote the original section on FLC in A Guide to the Automation Body of Knowledge, but by mutual agreement between me and ISA the next edition omits it, because making more room to help get the most out of your PID was more generally useful.
FLC has been used in pulp and paper. I remember instances of FLC for kiln control but since then we have developed much better PID and MPC strategies that eliminate interaction and tuning problems.
As for artificial neural networks (ANN), I have seen some successful applications in batch end point detection and prediction and for inferential dryer moisture control. The insertion of time delays on inputs to make them coincide with the measured output is required for continuous operations. For plug flow operations like dryers, this can be readily done since the dead time is simply the volume divided by the flow rate. For continuous vessels and columns, the insertion of very large lag times and possibly a small lead time is needed besides dead time. No dynamic compensation is needed for batch operation end point prediction.
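The dead time alignment described above for plug flow operations can be sketched in a few lines. This is a minimal illustration with hypothetical names and numbers, not vendor code; the key point is that each model input is shifted back in time by the transport delay so it lines up with the measured output.

```python
# Sketch: aligning model inputs with a measured output by inserting dead time.
# For a plug flow dryer, dead time ~= volume / flow rate (assumed constant here).

def deadtime_samples(volume, flow_rate, sample_period):
    """Dead time expressed in samples for a plug flow volume."""
    return int(round((volume / flow_rate) / sample_period))

def delay_input(series, samples):
    """Shift an input series back in time so it lines up with the output,
    padding the start with the first value."""
    if samples == 0:
        return list(series)
    return [series[0]] * samples + list(series[:-samples])

# Hypothetical example: 2 m3 dryer at 0.5 m3/min with 1 min sampling
n = deadtime_samples(2.0, 0.5, 1.0)              # 4 samples of dead time
aligned = delay_input([10, 11, 12, 13, 14, 15], n)
```

For continuous vessels and columns, a lag filter (and possibly a small lead) would be applied to each input in addition to this delay, as noted above.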
You have to be very careful not to go outside of the test data range because of bizarre nonlinear predictions. You can also get local reversals of the process gain sign causing buzzing if the predicted variable is used for closed loop control. Finally, you need to eliminate correlations between inputs. I prefer multivariate statistical process control (MSPC), which eliminates cross correlation of inputs by virtue of principal component analysis and does not exhibit process gain sign reversals or bizarre nonlinearity upon extrapolation outside of the test data range. Also, MSPC can provide a piecewise linear fit to nonlinear batch profiles, a technique we commonly use with signal characterizers for any nonlinearity. I think there is an opportunity for MSPC to provide more intelligent and linear variables for an MPC like we do with signal characterizers.
For any type of analysis or prediction, whether using ANN or MSPC, you need to have inputs that show the variability in the process. If a process variable is tightly controlled, the PID or MPC has transferred the variability to the manipulated variable. Ideally, flow measurements should be used, but if only position or a speed is available and the installed flow characteristic is nonlinear, signal characterization should be used to convert position or speed to a flow.
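The position-to-flow signal characterization mentioned above is just a piecewise linear lookup of the installed flow characteristic. A minimal sketch, assuming a hypothetical set of test points for a valve:

```python
import numpy as np

# Hypothetical test points from an installed flow characteristic:
# valve position (%) versus measured flow (m3/h), monotonically increasing.
position_pts = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]
flow_pts     = [0.0,  8.0, 20.0, 38.0, 60.0,  70.0]

def position_to_flow(position):
    """Piecewise linear signal characterizer: convert % position to flow
    so the characterized signal can serve as an inferential flow input."""
    return float(np.interp(position, position_pts, flow_pts))

flow = position_to_flow(50.0)   # midpoint of the 40-60% segment
```

The same table, reversed, gives the output characterizer that linearizes the loop when the controller manipulates position.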
I implemented a neural network some years ago on a distillation column level control. The column was notoriously difficult to control. The level would swing all over and anything would set it off, such as weather or feed changes. The operators had to run it in manual because automatic was a hopeless waste of time.
At the time (and this information might be dated) the neural network was created by bringing a stack of parameters into the calculation and “training” it on the data. Theoretically the calculation would strengthen the parameters that mattered, weaken the parameters that didn’t, and eventually configure itself to learn the system.
The process taught me much. Here are my main learning points:
1) Choose the training data wisely. If you give it straight line data then it learns straight lines. You need to teach it using upset data so it learns what to do when things go wrong. (Then use new upset data to test it.)
2) Choose the input parameters wisely. I started by giving it everything. Over time I came to realize that the data it needed wasn’t the obvious. In this case it needed:
3) Ultimately the system worked very well – but honestly by the time I had gone through four iterations of training and building the system I KNEW the physics behind it. The calculation for controlling the level was fairly simple when all was said and done. I probably could have just fed it into a feedforward PID and accomplished the same thing.
The experience was interesting and fun, and I actually got an award from ISA for the work. However, when all was said and done, I realized it wasn’t nearly as impressive a tool as all the marketing brochures suggested. (At the time it was all the rage – companies were selling neural network controller packages and magazine articles were predicting it would replace PID in a matter of years.)
Thank you, this is a lot more practical insight than I have been able to glean from the books.
I imagine the batch data analytics program offered by a major supplier of control systems is an example of the MSPC you mentioned. I think I have some papers on it stashed somewhere, since we have considered using it for some of our batch systems. What is batch data analytics and what can it do?
Yes, batch data analytics uses MSPC technology with some additional features, such as dynamic time warping. The supplier of the control system software worked with Lubrizol’s technology manager Robert Wojewodka to develop and improve the product for batch processes as highlighted in the InTech magazine article “Data Analytics in Batch Operations.” Data analytics eliminates relationships between process inputs (cross correlations) and reduces the number of process inputs by the use of constructed principal components that are orthogonal and thus independent of each other in a plot of a process output versus principal components. For two principal components, this is readily seen as an X, Y and Z plot with each axis at a 90-degree angle to the others. The X and Y axes cover the range of values of the principal components and the Z axis is the process output. The user can drill down into each principal component to see the contribution of each process input. The use of graphics to show this can greatly increase operator understanding. Data analytics excels at identifying unsuspected relationships. For process conditions outside of the data range used in developing the empirical models, linear extrapolation helps prevent bizarre extraneous predictions. Also, the use of a piecewise linear fit means there are no humps or bumps that cause a local reversal of process gain and buzzing.
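The principal component step at the heart of MSPC can be sketched with a singular value decomposition: mean-center the inputs and project them onto the first two loading vectors. The data below is synthetic (a real application would use batch histories), but it shows the key property claimed above: the resulting scores are orthogonal even when the raw inputs are cross correlated.

```python
import numpy as np

# Synthetic inputs with a deliberate cross correlation between columns 0 and 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X[:, 1] = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)

Xc = X - X.mean(axis=0)                      # mean-center each input
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # PC1 and PC2 scores per row

# The score columns are orthogonal (independent), unlike the raw inputs.
cross = float(scores[:, 0] @ scores[:, 1])
```

Drilling down into a principal component corresponds to inspecting the rows of `Vt`, whose entries give each process input's contribution.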
Batch data analytics (MSPC) does not need to identify the process dynamics because all of the process inputs are focused on a process output at a particular part of the batch cycle (e.g., endpoint). This is incredibly liberating. The piecewise linear fit to the batch profile enables batch data analytics to deal with the nonlinearity of the batch response. The results can be used to make mid-batch corrections.
There is an opportunity for ANN to be used with MSPC to deal with some of the nonlinearities of inputs, but the proponents of MSPC and ANN often think their technology is the total solution and don’t work together. Some even think their favorite technology can replace all types of controllers.
Getting laboratory information on a consistent basis is a challenge. I think for training the model, you could enter the batch results manually. When choosing batches, you want to include a variety of batches, but all with normal operation (no outliers from failures of devices or equipment or improper operations). The applications noted in the Wojewodka article emphasize that what you want as a model is the average batch and not the best batch (not the “golden batch”). I think this is right for starting to detect abnormal batches, but process control seeks to find the best and reduce the variability from the best, so eventually you want a model that is representative of the best batches.
I like MSPC “worm plots” because they tell me from tail to head the past and future of batches, with tightness of the coil adding insight. The worm plot is a series of batch end points expressed as a key process variable (PV1n) that is plotted as scores of principal component 1 (PC1) and principal component 2 (PC2).
If you want to do some automated correction of the prediction by taking a fraction of the difference between the predicted result and lab result, you would need to get the lab result into your DCS probably via OPC or some lab entry system interfaced to your DCS. Again the timing of the correction is not important for batch operations. Whenever the bias correction comes in, the prediction is improved for the next batch. The bias correction is similar to what is done in MPC and the trend of the bias is useful as a history of how the accuracy is changing and whether there is possibly noise in the lab result or model prediction.
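The bias correction described above is a simple fractional update toward the latest lab error. A minimal sketch (the fraction of 0.3 is an illustrative choice, not a recommendation):

```python
def update_bias(bias, lab_result, predicted, fraction=0.3):
    """Move the model bias a fraction of the way toward the latest
    lab-versus-prediction error. A fraction below 1 filters noise in
    the lab result; trending the bias shows how accuracy is changing."""
    return bias + fraction * (lab_result - predicted)

# Hypothetical batch: model predicted 93.0, lab later reported 95.0
bias = update_bias(0.0, lab_result=95.0, predicted=93.0)
corrected_prediction = 93.0 + bias   # used for the next batch
```

As noted above, the timing of the correction is not critical for batch operations; whenever the lab result arrives, the next batch's prediction improves.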
The really big name in MSPC is John F. MacGregor at McMaster University in Ontario, Canada. McMaster University has expanded beyond MSPC to offer a process control degree. Another big name there is Tom Marlin, who I think came originally from the Monsanto Solutia Pensacola Nylon Intermediates plant. Tom gives his view in the InTech magazine article “Educating the engineer,” Part 2 of a two-part series. Part 1 of the series, “Student to engineer,” focused on engineering curriculum in universities.
For more on my view of why some technologies have been much more successful than others, see my Control Talk blog “Keys to Successful Control Technologies.”
The post, Common Mistakes not Commonly Understood - Finale, first appeared on the Controlglobal.com Control Talk blog.
Here is our finale just in time to serve as a momentous process control gift for the Holidays. Just don’t try to re-gift this to anyone unless they are into the automation profession or you big time.
(21) Misuse and Missing Use of Setpoint Filter. The use of a setpoint filter on a secondary loop setpoint will slow down the ability of the primary loop to make a correction for changes in load to or the setpoint of the primary controller in cascade control. For this reason, there has been a general rule of thumb that a setpoint filter should not be used on secondary controllers. However, as with most rules of thumb, there are important exceptions derived from a deeper understanding. The setpoint filter on the secondary loop does not interfere with the ability of the secondary loop to reject disturbances and to deal with nonlinearities within the secondary loop, which is often its most frequent and important role. If the setpoint filter is judiciously applied so that it is less than 10% of the primary loop dead time, the effect on the ability of the primary loop to reject disturbances originating in the primary loop is negligible. The use of a judicious setpoint filter can ensure there are no temporary unbalances from changes of multiple flows under ratio control by enabling all the flows to move in concert. This is critical for reactant flows and the inline mixing of any flows. Often this unbalance was prevented by tuning the secondary flow loops to have the same closed loop time constant. Unfortunately, this forces the tuning of loops to be as slow as the slowest or most nonlinear flow loop. This detuning reduces the ability of the other loops to deal with their pressure disturbances and the nonlinearities of their installed flow characteristics. Also, the use of setpoint rate limits that are different up and down gives directional move suppression to provide a fast approach to a better operating condition and a fast getaway from an undesirable operating point in the process variable (PV) or manipulated valve position.
This is important to provide a fast opening and slow closing surge valve for compressor control, to optimize a user valve position to improve process efficiency or maximize production by valve position control, and to prevent oscillations across a split range point. For primary loops, a setpoint filter time equal to the reset time is the same as a PID structure of Proportional and Derivative on PV and Integral on Error (PD on PV, I on Error), so that the setpoint and load responses are the same. The addition of a lead time on the setpoint that is about 25% of the setpoint filter, where the filter is the lag time of a lead-lag block, enables a faster setpoint response.
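The setpoint lead-lag described above can be sketched as a simple discrete filter. This is one common discrete form, not a vendor implementation; times are illustrative, with the lead set to 25% of the lag per the guidance above.

```python
class LeadLag:
    """Discrete lead-lag block used as a setpoint filter."""
    def __init__(self, lead, lag, dt):
        self.lead, self.lag, self.dt = lead, lag, dt
        self.y = None
        self.u_prev = None

    def update(self, u):
        if self.y is None:                 # initialize bumplessly at first call
            self.y, self.u_prev = u, u
            return u
        du = u - self.u_prev               # lead term acts on setpoint change
        self.y += (self.dt / (self.lag + self.dt)) * (
            u + (self.lead / self.dt) * du - self.y)
        self.u_prev = u
        return self.y

# Hypothetical setpoint step from 0 to 10 with lead = 25% of a 10 s lag
f = LeadLag(lead=2.5, lag=10.0, dt=1.0)
f.update(0.0)
first = f.update(10.0)   # initial kick exceeds a pure lag's response
```

With the lead at zero, the same block reduces to the plain setpoint lag filter discussed for secondary flow loops.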
(22) Choosing and Achieving Level Control Objective. We are increasingly becoming aware that level loops are tuned too aggressively, causing rapid changes in the manipulated flow that upset downstream unit operations. The solution for level loops that need loose control is not to simply reduce the level controller gain, because this can cause slow rolling oscillations. The tuning objective is to minimize the transfer of variability from level to the manipulated flow, more often stated as the maximization of the absorption of variability. The solution is to first increase the reset time dramatically, by one or two orders of magnitude, and then decrease the PID gain so that the product of the PID gain and reset time is greater than twice the inverse of the integrating process gain, whose units are %/sec/% (1/sec), as discussed in Mistake 2 in Part 1. For surge tank level control, the objective is obviously maximization of the absorption of variability. This objective has gained such popularity that the cases where the level controller must be tuned tightly are not recognized and addressed. In fact, some may say there are no such cases and that feedforward control can take care of providing tighter level control when needed. There are exceptions. The biggest one that comes to mind is the distillate receiver level controller that manipulates reflux flow. Tight level feedback control achieves internal reflux control, where changes in column top temperature, particularly from blue northers, cause a change in overhead distillate flow and hence distillate receiver level, resulting in a correction of the manipulated reflux flow in the direction that minimizes the disturbance. Another case is where tight residence time control in continuous reactors provides enough time to complete a reaction but not so much time as to cause side reactions or polymer buildup.
A change in production rate must result in a change in level setpoint that must be reached quickly by tight level control so that residence time (level/flow) is as constant as possible. A similar concern may exist for continuous crystallizers. For multiple effect evaporators, changes in discharge flow from the last stage to control product solids concentration must be translated to changes in feed coming into each stage by its level controller to affect product concentration. There are similar requirements whenever an upstream flow is manipulated to control a level, such as a raw material makeup flow to deal with changes in a recovered recycle flow. While feedforward flow and ratio control can help, good level control deals with the inevitable errors that cause unbalances in stoichiometry.
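The stability constraint for loose level control cited in Mistake 22 (the product of PID gain and reset time greater than twice the inverse of the integrating process gain) can be expressed as a one-line check. A sketch of the constraint only, with hypothetical numbers, not a complete tuning method:

```python
def min_reset_time(pid_gain, integrating_gain):
    """Smallest reset time (sec) that avoids slow rolling oscillations:
    Kc * Ti > 2 / Ki, where Ki is the integrating process gain in
    %/sec/% (1/sec)."""
    return 2.0 / (pid_gain * integrating_gain)

# Hypothetical surge tank: Ki = 0.0001 1/sec, loose control with Kc = 0.5
ti_floor = min_reset_time(0.5, 0.0001)   # reset time must exceed this (sec)
```

This shows why simply cutting the gain without first increasing the reset time invites oscillation: halving Kc doubles the required reset time.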
(23) Misunderstanding of Load Disturbances. There is a huge disconnect between the literature and what really happens in a plant in terms of the supposed location of a disturbance. The literature, and consequently many tuning methods and new algorithms supposedly better than PID, are based on the disturbance being on the process output, downstream of time constants and dead times in the process, in most cases even ignoring any time constants or dead times in the measurement. This view is convenient for thinking model predictive control and internal model control are best for disturbance rejection and that tuning for setpoint changes is sufficient, since a disturbance on the process output is as quick as a change in setpoint. The reality is that nearly all disturbances occur as a process input and are delayed by process dead times or slowed by process time constants. For lag dominant processes, this recognition is particularly important and is the basis of switching from self-regulating tuning rules, where lambda is a closed loop time constant for a setpoint change, to integrating process tuning rules, where lambda is an arrest time for rejection of a load disturbance on the process input. Of course there are a few exceptions where the disturbance is on a process output that would benefit from a larger reset time, but this can be identified by tuning the controller for a setpoint response. Also, when in doubt, a larger reset time is always a good thing to try since integral action is destabilizing. More proportional action can be stabilizing as discussed in Mistakes 2 and 3 in Part 1.
(24) Missing Automated Startup. Often loops cannot simply be put in automatic for startup. The controller's approach to setpoint is often not smooth or consistent with other controllers' approaches to setpoint. Often operators manually position valves to get the process to a reasonable operating state before going to automatic control. The best practices of the best operators can be automated and implemented with much better timing and repeatability, enabling continuous improvement by better recognition of what is left to be addressed. If operators say the situation is too complex or conditional on their expertise to be automated, it is an even greater opportunity and motivation for automation. For much more on how procedural automation can be used for startup and dealing with abnormal situations, see the Sept 2016 Control Talk column “Continuous improvement for continuous processes.”
(25) Missing Ratio Control. Nearly all process inputs are flows that have a specific ratio to each other for a unit operation as seen on a Process Flow Diagram. The simple use of ratio control is inherently powerful: a “leader” flow is chosen, often a major feed flow, and the other flow controller setpoints designated as “followers” are ratioed to the “leader” flow controller setpoint. If the flows need to work in concert with each other, a filter is applied to each flow setpoint, including the “leader” flow, for reasons noted in Mistake 21. The actual ratio must be displayed for the operator based on measured flows, and the operator must be given the ability to change the ratio for startup and abnormal operating conditions via a ratio controller for each “follower” flow. The ratio often has a feedback correction by a primary temperature or composition controller output. For plug flow volumes, conveyors and sheet lines, the feedback correction changes the ratio setpoint. For back mixed volumes, the feedback correction biases the ratio controller output. For more on ratio control, see the 1/31/2017 Control Talk Blog “Uncommon Knowledge for Achieving Best Feedforward and Ratio Control.”
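The leader/follower arithmetic above is simple enough to sketch directly. Names and numbers are illustrative; in a real system each follower would also carry the setpoint filter and feedback trim described in the text.

```python
def follower_setpoints(leader_sp, ratios, bias=None):
    """Compute 'follower' flow setpoints as ratios of the 'leader' flow
    setpoint. For back mixed volumes the feedback correction would enter
    as a bias; for plug flow it would adjust the ratio itself."""
    bias = bias or {name: 0.0 for name in ratios}
    return {name: leader_sp * r + bias.get(name, 0.0)
            for name, r in ratios.items()}

# Hypothetical unit: leader feed at 100 units, two follower streams
sps = follower_setpoints(100.0, {"reactant_b": 0.5, "solvent": 2.0})
```

A production rate change then only requires moving the leader setpoint; every follower moves in proportion, keeping the stoichiometry intact.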
(26) Misleading response time statements. The term response time has no value unless a percentage of the final response is noted. For linear systems, the 63% response time is the dead time plus one time constant. The 86% response time often used for valve response is the dead time plus two time constants. The 95% and 98% response times are the dead time plus three and four time constants, respectively. Waiting for the 98% response takes a lot of time making the test vulnerable to changing conditions and disturbances. For large distillation columns, it could take days to see a 98% response.
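For a first-order-plus-dead-time response, the percentage response times quoted above follow directly from the exponential step response. A small sketch with illustrative numbers:

```python
import math

def response_time(dead_time, time_constant, fraction):
    """Time to reach a given fraction (0 < fraction < 1) of the final
    response for a first-order-plus-dead-time process:
    t = dead_time - time_constant * ln(1 - fraction)."""
    return dead_time + time_constant * -math.log(1.0 - fraction)

# Hypothetical loop: 10 s dead time, 30 s time constant
t63 = response_time(10.0, 30.0, 0.632)   # ~ dead time + 1 time constant
t98 = response_time(10.0, 30.0, 0.98)    # ~ dead time + ~4 time constants
```

The widening gap between t63 and t98 is why waiting for a 98% response makes a test so vulnerable to changing conditions and disturbances.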
(27) Not knowing the dead time to time constant ratio. The tuning and performance of a control loop for self-regulating processes depends heavily upon the dead time to time constant ratio. Most studies in the literature that are smart enough to include dead time are for loops with a dead time to time constant ratio between 0.5 and 2, termed “balanced” self-regulating processes. When the dead time to time constant ratio is much larger than 1, the process is termed “dead time dominant”; the reset time can be significantly decreased, and the PID gain should be decreased to reduce the reaction to noise and the abrupt response due to the lack of a significant time constant. For more on these processes see the 12/1/2016 Control Talk Blog “Deadtime Dominance - Sources, Consequences and Solutions.”
When the dead time to time constant ratio is less than ¼, the process is termed “lag dominant” or “near-integrating.” Integrating process tuning rules should be used that increase the reset time and PID gain to account for the reduced degree of negative feedback in the process. In the time frame of a major PID reaction (4 dead times), the process ramps and appears similar to an integrating process. The integrated absolute error (IAE) for all processes is proportional to the ratio of the reset time to controller gain. The peak error for “dead time dominant” processes approaches the open loop error (the error if the controller were in manual). The peak error for “lag dominant” processes is inversely proportional to the controller gain, since the PID gain can be quite high, dominating the initial response. For more on how the dead time to time constant ratio affects performance see the 7/17/2017 Control Talk Blog “Insights to Process and Loop Performance.”
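One common form of the integrating process tuning rules mentioned above uses lambda as an arrest time. This is a sketch of that form only, with hypothetical numbers; verify the exact rule against your own tuning references before use.

```python
def integrating_lambda_tuning(Ki, dead_time, lam):
    """Lambda (arrest time) tuning for a near-integrating process.
    Ki: integrating process gain (1/sec); lam: arrest time (sec).
    One commonly cited form: Ti = 2*lam + dead_time,
    Kc = Ti / (Ki * (lam + dead_time)**2)."""
    Ti = 2.0 * lam + dead_time
    Kc = Ti / (Ki * (lam + dead_time) ** 2)
    return Kc, Ti

# Hypothetical loop: Ki = 0.001 1/sec, 10 s dead time, 50 s arrest time
Kc, Ti = integrating_lambda_tuning(Ki=0.001, dead_time=10.0, lam=50.0)
```

Note how shrinking the arrest time raises the gain sharply: the reduced negative feedback of a near-integrating process is compensated by more aggressive proportional action.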
(28) Unnecessary crossings of split range point. The valve stiction and nonlinearity and the process discontinuity are greatest when switching from the manipulation of one valve and stream to another. Once a controller output crosses a split range point, the tendency is to oscillate back and forth unless there is a predominant need for one stream versus the other. Putting a dead band into the split range point will cause oscillations if there are two integrators, whether from integrating action in the process or from the integral mode in a cascade loop or positioner. The best way to prevent an unnecessary crossing of the split range point is to put up and down rate limits on a valve or flow controller setpoint, where the rate limit is slow in the direction of going back to the split range point if there is no safety issue. For cooling and heating, the movement toward heating across the split range point may be slowed down for temperature control. For venting and gas inlet flows, the movement toward more inlet flow across the split range point may be slowed down for pressure control. For mammalian bioreactor pH control, the movement toward adding a base across the split range point, such as sodium bicarbonate, is slowed down to reduce sodium ion accumulation that increases cell osmotic pressure and cell lysis. External reset feedback of valve position or flow to the primary PID should be used so that the primary PID (temperature, pressure or pH) does not try to change a valve or flow faster than it can respond. This is a great feature in general for cascade control and to provide directional move suppression for valve position control and surge control. For more on external reset feedback see the 4/26/2012 Control Talk Blog “What is the Key PID Feature for Basic and Advanced Control.”
Figure: Corrected and improved time-domain block diagram for a PID with the positive feedback implementation of integral action for the ISA Standard Form.
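The directional rate limits recommended above can be sketched as a simple clamp on the setpoint change per execution, with different limits up and down. Rates and times here are illustrative, not recommendations.

```python
def rate_limited(target, current, up_rate, down_rate, dt):
    """Directional rate limiting of a valve or flow setpoint: fast in
    one direction, slow in the other (e.g. a fast-opening, slow-closing
    surge valve, or slow movement back toward a split range point)."""
    step = target - current
    max_up = up_rate * dt
    max_down = down_rate * dt
    if step > max_up:
        step = max_up
    elif step < -max_down:
        step = -max_down
    return current + step

# Hypothetical surge valve: open fast (50 %/s), close slowly (2 %/s), 1 s updates
pos = rate_limited(100.0, 0.0, up_rate=50.0, down_rate=2.0, dt=1.0)   # opens fast
pos = rate_limited(0.0, pos, up_rate=50.0, down_rate=2.0, dt=1.0)     # closes slowly
```

With external reset feedback of the actual (rate-limited) position, the primary PID does not wind up while the setpoint is being slowed.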
(29) Minimizing instrumentation cost. We get hung up on saving a few thousand dollars while often putting millions of dollars at stake in terms of poor process performance and inadvertent shutdowns. A big mistake is allowing a packaged equipment supplier to choose the instrumentation, since the supplier seeks to win the low-bid contest. Similarly, allowing purchasing to decide which instruments to buy is fundamentally bad. We need to take our knowledge about instrumentation performance and insist on the best, even when the justification is not clear and pressure is applied to lower system cost. My cohort Stan Weiner would purposely increase the initial estimates for projects to give him the freedom to choose the best instrumentation. He favored inline meters such as magnetic flow meters and Coriolis meters over differential head meters because of their better rangeability, accuracy, maintainability and insensitivity to the piping configuration. Similarly, for temperatures less than 400 degrees C, I use RTDs with “sensor matching” instead of thermocouples to reduce drift and improve accuracy and sensitivity by orders of magnitude despite a few hundred dollars more in cost.
(30) Lack of middle signal selection. The best way to avoid unnecessary shutdowns, eliminate the reaction to any possible type of single failure (including a measurement stuck at setpoint), reduce the reaction to noise, spikes and drift, and provide intelligence as to what is wrong with a measurement is middle signal selection of three independent measurements. For pH this is almost essential. To me it is bizarre how multimillion-dollar bioreactor batches are put at risk by using two instead of three pH electrodes, leaving it anybody's guess as to which electrode is right. Some batches can be ruined by a pH that is off by just a few tenths, yet engineers are reluctant to spend a couple of thousand dollars upfront, not realizing that even if you disregard the cost of a potentially spoiled batch, the reduction in unnecessary maintenance more than pays for the extra electrode. For one large intermediate continuous process, the use of middle signal selection on all of the measurements used by the Safety Instrumented System (SIS) reduced the number of shutdowns from two per year to less than one every two years, saving tens of millions of dollars each year. The risk of a disastrous operator mistake was also greatly reduced, because startup is the most difficult and hazardous mode of operation.
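Middle signal selection is literally the median of three independent measurements, which is why it rides through any single failure. A minimal sketch with a hypothetical pH example:

```python
def middle_of_three(a, b, c):
    """Middle signal selection: the median of three independent
    measurements is unaffected by any single failure, including a
    measurement stuck at setpoint or drifting away."""
    return sorted((a, b, c))[1]

# One electrode drifting high is simply outvoted by the other two:
ph = middle_of_three(7.02, 6.98, 9.50)
```

Comparing each measurement against the selected middle value also provides the diagnostic intelligence mentioned above: the electrode that persistently deviates from the median is the one to service.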
I could keep on talking but I think this is enough to start your “New Year Resolutions”. Hopefully, you will be better at keeping them than me.
The post What Are Best Practices and Standards for Control Narratives? first appeared on the ISA Interchange blog site.
The following technical discussion is part of an occasional series showcasing the ISA Mentor Program, authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc (now Eastman Chemical). Greg will be posting questions and responses from the ISA Mentor Program, with contributions from program participants.
At the place I work we are typically good at documenting how we configure our controls in the form of DDS documents but not always as good at documenting why they have been configured that way in the form of rigorous control narratives.
We now have an initiative to start retrospectively producing detailed control narratives for all our existing controls and I am looking for best practice, standards and examples of what good looks like for control narratives.
I wondered if you had any good resources in this regard or you could point me in any direction. (I did look at ANSI/ISA-5.06.01-2007 but this seems more concerned with URS/DDS/FDS documents rather than narratives).
We are mainly DeltaV now.
We do a lot of DeltaV systems and we use three different ways to “document” the control system. As a system integrator, “document” may mean something different for me than for you, so let me explain that these documents are my way to tell my programmers exactly how I want the system to be configured. These documents fully define the system’s logic so they can program it and I can test against it.
As I said there are three parts:
Obviously, batch flow sheets do not apply if your system isn’t batch, but the same flow sheets can be used to define an involved sequence.
The tag list is simply a large Excel spreadsheet that includes all of the key parameters – module name, IO name, tuning constants, alarm constants, etc. It also includes a “comment” cell that can include relatively simple logic like “Man only on/off FC valve with open/close limits and 30 sec stroke”, “analog input”, or “Rev acting PID with man/auto modes and FO valve”. Most of the modules can be defined on this spreadsheet.
The logic notes are usually a couple of paragraphs each and explain logic that is more complicated. Maybe we have an involved set of interlocks or ratio or cascade logic. If I have a logic note I’ll reference it in the tag list so the programmer knows to look for it.
The flow sheets are the last part. I usually have a flow sheet for every phase which defines the phase parameters, logic paths, failures, etc. (See Figure 1 for an example of an agitate phase.) Then I create a flow chart for every recipe which defines what phases I am using and what parameters are being passed. (See Figure 2 for an example of a partial recipe.)
Figure 1: Control Narrative Best Practices Agitator Phase
Figure 2: Control Narrative Best Practices Recipe Sample
Hiten Dalal’s Pipeline Feed System Example
I find the American Petroleum Institute Standard API RP 554 Part 1 (R2016) “Process Control Systems: Part 1-Process Control Systems Functions and Functional Specification Development” and the ISA Standard ANSI/ISA-5.06.01-2007 Functional Requirements Documentation for Control Software Applications to be very useful. ANSI/ISA-95 also offers guidance on “Enterprise-Control System Integration.” These types of documents, in my opinion, help capture the input of all stakeholders in the logic without the stakeholders having to be familiar with flow charting, logic diagrams, or specific control system engineering terminology. The functional specification, in my view, is a progressive elaboration of a simple process description done by the process engineer. Once finalized, the functional specification can be developed into a SCADA/DCS operations manual by listing the normal sequence of operation along with an analysis of applicable responsibility such as operator action/responsibility, logic solver responsibility, and HMI display. You may download my example of a pipeline control system functional specification: Condensate Feed Pump & Alignment Motor Operated Valves (MOVs).
The post When and How to Use Derivative Action in a PID Controller first appeared on the ISA Interchange blog site.
Derivative action is the least frequently used mode in the PID controller. Some plants do not like to use derivative action at all because they see abrupt changes in PID output and lack an understanding of benefits and guidance on how to set the tuning parameter (rate time). Here we have a question from one of the original protégés of the ISA Mentor Program and answers by a key resource on control Michel Ruel concluding with my view.
Is there a guideline in terms of when to enable the derivative term in a PID?
Derivative is most useful when the dead time is not pure dead time but instead a series of small time constants; using derivative “eliminates” one of those small time constants.
Set the derivative time equal to the largest of those small time constants. Since we usually do not know the details, a good rule of thumb is to set the derivative time to half the dead time.
Adding derivative (D) will increase robustness (higher gain and phase margin) since D will reduce apparent dead time of the closed loop.
A good example is the thermowell in a temperature loop: if the thermowell represents a time constant of 10 s, using a D of 10 seconds will eliminate the lag of the thermowell.
Hence, the apparent dead time of the closed loop is reduced and you can use more proportional action and a shorter integral time; the settling time will be shorter and stability better.
When you look at the formulas to reject a disturbance, you observe that in the presence of D, the proportional and integral actions can be stronger.
We recommend using derivative only if the derivative function contains a built-in filter to remove high frequency noise. Most DCSs and PLCs have this function but some do not or there is a switch to activate the derivative filter.
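To make the filtered derivative recommendation concrete, here is a minimal sketch (not vendor code; the function name and the filter factor of 1/8 the rate time are my assumptions) of a derivative-on-measurement calculation with a built-in first-order filter to remove high frequency noise:

```python
def make_filtered_derivative(rate_time, dt, alpha=0.125):
    """Return a function computing the filtered derivative contribution
    (in PV units) for successive PV samples taken every dt seconds.
    alpha is the assumed filter factor (filter time = alpha * rate time)."""
    tf = alpha * rate_time          # derivative filter time constant
    state = {"pv_f": None}          # filtered PV from the previous sample

    def derivative(pv):
        if state["pv_f"] is None:   # initialize the filter on the first call
            state["pv_f"] = pv
            return 0.0
        # first-order filter on the PV before differencing
        pv_f = state["pv_f"] + (dt / (tf + dt)) * (pv - state["pv_f"])
        d = rate_time * (pv_f - state["pv_f"]) / dt
        state["pv_f"] = pv_f
        return d

    return derivative
```

For a steadily ramping PV the output settles at the rate time multiplied by the ramp slope, while high frequency noise is attenuated by the filter rather than amplified.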
Why does having a higher phase margin increase the robustness?
Robustness means that the control loop will remain stable even if the model changes. Phase and gain margins represent the amplitude of the change that can be tolerated before the loop becomes unstable, i.e., before reaching -180 degrees or a loop gain above one.
To analyze, we use the open loop frequency response, the product of the controller model and the process model. On a Bode plot, gains are multiplied (or added if plotted in dB) and the total phase is the sum of the process phase and the controller phase.
Phase margin is the number of degrees required to reach -180 degrees when the open loop gain is 1 (0 dB). If this number is large (high phase margin), the system is robust meaning that the apparent dead time can increase without reaching instability. If the phase margin is small, a slight change in apparent dead time will bring the control loop to instability.
Adding derivative adds a positive phase, hence increases phase margin (compare to adding a dead time or a time constant that reduces the phase margin).
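The phase margin arithmetic described here can be sketched numerically. The example below uses my own illustrative numbers (not Michel's): it evaluates the open loop magnitude and unwrapped phase of an ideal ISA Standard Form PID on a first-order-plus-dead-time process and shows that adding derivative raises the phase margin:

```python
import math

def loop_gain_phase(w, kc, ti, td, kp, tau, theta):
    # controller C(jw) = kc*(1 + 1/(ti*jw) + td*jw), ideal ISA Standard Form
    x = td * w - 1.0 / (ti * w)                  # imaginary part of PID bracket
    mag = kc * math.hypot(1.0, x) * kp / math.hypot(1.0, tau * w)
    # unwrapped phase: PID phase minus process lag phase minus dead time phase
    phase = math.atan2(x, 1.0) - math.atan(tau * w) - w * theta
    return mag, math.degrees(phase)

def phase_margin(kc, ti, td, kp, tau, theta):
    # scan upward for the gain crossover frequency where |C*G| falls to 1
    w = 1e-4
    while loop_gain_phase(w, kc, ti, td, kp, tau, theta)[0] > 1.0 and w < 1e3:
        w *= 1.001
    return 180.0 + loop_gain_phase(w, kc, ti, td, kp, tau, theta)[1]
```

With a process gain of 1, a 10 second time constant and a 2 second dead time, a PI controller tuned with reset equal to the time constant gives a phase margin of about 80 degrees, and setting the rate time to half the dead time nudges it higher, consistent with the point above.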
The use of derivative is more important in lag dominant (near-integrating), true integrating, and runaway processes (highly exothermic reactions). The derivative action benefit declines as the primary time constant (largest lag) approaches the dead time because the process changes become too abrupt due to lack of a significant filtering action by a process time constant.
Temperature loops have a large secondary time constant courtesy of heat transfer lags in the thermowell or the process heat transfer areas. Setting the derivative time equal to the largest of the secondary lags can cancel out almost 90 percent of the lag assuming the derivative filter is about 1/8 to 1/10 the rate time setting. Highly exothermic reactors can have positive feedback that causes acceleration of the temperature. Some of these temperature loops have only proportional and derivative action because integral action is viewed as unsafe.
If a PID Series Form is used, increasing the rate time reduces the integral mode action (increases the effective reset time), reduces the proportional mode action (decreases the effective PID gain or increases the effective PID proportional band) and moderates the increase in derivative action. The interaction factor moderates all of the modes, preventing the resulting effective rate time from being greater than one-quarter of the effective reset time. This helps prevent instability if the rate time setting approaches the reset time setting. There is no such inherent protection in the ISA Standard Form, so it is critical that the user prevent the rate time from being larger than one-quarter of the reset time in the ISA Standard Form. While in general it is best to identify multiple time constants, a general rule of thumb I use is that the rate time should be the largest identified secondary time constant or, failing that, one-half the dead time, and never larger than one-quarter of the reset time.
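The rule of thumb in this paragraph can be condensed into a small helper (a sketch; the function name and argument names are mine):

```python
def rate_time(reset_time, deadtime, secondary_tau=None):
    # rate = largest identified secondary time constant, else half the
    # dead time, and never more than one-quarter of the reset time
    # (essential protection when using the ISA Standard Form)
    td = secondary_tau if secondary_tau is not None else 0.5 * deadtime
    return min(td, 0.25 * reset_time)
```

So with a 100 second reset and a 10 second dead time and no identified secondary lag, the rate time would be 5 seconds; an identified 8 second thermowell lag would raise it to 8 seconds unless the quarter-reset cap intervenes.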
It is critical to convert tuning based on setting units and PID form used as you go from one vintage or supplier to another. It is best to verify the conversion with the supplier of the new system. The general rules for converting from different PID forms are given in the ISA Mentor Program Q&A blog post How Do You Convert Tuning Settings of an Independent PID with the last series of equations K1 thru K3 showing how to convert from a series PID form to the ISA Standard Form.
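For reference, the widely published series-to-ISA Standard Form conversion can be sketched as follows (a sketch of the common textbook equations; verify against your supplier's documentation as advised above):

```python
def series_to_isa(kc_s, ti_s, td_s):
    # interaction factor of the series (interacting) PID form
    f = 1.0 + td_s / ti_s
    kc = kc_s * f          # ISA Standard Form controller gain
    ti = ti_s * f          # reset time (seconds/repeat)
    td = td_s / f          # rate time (seconds)
    return kc, ti, td
```

Note that when the series rate time equals the series reset time, the converted rate time is exactly one-quarter of the converted reset time, matching the inherent protection of the Series Form described above.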
In general, PID structures should have derivative action on the process variable and not error unless the resulting kick in the PID output upon a setpoint change is useful to get to setpoint faster particularly if there is a significant control valve or VFD deadband or resolution limit.
A small setpoint filter in the analog output or secondary loop setpoint along with external reset feedback of the manipulated variable can make the kick a bump. A setpoint lead-lag on the primary loop where the lag time is the reset time and the lead is one-quarter of the lag or a two degrees of freedom structure with the beta set equal to 0.5 and the gamma set equal to about 0.25 can provide a compromise where the kick is moderated while getting to the primary setpoint faster.
Image Credit: Wikipedia
The post, Common Mistakes not Commonly Understood - Part 2, appeared first on the ControlGlobal.com Control Talk blog.
Here we continue on in our exploration of what we should know but don’t know and how it hurts us.
(11) Ignoring actuator and positioner sensitivity. Piston actuators are attractive due to smaller size and lower cost but have a sensitivity that can be 10 times worse than diaphragm actuators. Many positioners look fine in conventional tests but increase the response time by a factor of almost 100 for step changes in signal of less than 0.2%. The result is extremely confusing erratic spikey oscillations that only get worse as you decrease the PID gain. My ISA 2017 Process Control and Safety Symposium presentation ISA-PCS-2017-Presentation-Solutions-to-Stop-Most-Oscillations.pdf shows the bizarre oscillations from poor positioner sensitivity. Slides 21 through 23 show the situation, the confusion and a simple tuning fix. While the fix helps stop the oscillations, the best solution is a better valve positioner to provide precise control (critical for pH systems with a strong acid or strong base due to amplification of oscillations from the sensitivity limit).
(12) Ignoring drift in thermocouples. The drift in thermocouples (TCs) can be several degrees per year. Thus, even if there is tight control, the temperature loop setpoint is wrong resulting in the wrong operating point. Since temperature loops often determine product composition and quality, the effect on process performance is considerable with the culprit largely unrecognized leading to some creative opinions. Some operators may home in on a setpoint to get them closer to the best operating point, but the next shift operator may put the setpoint back at what is defined in the operating procedures. Replacement of the thermocouple sensor means the setpoint becomes wrong. The solution is a Resistance Temperature Detector (RTD) that inherently has 2 orders of magnitude less drift and better sensitivity for temperatures less than 400 degrees C. The slightly slower response of an RTD sensor is negligible compared to the thermowell thermal lags. The only reason not to use an RTD is a huge amount of vibration or a high temperature. Please don’t say you use TCs because they are cheaper. You would be surprised at the installed cost and lifecycle cost of a TC versus an RTD. See the ISA Mentor Program Webinar “Temperature Measurement and Control” for a startling table on slide 4 comparing TCs and RTDs and a disclosure of real versus perceived reasons to use TCs on slide 7.
(13) Not realizing the effect of flow ratio on process gain. The process gain of essentially all composition, pH and temperature loops is the slope of the process variable plotted versus the ratio of the manipulated flow to the main feed flow. This means the process gain is inversely proportional to feed flow besides being proportional to the slope of the plot. To convert the slope, which is the change in process variable (PV) divided by the change in ratio, to the required units of change in PV per change in manipulated flow, you have to divide by the feed flow. The plot of temperature or composition versus ratio is not commonly seen or even realized as necessary. The same sort of relationship holds true where the manipulated variable is an additive or reactant flow for composition or a cooling or heating stream flow for temperature. For temperature control, the slope of the curve is often also steeper at low flow, creating a double whammy as to the increase in process gain at low flow. Also, for jackets, coils and heat exchangers, the coolant flow may be lower, creating more dead time for a sensor on the outlet. Fortunately for pH, we have titration curves where pH is plotted versus the ratio of reagent volume added to sample volume, although often just the reagent volume added is used on the X axis. In this case, you need to find out the sample volume so you can put the proper abscissa on the laboratory curve. In the process application, the titration curve abscissa of reagent volume to sample volume is simply the ratio of volumetric reagent flow to volumetric feed flow if the reagent concentrations are the same. You can then use this plot with the application abscissa in terms of flow ratios to determine the process gain and the valve capacity, rangeability, backlash (deadband) and stiction (resolution) requirements.
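The unit conversion described above is simple but easy to get backwards, so here is a sketch (function names and units are my own illustrations):

```python
def lab_slope_to_ratio_slope(slope_pv_per_ml_reagent, sample_volume_ml):
    # rescale a lab titration slope plotted per mL of reagent added into a
    # slope per unit ratio of reagent volume to sample volume
    return slope_pv_per_ml_reagent * sample_volume_ml

def process_gain(slope_pv_per_ratio, feed_flow):
    # change in PV per change in manipulated flow: divide the slope of the
    # PV-versus-flow-ratio plot by the main feed flow (same flow units)
    return slope_pv_per_ratio / feed_flow
```

Note how the gain doubles when the feed flow is halved, which is exactly the low-flow nonlinearity discussed above.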
An intelligent analysis of the amplification of oscillations by the slope of limit cycle amplitude from deadband and resolution limitations determines the number of stages and size of reagent valves needed. Often for strong acid or bases, two or three stages of neutralization with largest reagent valve on first stage and smallest reagent valve on last stage are needed due to valve rangeability or precision limitations. For more details, check out the 12/12/2015 Control Talk Blog “Hidden Factor in Our Most Important Control Loops”.
(14) Ignoring the effect of temperature on actual solution pH. We are accustomed to using the built-in temperature compensator that has been in pH transmitters for 60 or more years to account for the effect of temperature seen in the glass electrode Nernst equation. What we don’t tend to do is quantify and take advantage of the solution pH compensation in smart transmitters. The dissociation constants for water, acids and bases are functions of temperature. If you express the water dissociation constant as a pKw and the acid or base dissociation constant as a pKa, whenever the pH is within 4 pH of the pKw or pKa there is a significant effect of temperature on actual pH. Physical property tables can detail the pKw and pKa as functions of temperature, but the best bet is to vary the temperature of a lab sample and note the change in pH after correction for the Nernst equation (some lab meters don’t even do Nernst temperature compensation).
(15) Replacing a positioner with a booster. Extensive guidelines dating back to Nyquist plot studies in the 1960s concluded that fast loops should use a booster instead of a positioner, and I still hear this rule cited today. It is downright dangerous: positive feedback from the high outlet port sensitivity of the booster and flexure of the diaphragm actuator can cause the valve to slam shut. The volume booster should instead be placed on the output of the positioner with its bypass valve slightly open to stop any high frequency oscillations, as seen in the ISA Mentor Program Webinar “How to Get the Most out of Control Valves”. You can fast forward to slides 18 and 19 to see the setup.
(16) Putting VFD speed control in the DCS. We like putting controls and logic as much as possible in the control room for adjustment and maintenance. While this is normally a good idea, a speed loop in the VFD instead of the DCS is orders of magnitude faster, enabling much tighter control. In fact, if the speed loop is put in the DCS for flow and pressure control, you will violate the cascade rule that the secondary loop (speed) must be 5 times faster than the primary loop.
(17) Putting a deadband into the split range block for integrating processes or cascade control, or integral action in the positioner. The deadband creates a limit cycle, just like deadband from backlash in a control valve, whenever there are two or more integrators in the loop, whether in the process, PID or positioner.
(18) Not taking into account the temperature and pH cross sectional profile in pipelines. The temperature and pH vary extensively across a pipeline, especially for high viscosity feeds or reagents. The sensor tip should be near the centerline. For small pipelines, this may require installing the sensor in an elbow, preferably facing into the flow unless the fluid is too abrasive. The pH sensor tip must, of course, be pointed down, preferably at about a 45 degree angle, so that the bubble in the internal fill of the electrode does not reside in the tip or at the internal electrode (a relatively low probability but possible).
(19) Not preventing measurement noise from phases and mixing. Thinking we need to have the sensor see a process change as fast as possible, we fail to realize a few seconds of transportation delay is better than a poor signal to noise ratio. To prevent a sensor seeing bubbles or undissolved solids in a liquid and droplets or condensate in a gas or steam, you need to locate a sensor sufficiently downstream of static mixer or exchanger or desuperheater outlet or where any streams come together. You need to keep sensor away from a sparge and avoid top or bottom of vessel or horizontal line. For temperature control of a jacketed vessel with split ranged manipulation of cooling water and steam, you should use jacket outlet instead of inlet temperature measurement to allow time for water to vaporize and for steam to condense. An even better solution is to use a steam injector to heat up the cooling water eliminating the transition of phases from going back and forth between steam and cooling water in the jacket. The injector provides rapid and smooth transitions from cooling to heating over quite a temperature range going from cold to hot water.
(20) Tuning for a smooth approach of the PID output to the final resting value in near and true integrating chemical processes. The main task of composition, temperature and pH loops in chemical processes is to effectively reject load disturbances at the process input. This requires a maximization of controller gain and significant overshoot by the controller output of the final resting value needed to balance the load. Many experts in tuning who worked mostly on self-regulating processes don’t realize this requirement and may even say you should never tune the controller output to overshoot the final resting value, failing to realize that near-integrating processes will take an incredibly long time to recover and true integrating processes will never recover from a load disturbance. To understand the necessity of overshoot in the PID output, think of a level loop where the level has increased because the flow into the vessel has increased. To bring the level back down to setpoint, the outlet flow manipulated by the level controller must be greater than the inlet flow until the level reaches setpoint, before the outlet flow settles out to match the inlet flow (the final resting value of the PID output).
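The level example can be made concrete with a minimal discrete simulation (all numbers below are illustrative assumptions, not from a real plant): after an inlet flow step, the PI controller output must rise above the new inlet flow, its final resting value, before settling back to match it while the level returns to setpoint.

```python
def simulate_level(kc=4.0, ti=1000.0, dt=1.0, n=12000):
    ki = 0.001                  # integrating process gain, %/s per % flow
    sp, level = 50.0, 50.0      # setpoint and level, percent
    f_in, f_out = 40.0, 40.0    # inlet and manipulated outlet flow, percent
    integral, max_out = 0.0, f_out
    for k in range(n):
        if k == 10:
            f_in = 50.0         # unmeasured inlet flow load step
        err = level - sp        # level above setpoint -> raise outlet flow
        integral += err * dt / ti
        f_out = 40.0 + kc * (err + integral)   # position-form PI output
        max_out = max(max_out, f_out)
        level += ki * (f_in - f_out) * dt      # integrating level response
    return level, f_out, max_out
```

In this run the outlet flow peaks above the new 50 percent inlet flow before settling at it, and the level comes back to setpoint; tuning chosen to avoid overshoot of the final resting value would leave the level away from setpoint far longer.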
Always remember to …………………………………………………………..………….. Oh shoot, I forget ... senior moment.
The post How to Manage Pipeline Valve Positioner and PID Tuning first appeared on the ISA Interchange blog site.
I have been trying to get a handle on small ripples in one of the pipelines by using a rule of thumb to successively reduce proportional action by 20 percent and integral action by 50 percent. Using the same rule, I could stabilize the ripples on Friday. On Sunday, the product changed in the pipeline and with that back came those 4 percent ripples. There is one control valve that impacts line pressure. I could stretch ripples a bit but could not eliminate them. Output going to zero is natural scheduled shutdown of pipeline. I know it is a lot of information that I am providing but perhaps you can glance through and pinpoint something that stands out. I am learning since I started tuning the control valve that it is product sensitive as well.
Since I don’t know if there is a trend of valve signal and valve flow, I am not sure what is happening. If the considerable decrease in gain does not help or makes it worse, I am wondering if there is some valve stiction or backlash, respectively. Is the valve the same for both products? Could a product be causing more stiction due to buildup or coating on valve seating or sealing surfaces or stem? Could the Sunday valve be closer to the shutoff where friction is greatest?
It sure looks like you have too much proportional (P) action for the new product. The integral action is already greatly reduced and most of the overcorrection is occurring very quickly due to proportional action. I would try decreasing the proportional mode action (proportional mode gain) by 50 percent (cut gain in half). If this helps, reduce the proportional gain again. Based on the very small integral (I) action, you may be able to increase integral action once you decrease proportional action. However, I reiterate that if decreasing the gain simply increases the period of the oscillation, you have backlash or stiction. If amplitude stays the same, you have stiction.
Please make sure there is no integral action in the digital valve controller.
When you say no integral action, do you mean in valve positioner or in controller? I don’t think our positioner has any PID setup. Only PID action is in controller. Since it is liquid pressure and flow, we use P&I. Are you suggesting we use only P action in my controller?
I meant no integral in the valve positioner that for Fisher is called a digital valve controller (DVC). You should use integral action in most process controllers (e.g., flow and pressure). Integral action in the process controllers is essential for the PID control of many processes. So far as tuning the process controller for pipeline control, the integral time also known as reset time (seconds per repeat) should generally be greater than four times the deadtime for an ISA Standard Form. You must be careful about what PID form, structure and tuning setting units are being used. If the integral setting is an integral gain, such as what is used in the “parallel” PID form depicted in textbooks and used in some PLCs, the integral setting may not just be a simple factor of the deadtime (e.g., four times deadtime) but will also depend upon other dynamics. Also, some integral settings are in repeats per minute instead of seconds.
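Two of the unit traps mentioned here are easy to guard against with trivial helpers (a sketch; the names are mine):

```python
def repeats_per_min_to_sec_per_repeat(repeats_per_min):
    # convert an integral setting in repeats/minute to seconds/repeat
    return 60.0 / repeats_per_min

def reset_ok_for_pipeline(reset_s, deadtime_s):
    # guideline quoted above for the ISA Standard Form on pipeline control:
    # reset time greater than four times the deadtime
    return reset_s > 4.0 * deadtime_s
```

So a setting of 2 repeats/minute is 30 seconds/repeat, which satisfies the guideline only if the loop deadtime is under 7.5 seconds.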
Please make sure you extensively test any tuning settings by making small changes in the setpoint with the controller in automatic, or in the controller output by momentarily putting the controller in manual. There should be little to no oscillation. The tests should be done at different valve positions, particularly if the valve installed flow characteristic is nonlinear. Oscillations are most likely near the shutoff position where stiction is greatest from seat/seal friction.
If there is interaction between loops, the least important loop must be made slower or decoupling used by means of a feedforward signal. If you are going to do some optimization via a controller that seeks to minimize or maximize a valve position, the proportional gain divided by the reset time for this controller doing optimization must be an order of magnitude smaller than process controller to prevent interaction. These PID controllers used for optimizing a valve position are called “valve position controllers” (VPC). I hesitated to mention this to avoid confusion because these are not valve positioners and are only used for optimization. Also, nonlinear or notch gains and directional move suppression via external reset feedback are used to keep the VPC from responding too much or too little so the process controller does not oscillate or run out of valve.
Many newer smart positioners have added integral action to positioners in the last two decades. In some cases, integral action is enabled as the default. This prompted me to write the Control Talk blog post “Getting the Most Out of Positioners.” This blog does not address setting integral action in process controllers (e.g., flow and pressure controllers).
Do you teach a control valve tuning class? Is there a specific method you recommend for a pipeline control valve?
I do not offer a class on tuning positioners. Supplier courses on tuning positioners are good, but you will need to insist on turning off integral action. You can have them talk to me if they disagree. In general, you should make sure you do not use integral action and that you use the highest valve positioner gain that does not cause oscillation, since for pipeline flow and pressure control, oscillations are not filtered. If you have an Emerson Digital Valve Controller (DVC), I recommend “travel control” with no integral action and with the highest gain that still gives an overdamped response. The valve must be a true throttling valve and not an on-off valve posing as a throttling valve, as discussed in the Control Talk blog “Getting the Most out of Valve Positioners”. Note that in that blog we go for a more aggressive response than what you need. Because of the lack of a significant process time constant in a pipeline, you need a smooth valve response. In the blog, the valve positioner gain is described as being set high enough to cause a slight overshoot and oscillation that quickly settles out. Oscillations in the valve response are useful for a faster response on vessels and columns since there is a large process time constant to filter out the oscillations. For a pipeline, you still want a high gain and no integral action in the positioner but should seek an overdamped (non-oscillatory) response of valve position.
I have bought Tuning and Control Loop Performance Fourth Edition and reference its tables for suggested PID values. I have removed derivative action from several pressure and flow loops and observed them to be equally efficient. In the process of tuning, I have learned that installation details also have an impact on loop tuning. I have made the following types of corrections:
(1) As installed, the logic initiated the PID as soon as block valve #1 was fully opened, but block valve #2 was commanded to open after #1, causing the PID output to ramp to its high output limit since the control valve was not yet seeing full flow. We solved this by setting a temporary upper clamp on the PID output at a safe limit to avoid overshoot until block valve #2 was fully opened.
(2) The transmitter range was too wide and the resulting margin of error was not acceptable to operations. We re-ranged the transmitter to a suitable range and brought the error within an acceptable margin.
(3) EIM Controls electric and REXA electrohydraulic actuators have a limit on the number of actuations. I added an acceptable deadband to reduce the number of actuations.
The post, Common Mistakes not Commonly Understood - Part 1, first appeared on ControlGlobal.com's Control Talk blog.
There are many mistakes but some are repeated over and over again even though the automation engineer is attentive and experienced and has the best intentions. Part of the problem is overload in terms of tasks and the time crunch. It is highly unlikely engineers today read even a smattering of the thousands of pages in books, handbooks, white papers and articles. The knowledge to prevent the following mistakes may be buried in this literature but I am not so sure of even this. In any case, one probably could not find it. Here is my effort to get straight to the point of realizing and fixing mistakes.
(1) Reset time set too large for deadtime dominant processes. Most tuning algorithms don’t recognize what Shinskey found: the reset time can be decreased by a factor of 8 or more, from 3 to 4 times the dead time down to 0.4 to 0.5 times the dead time, for the same controller gain setting in severely dead time dominant processes. Lambda tuning can accomplish a dramatic reduction in reset time since the reset time is set equal to the time constant, which for dead time dominant processes is by definition less than the dead time. The controller gain is also proportionally reduced, providing stability despite a much smaller reset time, which is generally good since these processes are more likely to have noise and a jagged, not so smooth response due to the lack of a significant time constant. The criticism that the controller reset time and gain become too small, leading to integral-only type of control for severely dead time dominant processes, is avoided by simply putting a limit of ¼ the dead time on the reset time that is then used in the equation for the controller gain, as discussed in the June 2017 Control Talk column “Opening minds about controllers, part 1”. This column is also a good resource for understanding the next common mistake, where the reset time is set too small.
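The rule described here can be sketched for a self-regulating process with gain Kp, time constant tau and dead time theta (the lambda tuning equations are the standard published ones; the quarter-deadtime floor is the fix cited from the column):

```python
def lambda_tuning(kp, tau, theta, lam):
    # reset time = process time constant, floored at one-quarter deadtime
    # so severely deadtime dominant processes avoid integral-only control
    ti = max(tau, 0.25 * theta)
    # controller gain computed from the (possibly floored) reset time
    kc = ti / (kp * (lam + theta))
    return kc, ti
```

For a severely deadtime dominant case (tau of 1 s, theta of 10 s), the floor raises the reset from 1 s to 2.5 s; for a lag dominant case the familiar reset-equals-time-constant result is unchanged.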
(2) Reset time set too small for lag dominant (near-integrating) processes, integrating and runaway processes. These processes lack self-regulation in the process and depend more upon the gain action in the PID to provide the negative feedback missing in the process. Since engineers are not comfortable with controller gains greater than 5 and operators object to sudden movements of PID output, the controller gain is often an order of magnitude or more too small. Since the product of the reset time and gain must be greater than twice the inverse of the integrating process gain to prevent the start of slow oscillations, the reset time is an order of magnitude or more too small. Since we are taught in control theory classes how too high a PID gain causes oscillations, the PID gain is typically decreased making the problem worse. For more on this pervasive problem and the fix, see the 9/14/2017 Control Talk Blog “Surprising Gains from PID Gain” which leads us to the next common mistake.
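The stability window quoted here is easy to check numerically (a sketch; ki is the integrating process gain in %/s per % of PID output):

```python
def avoids_slow_oscillations(kc, reset_s, ki):
    # product of PID gain and reset time must exceed twice the inverse of
    # the integrating process gain to prevent the start of slow oscillations
    return kc * reset_s > 2.0 / ki
```

For example, with ki = 0.001 %/s per %, the product must exceed 2000; cutting the gain without raising the reset time makes the violation worse, which is exactly the counterintuitive trap described above.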
(3) PID gain set too small for valves with poor positioner sensitivity and excessive dead band from backlash and variable frequency drives with a large dead band setting. Not only is the PID gain too small per last mistake but is also too small to deal with valve problems as seen in my ISA 2017 Process Control and Safety Symposium slides ISA-PCS-Presentation-Solutions-to-Stop-Most-Oscillations.pdf that details a lot of cases where a counterintuitive increase in PID gain reduces or stops oscillations.
(4) Split ranged valves used to increase valve rangeability. The transition from the large to small valve is not smooth since the friction and consequently stiction is greatest near shutoff as plugs rub seats and balls or disks rub seals. Since the stiction in percent stroke translates to a larger abrupt change in flow and amplitude in the limit cycle, small smooth changes in flow are not possible especially near shutoff but also whenever the large valve is open. The better solution is a large and small valve stroked in parallel either where a Valve Position Controller manipulates the large valve with directional move suppression to keep the small valve near an optimum position or by simultaneous manipulation as detailed in the November 2005 Control feature article “Model Predictive Control can Solve Valve Problem.”
(5) Ignoring effect of meter velocity on flow measurement rangeability. The maximum velocity for a given meter size rarely corresponds to the velocity at the maximum flow in a process application. Often the maximum process velocity is less than half the maximum meter velocity for line size meters. Thus, the velocity at the minimum process flow is so far below the minimum flow for a good meter response that the actual rangeability is less than half what is stated in the literature.
(6) Ignoring the effect of noise on flow measurement rangeability. The signal to noise ratio often deteriorates before the meter reaches the low flow corresponding to its rangeability limit. The flow measurement can essentially become unusable for flow control making the actual rangeability much less than what is stated in the literature.
(7) Ignoring the effect of stiction, backlash and pressure drop on valve rangeability. There are many definitions of valve rangeability that are erroneous, such as those that define rangeability as the ratio of maximum to minimum flow coefficient (Cv), where the closeness of the actual to the theoretical inherent flow characteristic determines the minimum Cv, leading to the conclusion that a rotary valve offers the greatest rangeability. The real rangeability should be the ratio of maximum to minimum controllable flow. Deadband from backlash and resolution from stiction near the shutoff position should determine the minimum position that gives a controllable flow. Since stiction is greatest as the plug moves into the seat or the ball or disk moves into the seal, particularly for tight shutoff valves, the minimum controllable position can be quite large (e.g., 2% to 20%). The flow at this position needs to be computed from the installed flow characteristic. A ratio of valve pressure drop to total system pressure drop less than 0.5 will cause a linear characteristic to distort toward quick opening, increasing the flow at the minimum controllable position and causing a significant loss in rangeability. For equal percentage valves there is also a loss in the minimum controllable flow due to excessive flattening of the installed characteristic. There may also be significant flattening of the installed flow characteristic for rotary valves, resulting in any rotation past 50 degrees being ineffectual, which shows up as the controller output, through integral action, wandering about above 50 degrees. My book Tuning and Control Loop Performance Fourth Edition, published in 2014 by Momentum Press, gives the equations to compute the real rangeability. It turns out sliding stem valves with diaphragm actuators and smart positioners have the best rangeability.
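The distortion of the installed characteristic can be illustrated with a short calculation (all numbers are assumed for illustration; the equations in the book referenced above are more complete). An equal-percentage valve with inherent rangeability R takes a fraction phi of the total pressure drop at full open; as phi falls, the top of the curve flattens and the flow at the minimum controllable position rises:

```python
import math

def installed_flow(x, phi, cv_max=100.0, r=50.0, dp_total=1.0):
    # inherent equal-percentage characteristic, x = fractional valve position
    cv = cv_max * r ** (x - 1.0)
    # system (pipe) resistance sized so the valve takes phi of the drop
    # at full open; the valve drop shrinks as flow rises
    q_max = cv_max * math.sqrt(phi * dp_total)
    k_sys = (1.0 - phi) * dp_total / q_max ** 2
    # solve q = cv*sqrt(dp_total - k_sys*q^2) in closed form
    q = math.sqrt(dp_total * cv ** 2 / (1.0 + k_sys * cv ** 2))
    return q / q_max          # flow as a fraction of maximum flow
```

Comparing phi = 0.1 against phi = 0.9 shows both effects: the flow gained over the last 20 percent of travel shrinks dramatically at low phi, while the fractional flow at the minimum position grows, eroding the real rangeability.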
(8) Ignoring the effect of static head, motor and frame type, and inverter type and control algorithm on VFD rangeability. Since the inverter waveform is not purely sinusoidal, it is important to select motors that are designed for pulse width modulation (PWM). These “inverter duty” motors have windings with a higher temperature rating (class F). Another option that facilitates operation at lower speeds, to achieve the maximum rangeability offered by the PWM drive, is a higher service factor (e.g., 1.15). In the process industry, totally enclosed fan cooled (TEFC) motors are used to provide protection from chemicals, with ventilation by a fan whose speed decreases as the motor speed decreases. To reduce motor overheating at low speeds, an AC line powered constant speed ventilation fan and a larger frame size to provide more ventilation space can be specified. Alternately, a separate booster fan can be supplied. For very large motors (e.g., 1000 HP), totally enclosed water cooled (TEWC) motors are used to deal with the extra heat generation. For low static head pump applications, overheating at low speeds is not a problem because the torque load decreases with flow. Turndown also depends upon the control strategy in the variable frequency drive. All of the control strategies discussed here use pulse width modulation to manipulate the frequency and amplitude of the voltage and current to each phase. Open loop voltage (volts/hertz) control has the simplest algorithm but is susceptible to varying degrees of slip. Most of the drives provided for pump control use this strategy, in which the rate of change of flux and hence speed is taken as proportional to voltage. At low speeds the motor losses are larger, making the difference between the computed and actual speed (slip) much larger.
Some drives make a correction to the voltage to account for estimated motor losses. Ultimately these drives depend upon the DCS to correct for dynamic slip through proportional action and for steady state slip through integral action in the process controller(s). The rangeability is normally 40:1 with 0.5% speed regulation. Closed loop slip control has a speed loop cascaded to a torque loop. Speed (tachometer) feedback comes from a sensor; the torque feedback may be calculated from a current sensor. A DCS process controller output is the speed setpoint for the speed controller, whose output is the setpoint to a torque controller. PI rather than P-only controllers can be used since stiction and resolution limits are negligible, eliminating any concern about limit cycles from integral action. The control system in the VSD is analogous to the cascade control system in a digital positioner: the speed controller plays a role similar to the valve position controller, and the torque controller serves a similar purpose as the relay controller. However, in the digital positioner the relay response is inherently much faster than the valve position response, whereas in the VSD the torque controller can have a relatively sluggish response. To prevent a violation of the cascade rule that requires the secondary loop (torque) to be five times faster than the primary loop (speed), the speed loop is slowed down by decreasing the speed controller gain and increasing its integral time. Since the speed setpoint comes from a process controller in the DCS, there is at least a triple cascade. In many cases there is a quadruple cascade control system, for example vessel temperature to jacket temperature to coolant flow to speed. The detuning of the speed controller causes detuning of the flow controller, which in turn may cause detuning of the temperature controller. As a result, the ability to reject fast process disturbances may be compromised.
The rangeability is normally 80:1 with 0.1% speed regulation. However, if the static head approaches the total pressure rise, the rangeability can deteriorate by an order of magnitude for all VFDs with a resulting installed flow characteristic that is quick opening.
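The deterioration from static head can be sketched with the pump affinity laws. This is my own simplified illustration (a flat pump curve scaled by speed squared, friction drop proportional to flow squared, and example head fractions), not a drive vendor calculation.

```python
import math

def pump_flow_fraction(n, static_head_fraction=0.6):
    """Fraction of design flow at fractional speed n for a pump whose
    head scales with speed squared (affinity laws, flat curve assumed),
    discharging against a static head that is static_head_fraction of
    the total head at design conditions."""
    hs = static_head_fraction
    head_available = n ** 2 - hs  # pump head minus static head, per unit
    if head_available <= 0.0:
        return 0.0  # below the minimum speed there is no flow at all
    return math.sqrt(head_available / (1.0 - hs))
```

With 60% static head, no flow is delivered below about 77% speed and the flow rises steeply just above it (a quick-opening installed characteristic), so a nominal 40:1 speed turndown collapses to only a few-to-one turndown in flow.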
(9) Ignoring changes in fluid composition in thermal mass flow meters. Changes in fluid composition cause a change in the assumed thermal conductivity and specific heat capacity of the fluid (and the viscosity for liquids), introducing a significant error. If you are trying to use a thermal flow meter on air, gas or vapor, never install it in a service where it can see a gas or vapor approaching its dew point. Fouling also causes an error due to thermal lags. Thermal mass flow meters are generally only used successfully on small, dry, pure gas flows, such as oxygen or air for lab or pilot plant bioreactors (a very controlled environment), where the fluid is clean and single phase, the composition is fixed, and the remaining 1% or 2% error is corrected by a primary dissolved oxygen controller manipulating the secondary oxygen or air flow controller setpoint.
(10) Ignoring changes in emissivity in optical pyrometers. Two-color or ratio pyrometers measure the radiation at two wavelengths. If the change in emittance with temperature is identical at each wavelength (gray bodies), the effect of emittance can be cancelled out by the ratio calculation. In reality, the change in emittance with temperature varies with wavelength (non-gray bodies). Additionally, the change in emittance with changes in the surface, operating conditions, and the composition of the intervening space may vary with wavelength. In one comparison test on a blackbody, single-color and two-color pyrometers exhibited errors of 2 and 30 degrees C, respectively. Equal changes in emittance due to surface and operating conditions and intervening gases, particles, and vapors may make a two-color ratio pyrometer more accurate than a single-color pyrometer, but the test puts into question any accuracy statements for two-color pyrometers that are much better than 30 degrees C.
Please take my advice. I am not using it anyway. I have mostly retired into the virtual world.
The post, Surprising Gains from PID Gain, first appeared on ControlGlobal.com's Control Talk blog.
We learned in control theory courses that too high a PID gain causes oscillations and can lead to instability. Operators do not like the large sudden changes in PID output from a high PID gain. Operators may see what they think is the wrong valve open in split range control as the setpoint is approached when PID gain dominates the response. Most tuning tests and studies use a setpoint response rather than a load response for judging tuning. A high PID gain that is set to give maximum disturbance rejection for a load response will show overshoot and some oscillation for a setpoint response. Internal Model Control, or any control algorithm or tuning method that considers disturbances on the process output, will see a concern similar to what is observed for the setpoint response, because the load and setpoint change both appear immediately at the PID algorithm as inputs. There are many reasons why PID gain is unfavorably viewed. Here I try to show you that PID gain is undervalued and underutilized.
First, let’s realize that the immediate feedback correction based on the change in the process variable being controlled may be beneficial in some important cases. The immediate action reduces the deadtime and oscillations from deadband, resolution and sensitivity limits in the measurement, control valve and variable frequency drive. The preliminary draft of ISA-PCS-2017-Presentation-Solutions-to-Stop-Most-Oscillations.pdf for the ISA 2017 Process Control and Safety Symposium shows how important PID gain is for stopping oscillations from non-ideal measurements and valves and also from integrating processes.
PID gain does not play as important a role in the balanced self-regulating processes often shown in control theory courses and publications, where the primary process time constant is not very large compared to the deadtime. Consequently, there is more negative feedback action seen in these self-regulating processes. When the time constant becomes more than 4 times the deadtime, we consider these processes to be near-integrating, in that in the time frame of the PID they appear to ramp, losing self-regulation. For these processes and true integrating processes, PID gain provides the negative feedback action missing in the process to halt or arrest the ramp. Integrating process tuning rules are used, where lambda is an arrest time. Not readily understood is that there is a window of allowable gains, where too low a PID gain causes larger and slower oscillations than too high a PID gain. The problem is even more serious and potentially dangerous for runaway processes (e.g., highly exothermic reactors). Most loops on integrating and runaway processes have a reset time that is orders of magnitude too small and a PID gain that is an order of magnitude too low. A PID gain greater than 50 may be needed for a highly back mixed polymerization reactor. Many users are uncomfortable with such high gain settings.
For integrating and runaway processes, the PID output must exceed the load disturbance to return the process variable to setpoint. This is more immediately and effectively done by PID gain action. We can often see an immediate improvement in control by greatly increasing the reset time and then the gain. The gain must be greater than twice the inverse of the product of the open loop integrating process gain (%/sec/%) and reset time (sec) to prevent the start of slow oscillations from violation of the low gain limit.
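The low gain limit and a consistent set of integrating process tuning rules can be sketched as follows; the lambda (arrest time) formulas are the common integrating-process rules, and the example numbers are hypothetical.

```python
def integrating_tuning(Ki, deadtime, lam):
    """Lambda tuning for an integrating process with open loop gain Ki
    (%/sec/%): returns controller gain Kc and reset time Ti (sec),
    where lam is the lambda (arrest time) in seconds."""
    Ti = 2.0 * lam + deadtime
    Kc = Ti / (Ki * (lam + deadtime) ** 2)
    return Kc, Ti

def low_gain_limit(Ki, Ti):
    """Minimum PID gain to avoid slow oscillations: Kc > 2 / (Ki * Ti)."""
    return 2.0 / (Ki * Ti)
```

For example, with Ki = 0.0001 %/sec/% and a reset time of 600 sec, the gain must exceed about 33; tuning by feel to a "comfortable" gain of 1 or 2 would sit deep inside the slow oscillation window.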
As the process variable approaches setpoint, there is an immediate reduction in the contribution to the PID output from the gain action. Reset has no sense of direction; it will continue to change the output in the same direction, not reversing until the process variable crosses setpoint and the sign of the error reverses. Operators looking at digital displays waiting for a temperature to rise to setpoint will think a heating valve should still be open if the temperature is just below setpoint, when in fact the cooling valve should be open to prevent overshoot. If the loop waits till the PV crosses setpoint, the correction is too late due to deadtime. PID gain provides the anticipatory action missing in reset action.
A setpoint filter with a time constant equal to the reset time, or a PID structure with proportional action on the process variable instead of the error, will eliminate setpoint overshoot for PID tuning that maximizes disturbance rejection, where the peak error and the integrated error are both inversely proportional to PID gain.
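One way to see the effect: a first-order setpoint filter with a time constant equal to the reset time (a minimal discrete sketch with made-up numbers, not a vendor implementation) removes the step that the proportional mode would otherwise pass straight to the output, while leaving the load rejection tuning untouched.

```python
def setpoint_filter(sp_values, reset_time, dt):
    """Discrete first-order filter on the setpoint with a time constant
    equal to the PID reset time, eliminating setpoint overshoot for
    aggressive load rejection tuning."""
    alpha = dt / (reset_time + dt)
    filtered, y = [], sp_values[0]
    for sp in sp_values:
        y += alpha * (sp - y)  # exponential approach to the new setpoint
        filtered.append(y)
    return filtered
```

The filtered setpoint ramps smoothly to the target, so the PID sees no step and the PV approaches setpoint without the overshoot that high-gain tuning would otherwise produce.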
Finally, let’s realize that the use of external-reset feedback allows us to use rate-of-change limits on the valve signal or secondary loop setpoint that will prevent the large abrupt jump in PID output from a high PID gain that upsets operators and some related loops. External-reset feedback will also prevent the PID output from changing faster than a valve or secondary loop can respond. No retuning is needed.
These days we personally don’t have time to wait for taking corrective action in our lives and need to seek more anticipation based on where we are going. We also benefit from external-reset feedback. We need to realize the same for PID loops. There is a lot to be gained from PID gain.
The post, How to Motivate Management and Millennials (M&Ms), originally appeared on ControlGlobal.com's Control Talk blog.
The key to a much brighter future of our profession depends upon management providing the funding and support and millennials seeking to improve process performance by the use of the best automation and process control. There are some common approaches to these seemingly very different groups.
Management is focused on the bottom line, which may be as short term as quarterly results. If management has only business degrees, this may be the primary and perhaps the only motivation. If management also has a technical degree, there can be an additional motivation to advance the knowledge and technology used to make processes safer and more productive. Technical people are intrigued by and attracted to new, more powerful developments in technology. An upcoming Control Talk column with Walt Boyes will give considerable insight into how management thinks.
Millennials choose engineering as a major because they are interested in technology and in having a positive impact on people and systems by using and advancing the latest technologies. Unfortunately, students have a negative image of industry as seemingly low tech, routine, and “down and dirty”. Peter Martin aptly discussed the negative view by engineering graduates of working in industry, and the missing understanding of opportunities that meet their altruistic motivation, in his July 2017 ISA Interchange post “The Challenges of Attracting Millennials to Industrial Careers”. Another goal of engineers may be to make money and seem important by moving on and becoming a manager.
The use of the best and highest tech hardware and software in automation and process control can yield impressive improvements in process safety and performance that are much faster and less expensive than changing process equipment. Unfortunately, management is often not aware of this as discussed in the May 2017 Control Talk column “The Invisibility of process control”.
What can possibly work to impress both management and millennials are demos that show the benefits and use of new technologies. The “before” and “after” cases should show the benefits of increases in process performance in terms of dollars, with a running quarterly total moving forward showing continuous improvement from knowledge gained. Since time is precious and attention spans are short, the summary of dynamic runs can be presented as trend charts of benefits for changes in demand and supplier feeds. “Seeing is believing.” The reality of being able to adjust to a dynamic world is inspiring. For engineers, a chance to see live demos may increase interest because of the dynamics. The emphasis should be on how fast and beneficial the results are from using the best technology. See the August feature article “Virtual Plant Virtuosity” for how to develop high tech solutions that impress the M&Ms, satisfying altruistic and monetary motivations.
I love M&Ms and there are so many flavors now. See if you can enjoy the M&Ms that determine our profession as much as the M&Ms that satisfy your cravings. Maybe M&Ms can be a totally sweet deal.
The post Webinar Recording: How to Use Key PID Controller Features first appeared on the ISA Interchange blog site.
This educational ISA webinar on key PID controller features was introduced by Greg McMillan and presented by Hector Torres, in conjunction with the ISA Mentor Program. Greg is an industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical). Hector is a senior process and control engineer with Eastman Chemical and a recipient of ISA’s John McCarney Award for the article “Enabling new automation engineers”.
Héctor Torres, a protégé of the ISA Mentor Program from its inception, provides a detailed view of how to use key PID controller features that can greatly expand what you can achieve, including the setting of anti-reset windup (ARW) limits, the dynamic reset limit, the eight different structures, the integral deadband, and the setpoint filter. Feedforward and rate limiting are covered with some innovative application examples.
The post, Insights to Process and Loop Performance, originally appeared on ControlGlobal.com's Control Talk blog.
Here we look at a myriad of metrics on process and control loop performance and show how to see through the complexity and diversity to recognize the commonality and underlying principles. We will see how dozens of metrics simplify to two classes each for the process and the loop. We also provide a concise view of how to compute and use these metrics and what affects them.
Let’s start with process metrics because, while as automation engineers we are tuned into control metrics, our ultimate goal is improvement in the process and thus in process metrics. The improvement in profitability of a process comes down to improving process efficiency and/or capacity. These are often interrelated, in that an increase in process capacity is often associated with a decrease in process efficiency. Also, an increase in the metrics for a particular part of a process may decrease the metrics for other parts of the process. The following example, cited in the April 2017 Control Talk column “An ‘entitlement’ approach to process control improvement”, is indicative of the need to have metrics and an understanding for the entire process:
“In a recent application of MPC for thermal oxidizer temperature control that had a compound response complicating the PID control scheme, there was a $700K per year benefit clearly seen in reduced natural gas usage. However, the improvement also reduced steam make to a turbo-generator, reducing electricity generated by $300K per year. We reached a compromise of about $400K per year in net benefit because of lost electrical power generation from less steam to the turbo-generators. We spent many hours to align the benefit with measurable accounting for the natural gas reduction and the electrical purchases. Sometimes the loss of benefits is greater than expected. You need to be upfront and make sure you don’t just shift costs to a different cost area.”
Process efficiency can be increased by reducing the use of energy (e.g., electricity, steam, coolant and other utilities) and raw materials (e.g., reactants, reagents, additives and other feeds). The efficiency is first expressed as a ratio of the energy used per unit mass of product produced (e.g., kJ/kg) or energy produced (kJ/kJ), and then ideally in terms of a ratio of cost to revenue by including the cost of energy used (e.g., $ per kJ) and the value of the product produced (e.g., $ per kg) or energy produced (e.g., $ per kJ). The kJ of energy and kg of mass are running totals where the oldest value of mass flow or energy, multiplied by the time interval between measurements, is replaced in the total by the current value. A deadtime block can provide the oldest value. The time interval between measurements and the deadtime representative of the time period for the running total should both be chosen to provide a good signal to noise ratio. The deadtime block time period should also be chosen to help focus on the source of changes in process efficiency. For batch operations, the time period is usually the cycle time of a key phase in the batch, and may simply be the totals at the end of the phase or batch. For continuous operations, I favor a time period that is an operator shift, to recognize the key effect of operators on process performance. This time period is also suitable for evaluating other sources of variability, such as the effect of ambient conditions (day to night operation and weather) and feeds, recycle and heat integration (upstream, downstream and parallel unit operations). The periods of best operation can be used as a goal to be possibly achieved by smarter instruments, by better installations less sensitive to ambient conditions, or by smarter controls through procedural automation or state based control, as discussed in the Sept 2016 Control Talk column “Continuous improvement of continuous processes”.
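The running total with a deadtime block can be sketched as below; the delay buffer stands in for the deadtime block, and the period, interval and flows are arbitrary example values.

```python
from collections import deque

class RunningTotal:
    """Running total of flow * dt over a fixed period. A delay line
    (the deadtime block) supplies the oldest increment, which is
    replaced in the total by the current one on every update."""
    def __init__(self, period, dt):
        n = int(round(period / dt))
        self.buf = deque([0.0] * n, maxlen=n)
        self.dt = dt
        self.total = 0.0

    def update(self, flow):
        increment = flow * self.dt
        oldest = self.buf[0]        # value delayed by one full period
        self.buf.append(increment)  # the deque drops the oldest entry
        self.total += increment - oldest
        return self.total
```

With an eight-hour shift period, the total settles to flow times period at steady state, so efficiency ratios (kJ/kg, cost/revenue) computed from two such totals line up over the same time window.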
The metrics that affect process capacity are more diverse and complicated. Process capacity can be affected by feed rates, onstream time, startup time, shutdown time, maintenance time, transition time, the spectrum of products and their value, recycle, and off-spec product. An increase in off-spec product that can be recycled can be taken as a loss in process capacity if the raw material feed rate is kept the same, or as a loss in process efficiency if the raw material feed rate is increased. If the off-spec product can be sold as a lower revenue product, the $ per kg must be correspondingly adjusted.
For batch operations, an increase in batch end point in terms of kg of product produced and a decrease in batch cycle time, including the time in-between batches, can translate to an increase in process capacity. If a higher end point can be reached by holding or running the batch longer, there is a likely increase in process efficiency, assuming a negligible increase in raw material, but there may be an increase or decrease in process capacity. The optimum time to end a batch and move on is best determined by looking at the rate of change of product formation (batch slope) and, if necessary, the rate of change of raw material and energy use. A deadtime block is again used to provide a fast update with a good signal to noise ratio to compute the slope of the batch profile and the prediction of the batch end point. Of course, whether downstream units for recovery and purification are able to handle an increase in batch capacity must be considered, and their metrics must be included in the total picture. For example, in ethanol production, a reduction in fermenter cycle time may not translate to an increase in process capacity because of limitations in the distillation columns downstream or the dryer for recovery of the dried solids byproduct sold as animal feed. For more on the optimization of batch end points see the Sept 2012 Control feature article “Getting the Most Out of your Batch”.
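The batch slope calculation with a deadtime block can be sketched like this (my own minimal version; the delay length and sample interval are example choices traded off for signal-to-noise ratio):

```python
from collections import deque

def batch_slope(pv_samples, delay_samples, dt):
    """Rate of change of a batch profile: (PV - PV delayed by the
    deadtime block) / delay period, updated every sample. Returns None
    until the delay line is full."""
    buf = deque(maxlen=delay_samples)
    slopes = []
    for pv in pv_samples:
        if len(buf) == delay_samples:
            slopes.append((pv - buf[0]) / (delay_samples * dt))
        else:
            slopes.append(None)  # not enough history yet
        buf.append(pv)
    return slopes
```

The batch can be ended when the slope of product formation falls below the value that makes further hold time worth less than starting the next batch, and the same slope extrapolated ahead gives a prediction of the end point.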
The metrics that indicate loop performance can be classified as load response and setpoint response metrics. The load response is often most important, in that the desired setpoint response can be achieved along with the best load response by the proper use of PID options. The load response should in nearly all cases be based on disturbances that enter as inputs to the process, whereas many academic and model-based studies are based on disturbances entering at the process output. For self-regulating processes where the process deadtime is comparable to or larger than the process time constant, the point of entry does not matter, because the intervening process time constant does not appreciably slow down input disturbances in the time frame of the PID response (e.g., 2 to 4 deadtimes). However, most of the more interesting temperature and composition control loops in my career did not have a negligible process time constant and in fact had a near-integrating, true integrating or runaway open loop response.
The load metrics are peak error and integrated error. The peak error is the maximum excursion after a load upset. The integrated error is most often an integrated absolute error (IAE) but can be an integrated squared error. If the response is non-oscillatory, the integrated error and the IAE are the same. There are also metrics indicative of oscillations, such as settling time and undershoot. The ultimate and practical limits to the peak error are proportional to the deadtime and inversely proportional to the controller gain, respectively. The ultimate and practical limits to the integrated error are proportional to the deadtime squared and to the ratio of controller reset time to controller gain, respectively.
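These proportionalities can be written out directly. The sketch below states them as formulas with unit coefficients for illustration; the text gives proportionalities, not exact equations, so the absolute numbers are only useful for comparing tunings.

```python
def practical_peak_error(open_loop_error, Kc):
    """Practical peak error limit: inversely proportional to PID gain.
    open_loop_error is the excursion with the loop in manual (assumed)."""
    return open_loop_error / Kc

def practical_integrated_error(open_loop_error, Kc, Ti):
    """Practical integrated error limit: proportional to Ti / Kc."""
    return open_loop_error * Ti / Kc

def ultimate_integrated_error(excursion_rate, deadtime):
    """Ultimate integrated error limit: proportional to deadtime squared
    (excursion_rate is the ramp rate of the uncontrolled PV, assumed)."""
    return excursion_rate * deadtime ** 2
```

Doubling the PID gain halves both practical limits, while halving the loop deadtime cuts the ultimate integrated error by a factor of four, which is why deadtime reduction and higher gain are the two biggest levers on load response.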
For setpoint metrics, there is the time to get close to setpoint, which I call rise time, important for process capacity. I am sure there is a better name, because the metric must be indicative of the performance for either an increase or a decrease in setpoint. The other setpoint metrics are overshoot, undershoot and settling time, which can affect process capacity and efficiency. The use of a setpoint lead-lag or a PID structure that minimizes proportional and derivative action on setpoint changes can reduce overshoot, despite using good load disturbance rejection tuning. A setpoint lag equal to the reset time (no lead) corresponds to a PID structure of proportional and derivative action on the process variable and integral action on the error (PD on PV and I on E).
See the Sept and Oct 2016 Control Talk blogs “PID Options and Solutions - Part 1” and “PID Options and Solutions - Parts 2 and 3” for a discussion of loop metrics in great detail, including when they are important and how to improve them. Also look at the presentation for the ISA Mentor Program WebEx “ISA-Mentor-Program-WebEx-PID-Options-and-Solutions.pdf”.
My last bit of advice is to ask your spouse for metrics on your marriage. Minimizing the deadtime while still having a good signal to noise ratio is particularly important. For men, the saying “Happy wife, happy life” I think would work the other way as well. I just need a rhyme.
The post How to Get the Most out of Control Valves first appeared on the ISA Interchange blog site.
The data that is really needed when selecting and sizing a control valve is rarely understood and specified, which leads to excessive variability originating from the valve. In this presentation, ISA mentor Greg McMillan discusses pervasive problems and rampant misconceptions. He then provides guidance—supported by test results—on how to select a good throttling control valve. He also explains PID tuning adjustments and a key PID feature that can be utilized to provide precise, smooth, and fast control.
The post, Fixes for Deadly Deadband, first appeared on ControlGlobal.com's Control Talk blog.
While there are some cases where deadband is helpful, in most applications the effect is extremely detrimental and confusing. Deadband can arise from many sources, either intentionally or inadvertently. Deadband creates deadtime and, for certain conditions, excessive and persistent oscillations.
The increase in loop deadtime is the deadband divided by the rate of change of controller output. The increase in deadtime can increase the peak error and integrated error from a load disturbance. If there are two or more integrators in the system due to integral action in the valve positioner, variable speed drive, controller(s), or process, a limit cycle will develop.
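The added deadtime is easy to compute; the numbers below are hypothetical examples.

```python
def deadtime_from_deadband(deadband_pct, output_rate_pct_per_sec):
    """Additional loop deadtime created by deadband: the deadband
    divided by the rate of change of the controller output."""
    return deadband_pct / output_rate_pct_per_sec
```

A seemingly small 0.5% deadband against an output moving at 0.1%/sec adds 5 seconds of deadtime, and the slower the output moves (e.g., near setpoint), the larger the added deadtime becomes.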
The biggest and most troublesome source of deadband is backlash from an on-off or isolation valve (tight shutoff valve) posing as a throttling valve. A positioner seeing feedback from the actuator shaft of such rotary valves often does not realize the internal closure member (e.g., ball or disk) is not responding, due to backlash in the connections between the shaft, stem and ball or disk, or due to shaft windup from seal friction. The positioner diagnostics say everything is fine, even meeting the requirements set by the ISA-75.25.01 Standard for Measuring Valve Response. Creative storytelling develops to explain the oscillations in the process.
An on-off or isolation valve offers a great advantage when used in series with a throttle valve. Besides achieving tight shutoff, the placement of a quickly stroked, completely open or closed on-off or isolation valve close-coupled to the connection into the process eliminates the deadtime and any unbalance between ratioed flows during the start and stop of reactants and reagents, enabling more precise composition and pH control. The throttle valve is located at a position that is more accessible for better maintenance and has some straight runs upstream and downstream. The straight run requirements for a throttle valve are rather minimal but can give a more consistent relationship between valve position and flow.
For the throttle valve, the best solution is to get rid of the excessive deadband. Given that you are, literally and figuratively, stuck with deadband, principally when the source is a big valve, an increase in the PID gain will reduce the peak and integrated absolute error (IAE) by increasing the rate of change of the PID output and thus decreasing the additional deadtime from deadband. If there is a limit cycle, increasing the PID gain reduces the amplitude and period of the limit cycle, decreasing the persistent IAE and increasing the ability of downstream volumes to filter out the oscillations. Open loop step tests don’t reveal the additional deadtime but show a decrease in process gain upon a reversal of the direction of the step change. A filter time can be judiciously added, less than 20% of the total loop deadtime seen in the test, to prevent changes in the PID output from noise exceeding the deadband of the valve. For more on the effects of backlash see the May 2016 Control article “How to specify control valves that don’t compromise control” and the YouTube recording to be posted in June on the “ISA Mentor Program Webinar Playlist” of my ISA Mentor WebEx “ISA-Mentor-Program-WebEx-Best-Control-Valve-Rev0.pdf”. The article, white paper and presentation also show that an increase in PID gain eliminates an oscillation from poor positioner sensitivity by making changes in the valve signal larger than the sensitivity limit.
A simple algorithm can be configured to increase the change in PID output by an amount slightly less than the deadband when the output changes direction and the change is greater than the noise band seen in the PID output. The kick of the output upon a change in direction eliminates the deadtime and lost motion from backlash. The practical issue is the deadband may vary with valve position, time, operating conditions, and positioner tuning. These algorithms are often used for Model Predictive Control besides PID control.
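A minimal version of the kick algorithm might look like this (the deadband estimate, noise band and kick fraction are all assumed example values that in practice vary with position, time and operating conditions, as noted above):

```python
class DeadbandKick:
    """When the PID output reverses direction by more than the noise
    band, add a kick slightly smaller than the estimated deadband in the
    new direction to take up the backlash immediately."""
    def __init__(self, deadband=0.5, noise_band=0.1, kick_fraction=0.8):
        self.kick = kick_fraction * deadband  # slightly less than deadband
        self.noise = noise_band
        self.prev = None
        self.direction = 0

    def compensate(self, pid_out):
        if self.prev is None:
            self.prev = pid_out
            return pid_out
        delta = pid_out - self.prev
        self.prev = pid_out
        if abs(delta) <= self.noise:
            return pid_out  # change within the noise band: pass through
        new_dir = 1 if delta > 0 else -1
        kicked = pid_out
        if self.direction != 0 and new_dir != self.direction:
            kicked += new_dir * self.kick  # reversal detected: add the kick
        self.direction = new_dir
        return kicked
```

Keeping the kick slightly smaller than the deadband avoids overcorrecting when the actual deadband turns out to be less than the estimate.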
A lead-lag on the valve signal can also reduce the effect of deadband, resolution and positioner sensitivity limits, but the valve movement can quickly become erratic for a lead time much larger than the lag time, especially with noise.
Deadband is often a parameter in a variable speed drive (VSD) setup to reduce changes in speed from noise, and it is often set too large because of a lack of understanding of its detrimental effect. The deadband should be just slightly larger than ½ the noise band seen in the VSD setpoint.
Dynamic simulation with a backlash-stiction block and a PID with external reset feedback can show this and much more. The virtual plant is my lab to rapidly explore, discover, prototype and test solutions.
I recently went to a Grateful Dead tribute band concert. The “dead heads” were grateful the music of the band was not dead. Keep your control system alive by not succumbing to the deadly deadband.
The post How to Overcome Challenges of PID Control and Analyzer Applications via Wireless Measurements first appeared on the ISA Interchange blog site.
This article was authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
Wireless measurements offer significant life-cycle cost savings by eliminating the installation, troubleshooting, and modification of wiring systems for new and relocated measurements. Some of the less recognized benefits are the eradication of EMI spikes from pump and agitator variable speed drives, the optimization of sensor location, and the demonstration of process control improvements. However, loss of transmission can result in process conditions outside of the normal operating range. Large periodic and exception reporting settings to increase battery life can cause loop instability and limit cycles when using a traditional PID (proportional-integral-derivative) for control. Analyzers offer composition measurements key to a higher level of process control but often have a less-than-ideal reliability record, sample system, cycle time, and resolution or sensitivity limit. A modification of the integral and derivative mode calculations can inherently prevent PID response problems, simplify tuning requirements, and improve loop performance for wireless measurements and sampled analyzers.
The combination of periodic and exception reporting by wireless measurements can be quite effective. The use of a refresh time (maximum time between communications) enables the use of a larger exception setting (minimum change for communication). Correspondingly, the use of an exception setting enables a larger refresh time setting. The time delay between the communicated and actual change in process variable depends upon when the change occurs in the time interval between updates (sample time). Since the time interval between a measured and communicated value (latency) is normally negligible, on the average, the true change can be considered to have occurred in the middle of the sample time. This delay limits how quickly control action is taken to correct changes introduced by process disturbances.
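As a quick sketch of the arithmetic (the function is illustrative, not from any vendor library), the average delay added by discontinuous reporting can be estimated from the sample time:

```python
def average_reporting_delay(sample_time_s: float, latency_s: float = 0.0) -> float:
    """Average delay between an actual process change and its reported value.

    On average the true change occurs in the middle of the interval between
    updates, so the expected delay is half the sample time plus the
    (normally negligible) communication latency.
    """
    return latency_s + sample_time_s / 2.0

# A 16 s wireless refresh time adds about 8 s of effective delay on average.
delay = average_reporting_delay(16.0)
```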
Since ultimately what you often want to control is composition in a process stream, online analyzers can raise process performance to a new level. However, analyzers, such as chromatographs, have large sample transportation and processing time delays that contribute to the total loop deadtime and are generally not as reliable or as sensitive as the pressure, level, and temperature measurements.
The sample transportation delay from the process to the analyzer is the sample system volume divided by the sample flow rate. This delay can be five or more minutes when analyzers are grouped in an analyzer house. Once the sample arrives, the processing and analysis cycle time normally ranges from 10 to 30 minutes. The analysis result is available at the end of the cycle time. If you consider the change in the sample composition occurs in the middle of the cycle time and is not reported until the end of the next cycle time, the analysis delay is 1½ times the cycle time. This cycle time delay is added to the sample transportation delay, process deadtime, and final control element delay to get the total loop deadtime. The sum of the 1½ analyzer cycle time plus the sample transportation delay will be referred to as the sample time.
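The delay arithmetic above can be sketched as follows (units and function names are illustrative):

```python
def analyzer_sample_time_min(sample_volume_l: float,
                             sample_flow_l_per_min: float,
                             cycle_time_min: float) -> float:
    """Effective analyzer 'sample time' in minutes.

    Transport delay is the sample system volume divided by the sample
    flow rate; the analysis delay is 1.5 times the cycle time (change
    assumed mid-cycle, result reported at the end of the next cycle).
    """
    transport_delay = sample_volume_l / sample_flow_l_per_min
    analysis_delay = 1.5 * cycle_time_min
    return transport_delay + analysis_delay


def total_loop_deadtime_min(sample_time_min: float,
                            process_deadtime_min: float,
                            valve_delay_min: float) -> float:
    """Total loop deadtime: sample time plus process deadtime plus
    final control element delay."""
    return sample_time_min + process_deadtime_min + valve_delay_min


# Example: 1 L sample system at 0.2 L/min (5 min transport), 10 min cycle.
st = analyzer_sample_time_min(1.0, 0.2, 10.0)   # 5 + 15 = 20 min
```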
Most of the undesirable reaction to discontinuous measurement communication is the result of integral and derivative action in a traditional PID. Integral action will continue to drive the output to eliminate the last known offset from the setpoint even if the measurement information is old. Since the measurement is rarely exactly at the setpoint within the A/D and microprocessor resolution, the output is continually ramped by reset. The problem is particularly onerous if the current error is erroneous.
Derivative action will see any sudden change in a communicated measurement value as occurring entirely within the PID execution time. Thus, a change in the measurement causes a spike in the controller output. The spike is especially large for restoration of the signal after a loss in communication, and it can hit the output limit opposite from the one driven to by integral action. A large refresh time can also cause a significant spike, because the rate of change calculation uses the PID execution time.
A smart PID has been developed that makes an integral mode calculation only when there is a measurement update. The change in controller output from the proportional mode reaction to a measurement update is fed back through an exponential response calculation, with a time constant equal to the reset time setting, to provide an integral calculation via the external reset method. For applications where there is an output signal selection (e.g., override control) or where there is a slowly responding secondary loop or final control element, the change in an external reset signal can be used instead of the change in PID output as the input to the exponential response calculation. The feedback of actual valve position as the external reset signal can prevent integral action from driving the PID output in response to a stuck valve. The use of a smart positioner provides the readback of actual position and drives the pneumatic output to the actuator to correct a wrong position without the help of the process controller.
For a reset time set equal to the process time constant so the closed loop time constant is equal to the open loop time constant, the response of the integral mode of the smart PID matches the response of the process. This inherent compensation of process response simplifies controller tuning and stabilizes the loop. For single loops dominated by a large time in between updates (large sample time), whether due to wireless measurements or analyzers, the controller gain can be the inverse of the process gain.
In the smart PID, the time interval used for the derivative mode calculation is the elapsed time from the last measurement update. Upon the restoration of communication, derivative action considers the change to have occurred over the time duration of the communication failure. Similarly, the derivative response to a large sample time or exception setting spreads the measurement change over the entire elapsed time. The reaction to measurement noise is also attenuated. This smarter derivative calculation combined with the derivative mode filter eliminates spikes in the controller output.
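The behavior described in the last three paragraphs can be sketched in a few lines. This is only an illustration of the concept (after the external-reset idea; it is not a vendor implementation, and the class and method names are invented):

```python
import math

class SmartPIDSketch:
    """Minimal sketch of a PID for slowly or irregularly updated
    measurements (illustrative only; not a vendor implementation)."""

    def __init__(self, kc, reset_time_s, rate_time_s=0.0, initial_out=0.0):
        self.kc = kc                  # controller gain
        self.ti = reset_time_s        # reset (integral) time
        self.td = rate_time_s         # rate (derivative) time
        self.out = initial_out
        self.reset_fb = initial_out   # external-reset filter state
        self.last_pv = None
        self.last_update_t = None

    def execute(self, t, sp, pv, new_measurement):
        """One module execution at time t (seconds)."""
        if self.last_pv is None:
            self.last_pv, self.last_update_t = pv, t
        deriv = 0.0
        if new_measurement:
            elapsed = max(t - self.last_update_t, 1e-9)
            # Integral action: advance the filter (time constant = reset
            # time) over the full elapsed time, only when a measurement
            # update actually arrives. No update means no reset ramping.
            self.reset_fb += (self.out - self.reset_fb) * (
                1.0 - math.exp(-elapsed / self.ti))
            # Derivative action: spread the measurement change over the
            # elapsed time since the last update, not the execution time.
            deriv = self.kc * self.td * (self.last_pv - pv) / elapsed
            self.last_pv, self.last_update_t = pv, t
        # Proportional action is computed on every execution.
        self.out = self.reset_fb + self.kc * (sp - pv) + deriv
        return self.out
```

Between measurement updates the output holds steady (no reset ramping on stale values); when an update finally arrives, the reset filter advances over the whole gap and the derivative kick is attenuated accordingly.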
The proportional mode is active during each execution of the PID module to provide an immediate response to setpoint changes. The module execution time is kept fast so the delay is negligible for a corrective change in the setpoint of a secondary loop or signal to a final control element. With a controller gain approximately equal to the inverse of the process gain, the step change in PID output puts the actual value of the process variable extremely close to the final value needed to match the setpoint. The delay in the correction is only the final control element delay plus the process deadtime. After the process variable changes, the change in the measured value is delayed on the order of the measurement sample time. Consequently, the observed speed of response is not as fast as the true speed of the process response, a common deception from measurements with large signal delays or lag times.
Communication failure is not just a concern for wireless measurements. Any measurement device can fail to sense or transmit a new value. For pH measurements, a broken glass electrode or broken wire will result in a 7 pH reading, the most common setpoint. The response of coated or aged electrodes and of large air gaps in thermowells can be so slow that no appreciable change is seen. Plugged impulse lines and sample lines can result in no new information from pressure transmitters and analyzers. Digitally communicated measurements can fail to update due to bus or transmitter problems.
If a load upset occurs and is reported just before the last communication, integral action in the traditional controller drives the PID output to its low limit. The smart PID can make an output change that almost exactly corrects for the last reported load upset, since the controller gain is the inverse of the process gain.
The wireless measurement sample time and the transport delay associated with sample analyzers must be taken into account when using these measurements in control. A minimum wireless refresh time of 16 seconds is significant compared to the process response for flow, liquid pressure, desuperheater temperature, and static mixer composition and pH control. The sample time of chromatographs makes nearly all composition loops deadtime dominant, except for industrial distillation columns and extremely large vessels. To eliminate excessive oscillations and valve travel caused by sample time and transport delay, a traditional PID controller is tuned for nearly an integral-only type of response by reducing the controller gain by a factor of 5. Increasing the reset time instead of reducing the gain could also provide stability, but the resulting offset is often unacceptable, especially for flow feedforward and ratio control.
The smart PID can be aggressively tuned by setting the gain equal to the inverse of the process gain for deadtime dominant loops. The result is a dramatic reduction in integrated absolute error and rise time (time to reach setpoint). The immediate response of the smart PID is particularly advantageous for ratio control of feeds to wild flows and for cascade and model predictive control by higher level loops. The advantage may not be visible in the wireless or analyzer reported value because of the large measurement delay. The improvement in performance is observed in the speed and degree of correction by the controller output and reduced variability in upper level measurements and process quality. A similar deception also occurs for measurements with a large lag time relative to the true process response due to large signal filters and transmitter damping settings, and slow sensor response times. An understanding of these relationships and the temporary use of fast measurements can help realize and justify process control improvement. The ability to temporarily set a fast wakeup time and tight exception reporting for a portable wireless transmitter could lead to automation system upgrades.
Level loops on large volumes can use the largest refresh time of 60 seconds without any adverse effect because the integrating process gain is so small (the ramp rate is less than 1% per minute). Temperature loops on large vessels and columns can use an intermediate refresh time (30 seconds) and the maximum refresh time (60 seconds), respectively, because the process time constant is so large. However, gas and steam pressure control of volumes and headers will be adversely affected by a refresh time of 16 seconds, because the integrating response ramp is so fast that the pressure can move outside of the control band (allowable control error) within the refresh time. Furnace draft pressure can ramp off scale in seconds. Highly exothermic reactors (e.g., polymerization reactors) can possibly run away if the largest refresh time of 60 seconds is used. To mitigate the effect of a large refresh time, the exception reporting setting is lowered to provide more frequent updates.
Measurements have a limit to the smallest detectable or reportable change in the process variable. If the entire change beyond threshold for detection is communicated, the limit is termed sensitivity. If a quantized or stepped change beyond the threshold is reported, the limit is termed resolution. Ideally, the resolution limit is less than the sensitivity limit. Often, these terms are used indiscriminately.
Wireless measurements have a sensitivity setting called deadband that is the minimum change in the measurement from the last value communicated that will trigger a communication when the sensor is awake. In the near future, the wakeup time in most wireless transmitters of 8 seconds is expected to be reduced. pH transmitters already have a wakeup time of only 1 second enabling a more effective use on static mixers.
A traditional PID will develop a limit cycle from integral action, with an amplitude set by the sensitivity or resolution limit, whichever is larger. The period of the limit cycle will increase as the gain setting is reduced and the reset time is increased. A smart PID will inherently prevent the limit cycle.
Wireless and composition measurements offer a significant opportunity for optimizing process operation. A smart PID can dramatically improve the stability, reliability, and speed of response for wireless measurements and analyzers. The result is tighter control of the true process variables and longer battery and valve packing life.
A version of this article originally was published at InTech magazine.
The post, Deadtime, the Simple Easy Key to Better Control, first appeared on ControlGlobal.com's Control Talk blog.
Deadtime is the easiest dynamic parameter to identify and the one that holds the key to better control. Deadtime found visually or by a simple method can tell you what is limiting the ability of the loop and what the remedy is. In most loops, you as the automation engineer can gain a much greater understanding and make a dramatic improvement. You can become famous by Friday (assuming you read this on a Monday).
You can make a small setpoint change in automatic or a small output change (e.g., 0.5%) by momentarily putting the loop in manual. The time until a change in the process variable in the correct direction is the deadtime. To detect the deadtime and noise visually, compression must be turned off. The remaining principal limit to identifying the deadtime in fast loops is the update time of the historian. For loops with a relatively large deadtime (e.g., greater than 10 sec), the deadtime can be visually identified assuming the update time is 1 sec or less. For loops with smaller deadtimes, you can put a few function blocks together in a module executing as fast as possible (e.g., 0.1 sec for many DCSs) to tell you the deadtime. Of course, good tuning software executing very fast can tell you the deadtime and a lot more. The point here is that just knowing the deadtime offers exceptional insight and power to do what is right. So without delay, let's explore how we all can become more responsive.
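A minimal sketch of the visual method applied to trend data (compression off, fast update time assumed; all names are invented for illustration):

```python
def estimate_deadtime(times_s, pv, step_time_s, noise_band, direction=1):
    """Estimate total loop deadtime from a small step test.

    times_s, pv : trend samples (historian compression off, fast update)
    step_time_s : time the output or setpoint step was made
    noise_band  : PV change regarded as noise, in PV units
    direction   : expected sign of the PV response (+1 or -1)
    Returns the time from the step until the PV first moves beyond the
    noise band in the correct direction, or None if it never does.
    """
    pv0 = None
    for t, y in zip(times_s, pv):
        if t < step_time_s:
            continue
        if pv0 is None:
            pv0 = y  # PV value at the moment of the step
        if direction * (y - pv0) > noise_band:
            return t - step_time_s
    return None
```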
The sum of all the discrete update times, such as the PID module execution rate and wireless update time, and all the signal filter times and transmitter damping time should be less than 20% of the total loop deadtime to keep the deterioration in achievable loop performance appreciably below 20%. The valve response time should also be less than 40% of the deadtime. This is going to be difficult to achieve and to measure if you don't have a true throttling valve, and is nearly impossible in pressure systems because the process deadtime is usually so small (e.g., less than 1 sec). Almost as difficult, but perhaps less important, is keeping the valve response time small compared to the process deadtime in level systems, assuming liquid flows into or out of the volume are manipulated for level control and the sensitivity limit and noise in the level measurement are small enough to enable detection of a small level change. The time for the level to get through a sensitivity limit or noise band is additional deadtime.
Consider the case where a loop in manual has no oscillations and develops oscillations when the loop is put in automatic. If the period of the oscillations is 3 to 4 times the deadtime, the PID gain is too high. If the period is 6 to 10 times the deadtime, the reset time is probably too small. If the period of a level, gas pressure, or temperature loop on a vessel or column is more than 20 times the deadtime and decaying, it is most likely due to too small of a PID gain. Actually, the product of the PID gain and reset time is too small, but the usual cause is that the PID gain needed for a normal reset setting is much greater than what is used, due to the comfort zone of operations (e.g., many of these loops should have a PID gain of 50 to 100 unless the reset time is greatly increased). If the period is more than 20 times the deadtime and the amplitude is constant, indicating a limit cycle, the source is deadband, backlash, stiction, or a resolution limit. If the source is deadband or backlash, increasing the PID gain should be able to reduce the oscillation amplitude and period.
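These rules of thumb can be collected into a simple checklist (a sketch of the heuristics above; the thresholds are approximate, and borderline periods still require judgment):

```python
def diagnose_oscillation(period_s, deadtime_s, decaying, constant_amplitude):
    """Rule-of-thumb diagnosis of an oscillation that appears when a
    loop is put in automatic. Thresholds are approximate."""
    ratio = period_s / deadtime_s
    if 3.0 <= ratio <= 4.0:
        return "PID gain too high"
    if 6.0 <= ratio <= 10.0:
        return "reset time probably too small"
    if ratio > 20.0 and decaying:
        return "product of PID gain and reset time too small"
    if ratio > 20.0 and constant_amplitude:
        return "limit cycle: deadband, backlash, stiction, or resolution limit"
    return "indeterminate - check tuning and valve response"
```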
Now let’s look at the situation where a loop in manual has an existing oscillation. If the oscillation period is less than the deadtime, it is essentially noise and the PID should not react to it. If the period is between 2 and 10 times the deadtime, the PID gain must be considerably reduced to prevent amplification of the oscillation due to resonance. If the period is more than 10 times the deadtime, the PID gain should be made as aggressive as possible to reduce the amplitude of the oscillation. Of course, the best solution is to find and eliminate the source of the oscillation.
What the controller sees in the first four deadtimes is most important in terms of controller tuning, because unless the PID is seriously detuned, it should have reacted to arrest the response from a load disturbance. This corresponds to a lambda setting of 3 or fewer deadtimes. For a near-integrating, integrating, or runaway process, the maximum ramp rate in % of PV scale (%/sec) in the first four deadtimes divided by the step % change in PID output is approximately the integrating process gain (1/sec) that can be used with the deadtime to tune the PID using integrating process tuning rules. If there is a compound response, having the PID appreciably do its job within 4 deadtimes simplifies the tuning and reduces what the PID sees and has to deal with in terms of a later response, typically due to recycle effects.
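The integrating process gain estimate described above is simple division (a sketch; units as noted in the text):

```python
def integrating_process_gain(max_ramp_rate_pct_per_s: float,
                             output_step_pct: float) -> float:
    """Near-integrating process gain Ki (1/sec): the maximum ramp rate
    of the PV in % of scale per second, observed within the first four
    deadtimes after a step, divided by the % step in PID output."""
    return max_ramp_rate_pct_per_s / output_step_pct

# A 0.05 %/sec ramp after a 5% output step gives Ki = 0.01 1/sec.
ki = integrating_process_gain(0.05, 5.0)
```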
The reset time should be greater than 3 deadtimes for a PID, the exception being a truly deadtime dominant process (a rather rare case). The more likely scenario, as mentioned before, is that the reset time must be increased because the product of the PID gain and reset time is too small for near-integrating, true integrating, and runaway processes. For many loops on vessels and columns, the reset time is several orders of magnitude too small.
Deadtime also sets the limit on loop performance even if the loop is tuned aggressively. The minimum peak error is proportional to the deadtime, and the minimum integrated absolute error is proportional to the deadtime squared. If a PID is detuned, the effect can be equated to an effective deadtime greater than the actual deadtime. In other words, if you spend money to decrease deadtime in the process (by better equipment or piping design) or in the automation system (by faster valves, measurements, and discrete actions) but do not retune the PID to match the decrease in actual deadtime, you will not see an improvement, because of the effective deadtime from sluggish PID control.
For more on how deadtime limits performance, see slides 12-14 of ISA-Mentor-Program-WebEx-PID-Options-and-Solutions.pdf and for much more see the associated ISA Mentor Program Webinars.
The effect of an operating point nonlinearity is reduced by decreasing the total deadtime. My goal in pH control on difficult systems was to make the total deadtime as small as possible by better mixing and reagent injection, so that the excursion on the nonlinear titration curve was as small as possible from tighter pH control. In other words, an increase in deadtime causes an increase in the nonlinearity seen, which causes a further deterioration in control (a spiraling effect, literally and figuratively).
If the deadtime were zero, the controller gain could theoretically be infinite and control perfect. Without deadtime, I would be out of a job. The good news is that negligible deadtime only exists in simulations. Even if the process deadtime is extremely small, just having an automation system creates a deadtime that must be dealt with. The bad news is that deadtime is extremely detrimental and is not given the proper consideration as to what it is telling you and the PID. Also, the misuse of the term "process deadtime" rather than "total loop deadtime" leads people to miss important opportunities to reduce deadtime in the valve, measurement, and controller, which is usually more readily done, more in your realm of responsibility, and typically much less expensive than reducing deadtime in the process.
The post PID Tuning Rules first appeared on the ISA Interchange blog site.
Nearly every automation system supplier, consultant, control theory professor, and user has a favorite set of PID tuning rules. Many of these experts are convinced their set is the best. A handbook devoted to tuning has over 500 pages of rules. The enthusiasm and sheer number of rules is a testament to the importance of tuning and the wide variety of application dynamics, requirements, and complications. The good news is these methods converge for a common objective. The addition of PID features, such as setpoint lead-lag, dynamic reset and output velocity limits, and intelligent suspension of integral action enable the use of disturbance rejection tuning to achieve other system requirements, such as maximizing setpoint response, coordinating loops, extending valve packing life, and minimizing upsets to operations and other control loops.
The purpose of a control loop is to reject undesired changes, ignore extraneous changes, and achieve desired changes, such as new setpoints. PID control provides the best possible rejection of unmeasured disturbances (regulatory control) when properly tuned. The addition of a simple deadtime block in the external reset path can enhance the PID regulatory control capability more than other controllers with intelligence built-in to process dynamics, such as model predictive control. In plants, unknown and extraneous changes are a reality, and the PID is the best tool if properly tuned. The test time has been significantly reduced for the most difficult loops. Simple equations have been developed to estimate tuning and resulting performance for a unified approach. (Equation derivations and a simple tuning method are in the online version.)
The foremost requirement of a PID is to prevent the activation of a safety instrumented system or a relief device and to prevent an environmental violation (RCRA pH), compressor surge, or shutdown from a process excursion. The peak error (maximum deviation from setpoint) is the most applicable metric. The most disruptive upset is an unmeasured step disturbance that would cause an open loop error (Eo) if the PID were in manual or did not exist. The fraction of the open loop error seen in feedback control depends more upon the controller gain than the integral time, since the proportional mode provides the initial reaction important for minimizing the peak error. Equation (1) shows that if the product of the controller gain (Kc) and open loop gain (Ko) is much greater than one, the peak error (Ex) is significantly less than the open loop error. The open loop gain (Ko) is the product of the final element, process, and measurement gains and is the percent change in process variable divided by the percent change in controller output for a setpoint change. For most vessel and column temperature and pressure control loops, the process rate of change is much slower than the deadtime. Consequently, the controller gain can be set large enough that the fraction becomes simply the inverse of the product of the gains. Conversely, for loops dominated by deadtime, the denominator approaches one, and the peak error is essentially the open loop error.
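Equation (1) itself appears only in the online version; a form consistent with the description here (any small numerical factor the original may include is omitted) is:

```latex
E_x \approx \frac{E_o}{1 + K_c K_o}
```

For Kc·Ko much greater than one this reduces to Ex ≈ Eo/(Kc·Ko); for deadtime dominant loops the denominator approaches one and Ex approaches Eo.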
The peak error is critical for product quality in the final processing of melts, solids, or paste, such as extruders, sheet lines, and spin lines. Peak errors show up as rejected product due to color, consistency, optical clarity, thickness, size, shape, and in the case of food, palatability. Unfortunately, these systems are dominated by transportation delays. The peak errors and disruptions from upstream processes must be minimized.
The most widely cited metric is an integrated absolute error (IAE), which is the area between process variable and the setpoint. For a non-oscillatory response, the IAE and the integrated error (IE) are the same. Since proportional and integral action are important for minimizing this error, Equation (2) shows the IE increases as the integral time (Ti) increases and the controller gain decreases.
Equation (2) also shows how the IE increases with controller execution time (Δtx) and signal filter time (τf). The equivalent deadtime from these terms also decreases the minimum allowable integral time and maximum allowable controller gain, further degrading the maximum possible performance. In many cases, the original controller tuning is slower than allowed and remains unchanged, so the only deterioration observed is from these terms in the numerator of Equation (2). Studies on the effect of automation system dynamics and innovations can lead to conflicting results because of the lack of recognition of the effect of tuning on the starting case and comparative case performance. In other words, you can readily prove anything you want by how you tune the controller.
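Equation (2) is likewise only described here. One algebraic form consistent with the statements in the last two paragraphs (IE grows with the integral time and with the execution and filter times in the numerator, and shrinks with the controller gain), with any coefficients omitted, would be:

```latex
IE \approx \frac{\left(T_i + \Delta t_x + \tau_f\right) E_o}{K_c K_o}
```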
IE is indicative of the quantity of product that is off-spec that can lead to a reduced yield and higher cost ratio of raw material or recycle processing to product. If the off-spec cannot be recycled or the feed rate cannot be increased, there is a loss in production rate. If the off-spec is not recoverable, there is a waste treatment cost.
A controller tuned for maximum performance will have a closed loop response to an unmeasured disturbance that resembles two right triangles placed back to back. The base of each triangle is the total loop deadtime and the altitude is the peak error. If the integral time (reset time) is too slow, there is slower return to setpoint. If the controller gain is too small, the peak error is increased, and the right triangle is larger for the return to setpoint.
The major types of process dynamics are differentiated by the final path of the open loop response to a change in manual controller output assuming no disturbances. (The online version shows the three major types of responses and the associated dynamic terms.) If the response lines out to a new steady state, the process is self-regulating with an open loop time constant (τo) that is the largest time constant in the loop. Flow and continuous operation temperature and concentration are self-regulating processes. If the response continues to ramp, the process is integrating. Level, column and vessel pressure, batch operation temperature, and concentration are integrating processes. If the response accelerates, reaching a point of no return, the process has positive feedback leading to a runaway. Batch or continuous temperature in highly exothermic reactors (e.g., polymerization) can become runaway processes. Prolonged open loop tests are not permitted, and setpoint changes are limited. Consequently, the acceleration is rarely intentionally observed.
The three major types of responses have an initial period of no response that is the total loop deadtime (θo) followed by the ramp before the deceleration (inflection point) of a self-regulating response and the acceleration of the runaway response. The percent ramp rate divided by the change in percent controller output is the integrating process gain (Ki) with units of %/sec/%, which reduces to 1/sec.
For at least 10 years, slow self-regulating processes with a long time to deceleration have been shown to be effectively identified and tuned as "near-integrating" or "pseudo-integrating" processes, leading to a "short cut tuning method" where only the deadtime and initial ramp rate need to be identified. The tuning test time for these "near-integrating" processes can be reduced by over 90% by not waiting for a steady state. Recently, the method was extended to runaway processes and to deadtime dominant self-regulating processes by the use of a deadtime block to compute the ramp rate over a deadtime interval. Furthermore, other tuning rules were found to give the same equation for controller gain when the performance objective was maximum unmeasured disturbance rejection. For example, the use of a closed loop time constant (λ) equal to the total loop deadtime in Lambda tuning yields the same result as the Ziegler-Nichols (ZN) ultimate oscillation and reaction curve methods if the ZN gain is cut in half for smoothness and robustness. Equation (3) shows the controller gain is half the inverse of the product of the integrating process gain and deadtime.
The profession realizes that too large a controller gain will cause relatively rapid oscillations and can instigate instability (growing oscillations). Unrealized for integrating processes is that too small a controller gain can cause extremely slow oscillations that take longer to decay as the gain is decreased. Also unrealized for a runaway process is that a controller gain set less than the inverse of the open loop gain allows an increase in temperature to accelerate to a point of no return. There is a window of allowable controller gains. Also realized is that too small an integral time will cause overshoot and can lead to a reset cycle. Almost completely unrealized is that too large an integral time will result in a sustained overshoot of the setpoint that gets larger and more persistent as the integral time is increased for integrating processes. Hence a window of allowable integral times exists. Equation 4a provides the right size of integral time for integrating processes. If we substitute Equation 3 into Equation 4a, we end up with Equation 4b, a common expression for the integral time for maximum disturbance rejection. Equation 4a is extremely important because most integrating processes have a controller gain five to 10 times smaller than allowed. The coefficient in Equation 4b can be decreased for self-regulating processes as the deadtime becomes larger than the open loop time constant (τo), estimated by Equation 5.
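Equation 3 is stated outright in the text; the form of Equation 4a used below (Ti = 4/(Kc·Ki)) is an inference from the description, and substituting Equation 3 into it gives Ti = 8 times the deadtime as the Equation 4b result:

```python
def shortcut_tuning(ki_per_s: float, deadtime_s: float):
    """Shortcut tuning for near-integrating processes (sketch).

    kc: Equation 3 - half the inverse of the product of the integrating
        process gain and the total loop deadtime.
    ti: assumed form of Equation 4a, ti = 4 / (kc * ki); with the kc
        above this reduces to ti = 8 * deadtime (Equation 4b).
    """
    kc = 0.5 / (ki_per_s * deadtime_s)
    ti = 4.0 / (kc * ki_per_s)
    return kc, ti

# Ki = 0.01 1/sec and 10 s of deadtime give kc = 5 and ti = 80 s.
kc, ti = shortcut_tuning(0.01, 10.0)
```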
The tuning used for maximum load rejection can be used for an effective and smooth setpoint response if the setpoint change is passed through a lead-lag. The lag time is set equal to the integral time, and the lead time is set approximately equal to ¼ the lag time.
For startup, grade transitions, and optimization of continuous processes and batch operations, setpoint response is important. Minimizing the time to reach a new setpoint (rise time) can in many cases maximize process efficiency and capacity. The rise time (Tr) for no output saturation, no setpoint feedforward, and no special logic is the inverse of the product of the integrating process gain and the controller gain plus the total loop deadtime. Equation 6 is independent of the setpoint change.
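Equation 6 as described in words (no output saturation, no setpoint feedforward, no special logic) is:

```latex
T_r = \frac{1}{K_i\,K_c} + \theta_o
```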
Fast changes in controller output can cause oscillations from a slow secondary loop or a slow final control element. The problem is insidious in that oscillations may only develop for large disturbances or large setpoint changes. The enabling of the dynamic reset limit option and the timely external reset feedback of the secondary loop or final control element process variable will prevent the primary PID controller output from changing faster than the secondary or final control element can respond, preventing oscillations.
Aggressive controller tuning can also upset operations, disturb other loops, and cause continual crossing of the split range point. Velocity limits can be added to the analog output block, the dynamic reset limit option enabled, and the block process variable used as the external reset to provide directional move suppression to smooth out the response as necessary without retuning.
Differences in the closed loop responses of loops can reduce coordination, which is especially important for blending, and complicate the identification of models for advanced process control systems that manipulate these loops. Process nonlinearities may cause the response in one direction to be faster. Directional output velocity limits and the dynamic reset limit option can be used to equalize closed loop time constants without retuning.
Final control element resolution limits (stick-slip) can cause a limit cycle if one integrator exists in the loop; deadband (backlash) requires two or more integrators. The integrator can be in the process or in the secondary or primary PID controller via the integral mode. Increasing the integral time will make the cycle period slower but cannot eliminate the oscillation. However, a total suspension of integral action when there is no significant change in the process variable and the process is close to the setpoint can stop the limit cycle. The output velocity limits can also be used to prevent oscillations in the controller output from measurement noise exceeding the deadband or resolution limit of a control valve, preventing dither and further reducing valve wear.
Controllers can be tuned for maximum disturbance rejection by a unified method for the major types of processes. PID options in today's DCS, such as setpoint lead-lag, directional output velocity limits, dynamic reset limit, and intelligent suspension of integral action, can eliminate oscillations without retuning. Fewer oscillations reduce process variability, enable better recognition of trends, offer easier identification of dynamics, and provide an increase in valve packing life.
The post PID Controller Tuning Rules first appeared on the ISA Interchange blog site.
This article was authored by Greg McMillan, industry consultant, author of numerous process control books, 2010 ISA Life Achievement Award recipient and retired Senior Fellow from Solutia Inc. (now Eastman Chemical).
A version of this article also was published at InTech magazine.