
Virtual Q&A: Process Control >> REPLY to THIS POST w/ QUESTIONS.

Thank you to our esteemed experts and to everyone who submitted questions for the Live Q&A, both in the EE365 Community and onsite! You'll find almost 2 hours of Process Control GENIUS in the video below. The Experts welcome your continued questions and comments in this forum thread. Enjoy!

[Video: bcove.me]

Questions submitted in this forum will be answered LIVE on Friday, October 16th, at 8:00am MDT in the Meet the Experts: Process Control session at the Emerson Exchange Global Users Conference in Denver, Colorado. Please reply to this Forum Thread with your Questions.

Seven experts will solve the challenges of using traditional and advanced control techniques to optimize plant operations & host a Live Q&A with the Emerson Exchange 365 Community. These renowned experts include:

Jim Cahill summarized the experts' answers in real time as best as possible. A full video recording is now available so that you can hear the complete answers from the experts themselves.

Q: (Tyler Franzen): When is it appropriate to use a FLC in place of a PID? Is it simply a matter of preference or are there significant benefits to one or the other in certain scenarios?

A: (James Beall): One situation where the FLC does well is a "slow, lag-dominant" process where it is desired to achieve an "as fast as possible" response to SP changes without overshoot. Examples of this type of process include continuous and batch reactor temperature control.

Q: (Tyler Franzen): I noticed in Books Online that FF_LEAD, FF_LAG, and FF_DT all show up as topics if you search "Feedforward". However, I cannot find a function block that actually has these parameters available. Is it only possible to get dead time compensation for a feedforward with a DT block, and is it only possible to get lead and/or lag compensation for a feedforward with an LL block? If that is the case, why do these parameters show up in Books Online?

A: (James Beall): I did not find the parameters FF_LEAD, FF_LAG, and FF_DT. However, the method to implement dynamic elements in the feedforward path is to place the DT and LL blocks in the path of the FF signal. If you search on "feedforward dynamic compensation" you will see an example of the LL block in the FF path. Note that one misunderstood parameter related to FF is FF_SCALE. The value coming into the FF_VAL input of the PID block is converted to a 0-100% basis based on FF_SCALE, and the result is internally limited to -10% to 110% of FF_SCALE. So, consider an example where FF_SCALE is left at its default value of 0-100% and the FF_VAL is typically in the range of 1000-1500 lbs/hr: the %-based FF signal is internally limited to 110%, and thus there is no actual FF action!
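For readers who want to see the mechanics, here is a minimal Python sketch of the two ideas above: the FF_SCALE percent conversion with its -10% to 110% clamp, and a lead/lag plus deadtime placed in the FF path. This is illustrative only, not DeltaV block code:

```python
from collections import deque

def ff_percent(ff_val, scale_lo=0.0, scale_hi=100.0):
    """Convert FF_VAL (engineering units) to the %-basis the PID uses,
    then apply the internal -10%..110% limit described above."""
    pct = 100.0 * (ff_val - scale_lo) / (scale_hi - scale_lo)
    return max(-10.0, min(110.0, pct))

print(ff_percent(1200.0))             # FF_SCALE left at 0-100: pins at 110.0
print(ff_percent(1200.0, 0, 2000))    # proper 0-2000 lb/hr scale: 60.0

class LeadLag:
    """Backward-Euler discretization of (lead*s + 1)/(lag*s + 1)."""
    def __init__(self, lead, lag, dt):
        self.lead, self.lag, self.dt = lead, lag, dt
        self.y = 0.0
        self.u_prev = 0.0

    def update(self, u):
        self.y = (self.lag * self.y + self.lead * (u - self.u_prev)
                  + self.dt * u) / (self.lag + self.dt)
        self.u_prev = u
        return self.y

class DeadTime:
    """Pure delay of n samples, the role a DT block plays in the FF path."""
    def __init__(self, n):
        self.buf = deque([0.0] * n)

    def update(self, u):
        self.buf.append(u)
        return self.buf.popleft()

ll = LeadLag(lead=60.0, lag=20.0, dt=1.0)   # lead > lag speeds up FF action
print(round(ll.update(1.0), 3))             # first sample of the step response
```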

Q - Deadtime Compensation: I have a control loop with a long dead time. How would I compensate for this in DeltaV?

A: A modified Smith Predictor block can be used, as well as a model predictive controller block. A one-by-one (1x1) model predictive control block can be used to replace the PID block. There was broad agreement among the panel members on the 1x1 MPC approach.
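For readers unfamiliar with the Smith predictor idea, here is a minimal simulation sketch (Python, illustrative values only; not the DeltaV block): the PI controller is fed the undelayed model output plus the model error, so it can be tuned as if the deadtime were not there.

```python
class FOPDT:
    """First-order-plus-deadtime process, simple Euler integration."""
    def __init__(self, gain, tau, deadtime, dt):
        self.gain, self.tau, self.dt = gain, tau, dt
        self.y = 0.0
        self.delay = [0.0] * int(round(deadtime / dt))

    def step(self, u):
        self.delay.append(u)
        u_delayed = self.delay.pop(0)
        self.y += self.dt / self.tau * (self.gain * u_delayed - self.y)
        return self.y

DT = 1.0
plant      = FOPDT(gain=2.0, tau=30.0, deadtime=20.0, dt=DT)  # "real" process
model_fast = FOPDT(gain=2.0, tau=30.0, deadtime=0.0,  dt=DT)  # model, no delay
model_slow = FOPDT(gain=2.0, tau=30.0, deadtime=20.0, dt=DT)  # model, with delay

kc, ti = 0.5, 30.0            # PI tuned against the deadtime-free model
u = integral = 0.0
sp = 1.0
for _ in range(400):
    y = plant.step(u)
    # Smith predictor: undelayed model output plus the model mismatch
    y_pred = model_fast.step(u) + (y - model_slow.step(u))
    e = sp - y_pred
    integral += e * DT
    u = kc * (e + integral / ti)

print(round(plant.y, 2))      # settles near 1.0 without deadtime-induced cycling
```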

Q - MPC Blocks: What is the difference between the Predict and PredictPro blocks, and in what type of application would I still use them?

A: Predict is the first model predictive controller product. It has basic functionality with a heuristic controller and a very simple optimizer, and only 4 manipulated-variable outputs. It's a good product for basic applications. PredictPro handles much larger control applications with an LP optimizer, and multiple control objectives can be set. A user in the audience recommended just starting with PredictPro to avoid any limitations of Predict.

Q - Secondary time constant: Why do I need to estimate it, and when does it become important? I have seen that this could be used as the setting for the derivative term. Is this correct?

A: As far as PID control goes, a first-order-plus-deadtime model is about all one should concern oneself with. From a field standpoint, James finds the second-order model helpful for getting more stable control and stable tuning. The EnTech Toolkit helps identify the higher-order dynamics and provides more robust tuning for the derivative term of the PID block. Terry noted it is discussed in the book Control Loop Foundation.
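For reference, the commonly quoted Lambda (IMC) tuning for a second-order-plus-deadtime model does exactly what the question suggests. For a series-form PID (standard textbook form; verify against your system's PID convention before using):

$$G(s)=\frac{K\,e^{-\theta s}}{(\tau_1 s+1)(\tau_2 s+1)} \quad\Rightarrow\quad K_c=\frac{\tau_1}{K(\lambda+\theta)},\qquad T_i=\tau_1,\qquad T_d=\tau_2$$

Here $\lambda$ is the desired closed-loop time constant, and the derivative time $T_d$ is set to the secondary time constant $\tau_2$, which is the practice referenced in the answer above.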

Q - MPC Penalties: What should I understand by "Penalty on Move" and "Penalty on Error"? Is there a way to calculate these?

A: The penalty on move affects how aggressive the movement of a manipulated variable is; the smaller the value, the more aggressive. The penalty on error assigns relative priority to the controlled variables, indicating which should be more important in the model than another. The base case is to start with everything at 1. So these are ways to tune the movement and priority of the values in the model; they tune the response of the MPC controller's outputs. Lou says the penalty on move is the first thing he looks at, and if a change needs to be made, to start with a factor-of-2 change. Penalty on error should also be changed by at least a factor of 2. Willy noted that the reference trajectory also adjusts the response.

Best Regards,

Rachelle McWright: Business Development Manager, Dynamic Simulation: U.S. Gulf Coast

  • When is it appropriate to use a FLC in place of a PID? Is it simply a matter of preference or are there significant benefits to one or the other in certain scenarios?
  • I noticed in Books Online that FF_LEAD, FF_LAG, FF_DT all show up as topics if you search "Feedforward". However, I cannot find a function block that actually has these parameters available. Is it only possible to get dead time compensation for a feedforward with a DT block, and is it only possible to get lead and/or lag compensation for a feedforward with an LL block? If that is the case, why do these parameters show up in Books Online?
  • In reply to tyler franzen:

    Tyler,
    One situation where the FLC does well is a "slow, lag-dominant" process where it is desired to achieve an "as fast as possible" response to SP changes without overshoot. Examples of this type of process include continuous and batch reactor temperature control.
    James
  • In reply to tyler franzen:

    Tyler,
    I did not find the parameters FF_LEAD, FF_LAG, and FF_DT. However, the method to implement dynamic elements in the feedforward path is to place the DT and LL blocks in the path of the FF signal. If you search on "feedforward dynamic compensation" you will see an example of the LL block in the FF path. Note that one misunderstood parameter related to FF is FF_SCALE. The value coming into the FF_VAL input of the PID block is converted to a 0-100% basis based on FF_SCALE, and the result is internally limited to -10% to 110% of FF_SCALE. So, consider an example where FF_SCALE is left at its default value of 0-100% and the FF_VAL is typically in the range of 1000-1500 lbs/hr: the %-based FF signal is internally limited to 110%, and thus there is no actual FF action! Hope this helps. Let me know if you have further questions.
    James
  • In reply to James Beall:

    Thanks to James Beall, one of the featured panelists, for giving us a "taste" of the expert insights to come on Friday, Oct 16th beginning at 8am MDT. Keep the great questions coming...
    Jim Cahill will be Live Blogging the remaining questions & answers that are added to this forum during Friday's Meet the Experts: Process Control panel.

    *We will also be filming the entire 90-minute session & making it available exclusively to EE365 Community members later that day!

    Best Regards,

    Rachelle McWright: Business Development Manager, Dynamic Simulation: U.S. Gulf Coast

  • Here is my recap while live-blogging this session in real-time. Apologies in advance for any errors or omissions.

    Reduce Energy for Distillation - James Beall

    James opened by discussing the importance of a solid control foundation. He noted that this provides some of the best ROI for improvement projects. Advanced process control (APC) in particular provides significant results. Some typical expected results include:

    • 4-8% increase in throughput
    • 5-10% reduction in energy costs
    • 2-8% reduction in product inventories
    • 40-80% reduction in quality variation
    • 1-5% increase in equipment availability

    He stressed the importance of taking a holistic approach:

    • Understand process objectives
    • Understand impact of measurement devices and final control elements (e.g. valves)
    • Review control strategies
    • Utilize advanced regulatory control (Cascade, Feed Forward, …)
    • Choose appropriate PID options (e.g. Form, Structure, etc.)
    • Tune loops with a coordinated response to maximize process (vs. loop) performance
    • Eliminate variability at the source where possible
    • Move variability to less harmful places with control techniques

    For a distillation column, loop interactions are typically present. Manual control had shown that steam usage could be reduced by 25% and still meet product specs, but the automatic control scheme became unstable as the energy was reduced.

    The solution was to test the control valves and recommend improvements, using Emerson's EnTech Toolkit (InSight) to determine process dynamics and loop interactions. The loops were then tuned for a coordinated, non-interacting response using Lambda tuning. The results included reducing reflux by 27% and steam usage by 25%, with a project payback of 3 months.
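    For readers unfamiliar with Lambda tuning, here is a minimal sketch of the standard first-order-plus-deadtime PI rule it is built on (Python, illustrative numbers only; not the EnTech Toolkit's actual output):

    ```python
    def lambda_tuning_pi(k_p, tau, theta, lam):
        """Lambda (IMC) PI tuning for a first-order-plus-deadtime process.

        k_p   : process gain
        tau   : process time constant (s)
        theta : process deadtime (s)
        lam   : desired closed-loop time constant (s); larger = slower response,
                which is how loops are coordinated to avoid interaction
        """
        kc = tau / (k_p * (lam + theta))
        ti = tau
        return kc, ti

    # Hypothetical column loop: slow lambda on composition, fast on flow
    print(lambda_tuning_pi(k_p=1.5, tau=60.0, theta=10.0, lam=180.0))
    ```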

    Control Performance Visibility and Awareness - Hydrogen Plant - Jay Colclazier

    Jay opened by discussing the problem of improving awareness and visibility of control performance. Most people would agree that control performance is important; however, it is usually difficult to quantify and often not very visible.

    The solution is to implement simple metrics and improve visibility through control performance reports and real-time KPI values. The team uses the Inspect portion of DeltaV Insight, focusing on control utilization metrics. Jay also noted that they used the built-in reporting functions, with results summarized and distributed weekly.

    For the real-time KPIs, they calculated process KPI values in real time inside a DeltaV control module:

    • Yield Values
    • Material Balances
    • Throughput measures
    • All heavily filtered to take out short-term variability

    These values were added to the Historian and a few of the KPIs were made visible on the DeltaV Operate screens for the operators.
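    As a sketch of what "heavily filtered" can mean in practice, here is a first-order exponential filter of the kind such a KPI module might apply (Python for illustration; the tag name and time constants are hypothetical, not from Jay's talk):

    ```python
    class HeavyFilter:
        """First-order exponential filter, e.g. a multi-hour time constant
        to strip short-term variability from a real-time KPI."""
        def __init__(self, tau_s, dt_s):
            self.alpha = dt_s / (tau_s + dt_s)
            self.y = None

        def update(self, raw):
            self.y = raw if self.y is None else self.y + self.alpha * (raw - self.y)
            return self.y

    # e.g. module executing every 10 s, 4-hour filter time constant
    h2_yield = HeavyFilter(tau_s=4 * 3600, dt_s=10)
    print(h2_yield.update(82.0))             # first sample passes through
    print(round(h2_yield.update(95.0), 2))   # a big raw jump barely moves the KPI
    ```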

    Jay shared an example of Steam/Methane Reformer and PSA KPIs:

    • Hydrogen Yield
    • PSA Yield
    • Reformer Material Balance
    • PSA Material Balance
    • Steam/BFW Material Balance
    • Unit Cost

    Wellpad Optimization - Warren Mitchell

    Warren opened by describing the production optimization challenge: across a field with hundreds of injector/producer pairs, into which wells, in what order, and when do you put the steam in order to produce the greatest volume of bitumen?

    Goals of the project included:

    • Automate and improve the ease of operating >100 wells/operator
    • Stabilize the producer well
    • Minimize the well subcool by effectively controlling the steam trap downhole
    • Optimize pad steam injection and emulsion production rates
    • Mitigate the impact of upstream disturbances
    • Improve the utilization of high-efficiency wells
    • Reduce production and steam injection variability to sensitive wells
    • Protect reservoir steam chambers from large swings in steam injection rates
    • Coordinate the well pad operation with separation and steam generation processes
    • Improve the consistency of how well pads are operated

    Here are some of the details around two of the solutions Spartan has developed for SAGD facilities that are having a big impact on operating sites today. The Spartan team has been working with SAGD producers since the first pilot sites got off the ground (20+ years now).

    Warren represents a group called 'Advanced Process Solutions' at Spartan, and it is their job to help process manufacturers and producers deploy advanced process control and information technologies to lower operating costs. The team typically gets involved after a plant has started up, to tune and then optimize its performance. They are seeing more and more producers begin to think about these issues during FEED and detailed engineering. There is much that can be done upfront to make sure the plant does not take years after start-up to reach steady-state conditions at nameplate capacity.

    Well pad optimization is just one component of site-wide optimization, which also includes steam header coordination, boiler load optimization, separation & recovery optimization, water treatment optimization, etc. Even though the ground-up implementation approach is critical to the success of these projects, they must be designed within the framework of a site-wide optimization strategy.

    The biggest thing that downhole instrumentation (P/T) gives the team is the liner subcool, a measure of the amount of fluid in the steam chamber above the producer well. The biggest risk of damage to SAGD wells is flashing steam across the liner, which occurs when the pressure of the fluid coming into the well drops to the saturation pressure for its temperature as it passes through the liner.

    At this point, the fluid undergoes a phase change from liquid to vapor (steam); velocity increases substantially and can wash out liners, compromising the ability to control sand. Liner subcool is calculated by taking the saturation temperature associated with the producing bottom-hole pressure and subtracting the producing bottom-hole temperature (this tells you how far you are from a saturated steam condition).

    Every 10 degC of liner subcool represents roughly 1 meter of fluid above the producing well. Optimal (most efficient) SAGD operation is at 0 degC subcool; however, because instrumentation is not 100% accurate, wells are not flat (they go up and down), and drawdown is not evenly distributed along the well path, the team targets a minimum of 10 degC liner subcool, having targeted 20 degC in the past due to the lack of reliable downhole P/T measurement.

    With better instrumentation, the team will push subcools lower than where they currently operate, which is conservative.
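    As a quick numerical sketch of the subcool calculation described above (Python for illustration; the saturation-temperature lookup and all numbers are hypothetical, not field data):

    ```python
    def liner_subcool(p_bh_kpa, t_bh_c, tsat_of_p):
        """Liner subcool = Tsat(producing bottom-hole pressure) - bottom-hole temp.
        tsat_of_p is a saturation-temperature lookup (steam tables / correlation)."""
        return tsat_of_p(p_bh_kpa) - t_bh_c

    def fluid_head_m(subcool_c):
        """Rule of thumb from the talk: every 10 degC of subcool is roughly
        1 m of fluid above the producing well."""
        return subcool_c / 10.0

    # Hypothetical numbers: Tsat(2500 kPa) ~ 224 degC, measured T_bh = 209 degC
    subcool = liner_subcool(2500, 209, lambda p: 224.0)
    print(subcool, fluid_head_m(subcool))   # -> 15.0 degC, ~1.5 m of fluid
    ```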

    Some qualitative results include:

    • Smooth, stable, reliable production
    • Decreased risk of flashing fluids across liners
    • Decreased risk of pump damage
    • Decreased risk of well and reservoir damage and abnormal steam chamber formation

    Some quantitative results include:

    • >5% increase in well pad production
    • ~45% decrease in reservoir subcool standard deviation
    • ~80% decrease in pump subcool standard deviation
    • ~90% decrease in ESP speed standard deviation

    Data Analytics in Process Optimization - Willy Wojsznis

    Willy opened by discussing optimization principles:

    • Developing a model
    • Defining optimization objectives, usually targeting increased profit by increasing production rate and saving energy and materials
    • Applying an optimization solution (LP, QP, NLP, …)

    Some reasons analytics may be used in process optimization include:

    • Developing deterministic (causal) models requires process testing, which is not always acceptable; analytic models are developed from historical data
    • Other objectives, such as minimizing production losses due to faulty process operation, are not framed in classical optimization; faulty operation can be detected by an analytic model and the losses minimized

    Willy noted that he has been exploring analytic quality prediction and fault detection techniques for distillation columns. Analytics can serve as a good backup for an on-line analyzer, and shows potential for substituting for the on-line analyzer for some quality parameters.

    He shared an example where an on-line analyzer malfunctioned over a period of one week. The analytic predictors, both PLS and NN, provided consistent quality indications; they helped the operations personnel avoid irrelevant operating actions and the engineering personnel diagnose the on-line analyzer fault.
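    The session did not share code or data, but for readers curious what a PLS soft sensor looks like in practice, here is a minimal sketch using scikit-learn on synthetic stand-in data (everything here is hypothetical):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # X: historical process data (temperatures, flows, pressures, ...)
    # y: lab or analyzer quality values aligned to the same timestamps
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))                 # stand-in for historian data
    y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.normal(size=500)

    pls = PLSRegression(n_components=3)
    pls.fit(X[:400], y[:400])                      # train on older history
    y_hat = pls.predict(X[400:]).ravel()           # soft-sensor prediction

    # Compare y_hat against the on-line analyzer; sustained divergence
    # flags either a process shift or an analyzer fault.
    print(np.corrcoef(y[400:], y_hat)[0, 1])       # high on this toy data
    ```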

    Another example was detecting abnormal conditions, where the on-line data and their relationships (i.e., correlations) differ from those used for analytic model development. Typical causes include a grade change not covered in the modeling data, new operating settings, and process malfunctions.

    Analytic quality prediction does not perform well under abnormal conditions, BUT analytic fault detection is an excellent way to capture process abnormality.

    Another example is validation of measurements and valve operation. Continuous data analytics may be used as a regular tool for loop diagnostics. This measurement validation is done by exploring trends from the data historian using control system tools and PCA-based fault detection, which is the easiest way to detect measurement or valve faults.
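    Again purely as illustration of the technique named above, here is a minimal PCA fault-detection sketch using the squared prediction error (Q statistic) with an empirical limit; the data and threshold are synthetic stand-ins:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(1000, 8))       # normal-operation historian data
    pca = PCA(n_components=3).fit(X_train)

    def spe(x):
        """Squared prediction error (Q statistic): distance from the PCA plane.
        Samples far off the plane break the normal correlation structure,
        pointing to a measurement or valve fault."""
        x_hat = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
        return float(np.sum((x - x_hat.ravel()) ** 2))

    limit = np.percentile([spe(x) for x in X_train], 99)   # empirical 99% limit
    x_new = X_train[0].copy()
    x_new[2] += 8.0                 # simulate a stuck/biased measurement
    print(spe(x_new) > limit)       # -> True: flagged as abnormal
    ```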

    Improving Equipment Performance - a Tale of Two Valves - Jim Coleman

    Jim opened by sharing a tale of two valves. Both "worked for years"; then came a change in requirements, and they went "belly up". The maintenance team said the valves were good, but process control was not acceptable and limited production.

    The ball valve caused the flow to oscillate, which produced unacceptable performance and slip-stick cycling. The fix was deadband reset scheduling, which works for 'slip-stick' cycling in self-regulating loops.

    For the globe valve on a pressure vessel, they needed a linear relationship between controller output and flow. The pressure control on the main vessel had never been tuned well, and its performance limited production. The measured flow vs. stem position was very nonlinear. They needed the PID to command flow, not stem position, since that combination is linear.

    The solution was to use DeltaV signal characterizer (SGCR) blocks to provide valve linearization in the loop. This linearization provided excellent performance over the full range and allowed a 50% increase in production rate.
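    A signal characterizer is essentially a piecewise-linear lookup table. Here is a minimal sketch of the linearization idea (Python for illustration; the curve data are hypothetical, not the actual valve's):

    ```python
    import numpy as np

    # Measured installed characteristic: stem position (%) vs. flow (% of max).
    stem_pct = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
    flow_pct = [0,  2,  5, 10, 18, 30, 46, 64, 80, 92, 100]   # hypothetical data

    def characterize(out_pct):
        """SGCR-style lookup: the PID now commands flow; this returns the stem
        position that delivers it, by interpolating the inverse of the curve."""
        return float(np.interp(out_pct, flow_pct, stem_pct))

    print(characterize(50.0))   # PID asks for 50% flow -> ~62% stem position
    ```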

    Application of APC to LNG Process - Lou Heavner

    Lou opened by describing the location of the LNG process in Africa; it delivered the most value of any project he has ever done. Lou described the steps in an advanced process control project cycle. It includes vendor selection, controller tuning, base data collection, step testing, model generation, post audit, project close-out, site acceptance test, commissioning, and model validation. A turnaround was performed to fix valves between commissioning and looping back to another site acceptance test.

    Control of the unit balancing required:

    • Valve operability
    • Effective control
    • Balancing compressor loads
    • Adjusting compressor loads for ship loading

    The result of applying APC was 19.53 m3/hr, or 162,500 m3/year, of additional production.

  • Additional Questions posed in the EE365 Community by member BC Spear, and answered by Willy Wojsznis, Senior Technologist & ISA Author:

    Q: Deadtime Compensation: I have a control loop with a long dead time. How would I compensate for this in DeltaV?

    A: Predict would be a good tool. Predict with one MV does not require a license.

    Q: MPC Blocks: What is the difference between the Predict and PredictPro blocks, and in what type of application would I still use them?

    A: Predict was the first DeltaV Model Predictive Control product, embedded into the DeltaV controller. (All competitive MPCs operate in a workstation.) The maximum configuration size for MPC-Predict is 8 inputs x 8 outputs, with a maximum of 4 manipulated variables. MPC-Predict uses a simplified optimizer, the "pusher"; the optimizer is a very essential part of MPC operation. In summary, it is a multivariable controller of limited size with simple optimization (maximize or minimize one of the parameters), good for replacing PID overrides and for simple multivariable control to SP targets.

    PredictPro supports configurations up to 40x80, is integrated with an LP optimizer, and offers the full range of configuration and optimization functionality.

    Best Regards,

    Rachelle McWright: Business Development Manager, Dynamic Simulation: U.S. Gulf Coast

  • In reply to tyler franzen:

    The theory suggests that a process with noise, like perhaps a level, would benefit from the nonlinear behavior of the FLC. Its nonlinear behavior is a lot like error-squared control, where the response near the setpoint is mild and far from setpoint is aggressive, although the nonlinearity is less extreme, making the FLC more robust than an error-squared PID. However, I have not personally noticed that advantage with noisy processes. It is an interesting control function and has been demonstrated to work well in some applications. The On-Demand Tuner in Insight gives good initial tuning but doesn't give you much ability to tailor the response. My colleague Mark Coughran has found that simulating the loop and using the simulation to study the tuning parameters can help identify even better tuning values.
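    For readers unfamiliar with the error-squared comparison, here is the generic textbook form of error-squared proportional action (illustrative only; this is not the DeltaV FLC algorithm):

    ```python
    def error_squared_p(kc, e, e_max=100.0):
        """Error-squared proportional action: effective gain grows with |e|,
        so control is mild near setpoint and aggressive far away -- the same
        flavor of nonlinearity the FLC applies, only more extreme."""
        return kc * (abs(e) / e_max) * e

    # Near setpoint (e = 2%) the output is tiny; far away (e = 40%) it is strong.
    print(error_squared_p(1.0, 2.0))    # -> 0.04
    print(error_squared_p(1.0, 40.0))   # -> 16.0
    ```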
  • Q: MPC Blocks: What is the difference between the Predict and PredictPro blocks, and in what type of application would I still use them?
    A: If you look at BOL, the examples for Predict are as replacements for advanced regulatory controls where feed-forward or override control is required, but there is no specific requirement for optimization. This is still the best kind of application for Predict. It can be difficult to properly design and implement a multivariable advanced regulatory solution, but it is quite straightforward with Predict. If no optimization is required, only feed-forward and/or override, it is an effective choice. Similarly, where optimization is not required but the process dynamics include a dead-time dominant response, Predict is a good choice.

    One example that worked well for me was a level control in a hopper. The hopper was fed granular material from multiple conveyors, and the conveyed solids were mixed in a rotary tumbler. The conveyor speeds were used to control the level, and the tumbler introduced a huge transport-delay deadtime. In this case, tight level control was required to avoid bridging of solids in the hopper (high levels) and insufficient packing on the discharge conveyor (low levels). PID feedback could not be tuned to keep the level in bounds due to the dead-time dominant dynamics, but Predict worked very well.

    Predict does allow you to "push" one variable, so an application like feed maximization might be a good candidate. Evaporators and dryers might be good examples, where a small (in terms of MVs and CVs) problem with potentially significant dead-times and a desire to maximize feed up to some constraint limits are part of the control objective. Split-range control is another advanced regulatory control strategy that can be effectively replaced with Predict. In the case where Predict (or PredictPro) outputs directly to the valve instead of a flow loop, you may want to consider using a characterizer to linearize the problem.

    PredictPro is obviously required for problems with more than 4 MVs, CVs, DVs, or constraints, since it can handle up to 40 process inputs and 80 process outputs. It can handle loop interactions and dead-time dominant dynamics like Predict, but where it really shines is applications that require constraint optimization. For many processes, different constraints will become active under different situations. In the example I used during the session, the objective was to maximize feed to the LNG plant. But to liquefy the feed gas, heat had to be rejected to the environment (ambient air). As ambient temperature changed, the amount of heat that could be rejected, and hence the amount of gas that could be fed, also changed. There were limits on pressures and temperatures within the plant, especially around the discharge pressures and temperatures of the refrigeration compressors and in the refrigerant reservoirs. We configured 30 process outputs, of which only 2 or 3 were CVs and the rest constraints, and only 5 MVs. According to degrees-of-freedom theory, with 5 MVs we could simultaneously push 5 constraint limits, and as conditions changed, so did the constraints we were pushing.

    The other nice capability of PredictPro is the ability to have multiple control objectives. For example, in the case of a distillation column splitting propane and butane, one day propane may be more valuable and the next day butane may be more valuable. It is possible to set up one objective to maximize propane recovery and another to maximize butane recovery. In a reactor, it may be desirable one day to run at high severity and another day it may be important just to maximize throughput. The control objectives allow you to do this by maximizing or minimizing selected MVs and/or process outputs. And now MPC Plus allows us to go a step further with "adaptive" MPC by adjusting the penalty factors based on operating conditions without the need to go offline or perform a download.
  • Q: MPC Penalties: What should I understand for "Penalty on Move" and "Penalty on Error"? Is there a way to calculate these?

    A: The POM is calculated for you based on steady-state gains and the process dynamics. It is generally a good starting point. During commissioning, you can monitor the action of the MVs. If one seems too aggressive, then increase the POM. If it does not appear to be aggressive enough and is doing little work, reduce the POM. This is a trial-and-error process, and in my experience, it is best to adjust the POM by a factor of 2 and observe the new response.

    There is an online parameter that can be used to slow down an aggressive MV: the maximum MV rate of change. This is intended to avoid a change that is too rapid and is entered as the maximum MV change per second. This is usually a very small number, and it may not be easily known what it should be. My rule of thumb is to ask an operator what is the maximum change they would make and how long they would wait before making another change. I divide the maximum change by the wait time and use that as the parameter value. For example, suppose the operator would never make a change greater than 60 gpm to a process flow and, after making a change that large, would wait at least 5 minutes to observe the response before making another change. Then the parameter value I would use is 60 gpm / (5 min x 60 sec/min), or 0.2 gpm/sec. During commissioning, it may become apparent that the MV is being limited by the online rate-of-change limit; this will be seen as an MV ramp. In this case, the POM is doing nothing, and you should consider increasing the POM of that MV for an overall better-performing controller.

    POE is defaulted to a value of 1. You shouldn't have to adjust this parameter in most applications. If one of the CVs is not tracking SP as well as you like, then reduce the POE. Increasing the POE will make the CV response slower and may be used in some processes like integrating processes to allow some float on the CV and thereby reduce some aggressiveness on the MVs. James Beall has done some study of POE and I would invite him to chime in here.

    There is an online tunable parameter, the setpoint trajectory, that can also be used to make the controller less aggressive with respect to a CV. The default is one time-to-steady-state (TSS), though the parameter value shown is 0. That means the controller will try to achieve the setpoint target in 1 TSS. Setting a value larger than TSS tells the controller to extend the time to target to the period entered. The parameter cannot be set to a value less than TSS.

    Another parameter that is not often mentioned, but which will have a significant impact on performance, is the price associated with optimized (MAX or MIN) variables in the LP optimizer. This is another parameter that may need to be determined by trial-and-error. Theoretically, if we knew the value of the variables (such as product and energy prices), we could enter them explicitly. But usually the optimized variables have no such intrinsic value, and the prices must be estimated. In my experience, when adjusting prices, it is best to do so by a factor of 10.