DeltaV PK750 Controller - Perf_Index of 2 (Asset Monitoring)

Hello everyone,

     I have a DeltaV PK750 Controller and I cannot seem to find the latest documentation on Asset Monitoring for the PK Series Controllers. As you can see below, my current Perf_Index is 2 (a heavily loaded and supported condition). I am wondering whether there is a way of finding out how close I am to a Perf_Index of 1 (which is, of course, undesirable). Is there a way to use the rest of the data to gauge how close I am to a Perf_Index of 1?

Thank you in advance for your help,

Valentin Borsu

4 Replies

  • Right click on the controller and select Time Utilization. There are two tabs: one shows the indices and the other the % utilization.

    Andre Dicaire

  • In reply to Andre Dicaire:

    Thank you, Andre. Is there a way to gauge how many more modules can be downloaded to a controller using the %Free of Control Expansion? That would be appreciated.

  • In reply to Valentin Borsu:

    Yes and no. The loading is a function of module count, complexity and execution scan rate. If you extrapolate your configuration, and say you have 200 modules and 20% free Control Expansion, you could anticipate up to an additional 40 modules (200 * 1.20 = 240 in total). But if your new modules are skewed toward more complex modules or faster execution times, your actual count would be less; simpler or slower modules would allow more. You can also look at the IO count and ask yourself how much more IO you can add to the controller based on the mix of existing to new. The IO typically indicates the type of control, which is also an approximation of capacity.
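
    To make that extrapolation concrete, here is a quick Python sketch of the same arithmetic. The module count, free-expansion figure and complexity factor are made-up numbers for illustration, not values read from any controller:

        # Headroom estimate from %Free of Control Expansion (illustrative values only).
        current_modules = 200        # modules currently downloaded
        free_expansion_pct = 20      # %Free of Control Expansion reported by the controller

        # If the new modules resemble the existing mix, headroom scales with the free capacity.
        extra_same_mix = current_modules * free_expansion_pct / 100
        print(f"Comparable modules you could still add: about {extra_same_mix:.0f}")

        # If the new modules are heavier (more complex or faster scans), derate the estimate.
        complexity_factor = 1.5      # assumed ratio of new-module cost to an average existing module
        print(f"Heavier modules you could still add: about {extra_same_mix / complexity_factor:.0f}")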

    Each module has a parameter called EXEC_TIME, which indicates the CPU consumption of the module each time it executes. If you look at a series of similar modules (all created from the same class) you will see similar numbers, though some can be higher. Excluding the outliers, you can determine a loading factor for each module type. (The reason for the outliers is that the parameter measures time from the start to the end of the module's execution. Higher priority tasks may interrupt a module briefly to complete a time-sensitive task, and then come back to finish the logic in the module. This bumps the EXEC_TIME and is normal.) Understanding this, you can get a real sense of each module's complexity.
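
    If it helps, here is a small Python sketch of that idea: average the EXEC_TIME readings per module class after discarding the interrupted-scan outliers. The sample values and the 1.5x-median cutoff are my own assumptions for illustration:

        from statistics import median

        # EXEC_TIME samples (microseconds) collected per module class; values are invented.
        exec_times = {
            "PID_CLASS":   [850, 870, 860, 1900, 855],   # 1900 looks like an interrupted scan
            "MOTOR_CLASS": [420, 430, 415, 425, 900],
        }

        for cls, samples in exec_times.items():
            med = median(samples)
            # Treat anything well above the median as a scan pre-empted by a
            # higher-priority task and exclude it from the loading factor.
            typical = [t for t in samples if t <= 1.5 * med]
            loading_factor = sum(typical) / len(typical)
            print(f"{cls}: loading factor of about {loading_factor:.0f} us per execution")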

    A second parameter is ON_TIME, which indicates whether the module is executing at the expected frequency. The Performance Indices are intended to give a very general indication of controller health or loading on Control, Communications and Memory, and they roll up into the overall index. The % capacity numbers give you a more accurate view of consumed resources. The ON_TIME parameter tells you if your control logic is on time or slipping. The %On Time figure is a function of how many of the individual ON_TIME values are true versus false; ON_TIME is set FALSE if the module's execution scan is more than 10% longer than the configured time. When slippage occurs, you are truly overloaded, but even at that point module execution continues. Most control algorithms are unaffected by a little bit of slippage. The goal, however, is to have everything on time.
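
    The two rules in that paragraph (a module is late if its scan runs more than 10% over the configured time, and %On Time is just the share of TRUE flags) can be sketched in a few lines of Python. The module names and scan times below are invented:

        # Illustrative check: a module is late if its actual scan exceeds the
        # configured scan by more than 10%; %On Time is the share of TRUE flags.
        modules = {
            #  name      (configured_ms, actual_ms)
            "FIC-101":   (100, 104),
            "TIC-205":   (250, 280),   # 280 > 250 * 1.10, so this one is slipping
            "LIC-310":   (500, 501),
        }

        on_time = {name: actual <= configured * 1.10
                   for name, (configured, actual) in modules.items()}

        pct_on_time = 100 * sum(on_time.values()) / len(on_time)
        print(on_time)                                  # which modules are slipping
        print(f"% On Time: {pct_on_time:.0f}%")         # 67% in this made-up example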

    In your configuration, you can fine-tune the module execution rates to help manage the CPU. To free up CPU, target the most complex and fastest modules to get the maximum impact. For instance, complex modules running at 100 ms that collectively consume 10% of the CPU will drop to 5% if executed at 200 ms. Likewise, if all modules were set to the same scan time regardless of process response times, you have an opportunity to free up CPU without sacrificing any control response. Another option is to use the Block Scan Multiplier to reduce loading in complex modules that still need to execute fast but have logic that can run at a slower frequency, such as an integrator on a flow measurement. The integrator could live in a separate module (FQI) from the flow indicator (FI) and run at a slower rate; if it is in the same module, use the Block Scan Multiplier to run the integration at a slower frequency. Changing the module configuration is a bit more disruptive than adjusting scan rates, but those are the tools at your disposal, short of ensuring your overall configuration is efficient.
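
    A rough sketch of that scan-rate arithmetic in Python; the percentages, periods and multiplier are illustrative assumptions, not measurements from a real controller:

        # CPU share scales with how often a module executes, so stretching the
        # period frees CPU proportionally. Values are illustrative only.
        cpu_share_pct = 10.0        # CPU consumed by a group of complex 100 ms modules
        current_period_ms = 100
        new_period_ms = 200

        new_share = cpu_share_pct * current_period_ms / new_period_ms
        print(f"{current_period_ms} ms -> {new_period_ms} ms drops those modules "
              f"from {cpu_share_pct:.0f}% to {new_share:.0f}% of the CPU")

        # Same idea for a Block Scan Multiplier: run one block (e.g. an integrator)
        # only every Nth execution of its module.
        block_scan_multiplier = 4
        integrator_share_pct = 2.0
        print(f"Integrator share with a multiplier of {block_scan_multiplier}: "
              f"about {integrator_share_pct / block_scan_multiplier:.1f}%")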

    If you run out of Control CPU, modules will typically still be on time at 0% Control Capacity, but then you don't know if you are at 0, -1, -10, etc. The controller allocates CPU to various tasks, but tasks can borrow from each other's allocation if it isn't being used. So at 0% capacity, you have no indication of how close you are to impacting module execution schedules. When the ON_TIME percentage drops below 100%, you are slipping some modules by more than 10%: you are full and should back off.

    If you reach 0% capacity, you should adjust scan rates to get back above 0, which is the FULL mark, so that you know you are full but not overloaded. Modules can fluctuate in CPU load due to exception logic, sequences, etc. Depending on your configuration, you may want to establish a floor for the average loading: consider the minimum %capacity observed over time as your true loading. If you dip from 20% to 10% periodically, then 10% should be your minimum normal loading, giving room to dip to 0 (note that a move from 20% to 10% free is an increase in loading of 10/80, about 12.5%). So your minimum target might really be around 12%. It's not that critical, as there is some extra CPU beneath 0%; you just should not be at 0% most of the time. Never being at 0% makes everybody happy.
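
    A worked version of the 20% to 10% example, in case the arithmetic is useful (free capacity is what the controller reports; loading is simply its complement):

        # Free capacity dips from 20% to 10%, so loading rises from 80% to 90%.
        free_before_pct = 20
        free_after_pct = 10

        load_before = 100 - free_before_pct     # 80% loaded
        load_after = 100 - free_after_pct       # 90% loaded

        relative_increase = (load_after - load_before) / load_before * 100
        print(f"Loading rose from {load_before}% to {load_after}%, "
              f"a relative increase of about {relative_increase:.1f}%")   # ~12.5%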

    Overloading the CPU of a controller is an age-old problem. Every new controller has delivered more CPU and/or memory for the IO capacity it was designed for, and in every release users manage to consume all the CPU. This was a driving force in the development of CHARMS IO. With the SX and SDPLUS/SQ controllers capable of 1500 or 750 IO, the limiting factor in design was the CPU. With conventional IO, if you ran out of CPU, any unused expansion IO was orphaned in place; if CPU ran out before all required control was implemented, a redesign of the IO was required to split it to another controller. That is why CHARMS allows up to 4 controllers to reference an IO card's channels (CHARMS). Rather than designing the IO to the controller, you design the IO to the CIOC IO network. Then, if controller CPU loading becomes a problem, you can add a controller to the designed IO without changing any wiring or IO cabinet designs, just adding a controller to the network.

    With the PK controller, the processor capacity was increased by a factor of 4 relative to the SX (and MX) controllers. There was also a change in philosophy: all four PK sizes have the same CPU and memory. A small PK100 or PK300 supports a reduced number of IO but can draw on the same CPU capacity, supporting a small, super-fast controller for a dedicated application (up to 25 ms scan rates using local IO cards). With CHARMS, you can choose to have two PK750s handle 1500 IO, which gives you double the CPU of a PK1500. It also allows for double the number of IO nodes if needed.

    In the end, if you are using CHARMS, you have the option of leaving your control execution settings as they are and adding a controller to provide more CPU capacity for the installed IO of the CIOCs. You can choose to add a PK100 or PK300 or whatever fits. The new controller will also allow additional CIOCs to be added in the future, and all of this comes without any redesign of your existing IO.

    Andre Dicaire

  • In reply to Andre Dicaire:

    Thank you, Andre. That is very insightful. I had a chance to meet you at Spartan University in Calgary last year, and I recall Tom Grusendorf spoke highly of you, too. Wishing you all the best.