Calc block or multiple logic/math blocks?

Hi,

I've read elsewhere that one should avoid using CALC blocks when they're not required (mainly because of the impact on controller loading and module execution priority). At what point would a CALC block be better than multiple logic and math blocks? How can I determine the 'better' method? I haven't figured out how to launch the Load Estimator utility in Simulate...

As an example, I want to hold the last input value if the status is bad, but I also want the output value to have the same status as the input value. Which method is better in this case? A CALC block.... or a CND, XFR, ACT & some math block?

Thanks.

  • That is always a good question. Sometimes the Calc block gives a lot of flexibility that is difficult to achieve with FBs. I would compare exec times for both options. We created soft AI blocks using a calc block years ago for reasons similar to your use case. We did a lot of testing, made multiple copies of each option, and trended the exec times just to make sure we weren't creating a problem.
  • Lots going on in this thread.

    For this specific application, I would use a single CALC block, replacing the CND and ACT blocks, and setting both the value and Status of my OUT parameter wired to the SUB FB.

    The decision to use a CALC block or explicit Function Blocks should weigh two things: code simplicity (for understanding) and code efficiency. Slightly less efficient code is not a bad thing if the implementation is also easy to understand. Also keep in mind how often this code runs.

    In this case, one CALC block can combine the expressions of the CND and ACT blocks, with a simple IF THEN ELSE to replace the XFR block. Add some comments in the expression to explain the logic. As for the logic, use OUT1.ST := IN1.ST to keep the same status on the signal, and hold the value like this:

    rem Hold the last good value when input status is BAD
    IF IN1.ST = BAD THEN
       OUT1.CV := OUT1.CV;
    ELSE
       OUT1.CV := IN1.CV;
    ENDIF;

    rem Pass Input status to Output status
    OUT1.ST := IN1.ST;

    Instead of calling this CALC1, call the block HOLD_VAL or whatever. Now your Module drawing shows a wired-in and wired-out signal, and you can add a comment that explains, "This block holds the last Good value if status goes BAD."

    To me, having a clean and understandable signal path in the drawing is important for long-term ease of understanding, especially for those who will read this for the first time in 5 or 10 years.

    You could also place this inside a Composite block, with an input parameter and an output parameter, and name these as you like rather than IN1 and OUT1 of the CALC. If you reference the parameters in the expression rather than wire them, the Composite stays most efficient:

    IF '^/IN_VAL.ST' <> BAD THEN
       '^/OUT_VAL.CV' := '^/IN_VAL.CV';
    ENDIF;

    '^/OUT_VAL.ST' := '^/IN_VAL.ST';

    The Composite block would have those two parameters, as input and output, as Float with Status. Now the block in the diagram is actually informative as to its function. Note I trimmed out the OUT = OUT when BAD. Most efficient code, except the OUT value will be 0 until the IN status goes good; then it will reflect IN_VAL. So I would go back to my first expression and add the ELSE OUT_VAL := OUT_VAL. Sometimes we can get too "efficient".
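
    Putting that ELSE back in, a sketch of the combined expression inside the Composite might look like the following (same IN_VAL and OUT_VAL parameters as above; treat it as an illustration rather than tested code):

    rem Hold the last good value when input status is BAD
    IF '^/IN_VAL.ST' = BAD THEN
       '^/OUT_VAL.CV' := '^/OUT_VAL.CV';
    ELSE
       '^/OUT_VAL.CV' := '^/IN_VAL.CV';
    ENDIF;

    rem Pass Input status to Output status
    '^/OUT_VAL.ST' := '^/IN_VAL.ST';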

    My rule is to use wired connections when you want to show signal flow through the Control Module. This is required when linking function blocks. Where CND, ACT and CALC blocks are concerned, internal or external references are more efficient than wiring a parameter to an IN or OUT connector. A wired connection means the data is copied from one memory location to the other; a direct reference in the expression avoids that step. So when it makes sense in terms of clarity of understanding the function of the module, use wires. When it does not help, use the more efficient references. In the example above, if we create a composite, you might wire the IN_VAL parameter to the IN1 connection of the CALC block, and again on the output. This shows the signal path in the composite. But now you move the value from the origin to the IN_VAL parameter and then again to IN1. Since this is a simple composite that is pretty much self-explanatory, and you plan to use this thing hundreds of times, skip the wires inside the composite.

    As compared to the CND, ACT and XFR blocks, and the looped wire data copy, a single CALC will be more efficient. Note that by using internal references in the composite, the composite will use the same CPU as the CALC alone. It will add some additional memory usage, but the CPU would be the same.

    Seeing the difference in CPU between these two approaches would be extremely hard, and I would not bother, not in this case. My design would be a Composite with a simple CALC block and a succinct description in the composite. (I'd probably need help with the "succinct" part.) The value of the design, the readability of the module at the top level, is more important for maintainability. We could place the CND, XFR and ACT blocks in a composite and gain the same benefits of readability at the top level, but it would be a bit harder to explain why the XFR is looped back and what the extra logic blocks do; this becomes a personal preference.

    As for the Load Estimator, I would say it is accurate for all the basic function blocks, but the expression-based blocks are entirely dependent on the actual expression, so it cannot provide any insight into the loading from these blocks or from SFCs. It is useful to gauge the relative load of a configuration, based on the number of modules and their execution rates, but you need to "calibrate" it to your specific module designs. The latest version allows you to define your custom modules and to add a compensation value to refine the CPU usage per block.

    You could spend a lot of time trying to dial that in, but as mentioned, if CALC and ACT blocks run conditionally, you need to figure out the worst-case loading and then figure out whether that is even likely across all the modules. It gets into diminishing returns.

    What I would like to see is the DeltaV Database providing a Load Estimate for a module based on the CPU usage of each block in the module. Then, like the Load Estimator tool, provide a compensation value we can tweak as we determine the impact of customized expressions and such. That way, the database could provide a Load Estimate based on the actual configuration, and you would be able to compare that to the actual controller loading. Then, if you adjust module scan rates or block execution scan times, you'd have an estimated benefit of the changes without having to download, and without having to manually build your controller loading modules in a separate tool. That would be useful, in my opinion.

    In the meantime, here's what I do. In Excel with the DeltaV Add-In, read the Module EXEC_TIME and PERIOD parameters. Divide EXEC_TIME by PERIOD to get a relative load per second. (PERIOD is the scan rate in seconds, so 100 ms = 0.1. That module has 10 times the CPU usage of an identical module running at a 1 second rate.) Now sum up the weighted load. In a perfect world, that would add up to a total execution time proportional to the diagnostic value for CPU loading. But it will not. You see, EXEC_TIME includes any interrupt time during a module execution. It is simply the difference between when the module started and when it completed. If the module is interrupted by a higher priority task, that scan's exec time will be skewed.
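
    To put illustrative numbers on that weighting (invented for the arithmetic, not measured values): a module with an EXEC_TIME of 800 microseconds and a PERIOD of 0.1 s contributes 800 / 0.1 = 8,000 microseconds of execution per second, while the same module at a 1 second PERIOD contributes only 800 microseconds per second. Summing that weighted figure across all modules gives the relative load you can compare against the controller's reported CPU loading.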

    EXEC_TIME is therefore subject to error. If you download 20 identical modules with the same scan time to a controller, you might find that some of them report an EXEC_TIME double or triple that of the other modules. That indicates the ones with the higher exec times were interrupted. Disregard these as you determine the actual loading of the module.

    Or, accept that module interrupts happen and "calibrate" the Excel sheet loading to the reported diagnostic. Average your module loadings, take each as a percentage of the overall loading, and that gives you the loading in % CPU for your modules.
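
    For example (again with invented numbers): if the summed weighted exec time across the controller is 300,000 microseconds per second and the diagnostic reports 40% CPU, then roughly 7,500 microseconds per second of weighted exec time corresponds to 1% of CPU, and each module's contribution can be scaled by that factor to express it in % CPU.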

    When using a module class library, I would say each class module should have its CPU and memory impact documented. As Matt explained, create as many modules of a class as you can and assign them to a controller. Run them at 100 ms and strive to get the CPU consumption as high as you can; the higher the better. Maybe add more modules. Then divide the CPU usage by the number of module executions per second, and that is your CPU usage per execution. If you use EXEC_TIME and throw out the skewed times, you will get a similar result. But since overall loading is in % CPU, the EXEC_TIME in microseconds is only useful if you know the total CPU time allocated for control. In pre-v14 M-series, that was either 65 or 70%, with the caveat that you maintain a minimum of 20% FREETIME, which is about 13 to 14% of total CPU. That was confusing. In S-series, in v12, and now in all controllers in v14, % CPU Capacity is expressed from 100% down to 0%, but we still don't know how much actual time that is.
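
    As a purely illustrative example of that arithmetic (numbers invented, not measurements): load 200 copies of a class module at 100 ms, which is 2,000 module executions per second. If the reported CPU usage rises by 20% with those copies running, each execution costs roughly 20% / 2,000 = 0.01% of CPU, or about 0.1% of CPU per module at the 100 ms rate; at a 1 second rate the same module would cost about a tenth of that.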

    To summarize, the Load Estimator is useful to gauge future controller loading, but it needs to be calibrated with actual controller loading to account for variability in the expressions and some complex FB options. Once you have reasonable module loads defined, you can input the quantity of them at various execution times to get a decent estimate of load. For the S-series CHARMs, the IO load consumes a separate CPU allocation, and running all CIOCs at 50 ms with HART-enabled channels could exceed the allocated Communications load. So the Load Estimator can be useful to make sure you are not overloading the Control and Communications loads of S-series and M-series controllers. With PK controllers in v14, the CPU capacity is 4 times that of the SX/MX controllers. They support 25 and 5 ms module execution, so if you really try, you could overload their CPU too. But you have to really try.

    Use EXEC_TIME to gauge module complexity and, coupled with scan time (PERIOD), to gauge loading. But remember that EXEC_TIME can include interrupts, so be sure to verify across multiple instances of the same module class (or copies). Using Watchit to trend EXEC_TIME can also show it change with the conditional logic in a module, as well as the impact of interrupts. Ignore EXEC_TIME values that are much larger than those of the majority of the same class of modules, and don't worry about them; the interrupts are normal handling of things like IO data or a remote write to a parameter.

    And always consider diagram readability and understanding as well as code efficiency. Too much focus on one can be detrimental to the other.

    I know, that wasn't succinct at all...

    Andre Dicaire