Calc block or multiple logic/math blocks?

Hi,

I've read elsewhere that one should avoid using CALC blocks when they're not required (mainly because of the impact on controller loading and module execution priority). At what point would a CALC block be better than multiple logic & math blocks? How can I determine the 'better' method? I haven't figured out how to launch the Load Estimator utility in Simulate...

As an example, I want to hold the last input value if the status is bad, but I also want the output value to have the same status as the input value. Which method is better in this case? A CALC block.... or a CND, XFR, ACT & some math block?

Thanks.

  • That is always a good question. Sometimes the Calc block gives a lot of flexibility that is difficult to achieve with FBs. I would say to compare exec times with both options. We created soft AI blocks using a calc block years ago, for reasons similar to your use case. We did a lot of testing, made multiple copies of each option and trended the exec times just to make sure we weren't creating a problem.
  • In reply to Michael Moody:

    When doing this testing and using the EXEC_TIME parameter, you need to make sure that the module executes at one of the high priority scan rates (100 ms, 200 ms, 0.5 sec) to get the most accurate number.

    Action and Calc blocks can actually be used to decrease the load in situations where function blocks don't need to execute. Nesting logic and not running code that isn't required (something that isn't available when using Function Blocks, which always execute) will decrease the loading.
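
    For example, wrapping the work in an IF so it only runs when needed (a rough sketch; RUNNING and RESULT are just example parameter names):

    IF '^/RUNNING.CV' THEN
    '^/RESULT.CV' := IN1 * IN2;
    ENDIF;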
  • In reply to Matt Stoner:

    How accurate is the Load Estimator in this case? Building the OP's custom module in the estimator shows a significant difference in favor of the custom build. However, the description is for a 4x4 block with a "large conditional expression". I wouldn't consider the OP's expression to be large.

    What portion of the loading is due to number of IO and size of expression?
  • In reply to TreyB:

    I think the Load Estimator can't really be used here, because the classification into complex or simple expressions is not clear. We don't know how long a function, an assignment etc. takes. Skipping bigger parts of an expression with IF/THEN/ELSE looks like it saves time, but a bigger expression also takes longer to load. As far as I understand, the expression is translated into a pseudo code, similar to a BASIC interpreter, and interpreted every cycle. I'm not sure that is right, but that is how it looks to me. You can find these expressions in the download files as such a P-Code.
    I often use Calc or Action blocks, but sometimes, if memory is not an issue, you might also consider using an SFC state machine.
    Anyway, to get the most efficient coding, you must use an empty controller, create as many copies of your code as possible and check the free time (M-series), or for S-series the load status, memory and exec time.
    My opinion (which may not be right) is that Emerson got so much trouble from customers demanding enough free time after commissioning that they now hide this useful free-time parameter.

    In the above example from the initial question, I would consider using an Input Selector and setting the option to select the next good input.
    I did this in combination with an ALM block to create a Soft-AI with Substitute, Simulate in EU and Hold Last Value options, easier than I could do with a calc block. As a composite, it can be used for any AI input from an AI-FB, AIFF-FB, PIN or Modbus register, and the best part was that simulation was always possible without hardware (especially FF) for FAT testing.

    Anyway, a long-standing request of mine is to be able to compile a complete composite, including all expressions, into machine code and create a new FB from it directly, which would also be a safe way to hide your know-how.
  • In reply to Matt Stoner:

    I was told by our service people, back in an earlier version (DV2++), that if you open a module in Control Studio Online, that module is moved to a middle priority. I'm not sure whether that is still the case today, but if it is, then the exec time should be checked with Watch-It instead of CS Online.
    Do you know more about this? You are usually very well informed ;-)
  • Hi Fo,
    can you explain why you want to hold the last good value but set the status to bad?
    If you need this, I would consider using the uncertain status constant (I think the status value would be 67).
    Some blocks can be configured to treat such an input signal either as bad or as good, but it still shows that something is wrong with the input.
  • In reply to Michael Krispin:

    Yes I use Watchit to watch the EXEC_TIME values because you can set the read times to 100 ms and also trend them from that application. CS update rates won't be good for this type of testing.
  • Lots going on in this thread.

    For this specific application, I would use a single CALC block, replacing the CND and ACT blocks, and setting both the value and Status of my OUT parameter wired to the SUB FB.

    The decision to use a CALC block or some explicit Function Blocks should weigh two things: Code Simplicity (for understanding) and code efficiency. Some slightly less efficient code is not a bad thing if the implementation is also easy to understand. Also keep in mind how often this code is run.

    In this case, one CALC block can combine the expressions in the CND and ACT blocks, with a simple IF/THEN/ELSE to replace the XFR block. Add some comments in the expression to explain the logic. As for the logic, use OUT1.ST := IN1.ST to keep the same status on the signal, and to hold the value:

    rem Hold the last good value when input status is BAD
    IF IN1.ST = BAD THEN
    OUT1.CV := OUT1.CV;
    ELSE
    OUT1.CV := IN1.CV;
    ENDIF;

    rem Pass Input status to Output status
    OUT1.ST := IN1.ST;

    Instead of calling this CALC1, call the block HOLD_VAL or whatever. Now your Module drawing shows a wired-in and wired-out signal, and you can add a comment that explains, "This block holds the last Good value if the status goes BAD".

    To me, having a clean and understandable signal path in the drawing is important for long-term ease of understanding, especially for those who will read this for the first time in 5 or 10 years.

    You could also place this inside a Composite block, with an input parameter and an output parameter, named as you like rather than IN1 and OUT1 of the CALC. If you reference the parameters rather than wire them in the Composite, that keeps it most efficient:

    IF '^/IN_VAL.ST' <> BAD THEN
    '^/OUT_VAL.CV' := '^/IN_VAL.CV';
    ENDIF;

    '^/OUT_VAL.ST' := '^/IN_VAL.ST';

    The Composite block would have those two parameters defined as input and output, as Float with Status. Now the block in the diagram is actually informative as to its function. Note I trimmed out the OUT = OUT assignment when BAD. Most efficient code, except the output value will be 0 until the IN status goes good; then it will reflect IN_VAL. So I would go back to my first expression and add the ELSE OUT_VAL := OUT_VAL, as sketched below. Sometimes we can get too "efficient".
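
    Putting that together, the composite expression would look something like this (same parameter names as above; adjust to your own naming):

    rem Hold the last value when the input status is BAD,
    rem and pass the input status to the output
    IF '^/IN_VAL.ST' <> BAD THEN
    '^/OUT_VAL.CV' := '^/IN_VAL.CV';
    ELSE
    '^/OUT_VAL.CV' := '^/OUT_VAL.CV';
    ENDIF;

    '^/OUT_VAL.ST' := '^/IN_VAL.ST';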

    My rule is to use wired connections when you want to show signal flow through the Control Module. This is required when linking function blocks. Where CND, ACT and CALC blocks are concerned, internal or external references are more efficient than wiring a parameter to an IN or OUT connector. The wired connection means the data is copied from one memory location to the other; the direct reference in the expression avoids that step. So when it makes sense in terms of clarity to understanding the function of the module, use wires. When it does not help, use the more efficient references. In the example above, if we create a composite, you might wire the IN_VAL parameter to the IN1 connection of the CALC block, and again on the output. This shows the signal path in the composite. But now you move the value from the origin to the IN_VAL parameter and then again to IN1. Since this is a simple composite that is pretty much self-explanatory, and you plan to use this thing hundreds of times, skip the wires inside the composite.

    As compared to the CND, ACT and XFR blocks, and the looped wire data copy, a single CALC will be more efficient. Note that by using internal references in the composite, the composite will use the same CPU as the CALC alone. It will add some additional memory usage, but the CPU would be the same.

    To see the difference in CPU between these two approaches would be extremely hard, and I would not bother, not in this case. My design would be a Composite with a simple CALC block and a succinct description in the composite. (I'd probably need help with the "succinct" part.) The value of the design, readability of the module at the top level, is more important for maintainability. We could place the CND, XFR and ACT blocks in a composite and gain the same benefits of readability at the top level, but it would be a bit harder to explain why the XFR is looped back and to explain the extra logic blocks. This becomes a personal preference.

    As for the Load Estimator, I would say it is accurate for all the basic function blocks, but the expression-based blocks are all dependent on the actual expression. So it cannot provide any insight into the loading from these blocks or SFCs. It is useful to gauge the relative load of a configuration, based on the number of modules and their execution rates, but you need to "calibrate" it to your specific module designs. The latest version allows you to define your custom modules and to add a compensation value to refine the CPU usage per block.

    You could spend a lot of time trying to dial that in, but as mentioned, if CALC blocks and ACT blocks are conditionally run, you need to figure out the worst case loading and then figure out if that is even likely for all the modules. It gets into diminishing returns.

    What I would like to see is the DeltaV Database providing the Load Estimate for a module based on the CPU usage of each block in the module. Then, like the Load Estimator tool, provide a compensation value we can tweak as we determine the impact of the customized expressions and such. That way, the database could provide a Load Estimate based on the actual configuration and you would be able to compare that to the actual controller loading. Then, if you adjust module scan rates or block execution scan times, you'd have an estimated benefit of the changes without having to download. And you would not have to manually build your controller loading modules in a separate tool. That would be useful, in my opinion.

    In the meantime, here's what I do. In Excel with the DeltaV Add-In, I read the Module EXEC_TIME and PERIOD parameters. Divide the EXEC_TIME by the PERIOD to get a relative load per second. (PERIOD is the scan rate in seconds, so 100 ms = 0.1. That module has 10 times more CPU usage than an identical module running at a 1 second rate.) Now sum up the weighted load. In a perfect world, that will add up to a total execution time that is proportional to the diagnostic value for CPU loading. But it will not. You see, the EXEC_TIME includes any interrupt time during a module execution. It is simply the difference between when the module started and when it completed. If that is interrupted for a higher priority task, that scan's exec time will be skewed.
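
    For example (illustrative numbers only): a module with an EXEC_TIME of 2,000 microseconds running at a PERIOD of 0.5 s contributes 2,000 / 0.5 = 4,000 microseconds of execution per second, while the same module at a 0.1 s PERIOD would contribute 20,000 microseconds per second. Sum those per-second figures across all modules to get the relative load you then compare to the controller's reported CPU usage.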

    EXEC_TIME is therefore subject to error. If you download 20 identical modules with the same scan time to a controller, you might find that some of them report an EXEC_TIME that could be double or triple the other modules. That indicates the ones with higher exec time were interrupted. Disregard these as you determine the actual loading of the module.

    Or, accept that module interrupts happen and "calibrate" the Excel sheet loading to the reported diagnostic. Average your module loadings; that becomes a percentage of the overall loading, and it gives you the loading in % CPU for your modules.

    When using a module class library, I would say each class module should have its CPU and memory impact documented. As Matt explained, create as many modules of a given class as you can and assign them to a controller. Run them at 100 ms and strive to get the CPU consumption as high as you can. The higher the better; maybe add more modules. Then divide the CPU usage by the number of module executions per second, and that is your CPU usage per execution. If you use EXEC_TIME and throw out the skewed times, you will get a similar result. But since overall loading is in % CPU, the EXEC_TIME in microseconds is only useful if you know the total CPU time allocated for control. In pre-v14 M-series, that was either 65 or 70%, with a caveat that you maintain a minimum of 20% FREETIME, which is about 13 to 14% of total CPU. That was confusing. In S-series, in v12, and now in all controllers in v14, % CPU capacity is expressed from 100% to 0%, but we still don't know how much actual time that is.
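
    As an illustration (made-up numbers): if 200 copies of a class module running at 100 ms, i.e. 2,000 executions per second, drive the controller to roughly 40% CPU, then each execution costs about 0.02% CPU. That works out to about 0.2% CPU per module at a 100 ms rate, or about 0.02% per module at a 1 second rate.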

    To summarize, the Load Estimator is useful to gauge future controller loading, but it needs to be calibrated with actual controller loading to account for variability in the expressions and some complex FB options. Once you have reasonable module loads defined, you can input the quantity of them at various execution rates to get a decent estimate of load. For the S-series CHARMs, the IO load consumes a separate CPU allocation, and running all CIOCs at 50 ms with HART-enabled channels could exceed the allocated communications load. So the Load Estimator can be useful to make sure you are not overloading the control and communications load of S-series and M-series controllers. With PK controllers in v14, the CPU capacity is 4 times that of the SX/MX controllers. They support 25 and 5 ms module execution, so if you really try, you could overload their CPU too. But you have to really try.

    Use EXEC_TIME to gauge module complexity, and, coupled with scan time (PERIOD), to gauge loading. But remember that EXEC_TIME can include interrupts, so be sure to verify across multiple instances of the same module class (or copies). Using Watchit to trend EXEC_TIME can also show it changing based on conditional logic in a module, and also the impact of interrupts. Ignore EXEC_TIME values that are much larger than those of the majority of the same class modules, and don't worry about them, as the interrupts are normal handling of things like IO data or a remote write to a parameter.

    And always consider diagram readability and understanding as well as code efficiency. Too much focus on one can be detrimental to the other.

    I know, that wasn't succinct at all...

    Andre Dicaire

  • In reply to Andre Dicaire:

    Thanks for your comprehensive response and the extra nuggets to think about and try, I really appreciate it!
  • In reply to Michael Moody:

    Thanks, I hadn't thought of using the EXEC_TIME before.
  • In reply to Lily G:

    The description you provided raises a number of questions I am hoping you can help me with:
    1. Is there a document that actually describes proper techniques (style guide) for calc blocks to ensure efficiency? (e.g. your information about references being better than using inputs?)
    2. Are the INx/OUTx keywords more or less efficient than the reference syntax?
    3. We run into the CALC block character limits on occasion. Using references burns a lot of additional characters. Does that factor into your approach?
    4. Using references means that a parameter being written to cannot be prevented from manipulation by user code (e.g. instance configurable calculation that runs later in the execution order), whereas an OUTx parameter can be more tightly controlled. Do you weight that very heavily?

    Thanks!
  • For CPU load I normally do not fully rely on EXEC_TIME. Creating, assigning and downloading 256 instances into a real controller and then checking FREETIM/FREEMEM is what I used to do to get a CPU load index for a particular config. After the replacement of FREETIM/FREEMEM by the Controller Performance indexes it is not so easy.

    A comparison I did in the past was 256 instances containing a single CALC with the simplest possible code (DUMMY := TRUE) and another set of 256 modules containing only one AI block. The result was that the CALC was more than three times heavier than the AI.

    Another thing to consider when choosing a CALC is download behavior. Function blocks guarantee that critical parameter values are kept but, with a CALC, this depends on the code. Basically, if the CALC code uses values from the previous scan or internal variables, then a download is potentially an issue. Because of this, the CPU load of CALC-based modules can be higher than expected, because some additional code needs to be added to manage downloads or switchover.

    Typically I avoid using a CALC to drive other modules. My preference is to use a CALC to perform a single calculation: based on the CALC INx inputs, calculate a set of values into the CALC OUTx parameters.

    Even when wired function block logic becomes complex, I evaluate whether putting it inside a COMPOSITE is more CPU-efficient than using a CALC.
  • In reply to gamella:

    In reply to gamella and Michael Krispin: S-series controllers, as well as the SZ, EIOC and PK controllers, and now MQ and MX in v14, use the QNX OS, which changed the diagnostic for control capacity from FRETIM to % Control Expansion. On the original release of this OS, in v12 on the SX and SQ controllers, this number was in the TELNET information, but it was promoted to an online parameter and added to a second tab of the "Time Utilization Chart". It replaced FRETIM, and unlike FRETIM, which has a minimum steady-state value of 20% (i.e. control capacity = 0), % capacity provides a full-scale 0 to 100% expansion value.

    The intent of the diagnostic index numbers was to simplify diagnostics and to emulate the Windows PC Health Index; it was not to obfuscate the CPU usage. After reviewing the usefulness of the % Expansion numbers for system planning and for bid spec requirements, these parameters were elevated to online view.

    Andre Dicaire

  • In reply to Brian Hrankowsky:

    Brian, I had to review my post, I should write shorter posts, but that takes more time...
    1. I am not aware of a style guide for Structured Text expressions. There are many nuances to expressions that are not documented, and over time behavior can change. I recall the old FST handbook that was created by a rep company to help PROVOX users with Logic Control Points. Something similar would be most useful. Structured text is much easier to use than the FST (assembler-like) structure, and BOL does a good job with Operators and Functions, but a style guide would be nice.
    2. I don't think so. The expression is "tokenized" on download. I've not checked whether the downloaded version changes if you use '^/OUT1.CV' or OUT1, but I don't think either puts any significant load on the expression.
    3. I've never run into this character limit. I like to break complex problems down into smaller pieces. I once converted a C program calculation for ethylene compressibility that required a converging loop. The original program also had 10 characterizing curves and one dynamic curve, which I implemented in Signal Characterizer blocks, and connected them together in a composite. Point is, I could have done it all in a calc block, but I laid it out and realized the signal flow worked with multiple blocks and the result was much more readable. You can solve some pretty complex algorithms using multiple CALC blocks to avoid the character limit. Just like the maximum number of loop iterations in a calc block prevents an infinite loop, the character limit provides break points in the code to allow the controller to run other tasks and prevents an expression from holding the CPU for too long.
    4. Combining parameters and multiple blocks into an Embedded or Composite block hides values from other code. You can also hide the content of a composite so users only see the block's exposed parameters. If you need to manipulate integers, use bitwise actions, or handle a string, these must use module parameters, as OUT, IN and internal variables are Float only. There is no limit to how someone can mess up a working configuration. The less moving of values you do, the less CPU is used. An expression can write to OUT1; you can then wire that to a parameter to expose the value, which moves the value within the module. If you write directly to that parameter, you save a wire. Or make the parameter an internal reference to the CALC OUT and save the wire. Or use a wire because it makes the module drawing so much clearer (see the sketch below). There's no one answer.
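
    A minimal sketch of those two options (IN_VAL and RESULT are just example parameter names):

    rem Option A: write to the block output, then wire OUT1 to a module parameter
    OUT1.CV := '^/IN_VAL.CV' * 2;
    rem Option B: write the module parameter directly from the expression and skip the wire
    '^/RESULT.CV' := '^/IN_VAL.CV' * 2;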

    As I said, you have to balance ease of understanding with efficiency of execution.

    Andre Dicaire

  • In reply to Lily G:

    Ah, EXEC_TIME. As many have indicated, this is a finicky parameter. It is simple in nature, as it measures the time it takes a module to execute. This provides a measure of complexity for a module: the more time it takes, the more CPU it uses. But if it were so simple, then why does everyone have a different way of using it? Why is it that if I download 50 identical modules, some of them will show an EXEC_TIME that can be 10 times greater than the others?

    EXEC_TIME is simply the difference in time from start to finish of a module's execution. However, modules share the CPU with many other tasks, including IO processing, communications, redundancy updates and an array of housekeeping tasks that run at the same or higher priority than the control thread. The EXEC_TIME calculation is not aware of these interruptions.

    Another impact is that the exposed value is based on multiple samples and is conservative in nature, reflecting the longer times. This is why, out of fifty identical modules downloaded to a controller, some of them will show significantly higher EXEC_TIME, due to longer interrupts on some modules depending on when they are scheduled.

    Testing the modules for their CPU load is most useful so that you can predict the loading of a controller based on the various module classes used in the configuration. Download 50 instances at 100 ms and check the CPU FRETIM or % Control Expansion. You now have 500 executions of this module running each second. That gives you a good basis to determine the % load that module design will require.

    When evaluating an existing configuration, EXEC_TIME can give you relative complexity numbers, but you still should evaluate them statistically and with an understanding of the module designs. Identifying the more complex modules, coupled with their scan times, can help identify the candidates where adjusting module scan time would have the greatest impact on loading. This is useful for heavily loaded controllers where additional control modules may be needed, or in a migration where the controllers do not meet the recommended loading limits for migration.

    In my opinion, EXEC_TIME is not that useful to predict loading; use a testing process to determine the % load due to x number of modules for that. For existing configurations, comparing EXEC_TIME across all modules in a controller can reveal opportunities to increase FRETIM / % Control Expansion. Do not use EXEC_TIME to compare these numbers across different controllers.

    Andre Dicaire