Ver 13.3 | Application Station | OPC DA Server | Read/Write Limitations

DeltaV 13.3

Our application station has three OPC DA servers running on it, including the built-in DeltaV OPC DA server. The other two are for systems that we need to exchange data with.

We use Kepware as a client that reads data from the two DA servers and then writes this data to the DeltaV OPC server. (We also write data from DeltaV back to these servers, but that seems to work fine.)
This is accomplished using linked tags in Kepware's Advanced Tags plug-in.


We are reading around 2500 tags from each of the two systems.  The issue is that we frequently get write failures to the DeltaV OPC DA server.

I came across this post today: RE: OPC DA clients number limit

Could the 2K Writes per second limitation be the cause of our troubles?
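As a rough sanity check on that question, here is a back-of-the-envelope estimate (a sketch; the once-per-second update rate per tag is my assumption, not a measured figure) of how two 2500-tag sources could exceed a 2000 writes/second server limit:

```python
# Rough write-load estimate for the DeltaV OPC DA server.
# Assumption (hypothetical): each linked tag changes about once per second,
# so every change triggers one write through the Kepware client.

TAGS_PER_SOURCE = 2500      # tags read from each third-party DA server
SOURCES = 2                 # two third-party systems
UPDATE_RATE_HZ = 1.0        # assumed change rate per tag (writes/sec each)
SERVER_LIMIT_WPS = 2000     # write-per-second figure quoted in the linked post

required_wps = TAGS_PER_SOURCE * SOURCES * UPDATE_RATE_HZ
print(f"required: {required_wps:.0f} writes/s, limit: {SERVER_LIMIT_WPS}")
print("over limit" if required_wps > SERVER_LIMIT_WPS else "within limit")
```

If even a fraction of the tags change each second, the aggregate rate can brush up against the limit, which would be consistent with intermittent rather than constant failures.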

8 Replies

  • I'm quite sure the reason is not a performance issue. The OPC protocol mechanism is robust enough if your Kepware client is configured in asynchronous mode.
    On the other hand, if you are in asynchronous mode, the issues can come from:
    - insufficient license capacity (number of licensed tags);
    - client connections being released too often. Clients must keep the connection continuously alive, because on each reconnection the tag enumeration can take a long time to re-establish.
    I suggest you use DeltaV OPC Mirror instead of Kepware; it is more robust, easier to use, and supported by Emerson Guardian Support, so hotfixes are available and it works in tandem with the DeltaV OPC server.
  • In reply to LaurentB:

    Thanks for the reply.
    I can confirm that our client is using Asynchronous writes.

    I am doing some testing on a spare application station (live system) and have found that the writes begin to fail fairly quickly.
    The Application Station has a 5K scale-up and I am currently testing 800 points.
    I set up a control module with 800 input parameters: 200 FLOATs, 200 INTs, 200 UINTs, and 200 BOOLs.
    When enabling the 800 writes per second, I immediately started getting a scrolling list of failed writes.
    If I remove the 200 FLOATs and 200 UINTs, the system recovers and I no longer have any failures.
    Does the 2000 writes per second limit come with an asterisk, i.e., is it data-type dependent?
    Or maybe it is "up to" 2000 writes per second.

    I'll do some research on the client connection duration.

    I am hesitant to use OPC Mirror since that is what we used several years ago and found it to be fraught with issues. Perhaps newer versions are hardened, but our original experience soured us on it.


    In the communication settings for Kepware, I reduced the Max items per write from 512 to 400 and increased the write timeout from 1000ms to 2500ms.
    So far I am a little over 600K writes and 0 failures with all 800 points.

    Going to ramp this up to 1600 and see how it does.
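One way to see why that tuning helps (a simplifying model of my own, not documented Kepware behavior): shrinking the batch while lengthening the timeout roughly triples the time the server is allowed per item before the client declares the batch failed.

```python
# Per-item time budget inside one async write batch (simplified model,
# not Kepware's documented internals): timeout divided by batch size.

def ms_per_item(write_timeout_ms: float, max_items_per_write: int) -> float:
    """Worst-case processing time allowed per item in one write batch."""
    return write_timeout_ms / max_items_per_write

before = ms_per_item(1000, 512)   # original settings: ~1.95 ms/item
after = ms_per_item(2500, 400)    # tuned settings: 6.25 ms/item
print(f"before: {before:.2f} ms/item, after: {after:.2f} ms/item")
print(f"headroom gained: {after / before:.1f}x")
```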

  • In reply to Invalid String:

    How are the destination parameters configured? That is, where are the modules located to which the OPC Client is writing?

    OPC continuous writes to DeltaV should be made to Control Modules located/assigned to the Application Station hosting the OPC Server. From there, these values are available to consoles and controllers alike via the unsolicited exception reporting of DeltaV communications. If the module is assigned to a controller, the OPC write forces a communication to the controller, which must process and respond as to the success of the write.

    A manual write to a given parameter is handled much like an operator parameter change, so there is no issue writing to a controller-based module for such infrequent actions. But bulk data transfer between servers should land the data in local modules. The guideline is to have no more than 20 writes per second into a controller. The 2000 writes per second is contingent on the modules being assigned to the Application Station, and on not involving peer-to-peer writes across the control network.
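That guideline can be expressed as a trivial check (numbers taken from the reply above; the function name is my own):

```python
# Hedged sketch of the write-budget guideline from the reply above:
# ~20 writes/s into a controller-hosted module, up to 2000 writes/s into
# modules hosted on the Application Station that runs the OPC server.

GUIDELINE_WPS = {"controller": 20, "application_station": 2000}

def fits_guideline(writes_per_second: float, destination: str) -> bool:
    """True if the planned OPC write load fits the guideline for that host."""
    return writes_per_second <= GUIDELINE_WPS[destination]

# The 800 writes/s test case from earlier in the thread:
print(fits_guideline(800, "controller"))           # prints False
print(fits_guideline(800, "application_station"))  # prints True
```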

    Andre Dicaire

  • In reply to Andre Dicaire:

    The destination parameters are in modules that are currently running on redundant MX controllers.
    I will move it to the local application station and continue testing.

    Some quick back story for you:
    We initially ran the modules on the application stations; however, the occasional server reboot (mostly unplanned!) would disrupt module execution (magenta screens).
    We moved the modules to MX controllers to isolate the module execution and make server reboots more transparent to the operators. We also added a pair of dedicated redundant network switches between the application stations and the MX controllers. The goal was to have load sharing/redundancy with this architecture.

    Is there a difference in peer to peer communications in the following scenarios?
    -application station to controller
    -application station to application station

    What if we dedicated one application station to running the modules and two other application stations for load sharing/redundant communication with our third-party OPC DA servers?
    How much of a performance penalty would we take from this approach?
    Guesstimate answers are welcome.
  • In reply to Invalid String:

    After moving the module to the local application station, the performance is significantly better.
    Currently at 262 million writes with 46 failures. I can live with that!

    I would still like to explore running all of the modules in a single application station and then writing to it from a different application station. This way, if my application station running the client needs to be rebooted, my modules will continue executing. Either way, the initial problem is solved.

    Thanks for taking the time to reply with such a detailed and helpful answer.
  • In reply to Invalid String:

    Additionally, you can use a Redundant Application Station to host/run the modules. You do have to use the OPC Remote node (OPC Remote is redundancy-aware). This is where you should run Kepware.
  • In reply to Lun.Raznik:

    Thanks for this, it is something we will certainly be evaluating.