Firewall-IPD and CIOCs

Hi all,

I just want to confirm something from an architecture perspective regarding CIOCs, Controllers, and Firewall-IPDs. The October 2017 white paper "Network Considerations for M-series with Electronic Marshalling" states the following:

"DeltaV Controllers and CIOCs that communicate among themselves must be in the same side of the same firewall - no Controller to CIOC communications shall go across the Firewall-IPD."

We will be using S-series hardware, but I assume the same still applies? Do we need to ensure that the CIOCs and any controllers they talk to are under the same Firewall-IPD?

Thank you for your insights!

Christiana Spencer

5 Replies

  • Hi Christiana,

    Your assumption is correct. CIOCs and Controllers (of any type) need to be under the same set of Firewall-IPDs. The reasoning comes from both a security principle and the risk of overloading the Firewall-IPD's resources. Security: control data exchanged between controllers and CIOCs exposed to the "unsecure" side of these firewalls would not be protected in case of a storm or similar issue. Overload: bursts of communication are common between CIOCs and Controllers, and Firewall-IPDs would not be able to handle such traffic gracefully.

    I hope this helps!

    Regards,

    Peixe
  • Yes Christiana, you are correct. IPDs work the same with M-series or S-series controllers, and all CIOCs should be on the protected side of the IPD. You can reference AK-1400-0078 for more information.
  • In reply to Alexandre Peixoto:

    Thank you!

    Christiana Spencer

  • In reply to Christiana Spencer:

    Christiana, when CHARMs were first released, there was a lot of debate about having the IO cards located on the DeltaV network rather than on a separate subordinate network. This was a disruptive technology. One feature of the CIOC is that it supports communication of IO signals to up to four controllers. One of the benefits this delivered was the ability to add additional controllers to an IO network of CHARMs, in other words expanding the control capacity, with no rewiring of IO signals to new IO cards on a new controller.

    When designing a system, you can consider the multiple process units, each requiring an IO network, i.e., a number of CIOCs. Each IO network is up to 16 CIOCs or 1536 physical IO, to which you can connect up to 4 controllers. You might combine some smaller IO networks under a single controller. But the idea is to have all the CIOCs of an IO network topologically connected to a switch that also has the controller connected to it. This same switch is uplinked to the rest of the network to communicate with consoles or other controllers. Effectively, this creates multiple IO networks and their controllers that operate independently on dedicated network switches.
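
    The sizing above works out arithmetically as follows; a minimal sketch, assuming the "1536 physical IO" figure implies 96 channels (CHARMs) per CIOC (1536 / 16) — the constant names are illustrative, not a DeltaV API:

    ```python
    # Sizing arithmetic behind "up to 16 CIOCs or 1536 physical IO" per
    # IO network. CHARMS_PER_CIOC = 96 is inferred from 1536 / 16; it is
    # an assumption drawn from this post, not a published specification.

    CIOCS_PER_IO_NETWORK = 16
    CHARMS_PER_CIOC = 96  # inferred: 1536 / 16

    max_io = CIOCS_PER_IO_NETWORK * CHARMS_PER_CIOC
    print(max_io)  # 1536
    ```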

    Another way of using the multiple-controller support of the CIOC was the IO Cloud concept. On paper, this says: don't worry about where your IO signals will be needed; connect your field signals to the nearest CIOC and electronically marshal them to the right controller. The problem here is that you can only assign a CIOC to up to 4 controllers. If you preassign your CIOC to four controllers, you lose the ability to add a controller to relieve CPU loading. There is also the fact that each controller supports 16 CIOCs. If you assign 1 or 2 signals of a CIOC to a controller, and you do this multiple times, you reduce the potential IO of that controller. One still needs to be mindful of how the signals from the CIOCs are consumed.
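
    The two limits above (a CIOC serves at most 4 controllers; a controller consumes from at most 16 CIOCs) can be sketched as a quick feasibility check. This is an illustrative sketch only — the function and data layout are hypothetical, not a DeltaV tool:

    ```python
    # Hypothetical check of the assignment limits discussed above:
    # a CIOC can serve at most 4 controllers, and a controller can
    # consume signals from at most 16 CIOCs.

    MAX_CONTROLLERS_PER_CIOC = 4
    MAX_CIOCS_PER_CONTROLLER = 16

    def check_assignments(assignments):
        """assignments: dict mapping CIOC name -> set of controller names."""
        problems = []
        controllers = {}
        for cioc, ctrls in assignments.items():
            if len(ctrls) > MAX_CONTROLLERS_PER_CIOC:
                problems.append(f"{cioc} assigned to {len(ctrls)} controllers")
            for c in ctrls:
                controllers.setdefault(c, set()).add(cioc)
        for ctrl, ciocs in controllers.items():
            if len(ciocs) > MAX_CIOCS_PER_CONTROLLER:
                problems.append(f"{ctrl} consumes from {len(ciocs)} CIOCs")
        return problems

    # Example: one CIOC spread across five controllers trips the limit.
    layout = {"CIOC-01": {"CTL1", "CTL2", "CTL3", "CTL4", "CTL5"},
              "CIOC-02": {"CTL1"}}
    print(check_assignments(layout))
    ```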

    When you introduce Firewall-IPDs to the network, you add some additional constraints to data flow, as Alexandre highlighted. With Electronic Marshalling, I consider my IO networks based on logical groupings by process units or trains. These IO nodes are collected to a "controller" switch, which may serve multiple controllers and multiple IO networks. This controller switch offers one uplink that can be protected with a firewall. The Controller Firewall supports up to 8 controllers, and this is based on unsolicited communications, which are limited to 4000 values per second for an SX, MX, PK, SZ or EIOC. This communication is exception based. You can have many more parameters communicated, but the exception values will be throttled to 4000 per second. From what I've seen, that level of data flow will run you around 1 MB per second of data. On the flip side, CIOC data is not by exception, and 1500 IO signals can consume several MB per second of bandwidth, depending on the CIOC update rate, the number of signals, and their type/direction.
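
    A back-of-envelope comparison of those two traffic patterns can be sketched as below. The ~250 bytes per value is an assumption inferred from "4000 values per second ... around 1 MB per second" above, not a published figure, and the real payload size varies with signal type:

    ```python
    # Rough bandwidth comparison from the figures in this thread:
    # unsolicited exception reporting is throttled to 4000 values/s
    # (observed at roughly 1 MB/s), while CIOC IO traffic is periodic.
    # BYTES_PER_VALUE is inferred from the post, not a specification.

    BYTES_PER_VALUE = 250  # assumption: 1 MB/s / 4000 values/s

    def unsolicited_mbps(values_per_sec=4000):
        return values_per_sec * BYTES_PER_VALUE / 1e6

    def cioc_mbps(num_signals, update_rate_ms):
        # Periodic traffic: every signal is sent each scan, changed or not.
        updates_per_sec = num_signals * (1000 / update_rate_ms)
        return updates_per_sec * BYTES_PER_VALUE / 1e6

    print(f"Unsolicited cap: {unsolicited_mbps():.1f} MB/s")
    for rate in (250, 100, 50):
        print(f"1500 IO at {rate} ms: {cioc_mbps(1500, rate):.1f} MB/s")
    ```

    With these assumptions, 1500 signals at a 50 ms scan come to several times the throttled unsolicited rate, which is consistent with the "several MB per second" observation above.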

    It just makes sense to have an entire IO network and its controllers located under the same "controller" switch. You can then add firewalls appropriately above the controllers, taking care to allow room for additional controllers if CPU capacity becomes an issue. Maybe what I'm saying is that your DeltaV system will have multiple IO Clouds, to each of which you will plug in up to four controllers to meet the CPU requirements of your control strategies. Each IO Cloud is like a dedicated physical network segment, acting very much like the dedicated subordinate network of traditional controller/IO topologies. All the IO communication remains local to this physical segment, existing only between IO nodes and the controllers consuming those signals. Your firewalls are used to isolate and protect these IO Clouds and their controllers.

    Remember that controller-to-controller communication can exist across firewalls. This communication is by unsolicited exception reporting. If an IO signal in one IO Cloud is needed by a controller in a different IO Cloud, it can be assigned to a local controller and then referenced by an AI, DI or external reference directly by its signal tag. The exception reporting will reduce the communication bandwidth. At a minimum, if you run the remote module at 1 second, updates will come every second by exception, rather than at the CIOC's 250, 100 or 50 ms update rate.
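
    The worst-case reduction from that slower scan is simple arithmetic, sketched here with the rates quoted above (no DeltaV API involved, just the ratio of scan rates):

    ```python
    # Worst-case update-rate ratio: a remote module scanned at 1 s sees
    # at most one exception update per second, versus the CIOC's native
    # 250, 100 or 50 ms scan. Pure arithmetic from the figures above.

    module_scan_s = 1.0
    for cioc_scan_ms in (250, 100, 50):
        factor = module_scan_s * 1000 / cioc_scan_ms
        print(f"CIOC at {cioc_scan_ms} ms -> up to {factor:.0f}x fewer updates")
    ```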

    Getting to Alexandre's comment about loading and security, there is no one-size-fits-all answer. What if there are only 2 controllers on each firewall, and only a handful of signals traversing above the firewalls? Would that work? The entire DeltaV network is cybersecure to a significant degree, and DeltaV switches provide network storm protection for broadcast and unicast traffic. Is it really forbidden to send a CIOC's IO data through a firewall or two? After all, if the firewalls were not there, the traffic would be entirely valid across the DeltaV network.

    The reason we install Firewall-IPDs is to increase the cybersecurity profile of controllers, which includes their IO communication. Closed-loop control involves the network switches that handle IO traffic. The firewalls increase the availability of this communication, as well as the cybersecurity profile of the controllers. They can prevent unapproved downloads to the CIOCs as well as the controllers.

    In my book, the best practice is to keep all IO traffic of CIOCs and RIUs on local physical network segments, creating an IO Cloud connected to its controller switch. IO data exchange between such IO networks or clouds should be done peer to peer between the respective controllers, allowing any IO data to be consumed by any controller in the system. Preferably, control modules should be located in the same IO Cloud as their IO signals. If an input signal is in a different IO Cloud than the output, place the module with the output signal and reference the input from the controller in the other IO Cloud. This keeps all CIOC and RIU traffic local to the controllers consuming them and sets the system up for the use of Firewall-IPDs. The only traffic traversing the firewalls will be unsolicited exception reporting, which is minimized by co-locating controllers under the same Firewall-IPD where there is significant peer-to-peer traffic.

    One man's opinion.

    Andre Dicaire

  • In reply to Andre Dicaire:

    Many thanks Andre, as always you are a wealth of information! This is very helpful, thank you.

    Christiana Spencer