DeltaV Continuous Historian

Hello experts, 

I need your help and advice with the queries below.

1) In the Continuous Historian, once the specified historian database size is reached, the oldest current dataset gets exported to the configured path on the hard disk. But if the hard disk itself fills up after a period of time, what will happen?

2) We can define the active and current history database sizes, but can we define a limit on the hard disk space used by the exports?

6 Replies

  • This is why the Continuous Historian has a 10GB limit on dynamic datasets, so that the Hard Disk does not get full unexpectedly, which would result in a loss of history collection and possibly cause other programs on the computer to be impacted.

    The Disk space requirements are driven by the amount of data you are collecting. You can't ask the software to change the amount of disk space it uses. You should consider adding a larger Hard Disk to the computer, or you can move the exported Datasets to a different computer.

    The exported data sets can be used to create Extended Datasets, which will allow access to their contained data, keeping an unlimited amount of data available to the operator (limited by available disk space). If you are creating these extended data sets, you should remove the Exports from the server by moving them to a backup location, leaving the maximum disk space for online available data.

    Are you using compression in your data collection? If not, you should, as it will greatly reduce the disk space needed to store the same amount of history information. Some users want to store values at a prescribed period so that they don't "miss" any data. A compression bandwidth of 0 ensures every change from a straight line is collected, which is a significant benefit for setpoints and other values that change infrequently. Since every signal carries measurement error from the transmitter, the data is not any more accurate just because you store it every scan. Add a small deadband (0.1 %) and increase your history collection capacity. For critical data, you can also force a value as often as every 15 minutes, so you avoid a long gap if the historian is shut down abruptly.
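    The deadband idea above can be sketched as a simplified filter (a Python illustration only; this is not DeltaV's actual compression algorithm, and the signal, span, and 0.1 % deadband below are made-up values):

```python
from typing import List

def deadband_compress(samples: List[float], span: float, deadband_pct: float) -> List[float]:
    """Keep a sample only when it deviates from the last stored value
    by more than deadband_pct of the instrument span."""
    if not samples:
        return []
    threshold = span * deadband_pct / 100.0
    stored = [samples[0]]
    for value in samples[1:]:
        if abs(value - stored[-1]) > threshold:
            stored.append(value)
    return stored

# A noisy but essentially flat signal, scanned 1000 times: with a
# 0.1 % deadband on a 0-100 span, only the first scan is stored.
noisy = [50.0 + 0.01 * ((i % 3) - 1) for i in range(1000)]
print(len(deadband_compress(noisy, span=100.0, deadband_pct=0.1)))  # prints 1
```

    In this sketch a signal that only wiggles inside the deadband collapses to a single stored sample, which is the disk-space effect a deadband gives you on setpoints and slow-moving values.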

    Your best bet is to coordinate the removal of unneeded Export files with your backup strategy. Once the files are backed up, delete them from the database. How much disk space are you using? Replacing or adding a D: drive of 600 GB or larger is an easy hardware fix that moves the problem out a few years.

    Andre Dicaire

  • In reply to Andre Dicaire:

    It looks like the version of SQL system manager installed on Emerson machines is limited to 10.4 GB of data. Can a full version of SQL be installed to prevent this limit from being reached? That way, in future versions, we wouldn't be dividing data into 10 GB packages but could instead specify the export file size we want based on drive capacity. I can easily fill up 10 GB in one day.
  • In reply to Petrisky:

    To Petrisky:
    Absolutely not. The DeltaV Continuous Historian is not a SQL database; it has its own engine.
    To learn how to manage datasets better, please refer to Books Online and search for "Continuous Historian database administration".
  • In reply to Andre Dicaire:

    Thanks for the reply, Andre.

    I totally agree with the solution you have mentioned, and my explanation to clients is usually the same as yours. And yes, I did use the data compression facility.
    But the solution we are giving needs constant human intervention and vigilance; an error or oversight by the plant maintenance team will result in a bigger issue.
    What I was hoping for is a solid, foolproof automatic solution such as the following:

    1) The ability to allot a maximum amount of hard disk space to exported datasets. For example, my hard disk's full capacity is 1000 GB, and of that, only 400 GB would be used by the exported datasets.
    2) Suppose that, after some period of time, the exported datasets reach 400 GB. At that point, the newest dataset would automatically replace the oldest one.

    AMIT JAIN
  • In reply to AmitJ:

    I'd say your only avenue is to own this by having a scheduled script that evaluates diskspace and removes the oldest Exported Datasets so that there is always room on the hard disk for the next exported dataset.
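    As a rough sketch of such a script (the export folder path, the free-space threshold, and the "oldest file first" policy here are assumptions for illustration, not DeltaV specifics; point it at wherever your exports are configured to land):

```python
import shutil
from pathlib import Path

# Hypothetical settings: adjust to your system.
EXPORT_DIR = Path(r"D:\DeltaV\Exports")  # assumed export location
MIN_FREE_BYTES = 20 * 1024**3            # keep at least 20 GB free

def prune_oldest_exports(export_dir: Path, min_free: int) -> list:
    """Delete the oldest exported dataset files until min_free bytes
    are available on the drive; returns the names of files removed."""
    removed = []
    # Oldest files first, by modification time.
    for path in sorted(export_dir.glob("*"), key=lambda p: p.stat().st_mtime):
        if shutil.disk_usage(export_dir).free >= min_free:
            break
        if path.is_file():
            removed.append(path.name)
            path.unlink()
    return removed

if __name__ == "__main__" and EXPORT_DIR.exists():
    for name in prune_oldest_exports(EXPORT_DIR, MIN_FREE_BYTES):
        print(f"removed {name}")
```

    Run it from Task Scheduler on the historian server, and make sure anything it deletes has already been backed up, per the backup coordination mentioned above.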

    The max size of the historian applies to the Active and Current Datasets. If you do not convert exports to Extended, then the Historian's max disk space usage is limited to 10 GB, or less based on the configuration of the Historian properties. Since Extended Datasets are manually created, the user can manually remove the oldest datasets to avoid filling the disk.

    DeltaV diagnostics provides the free disk space on the drive containing the DVDATA folder (the parameter name is WS_NAME/FREDISK). You can monitor this value (in megabytes) and set an alarm when disk space is getting too low; the action is to perform some administration on the exports. You could build a module assigned to the Historian and log changes into the Event Chronicle, say a log event for every 100 MB change, so you avoid trending this. Set the alarm at something like 1 GB or 500 MB, whatever gives you time to go free up space; I'd say two or three times the size of a typical export file.

    Andre Dicaire
