Extract Events from Event Chronicle SQL Instance

Hello,

I want to extract certain events from the Event Chronicle SQL server on a regular basis. My intent is to execute a query and pull that data into another database for report processing. I have been using the Excel reporter tool to perform this function on a timed basis, but it is rapidly becoming unwieldy.

What are the best practices for accessing the Event Chronicle in this manner?

Thank you

5 Replies

  • Dave,

    One method is to use the Scheduler in DeltaV Operate to execute a VBA script at fixed times that runs an SQL query and drops the data into the other database, either through an SQL INSERT or by letting that database read the record sets as XML or CSV.

    Example code for reading data out of the Event Chronicle using SQL is below, where MY_PROPLUS is the name of your chronicle host; the structure of sSQLstatement needs to be changed to match what you want to read, as do the destination and format of the data file. The VBA needs a reference to Common Files\System\ado\msado60.tlb (Microsoft ActiveX Data Objects 6.0 Library). You'll need to do some testing to make sure the data retrieval is working correctly. I know this is not exactly everything you need, but hopefully it is enough to get you started.

    Cheers.

    Private Sub GetAndSaveAlarmData()
    'Tested in DeltaV versions 11, 12 and 13
        Dim conn As ADODB.Connection
        Dim rs As ADODB.Recordset
        Dim sSQLstatement As String, sLogStart As String, sLogStop As String
        Dim Time_to_UTC As Double, LogStartTime As Double, LogStopTime As Double
        Dim Filenumber As Integer

        On Error GoTo ErrorHandler

        'Get offset from site local time to the UTC used in the Event Journal
        If IsDSTNow() Then
            Time_to_UTC = 7   'Put your own offset to UTC here
        Else
            Time_to_UTC = 8   'Put your own offset to UTC here
        End If

        'Convert local time to the UTC used in the Event Journal (previous 24 hours)
        LogStartTime = CDbl(Now - 1 + (Time_to_UTC / 24))
        LogStopTime = CDbl(Now + (Time_to_UTC / 24))

        'Convert log start and stop times (UTC, not local) to strings for the SQL Server command
        '(fTimeToString is a custom function, not shown here)
        sLogStart = fTimeToString(LogStartTime)
        sLogStop = fTimeToString(LogStopTime)

        'Connect to the Event Journal database on the ProPlus from this workstation
        '(the gs variable supplies the workstation name)
        Set conn = New ADODB.Connection
        conn.ConnectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=True;" & _
                                "Data Source=MY_PROPLUS\DELTAV_CHRONICLE;" & _
                                "Initial Catalog=EJournal;" & _
                                "Use Procedure for Prepare=1;" & _
                                "Auto Translate=True;" & _
                                "Packet Size=4096;" & _
                                "Workstation ID=" & frsvariables.gs_wsName.CurrentValue & ";" & _
                                "Use Encryption for Data=False;" & _
                                "Tag with column collation when possible=False;" & _
                                "QuotedID=No"
        conn.Open

        'Assemble SQL command text string
        sSQLstatement = "SELECT * FROM Journal WHERE Event_Type = 'ALARM'" & _
                        " AND State = 'ACT/UNACK'" & _
                        " AND Date_Time BETWEEN '" & sLogStart & "' AND '" & sLogStop & "'" & _
                        " AND Desc1 NOT IN ('HIGH','HIHI','LOW','LOLO','OPEN','CLOSE','RATE','UNDER','OVER','TURB HI','TRAVEL','CLOSED','CFN') ORDER BY Ord"

        'Create a recordset object
        Set rs = New ADODB.Recordset

        'Get data
        With rs
            'Assign the Connection object
            .ActiveConnection = conn
            'Extract the required records
            .Open sSQLstatement

            'Write the data to a tab-delimited text file, up to 500,000 rows per GetString call
            Filenumber = FreeFile
            Open System.PicturePath & "\Temp\alarm_log.txt" For Output As #Filenumber
            Do Until rs.EOF
                Print #Filenumber, rs.GetString(, 500000, Chr(9), Chr(13) & Chr(10), "")
            Loop
            Close #Filenumber

            'Tidy up
            .Close
        End With

        conn.Close
        Set rs = Nothing
        Set conn = Nothing
        Exit Sub

    ErrorHandler:
        'Clean up whatever was opened; ignore errors from objects that never opened
        On Error Resume Next
        Close #Filenumber
        rs.Close
        conn.Close
    End Sub

  • In reply to RobPerry:

    Hi Rob,

    Thank you for the answer! It was very helpful in moving me along my path and gets me one step closer to not having to rely on the Excel Reporter plug-in.

    In my scenario (in case this helps someone else), I'm creating a linked table in MS Access (my customer's reporting tool of choice) to the EJournal that I can query against for certain failure events. Once I am able to do that, I can insert the data into another local table and process it for reports.

    This is very helpful because you never know how many events are going to be returned across all of the modules you are querying. In Excel with the Reporter plug-in, this variability makes for a very ugly and unwieldy spreadsheet to maintain.
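
    For reference, the core of this approach boils down to an append query against the linked Journal table. A minimal, hedged sketch of the pattern is below (in Access this is built as an append query; tblFailureEvents and the Desc1 filter are illustrative names only, not the actual names from my system):

    -- Copy selected events from the linked Journal table into a local table.
    -- tblFailureEvents is an assumed local table; adjust columns and filters to suit.
    INSERT INTO tblFailureEvents ( Ord, Date_Time, Event_Type, [State], Desc1 )
    SELECT Ord, Date_Time, Event_Type, [State], Desc1
    FROM Journal
    WHERE Event_Type = 'ALARM'
      AND Desc1 = 'FAILURE';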

    Thank you,
    Dave
  • In reply to dave_marshall:

    Hello again,

    I am updating this question/discussion with the method that I ended up using in the event it may be helpful to others or to spawn a better discussion on the topic.  Please note, I'm not a DBA!

    To begin, a refresher on my goals when starting this project:

    1. Extract specific events from the Event Chronicle for a specific list of modules. In this case, I am looking for Failure (failure alarm) and OOS (specific parameter) events for motors.
    2. Stop using the Excel Reporter for this.  Setting up the spreadsheets for the Excel Reporter for each module was inelegant, unwieldy, and it did not function very well.
    3. Place the events into a SQL or MS Access database for reporting and auditing purposes.

    Youssef El-Bahtimy published an article entitled "Do not add custom SQL views, stored procedures, etc to your Event Chronicle, Batch Historian, or other DeltaV SQL databases...." (http://emersonexchange365.com/operateandmanage/deltav/f/50/t/4797) that provided some hints, but it wasn't very detailed. When I combined this with Rob's response above, I decided to take the approach outlined below.

    1. On the system Event Chronicle app stations DELTAV_CHRONICLE SQL instance:
      1. Enabled local SQL server logins (requires SQL service restart)
      2. Added a local SQL user account to use as a service account
      3. Gave the service account permissions/access to the EJournal database.
      4. Note: It is possible to avoid this step if SPNs/Kerberos are used for authentication, but it is beyond my comfort zone at this time to enable them manually without knowing what effects may exist.
    2. On the App Station to be used as the reporting system:
      1. Installed a new instance of SQL server
      2. Created a linked server object (Server Objects -> Linked Servers). This allows me to remotely access the Event Chronicle.
        1. Note: The security page has the login option "Be made using this security context:" selected and the service account from Step 1 specified.
      3. Created the necessary stored procedures to query the Event Chronicle and insert the data into local tables (a rough sketch of this SQL follows the list).
      4. Created a Job to execute this on a timed basis.
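
    For anyone wanting to reproduce this, a rough, hedged sketch of the SQL behind steps 1 and 2 is below. EJ_LINK, ej_reader, the password, and the local staging table are placeholder names rather than my actual configuration; MY_PROPLUS\DELTAV_CHRONICLE is the chronicle instance as in Rob's example above:

    -- Step 1 (run on the Event Chronicle instance): local SQL login with read access to EJournal.
    -- Mixed-mode (SQL server) authentication must already be enabled, which needs the service restart noted above.
    CREATE LOGIN ej_reader WITH PASSWORD = 'StrongPasswordHere';
    USE EJournal;
    CREATE USER ej_reader FOR LOGIN ej_reader;
    EXEC sp_addrolemember 'db_datareader', 'ej_reader';
    GO

    -- Step 2 (run on the reporting instance): linked server mapped to that account...
    EXEC sp_addlinkedserver
         @server = N'EJ_LINK',
         @srvproduct = N'',
         @provider = N'SQLNCLI',
         @datasrc = N'MY_PROPLUS\DELTAV_CHRONICLE';
    EXEC sp_addlinkedsrvlogin
         @rmtsrvname = N'EJ_LINK',
         @useself = N'False',
         @locallogin = NULL,
         @rmtuser = N'ej_reader',
         @rmtpassword = N'StrongPasswordHere';
    GO

    -- ...plus a stored procedure that pulls events across the link into a local table,
    -- which a SQL Server Agent job then executes on a schedule (step 2.4).
    CREATE PROCEDURE dbo.usp_PullChronicleEvents
    AS
    BEGIN
        SET NOCOUNT ON;
        -- dbo.LocalEvents is an assumed local staging table with matching columns.
        INSERT INTO dbo.LocalEvents (Ord, Date_Time, Event_Type, [State], Desc1)
        SELECT Ord, Date_Time, Event_Type, [State], Desc1
        FROM   EJ_LINK.EJournal.dbo.Journal
        WHERE  Event_Type = 'ALARM'
          AND  Date_Time >= DATEADD(DAY, -1, GETUTCDATE());
    END;
    GO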

    My biggest challenge was properly configuring the Linked Server and the job to execute with permissions able to access the Event Chronicle. This is where the SPN/Kerberos issue came into play.  I struggled for quite a while trying to get the Linked Server to properly pass the domain user credentials used to execute the job.  My goal was to make no changes to the Event Chronicle SQL configuration, but I eventually gave in and created the local SQL user account.

    Now that the link has been properly established, I can write SQL queries to get data from the Event Chronicle to my heart's content. As noted in Youssef's article, all my logic resides in the SQL server that I installed for reporting.

    Thank you,

    Dave

  • In reply to dave_marshall:

    Dave,

    Wondering how your EJournal query solution has been performing for you over the last two years...

    One thing I did not see in these posts is the use of the Ordinal number in the EJournal database. This is a sequential number added to each event as it is entered. By using this field to note the last entry returned by a previous query and to set the starting range of the next query, you eliminate missing an entry or two, and you are assured of not processing any entries twice.

    Since events are time stamped at their source throughout the DeltaV system, it is possible for the EJournal to receive events out of sync, whereby an event with a time stamp of 1:00.001 pm arrives milliseconds after one with a time stamp of 1:00.200 pm. If your query returns the last event at time 1:00.200 pm, just before the event at 1:00.001 pm is received and entered, and your next query starts at Time > 1:00.100 pm, you will miss this event completely.

    These out-of-sync events can occur regularly on a healthy system, and it is why we time stamp the events at their source. Also, each event, like an alarm, is buffered at the source. Note that this is per event. If an alarm changes state, an event message is created with a time stamp. If the message cannot be sent due to a communication issue, the controller hangs on to it and resends it. If the alarm changes state again, a "Lost Event" message is flagged along with the latest state of the alarm when it actually communicates. If there are network issues causing ACN switchovers, a reboot of the host server, or other disruptions, the actual order of events stored in the event journal may be out of time sequence.

    If the queries always deal with a time range that ends in the past, rather than at "Now()", these effects are greatly mitigated. Certainly all normal occurrences of out-of-order records are resolved in seconds; longer delays would require disruptions in the network connection to the EJournal database.

    The simple solution might be to always read your data with an end time of, say, 15 minutes in the past. The longer the delay, the less likely it is that any events are missed.

    If you want to read data up to the millisecond, you should take note of the Ordinal number of the last event returned by the previous query and use it as your starting criterion (where Ordinal > Last_Ordinal and time <= now); see the sketch below. This will ensure you don't miss an event that arrived out of time order just after your last query ran.

    The easiest solution is likely to always use an end time that is in the past, e.g. Now() - 600 seconds.

    You can also use the ordinal number to discard events that you've already processed.

    Note that if you rebuild the Event Journal and start with a fresh version without recovering the previous journal, the Ordinal number will start again from 0.
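
    Putting the ordinal and the lagged end time together, a query along these lines would do it. This is only a sketch: the column appears as Ord in Rob's example above, @LastOrdinal is whatever value your application saved from the previous run, and the 600 second lag is just the example value mentioned earlier.

    -- Incremental read: start after the last Ord already processed, and stop 10 minutes
    -- in the past so late-arriving (out-of-sync) events have time to land in the journal.
    DECLARE @LastOrdinal BIGINT;
    SET @LastOrdinal = 1234567;   -- highest Ord processed by the previous run (persisted by your application)

    DECLARE @EndTime DATETIME;
    SET @EndTime = DATEADD(SECOND, -600, GETUTCDATE());   -- journal times are stored in UTC

    SELECT Ord, Date_Time, Event_Type, [State], Desc1
    FROM   EJournal.dbo.Journal
    WHERE  Ord > @LastOrdinal
      AND  Date_Time <= @EndTime
    ORDER BY Ord;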

    Andre Dicaire

  • In reply to Andre Dicaire:

    Hi Andre,

    Thanks for the information. I wish I'd known this back when I wrote the original application. I saw the Ordinal, but wasn't sure what it was used for. That would have helped sort some of the items better.

    My application was pretty low demand and so I could get away with not being up to the second. I was tracking and calculating motor run time, reliability and availability. I had added a run time counter to the motor classes and historized that. Thus, I could easily track the runtime without having to search for start/stop events.

    The portion where the Event Chronicle came into play was to locate failure and Out of Service events. I collected events on a 24-hour basis (00:00:00 - 23:59:59), an hour after the collection period ended. Once I identified events, I presented them to the operators in their log system so they could indicate what had happened in each case. Those results were used to calculate reliability and availability.
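
    For reference, a sketch of that daily window is below. The failure/OOS and module filters would go in the WHERE clause; the local-to-UTC offset handling from Rob's VBA example still applies, so the boundaries shown here are UTC midnights rather than local ones.

    -- Previous full day, run at least an hour after the period ends.
    DECLARE @DayStart DATETIME, @DayEnd DATETIME;
    SET @DayStart = DATEADD(DAY, DATEDIFF(DAY, 0, GETUTCDATE()) - 1, 0);  -- midnight, yesterday (UTC)
    SET @DayEnd   = DATEADD(DAY, 1, @DayStart);                           -- midnight, today (UTC)

    SELECT Ord, Date_Time, Event_Type, [State], Desc1
    FROM   EJournal.dbo.Journal
    WHERE  Date_Time >= @DayStart
      AND  Date_Time <  @DayEnd
      AND  Event_Type = 'ALARM'        -- add failure / OOS / module filters here
    ORDER BY Ord;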

    The amount of thought and information that went into your notes above tells me you have spent some significant time massaging Event Chronicle data. I'm glad you shared, I love learning more about how the system works!

    Thanks,
    Dave