Tips and Suggestions Regarding Data Flow

Data Flow

Consider the following types of Data Flow:

  • Asynchronous updates for display
    • Time-critical: updates are useless if old
    • Idempotent: Repeated updates are okay
    • Last-is-best: Latest information is more important than retrying misses.
    • Handling bursts is unimportant
  • Asynchronous status or data updates for input to control logic
    • Same as above except that handling bursts is possibly important
  • Asynchronous updates for display in extreme cases (extra large data sets sent to many clients).
    • Same as above, but use multicast.
  • Asynchronous status or data updates for logging
    • The order (and timestamp) of incoming data is important
    • Repeated updates are to be avoided
    • Handling bursts is important.
  • Asynchronous events
    • Should only respond once to the same event
    • Should not miss important events
  • Asynchronous requests
    • Value response is important
    • Request is not part of a sequence
  • Synchronous requests
    • Value response is important
    • Request is part of a sequence (what follows depends on the outcome of this request).
  • Asynchronous commands
    • Command response is important
    • Command is not part of a sequence
  • Synchronous commands
    • Command response is important
    • Command is part of a sequence (what follows depends on the outcome of this command).

Data Links with ACOP:

Commands:

Commands are transactions which change settings on the server. They should always specify WRITE access.
The server should always require the incoming deviceAccess to contain the CA_WRITE bit before changing the setting (if the CA_WRITE bit is not set, the server should return the error code illegal_read_write, e.g. srv1.setCompletion(illegal_read_write, "")).
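In practice this check amounts to testing the CA_WRITE bit of the incoming access argument before applying the new setting. The following Java sketch illustrates the idea; the handler signature and package names are assumptions for illustration only (the exact form depends on how the equipment module registers its properties), while TAccess.CA_WRITE and the illegal_read_write completion code are the standard TINE definitions referred to above.

  import de.desy.tine.dataUtils.TDataType;
  import de.desy.tine.definitions.TAccess;
  import de.desy.tine.definitions.TErrorList;

  // Sketch of a WRITE-capable property handler (signature assumed for illustration).
  public class CurrentSettingHandler
  {
    private double currentSetting; // the setting managed by this property

    // 'access' is the deviceAccess delivered with the incoming request
    public int call(String devName, TDataType dout, TDataType din, short access)
    {
      if ((access & TAccess.CA_WRITE) == 0)
      {
        // caller did not specify WRITE access: refuse to change the setting
        return TErrorList.illegal_read_write;
      }
      double[] requested = new double[1];
      din.getData(requested);           // pull the requested setting from the input data
      currentSetting = requested[0];    // apply the new setting
      return 0;                         // 0 => success
    }
  }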

In many cases:

  • A client will not want to miss a transaction.
  • A server will not want to respond to an erroneous second request (due to a retry, for instance).

In such cases, the transaction can be made via a connection, i.e. use “WRITE.CONNECT” in ACOP. This will force the use of TCP/IP as the communications protocol. In other cases, this is not necessary. Note that TINE will automatically retry transactions when the given timeout parameter is 1000 milliseconds or higher, and servers will by default recognize a retried transaction and return the result without re-executing the command.

Often a server is configured so that repeated settings are of no consequence. For instance, receiving two commands to change some magnet current to 4.7 A will change the magnet current to 4.7 A just the same as receiving only one command.
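By way of illustration, the sketch below issues such a (repeatable) setting from the Java client API. This is a hedged sketch: the address '/DESY2/Magnets/BM1' and property 'Current' are purely hypothetical, and the TLink constructor and execute() signatures used here should be checked against the current API documentation. With a timeout of 1000 ms or more, TINE's automatic retry applies as described above, and an idempotent server will apply a retried 4.7 A setting harmlessly.

  import de.desy.tine.client.TLink;
  import de.desy.tine.dataUtils.TDataType;
  import de.desy.tine.definitions.TAccess;

  public class SetMagnetCurrent
  {
    public static void main(String[] args)
    {
      double[] setting = { 4.7 };               // the desired current (in A)
      TDataType din = new TDataType(setting);   // input data sent along with the command
      TLink lnk = new TLink("/DESY2/Magnets/BM1", "Current",
                            new TDataType(), din, TAccess.CA_WRITE);
      // a timeout >= 1000 ms enables TINE's automatic retry; the server will
      // recognize a retried transaction and not re-execute the command
      int cc = lnk.execute(1000);
      if (cc != 0)
        System.out.println("setting failed, completion code: " + cc);
      lnk.close();
    }
  }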

Requests:

Issuing a READ request to acquire data follows the discussion above for WRITE commands, except that TINE offers open READ access to everyone and multiple READ requests are allowed, so the issue of retries is irrelevant.

Synchronous versus Asynchronous Commands and Requests:

Using the Execute() method in ACOP generates a synchronous request, which will block execution for the duration specified in the AccessRate parameter or until the call returns. Use this method when the code that follows depends on the results of the call. Also note that, by convention, AccessRate (which gives the timeout) values of 1000 milliseconds or greater generate an automatic retry in case of a timeout, which means that a timeout of 1000 ms will actually block for up to 3000 ms (2 times 1000 ms plus a 1000 ms cushion) before returning. AccessRate values less than 1000 ms will honor the given timeout strictly and not issue a retry. It is then the duty of the application programmer to supply a retry if one is needed.

If the blocking behavior of the synchronous Execute() method is not desired, an alternative is to use “READ” and “WRITE” requests with the asynchronous AttachLink() method. AttachLink() will return immediately with a positive link handle if the link’s end point can be resolved. The results of the call will then be returned in the ACOP control’s Receive event.
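The contrast can be sketched as follows with the TINE Java client API (TLink), which underlies the ACOP methods named above. This is a hedged illustration: the device address is hypothetical, and the attach()/execute() signatures and the TMode/TAccess constant names used here are assumptions to be verified against the API documentation.

  import de.desy.tine.client.TLink;
  import de.desy.tine.client.TLinkCallback;
  import de.desy.tine.dataUtils.TDataType;
  import de.desy.tine.definitions.TAccess;
  import de.desy.tine.definitions.TMode;

  public class SyncVersusAsync
  {
    public static void main(String[] args)
    {
      final float[] pressure = new float[1];
      final TDataType dout = new TDataType(pressure);

      // synchronous: the code that follows depends on the outcome of this call
      TLink sync = new TLink("/DESY2/Vacuum/Pump1", "Pressure",
                             dout, null, TAccess.CA_READ);
      int cc = sync.execute(1000);  // blocks; up to ~3000 ms due to the automatic retry
      if (cc == 0)
        System.out.println("pressure now: " + pressure[0]);
      sync.close();

      // asynchronous: attach() returns immediately; results arrive in the callback
      TLink async = new TLink("/DESY2/Vacuum/Pump1", "Pressure",
                              dout, null, TAccess.CA_READ);
      async.attach(TMode.CM_TIMER, new TLinkCallback()
      {
        public void callback(TLink link)
        {
          if (link.getLinkStatus() != 0) return;  // link error: nothing to display
          dout.getData(pressure);                 // copy the incoming value
          System.out.println("pressure update: " + pressure[0]);
        }
      }, 1000);                                   // poll at 1000 ms intervals
    }
  }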

Display Data:

Data acquisitions that are designed to update a histogram, trend chart, trace or other kind of display at regular intervals are best realized by following the “last is best” principle, i.e. by using normal, non-connected updates and by turning off the “receive queue” in Acop (.ReceiveQueueDepth = 0). Thus, you can be assured that the data being displayed is the most recent the application knows about.
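Outside of the ACOP control itself, the same "last is best" behavior amounts to letting each incoming update simply overwrite whatever the display will paint next, rather than queuing updates. Below is a hedged sketch with the Java client API (hypothetical device names, assumed signatures); within ACOP, setting .ReceiveQueueDepth = 0 accomplishes the same thing inside the control.

  import de.desy.tine.client.TLink;
  import de.desy.tine.client.TLinkCallback;
  import de.desy.tine.dataUtils.TDataType;
  import de.desy.tine.definitions.TAccess;
  import de.desy.tine.definitions.TMode;

  public class LastIsBestDisplay
  {
    // only the most recent trace is kept; stale, unpainted updates are simply replaced
    private static volatile float[] latestTrace = new float[1024];

    public static void main(String[] args)
    {
      final float[] buffer = new float[1024];
      final TDataType dout = new TDataType(buffer);
      TLink lnk = new TLink("/DESY2/Diagnostics/BPM.Group", "Orbit",
                            dout, null, TAccess.CA_READ);
      lnk.attach(TMode.CM_TIMER, new TLinkCallback()
      {
        public void callback(TLink link)
        {
          if (link.getLinkStatus() != 0) return; // ignore errored updates here
          dout.getData(buffer);                  // copy the incoming trace ...
          latestTrace = buffer.clone();          // ... and overwrite the display buffer
          // a display timer repaints from latestTrace at its own pace
        }
      }, 1000);                                  // update once per second
    }
  }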

Status Data:

Data acquisitions that use incoming status data for input to logical decision-making algorithms will typically not want to miss an update if possible. In these cases, the receive queue depth should be left alone (at the default of 100 deep). Thus, if the CPU is momentarily busy (someone started Internet Explorer) and the control application is unable to finish processing the current data before the next data set arrives, then the new data set will be queued and the receive event will be fired immediately after the first data set is processed. Careful! A pathological case where the application can never finish processing the data before the next update comes (you’re updating at 1 Hz, but you take more than 1 second to handle the incoming data) will do nothing but fill up the queue and introduce a serious delay between the current data and the data the application is using!

Extreme Cases (large payload, high rep rate):

Sending large amounts of data at a high rate (e.g. large video frames at several Hz) to multiple clients is best realized by using “POLL.NETWORK” mode. This will signal the server to send the data out via multicast. Additional clients will then simply join the multicast group and cause no extra burden on the server.
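At the level of the client API, the ACOP "POLL.NETWORK" mode string corresponds to combining the NETWORK access flag with normal read access. The following Java sketch is a hedged illustration: the device names are hypothetical, and the constant names (in particular TAccess.CA_NETWORK) and signatures used here are assumptions to be checked against the API documentation.

  import de.desy.tine.client.TLink;
  import de.desy.tine.client.TLinkCallback;
  import de.desy.tine.dataUtils.TDataType;
  import de.desy.tine.definitions.TAccess;
  import de.desy.tine.definitions.TMode;

  public class MulticastFrameClient
  {
    public static void main(String[] args)
    {
      final byte[] frame = new byte[1280 * 1024];     // room for one (hypothetical) video frame
      final TDataType dout = new TDataType(frame);
      // the NETWORK flag asks the server to deliver the data via multicast;
      // additional clients then join the multicast group at no extra cost to the server
      short access = (short) (TAccess.CA_READ | TAccess.CA_NETWORK);
      TLink lnk = new TLink("/DESY2/VideoServer/Camera1", "Frame",
                            dout, null, access);
      lnk.attach(TMode.CM_TIMER, new TLinkCallback()
      {
        public void callback(TLink link)
        {
          if (link.getLinkStatus() == 0)
            System.out.println("frame received");
        }
      }, 200);                                        // several Hz
    }
  }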

Extreme Cases (extra large number of contracts):

When clients need to access a large amount of data distributed over many devices or properties (or are written to access it in this manner), there can be an enormous benefit in making use of multi-channel arrays or user-defined structures. In cases where there are many, many devices (1000 or more), all with the same settings and units for a particular property, it makes sense to declare such a property as having array type CA_CHANNEL at registration time (either via the API or using RegisterPropertyInformation()).
In such cases, all (asynchronous) requests for individual devices will be remapped into a single request to acquire the entire multi-channel array from the server. This ultimately translates into a single contract as opposed to 1000 contracts (and a single dispatch interrupt as opposed to 1000, etc.). Likewise, any access to a single field of a tagged structure will result in the transfer of the entire structure.
In each case, the TINE engine will remap the incoming data at the client side back to the original requests. In a strict device-oriented model one can also declare a 'group' device and coerce single-element data acquisition into a multi-channel access via the API call RegisterMultiChannelGroupDevice().

Should the device order change, the server should make use of the API call ResetMultiChannelProperty() in order to inform any listening clients that they should 're-learn' the device order of the multi-channel elements.

Note
TINE also supports wildcards. However, their use should be viewed as a method of last resort for efficiently acquiring data from multiple devices or properties. The problem is that, in contrast to a multi-channel array, there is no way to know a priori what data from which devices will be returned. Indeed, one often needs to access the data using a data type that can also carry the device name (and perhaps status) along with the accompanying data (such as CF_NAME64DBLDBL or CF_USTRING).
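For completeness, a hedged sketch of such a wildcard acquisition using the Java client API is given below; the address is hypothetical, and the NAME64DBLDBL usage and method signatures shown here are assumptions to be checked against the API documentation. Each returned element carries the device name along with its data.

  import de.desy.tine.client.TLink;
  import de.desy.tine.dataUtils.TDataType;
  import de.desy.tine.definitions.TAccess;
  import de.desy.tine.types.NAME64DBLDBL;

  public class WildcardRead
  {
    public static void main(String[] args)
    {
      // room for up to 100 matching devices; each element carries a device name plus data
      NAME64DBLDBL[] results = new NAME64DBLDBL[100];
      for (int i = 0; i < results.length; i++) results[i] = new NAME64DBLDBL();
      TDataType dout = new TDataType(results);
      // wildcard device name: the server decides which devices match
      TLink lnk = new TLink("/DESY2/Vacuum/*", "Pressure", dout, null, TAccess.CA_READ);
      int cc = lnk.execute(1000);
      if (cc == 0)
      {
        dout.getData(results);
        for (NAME64DBLDBL r : results)
          if (r != null) System.out.println(r);  // rely on the type's string representation
      }
      lnk.close();
    }
  }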

Logged Data:

Data acquisitions that are logged, or that otherwise should not be missed, should keep the receive queue depth at the default of 100. Using POLL.CONNECT (i.e. using TCP/IP) will also improve reliability but may not be necessary.

Grouped Data:

Use the Acop ‘grouped’ flag when you have several data links all in “POLL” (alias "TIMER") mode, all polling at the same rate. The grouped flag will cause the Receive event to be fired only once, when all of the attached data sets have updated. If one of the data links is in DATACHANGE (alias REFRESH) mode, it could hold up the Receive event until the link's heartbeat (at 60-second intervals) has fired.

As a grouped call delivers only one overall status, you might also want to use the “USE_ON_ERROR” extension to the data type when setting up the data links in order to distinguish which links had an error in case there is one.

When to use DATACHANGE mode:

“DATACHANGE” mode and “TIMER” mode both monitor the data at the specified polling rate on the server. In DATACHANGE mode, the data are only sent to the client when they have changed or when the heartbeat of 60 seconds since the last update has expired. As a change in the data is determined based on zero tolerance, properties which deliver status are the best candidates for DATACHANGE mode polling. Properties which return floating point values (e.g. beam position, vacuum pressure, etc.) are prone to contain noise which will always signal a data change. Such updates can, however, be suppressed by applying a notification tolerance (SetNotificationTolerance()).

It is important to remember that when a server updates a client in DATACHANGE mode, it by default requires an acknowledgment from the client upon receipt of the data. Therefore, acquiring data which are constantly changing in DATACHANGE mode will needlessly generate a good deal of back traffic, with the client constantly acknowledging the incoming data. DATACHANGE links (as well as EVENT links - see below) will automatically start a 'watchdog' link to the server in question (if not already started), which offers the connection management necessary to inform the caller that the link has gone down as soon as possible (bypassing the heartbeat latency).

Events:

An event is a signal that something just happened: a threshold was reached, a state has changed, the beam was lost, etc. Signaling an event from a server can be achieved in one of two ways. A general event can be signaled by sending a network global (issuing a broadcast or multicast) at the time of the event. Clients interested in receiving such an event would then use the AttachLink() method in ACOP in “RECEIVE” mode, the 'recvNetGlobal()' routine in C, or the .receive() method of TLink in Java. However, the server in such a case has no knowledge as to whether the event was in fact seen by all clients interested in it. Broadcasts and multicasts are datagrams which are sent with the “best effort” mechanism of Ethernet, but nonetheless occasionally fail. A better technique for reacting to events is to monitor the event from the client using “EVENT” mode. If the server wishes to signal the event, it should call the .ScheduleProperty() method of the TEquipmentModule instance in Java, or the SystemScheduleProperty() routine in C. This will ensure that the event is sent to all listening clients immediately, regardless of polling interval.
Furthermore, as EVENT mode requires an acknowledgment, the server will resubmit the event to any client that did not see it the first time.
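On the server side the signaling step is then a single call at the moment the event condition is detected, as in the hedged Java fragment below (the equipment module setup and property registration are omitted, and the exact package, method casing and signature should be checked against the API documentation).

  import de.desy.tine.server.equipment.TEquipmentModule;

  public class BeamLossSignaler
  {
    // assume 'eqm' was obtained during the normal server initialization;
    // registration of the equipment module and its properties is omitted here
    private TEquipmentModule eqm;

    void onBeamLossDetected()
    {
      // push the "BeamLoss" property to all clients monitoring it in EVENT mode
      // immediately, regardless of their polling interval
      eqm.ScheduleProperty("BeamLoss");
    }
  }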

