When using the 'Producer-Consumer' method of data exchange, we speak of the data sets being produced as 'Network Globals', as they are globally available to all elements of the control system, just as global variables are available to all code modules in a project. Another way of regarding 'Network Globals' is to consider them as comprising a simple, read-only reflected memory throughout the control system.
Although any data object can be assigned to a Network Global, these tend to be single-valued parameters of system-wide interest, i.e. not arrays. Parameters such as the beam current and energy, for instance, are good candidates for Network Globals.
The easiest way of producing a Network Global (and historically the first) is to broadcast it on a well-known port. This of course means that those elements wishing to receive the Network Global must be on a subnet to which the globals are broadcast. If a client is not located on such a control network, then it must subscribe to the server producing the data in a Publish-Subscribe scenario.
If a server is producing Network Globals and wishes to broadcast them onto the 'control' network, then it must have a local database file 'ipbcast.csv', which contains a list of all subnets to receive the broadcast.
A call to sendNetGlobal() will attempt to deliver the global data to the subnets listed in the database file. Note that attempting to broadcast onto a subnet does not guarantee that the data will arrive: the intermediate routers must allow UDP broadcasts onto the subnet(s) in question, or the datagram will not be transmitted.
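To make the transport concrete, the following is a minimal POSIX sketch of what a broadcast-style delivery amounts to: a single UDP datagram sent to a subnet's broadcast address on a well-known port. This is not the TINE API or wire format; the port number, payload, and broadcast address are placeholders chosen for illustration only.

```c
/* Sketch of the transport underneath a broadcast-style Network Global:
 * a UDP datagram sent to a subnet broadcast address.  Port, payload and
 * address are placeholders, not the TINE wire format. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define GLOBALS_PORT 9999                 /* placeholder port */

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    struct sockaddr_in dst;
    const char *payload = "BeamCurrent=1.23";   /* illustrative payload only */

    /* Broadcasting must be explicitly enabled on the socket. */
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));

    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(GLOBALS_PORT);
    /* One entry of the kind listed in ipbcast.csv: the broadcast address
     * of a target subnet (placeholder).  Routers must forward such
     * directed broadcasts or the datagram never reaches that subnet. */
    dst.sin_addr.s_addr = inet_addr("192.168.1.255");

    sendto(sock, payload, strlen(payload), 0,
           (struct sockaddr *)&dst, sizeof(dst));
    close(sock);
    return 0;
}
```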
A better form of producing a Network Global is to multicast it onto a well-known port and multicast group. A multicast is an "intelligent broadcast" and is more efficient for the producing server, since it sends the data in question only once. If multicasting is enabled on the network infrastructure, then all of the intermediate routers and switches know where the members of the multicast group are and will forward the multicast datagram only to where it is needed.
If multicasting is enabled, a call to sendNetGlobal() will multicast the global data regardless of whether there is an ipbcast.csv file or not.
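Again purely as an illustration of the underlying mechanism (and not of the TINE API or wire format), the producer side of a multicast amounts to one UDP datagram sent to the group address; the group address, port, and payload below are placeholder assumptions.

```c
/* Sketch of the multicast transport used for a Network Global: one
 * datagram sent to a multicast group; the network replicates it toward
 * all group members.  Group address and port are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    unsigned char ttl = 16;          /* allow the datagram to cross routers */
    struct sockaddr_in grp;
    const char *payload = "BeamEnergy=6.3";     /* illustrative payload only */

    setsockopt(sock, IPPROTO_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl));

    memset(&grp, 0, sizeof(grp));
    grp.sin_family = AF_INET;
    grp.sin_port = htons(9999);                     /* placeholder port */
    grp.sin_addr.s_addr = inet_addr("239.1.0.99");  /* placeholder group */

    /* The server sends the data only once; multicast-aware switches and
     * routers forward it to the subscribed group members. */
    sendto(sock, payload, strlen(payload), 0,
           (struct sockaddr *)&grp, sizeof(grp));
    close(sock);
    return 0;
}
```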
Note that some legacy systems are unable to join a multicast group and hence cannot receive Network Globals in this efficient manner. If such legacy systems need to receive Network Globals, the suggestion is to install them all on a single subnet and have the producing server send a broadcast to that subnet in addition to the multicast.
Any client wishing to receive a Network Global need only call recvNetGlobal(). Only then is the globals socket opened and the multicast group joined. This means that a client not interested in Network Globals will not inform its gateway that it is part of the multicast group and will not have to process unwanted datagrams at the application layer.
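The sketch below shows, in plain POSIX terms, what joining the multicast group entails on the receiving side. It is not the TINE implementation itself; the group address and port are placeholder assumptions.

```c
/* Sketch of what "joining the multicast group" means for a client: only
 * after IP_ADD_MEMBERSHIP does the host (and its gateway) start
 * delivering the group's datagrams to this socket. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in local;
    struct ip_mreq mreq;
    char buf[1500];
    ssize_t n;

    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = htons(9999);                /* placeholder globals port */
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock, (struct sockaddr *)&local, sizeof(local));

    /* Join the well-known globals group (placeholder address).  Until this
     * point no membership report is sent and no group traffic arrives. */
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.0.99");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
    if (n > 0)
        printf("received %zd bytes of global data\n", n);
    close(sock);
    return 0;
}
```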
This brings up a point worth mentioning: it would not in general be a good idea to send the bulk of machine data as Network Globals, as TINE uses only one globals socket for receiving all Network Globals. Thus, once the socket is open, all Network Globals are received and dealt with at the application level, whether they are all needed or not. The more a client must sort through to find what it wants, the busier it becomes. One could imagine partitioning Network Globals into different categories, each with its own socket and multicast group, but this is not the case in the present release of TINE.
We also note that although any server can send Network Globals, it is generally a good idea to collect those parameters of system-wide interest at a single middle-layer server and designate this as the 'globals' server. Indeed, if such a server is configured and has the device server name "GLOBALS", it is contacted to obtain the initial value of the Network Global requested in the first call to recvNetGlobal(). Otherwise the calling program must wait for the initial value to arrive over the network.
As an example of 'machine parameters of system wide interest' we show a snapshot of the Network Globals in use at the LINAC2 pre-accelerator:
The TINE globals server is contained in the download package for Windows or one of the Unix/Linux operating systems. If you install the globals server files (i.e. the binary and sample database files) on a host and type glbsrv /? at the command-line prompt, you should see the following:
Thus one can supply the various startup parameters (such as which database file or which multicast address to use) either via the command line or via a local 'startup.csv' configuration file, which can optionally contain several of these parameters.
For example:
If no command-line input is given and no startup.csv configuration file is found, then the default values are used, namely: the target database is called "dataglob.csv", the multicast address is formed by ORing the last two bytes of the local host's IP address with 239.1.0.0, and the globals server operates in "ACTIVE" mode.
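As a worked example of this default multicast address rule, the small program below ORs the last two bytes of an assumed host address (131.169.151.94, chosen purely for illustration) into 239.1.0.0, yielding 239.1.151.94.

```c
/* Worked example of the default multicast address rule: keep only the
 * last two bytes of the host's IP address and OR them into 239.1.0.0. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>

int main(void)
{
    struct in_addr host, base, mcast;

    inet_aton("131.169.151.94", &host);   /* example host address */
    inet_aton("239.1.0.0", &base);

    /* Mask off all but the last two bytes of the host address, then OR. */
    mcast.s_addr = base.s_addr | (host.s_addr & htonl(0x0000ffff));

    printf("default globals multicast address: %s\n", inet_ntoa(mcast));
    /* prints 239.1.151.94 for the example host */
    return 0;
}
```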
The keyword values delivered on the network by a globals server can be managed via the archive database manager as the database address structure is essentially the same. The primary differences are that a globals server does not store the globals values in a long-term archive and is able to deliver a targeted set of keywords via multicast.