TINE (Three-fold Integrated Networking Environment)
(pronounced: TEE-NEH)
Note
(TINE++ % 4) = INET and Remember: This Is Not Epics!
But you can run EPICS IOCs on TINE using Epics2Tine.
TINE is embedded in DOOCS, so you can also run DOOCS clients and servers using TINE.
TINE can also be used in a STARS system and via a STARS-bridge in a COACK system.
You can also include TANGO elements on your TINE system using Tango2Tine.
But you might want to go native ...
Current Release level: 5.2.8
  • General: Bird's Eye View, Overview, Features, Configuration, Data Types, Transfer Modes, Access Flags, Array Types, Time Stamps, Naming Conventions, Data Flow Tips, Stock Properties, Meta Properties
  • APIs: C API, EZ API/Buffered API, Java API, .NET API (C#), LabView API, MatLab API, XCOMM Matlab API, Python API, CDI Native API, RESTful TINE API
  • Services: Alarm System, Archive System, Post Mortem/Event Archive System, State Server, Name Server, Remote Debugging Tools, Network Globals, Time Synchronization, Security, Command Line Trouble Shooting, Video System, Error Codes, TINE Combobulator, TINE Scope Servers, TINE Motor Servers, TINE Watchdog Server
  • Examples & Help: Getting Started, TINE Server Wizard, Console Server (C), Console Client (C), GUI Server (C# and VB.NET), Console Client (Java), GUI Client (Java), Console Server (Java), TINE Studio, TINE Repeater
  • Workshops & Tutorials: TINE Workshop 2007, Quick Tutorial (Windows), Quick Tutorial (UNIX/Linux), Workshop Tutorial (Buffered Server), Workshop Tutorial (Standard Server), Workshop Tutorial (Clients), CDI Tutorial, Servers for Dummies, TINE Users Meeting, TINE Presentations, Demos, TINE/DOOCS Issues, Configuration Tips
  • Low Level Support: Network Queue, Common Device Interface (CDI), TINE CanOpen Manager (TICOM)

TINE is fully supported by ACOP (Java), ACOP (.NET), and Control System Studio.

You may want to have a quick look at a Bird's Eye View of TINE.

Download TINE here.

Questions or comments can be addressed to tine@desy.de

What is "Three-fold" about TINE?

Perhaps the most distinguishing feature of TINE is its integration of client and server components from vastly different networking environments. To begin with, TINE is a multi-platform system, running on all Windows, Linux, and MAC platforms as well as VxWorks and many legacy platforms such as Solaris and most other Unixes, along with MS-DOS, Win16 (Windows 3.X), Win32 (Windows 95, 98, NT, 2K, XP), VAX and ALPHA VMS, and NIOS. TINE is also a multi-protocol system to the extent that UDP, TCP, IPX, and in-process data transfer via pipes or shared memory are all supported as data-exchange mechanisms. Finally, TINE is a multi-architecture control system, allowing client-server, publisher-subscriber, and producer-consumer data exchange in any variation. We shall describe these in more detail below.

Multi-Platform

TINE runs on a number of platforms and can be thought of as a 'software bus', meaning that one can intermingle host platforms at will. At the basic protocol level, the members of a client-server pair are agnostic as to the host platform of their partner (although the general user type is carried in the protocol headers). Note that for small systems or subsystems it usually makes more sense to stick to one specific platform for front-end components and/or console components. Although nothing precludes using a heterogeneous mixture, issues of maintenance (where a small number of persons are responsible for a large number of components) come into play.

Note also that by allowing a heterogeneous system, expensive front-end hardware can be used where warranted (for mission-critical devices) and inexpensive hardware used elsewhere. Furthermore, a systematic, piecemeal upgrade of a control system is possible, since TINE will run fine on older systems such as VAX-VMS and MS-DOS as well as on more modern systems such as Windows, Linux, MAC, or VxWorks.

Multi-Protocol

TINE supports data exchange via both UDP and TCP in the IP domain, as well as the (now rarely, if ever, used) IPX ethernet protocol. Both IPv4 and IPv6 are supported.

TINE defaults to UDP datagrams for data transfer. In most cases this is fine. If client-server communication occurs over a 'lossy' network, or there are other flow-control issues, you may want to resort to TCP streams, at least for commands which change settings or involve large payloads. This requires only an additional flag to be set in the communication API calls, or can be coerced at the server side. The default data transport can also be configured via the environment variable TINE_TRANSPORT (e.g. TINE_TRANSPORT=TCP). There are actually two kinds of TCP transport. Simply specifying "TCP" signifies a payload transfer which can be 'parceled' and reassembled (as per UDP) if the payload is large and which dutifully respects all 'timeout' parameters. Specifying "STREAM" selects a TCP/IP transport which passes the entire payload on to the local network stack and only times out at the connection-establishment level (only available on multi-threaded builds). A local pipe or memory-mapped file is used if the transfer is between a client-server pair on the same host machine.
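For example, the default transport of a client process can be switched in a bash-like shell with (use the equivalent assignment on other platforms):

export TINE_TRANSPORT=TCP

or, for the full streaming variant described above:

export TINE_TRANSPORT=STREAM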

Multi-Architecture

TINE supports three modes of data exchange, each of which could be used individually to define the control system architecture. More likely, you will want to use these modes in combination. (A minimal client-side subscription sketch follows the list below.)

  • Client-server : A traditional data exchange mechanism available in most control systems is pure, transactional, synchronous client-server data exchange, where a client makes a request and waits for the completion of the request (a transaction). This is not only traditional, but necessary when sending commands to a front end, where the next action to take depends on the outcome of the command. If, however, this is used as the sole basis for data exchange, the response on the client side can suffer (dramatically) when a server goes down or network problems arise. In such a case, all communication directed toward a server will time out, and due to the synchronous nature of the communication, the end user must wait for the timeout period to expire before continuing. Furthermore, if several clients want the same information (regular updates of control data, for instance), a server will see each request from each client separately. This can become a burden to the server if many clients are all getting a large data payload at, say, 1 Hz. The server will be interrupted to handle such transactions for each client individually, even though the results of each transaction are identical.
  • Publisher-Subscriber : In many cases, a much better approach is publisher-subscriber data exchange. Here a client (the subscriber) communicates its request to a server (the publisher) and does not wait for a response. Instead, it expects to receive a notification within a given timeout period. This can be a single command, or, for regular data acquisition, a request for data at periodic intervals or upon change of the data contents. In this scenario, the server maintains a list of the clients it has and what they are interested in. Now if many clients all want the same kilobyte's worth of data at 1 Hz, the server must acquire this data set only once per second (a single interrupt) and notify the clients on its list. This is much more efficient than the client-server model in such circumstances.
  • Producer-Consumer : A third alternative for data exchange is the Producer-Consumer model. In this case a server is the producer. It transmits its data via multicast (or broadcast) on the control system network. Clients (i.e. consumers) simply listen for the incoming data. This is in some cases the most efficient data transfer mechanism. For most control systems, there are certain parameters which are of system-wide interest. For instance, beam energies, beam currents, beam lifetimes, states, etc. can be made available via system-wide multicast (or broadcast) at 1 Hz. Such read-only machine parameters are of vital interest to a large number of running applications and front ends. If they were not available on the network in this manner, then some poor server would have to supply these data to a large number of clients at 1 Hz. In "DATACHANGE" mode under the publisher-subscriber model this would by no means cripple the server, but it would nonetheless waste bandwidth on the net and CPU load on the server. In some cases, a good multicast is what is called for. Take note, however, that if broadcasts are used, then those clients which are to receive the broadcasts must reside on the network where they are sent (the designated control network). If multicasts are to be used, then all routers should support multicasts. Those clients not on this network or not part of the multicast group must obtain the data via other means. This mechanism can be thought of as a 'poor man's' read-only reflected memory.
  • Producer-Subscriber : A hybrid between the above two modes is also possible under TINE, in which subscribers request data to be produced on the network (a "network subscription"). See the discussion of the CM_NETWORK (i.e. CM_BCAST or CM_MCAST) control mode bit below.
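As a concrete illustration of the publisher-subscriber mode from the client side, the sketch below subscribes to updates of a single float value at 1 Hz using AttachLink(). It is only a sketch: the device and property names are placeholders, and the AttachLink() parameter list, the DTYPE field names, and the CA_/CF_ constants shown here are simplified assumptions; consult the API-C reference for the exact declarations.

#include <string.h>
#include "tine.h"     /* TINE C client API header (name assumed) */

static float value;   /* buffer refreshed on each server notification */

/* publisher-subscriber: the client kernel calls this on each update;
   a non-zero status signals a timeout or a link error */
static void valueUpdated(int linkId, int linkStatus)
{
    if (linkStatus != 0) return;
    /* ... use 'value' here ... */
}

void subscribeToValue(void)
{
    DTYPE dout;
    memset(&dout, 0, sizeof(dout));
    dout.dArrayLength = 1;          /* DTYPE field names assumed */
    dout.dFormat      = CF_FLOAT;
    dout.data.fptr    = &value;

    /* ask the server for an update once per second;
       the AttachLink() parameter list is simplified here */
    AttachLink("/MYCONTEXT/MYSERVER/device0", "MYPROPERTY",
               &dout, NULL, CA_READ, 1000 /* msec */, valueUpdated);
}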

Plug and Play

If the control system name server is up and running, TINE clients and servers participate in a plug-and-play scheme for address resolution.

A new server, if properly configured, can "plug" itself into the control system database maintained at the TINE name server, without requiring administrative intervention. At startup, the server name and all equipment module names are sent to the name server, along with the server's address, port offset, and other descriptive information. The name server will check its database for name and address collisions. If the name server sees that a server is trying to reuse an existing name, the name server will attempt to contact the existing entry. If this entry does indeed respond, the name server does not update its database, but instead sends an "address in use" message to the server which is starting up. On the other hand, if the previous entry does not respond, the name server assumes that the front-end server (or equipment module) is being moved to another location and allows the address change to be made.

When a TINE client first attempts to contact an equipment module, it sends the equipment module export name to the name server for address resolution. If the name server can identify the equipment module, it returns the address information. If not, the client then resorts to its local database. If an address is still not found, the error message "non existent element" is returned. If a match is made, however, the address information is cached locally at the client. Subsequent attempts to contact the same equipment module obtain the address from the local cache.

If a link to a server goes down, the client will generate timeout notifications. After several consecutive failures, the client will again attempt to acquire the address information. In this way, a server process can actually be moved from one machine to another, without requiring a restart of the client.

The Environment

A TINE server will look for the environment variable FEC_HOME to establish the local database directory. (This supersedes the legacy variable FECDB.) All server-specific .CSV database files should be located in this directory. If this environment variable is not set, then the server will look in its startup directory. TINE clients look for relevant .CSV files according to the environment variable TINE_HOME. We note here that TINE servers take on the behavior of clients at startup when they register their services with the equipment name server (ENS). Thus both settings are relevant to TINE servers.

Furthermore, if a server is keeping local histories according to specifications in the 'history.csv' configuration file (or via the AppendLocalHistory() API call), then the environment variable TINE_HISTORY_HOME is used to determine the repository for the long-term history data. If this variable is missing, and its legacy equivalent (HISTORY_HOME or HISTORYDB) is also missing, then the location specified by FEC_HOME will be used.

These environment settings should include the database path up to the final slash "/" (UNIX) or backslash "\" (DOS, WINDOWS). Thus you might have:

export FEC_HOME=~/database/

in the UNIX world, or

set FEC_HOME=C:\DATABASE\

in the DOS, WINDOWS world.
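For completeness, the client-side and history variables discussed above follow the same pattern; the paths below are purely illustrative:

export TINE_HOME=~/database/
export TINE_HISTORY_HOME=~/histories/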

TINE Name space (What's in a Name)

The identity of a particular device is, needless to say, an important piece of information in any data exchange between a client and a server. A client application will typically make a request to a device via its full device name and not further concern itself with the location of the equipment module. However, the devName argument entered in an API call (see ExecLink() and AttachLink()) needs to be resolved into a specific device being serviced by a specific equipment module running on a specific front-end server.

As a case in point, a client might want the beam position, i.e. property "POSITION" from device "WL167" (to use the HERA naming convention). So devName might be specified as "BPM/WL167" to denote the targeted device on the "BPM" device server. In this case, the default context is assumed. The devName might also be specified in full as: "/HERA/BPM/WL167". In either case, the system kernel must be able to find the "BPM" device server, and determine that it is located on a local equipment module called "BPMEQM", which runs on a front-end computer called "BPMFEC". The system kernel finds the latter quantities by either consulting a name server or a local database. Thus, the BPM device server must be properly registered for this to work. That is, the information entered in the name server or database must match the information contained locally at the front-end server. If a request for the local equipment module "BPMEQM" comes to the "BPMFEC" front end and there is no such equipment module, then the error code "non existent element" will be returned.
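A synchronous read of this property from the C API might then look roughly as follows. This is a sketch only: the ExecLink() parameter list and the DTYPE field names are simplified assumptions (see the API-C reference for the exact declarations); the device and property names are those of the example above.

#include <string.h>
#include "tine.h"                    /* TINE C client API header (name assumed) */

int readBpmPosition(float *position) /* caller provides room for one float */
{
    DTYPE dout;
    memset(&dout, 0, sizeof(dout));
    dout.dArrayLength = 1;           /* DTYPE field names assumed */
    dout.dFormat      = CF_FLOAT;
    dout.data.fptr    = position;

    /* "BPM/WL167" would assume the default context; the fully qualified
       form "/HERA/BPM/WL167" pins the context explicitly */
    return ExecLink("/HERA/BPM/WL167", "POSITION", &dout, NULL, CA_READ,
                    1000 /* timeout (assumed to be in msec) */);
}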

To reiterate, the TINE name space consists of four levels: context, device server, device name, and property. Both the context and device name are optional to the extent that the equipment name server will be able to resolve the address for the device server (if no context is given) provided an entry matches the device server name specified. The device name (or absence thereof) is passed along to the server and dealt with there.

When a server starts it will inform the equipment name server of its identity and its address, as if to say "If anyone asks for a device server named BPM under context HERA, tell him to ask for equipment module BPMEQM at my network address."

Thus, in establishing a front-end server in the control system, some thought must be given to the following quantities:

  • FEC Name : the 16-character, system-wide unique name identifying the server process. "FEC" is an acronym meaning Front End Controller. For platforms such as MS-DOS, Win16, or VxWorks this also translates into Front End Computer. The point is: on Unix, VMS, or Win32 there can be several independent server processes running on the same computer, each requiring a unique "FEC" name even though they reside on the same machine. Under most circumstances this name is not hard coded into a server, but is instead obtained through an initialization file (see fecid.csv or fec.xml).
    Any FEC can in turn contain many equipment modules.
    There are no restrictions on the FEC name other than it be unique (system wide) and not contain more than 16 characters. In the above example, "BPMFEC" was used.
  • Context Name (also known as: Facility) : the 32-character context specification. This supplements the Device Server Name to determine uniqueness.
  • Device Server Name (also known as: Equipment Module Export Name, Device Group) : the 32-character, system-wide unique name identifying the equipment module located on a particular server. This name is the actual point of contact between the client API and the equipment module in question.
    Under most circumstances this name is not hard coded into a server, but is instead obtained through an initialization file (see exports.csv). There are no restrictions on the Equipment Module Export name other than it be unique (within its Context) and not contain more than 32 characters. In the above example, "BPM" was used.
  • Equipment Module Name (also known as: Local Name) : the 6-character identifier tag serving as a point of contact among local routines on a given server. This name will perhaps require some explanation. In a purely object-oriented environment, this identifier would not be necessary, as various routines such as RegisterProperty(), SetAlarm(), ClearAlarm(), GetNumContracts(), etc. would all be methods of an equipment module class, and the reference to the equipment module in question would be clear. However, TINE is object-based and not object-oriented, and the kernel on most platforms is written in straight C (not C++). The kernel must be able to de-reference the targeted equipment module (as there can be more than one module per FEC). Furthermore, as the Device Server Name is not, in general, hard-coded at the server, it will not do as an identifier. The Equipment Module identifier, on the other hand, is hard-coded, and as it is seen only locally, it does not require system-wide uniqueness. This identifier tag is restricted to 6 characters. This restriction is largely historical and has the benefit that the underlying system headers need only contain this much smaller name (once an address is resolved) as opposed to both a context and device server name (64 characters). As the Equipment Module identifier can (and must) be hard-coded on the server, it is entirely possible to run precisely the same server binary executable on different machines (a distributed front end). In such cases, the startup files fecid.csv and exports.csv would have to be different on the different machines (so as to guarantee uniqueness). Furthermore, standardized servers, e.g. CDI servers, generally have a system-wide well-known equipment module name (in this case "CDIEQM"), and it is thus trivial to query the naming services to obtain a list of all CDI servers. In the above example, "BPMEQM" was used. The distinction between "Local Name" and "Export Name" makes sense when viewed from the perspective of the server. A server executable has a hard-coded "local" equipment function name that it wants to make available on the control system network. That is, it wants to "export" the equipment function. But the exported name must uniquely point to the local equipment function on the server in question. Note that in the above example, one could as easily have used "BPM" for the FEC Name, "BPM" for the Device Server Name, and "BPM" for the Equipment Module Name. This would work, but in the end might lead to confusion.
  • Device Name (also known as: location) : the (up to) 64-character device name identifying the particular instance of the equipment module in question. This name is not resolved by the name server or local database, but is passed as is to the targeted equipment module. In many cases it is desirable to work with device numbers rather than names; in such cases, simply passing a string representation of the number (prefaced by '#') is suggested.
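For instance, following this convention, a client could address device number 3 of the example "BPM" server by passing (illustrative only):

"BPM/#3"

instead of a device name.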

The above discussion is illustrated below:

FEC            Equipment Module      Device Server (exported)
FECName1   =>  EQM1              =>  DeviceServer1   (device list 1, property list 1)
               EQM2              =>  DeviceServer2   (device list 2, property list 2)
               EQM3              =>  DeviceServer3   (device list 3, property list 3)
               + ...                 + ...

"EQM1"@"FECNAME1" is exported as "DeviceServer1", "EQM2"@"FECNAME1" as "DeviceServer2", and so on.

  • FEC Name ("FECNAME1"): system-wide unique; matched to a network address (including port offset).
  • Equipment Module Name ("EQM1", "EQM2", ...): a local, hard-coded name identifying the equipment module; unique only within the server process (system-wide uniqueness is not required); also used as a local subdirectory.
  • Device Server Name ("DeviceServer1", ...): the exported name seen by the client; must be unique within a context. A property list is required; a device list is optional, and devices can also be accessed via number.
Finally, note that some naming schemes will want to consider device 'groups' rather than device servers. If a particular server is replicated N times in a given facility, so that there are N physical servers (each with its own unique FEC name and exported device server name), it might nonetheless be desirable to take a purely device-oriented view of this set of servers. In other words, server #1 might have the first 5 devices of a group, server #2 the next 5 devices, and so on. The TINE naming hierarchy will let you define a device 'group' which acts as a logical server for this collection of individual servers. This uses the TINE redirection features and requires a running Group Equipment Name Server (GENS), which can either be configured by an administrator or which allows plug-and-play requests from the booting servers. In the plug-and-play scenario, an initializing server will signal its intention to 'join' a device group.

So a client might make calls to power-supply controllers on what looks like a device server called "MAGNETS" and really be redirected to the various physical servers ("MAGNETS.1", "MAGNETS.2", ... for instance) which actually handle the devices requested.

