
DDAS Expert Guide



The Digital Data Acquisition System (DDAS) is made up of two distinct parts: the “front end”, consisting of the PXI crate and XIA cards connected to a server PC running the data-acquisition process and data-collection modules, and the “control” part on a server PC running the merge, filter, event-building, online-sorting, storage and GUI processes.

The acquisition and data transfer are performed in modular stages and have been designed to allow further expansion and duplication without hardware limitations.


The core data-acquisition program runs on the front end as /opt/ddaq/acquiretraces-pmj/acquire-64 [options]

The startup options form a 6-bit number, given in base 10, with the bits having the following functions:

b0 = enable trace readout

b1 = enable qdc data readout

b2 = enable raw energy readout (always enable)

b3 = read configuration (ddas.par) at startup

b4 = send data to socket (localhost, port = 10000+ModNumber)

b5 = write data to file (1 per module)

The default option value is 28, i.e. b2 + b3 + b4.
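The option value is just the decimal form of the bit flags above. A minimal sketch (Python, illustrative only — the constant names are ours, not part of the DDAS software):

```python
# Bit values for the acquire-64 startup options (b0-b5 as listed above).
TRACE_READOUT = 1 << 0  # b0: enable trace readout
QDC_READOUT   = 1 << 1  # b1: enable QDC data readout
RAW_ENERGY    = 1 << 2  # b2: enable raw energy readout (always enable)
READ_CONFIG   = 1 << 3  # b3: read configuration (ddas.par) at startup
SEND_SOCKET   = 1 << 4  # b4: send data to socket (localhost, port 10000+ModNumber)
WRITE_FILE    = 1 << 5  # b5: write data to file (one per module)

# The documented default, 28, corresponds to b2 + b3 + b4:
default = RAW_ENERGY | READ_CONFIG | SEND_SOCKET
print(default)  # 28

# Example: the default plus trace readout gives 29.
print(default | TRACE_READOUT)  # 29
```

So `acquire-64 29` would add trace readout to the default behaviour.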

Thus, using these options it is possible to set up the PXI crate and modules manually (e.g. with nscope) and subsequently acquire data as described here.

The task is started and stopped by a script in the directory.

Event Collectors

Data output from the acquire-64 program is in the format described in the XIA manuals. The event collector (/root/ddas/ddas2tdr) takes this format and converts it to the TDR format, and also adds extra synchronisation information to allow it to be further merged.

Data items are packaged with Module = SlotID and Ident as:

Module        Ident
11 to 15      43 to 0
PXI SlotID    0 = Energy; 1 = CFD time; Channel#

QDCsum0-3 data are read out with Module = SlotID + 40 (channels 0-7) and Module = SlotID + 1 + 40 (channels 8-15).

Channel 0 occupies “PIXIE Channel #” 0-3, channel 1 occupies 4-7, and so on in a logical fashion.
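The QDC packaging above can be summarised in a short sketch (Python, illustrative; the function name is ours, not part of ddas2tdr, and the per-module channel wrap is our reading of the scheme):

```python
def qdc_ident(slot_id, channel, qdc_index):
    """Map a PIXIE channel (0-15) and QDC sum index (0-3) to the TDR
    Module and "PIXIE Channel #" used for QDCsum0-3 readout:
    channels 0-7 go to Module = SlotID + 40, channels 8-15 to
    Module = SlotID + 1 + 40, and each channel occupies four
    consecutive "PIXIE Channel #" slots (channel 0 -> 0-3, etc.)."""
    assert 0 <= channel <= 15 and 0 <= qdc_index <= 3
    module = slot_id + 40 + (1 if channel >= 8 else 0)
    pixie_channel = (channel % 8) * 4 + qdc_index
    return module, pixie_channel

# Slot 11, channel 1, QDCsum2 -> Module 51, "PIXIE Channel #" 6
print(qdc_ident(11, 1, 2))
# Slot 11, channel 9, QDCsum0 -> Module 52, "PIXIE Channel #" 4
print(qdc_ident(11, 9, 0))
```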


ddas2tdr has the following useful options:

-c [card #] = forces TDR Module as #

-p [port #] = writes data out on port #

-n (network) = reads from network

-b [mark BGO] = (0|1) = if set to 1, takes BGO channel information and repackages it as Modules 32, 33, 34, 35; otherwise writes out the normal item information (e.g. energies)

-m [mark pileup] = (0|1) = marks the item’s Fail bit if Finish Code = 1

-d [IP destination] = name / IP of destination port

-v [verbose level] = level of debug

-x [TimeOrder] = (0|1) = set to 1 to time-order the input data before output (always set to 1)

-i [items] = (1 | 2 | 5 | 6) = Number of data items per channel, 1=Energy, 2=Energy + CFD, 5=Energy+QDC, 6=Energy+Time+QDC

Collectors are started and stopped by a script in the directory.
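As an illustration of how the options combine, a collector command line for the module in slot 2 might look like the following. This is a sketch, not a prescribed configuration: the flag values are assumptions based on the option list above, and only the port arithmetic (11000 + module number, per the merge section) is taken from this guide.

```shell
# Illustrative only: build a ddas2tdr command line for one module.
MOD=2                        # module number / card #
OUT_PORT=$((11000 + MOD))    # port the merge process reads from

CMD="/root/ddas/ddas2tdr -n -c $MOD -p $OUT_PORT -d localhost -x 1 -i 2 -v 0"
echo "$CMD"
```

Here -n reads from the network, -x 1 time-orders the data, and -i 2 selects Energy + CFD items.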



Merge

This is the standard TDR merge process, receiving data from each Event Collector on a specified port; for DDAS these are 11000 + ModuleNumber, so one per Event Collector. Individual merge links should be enabled / disabled through Base Frame.. Merge, and the system “Setup” from the Experiment Control.

Merge will only output data once data from all enabled links are present; thus if there are no data on a link for any reason, nothing is merged until data are present on every link and in time synchronisation.

Data are output on the network to the host/port defined by the merge program / GUI (default: localhost/10310).

See /MIDAS/GREAT++/MergeServer and /MIDAS/GREAT++/startup/merger


Filter / Event Building

Data are read in from a source, filtered, built into events, and output as defined by the user.

In the cluster definitions, the items are described as “VXI”, which is equivalent to TDR Module.

By default, data should be read from the merger (localhost/10310) and output to the TapeServer (localhost/10305).
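The default localhost data path through the chain can be summarised as follows (ports as given in this guide; the table itself is illustrative Python, not part of the DDAS software):

```python
# Default localhost ports in the DDAS data path, as given in this guide.
# The first two stages use per-module port arithmetic rather than a
# single fixed port.
DATA_PATH = [
    ("acquire-64 -> event collector", "10000 + ModNumber"),
    ("ddas2tdr -> merge",             "11000 + ModuleNumber"),
    ("merge -> filter/event builder", 10310),
    ("event builder -> TapeServer",   10305),
]

for stage, port in DATA_PATH:
    print(f"{stage}: {port}")
```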



Storage

This is the standard TapeServer, which receives and stores data locally on disc, USB, network, etc.


Online Sorting

This is a standard sorting system, with input data format matching the output data format (Eurogam) from the Event Builder.



GUI

These are tcl/tk scripts, with the control of server names partly hardcoded into the scripts, but they should be able to control multiple systems. Note the use of ssh and scp to upload/download data without the use of passwords.
