GEn RAW DATA STRUCTURE
R. Michaels, Jefferson Lab, e-mail: rom@jlab.org, updated Mar 4, 2006
The raw data structure recorded in the GeN DAQ datastream will be described. This includes the following data: 1) event data recorded with each accepted trigger (fastbus and VME data from the two spectrometers and from the BPM and raster); 2) scalers; 3) EPICS information; and 4) "special events" which are inserted once or infrequently during a run, such as prescale information and the detector map. These data are decoded by the Podd Analyzer classes developed for GeN; see also the GeN Analyzer documentation. I won't go into that here, but suffice it to say you need an up-to-date database with a correct crate map and detector maps. I will not describe the various other datastreams in Hall A, such as the Moller polarimeter, e-P energy measurement, Compton polarimeter, etc.
The best detailed reference on the CODA format is the old manual for CODA version 1.4; see Appendix E. I have saved this postscript file as coda1.4.ps. I will give a simplified summary here. The data come as an array of 32-bit words in the case of event data, or as character data in the case of some of the special events. In general, the first part of the event structure is "header" information, indicating how long the event is, what the event type is, etc. The most important header and other identifying data words are:
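For orientation, here is a minimal sketch of unpacking the leading words of an event. I am assuming the usual CODA 1.4 convention in which the first word counts the 32-bit words that follow it and the event type sits in the upper 16 bits of the second word; verify against Appendix E of the manual before relying on this, and note the struct and function names are mine, not from the Hall A code.

```cpp
#include <cstdint>

// Hedged sketch of decoding the leading CODA event-header words.
// Assumption (check the CODA 1.4 manual): word 0 holds the number of
// 32-bit words that follow it, and the event type occupies the upper
// 16 bits of word 1.
struct CodaHeader {
  uint32_t length;  // total event length in words, including word 0
  uint32_t evtype;  // e.g. < 15 physics, 131 EPICS, 140 scalers
};

CodaHeader decode_header(const uint32_t* evbuffer) {
  CodaHeader h;
  h.length = evbuffer[0] + 1;  // stored count excludes the length word itself
  h.evtype = (evbuffer[1] & 0xffff0000u) >> 16;
  return h;
}
```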
After the header, the data are organized in banks corresponding to each ROC. A ROC is a "readout controller": a crate of fastbus or VME. We have several ROCs in our DAQ event stream, listed in the table below. Different CODA configs may exist with different mixtures of crates.

GeN DAQ CODA Configurations:

  GeN     All crates
  test*   As of this writing, several test configs with a subset of crates.

Layout of DAQ crates (after the switch to fastbus for the DC crates):
      do 150 i = 1,MAXROC
c        n1 = pointer to first word of ROC
         if(i.eq.1) then
            n1 = evbuffer(3) + 4
         else
            n1 = n1roc(irn(i-1))+lenroc(irn(i-1))+1
         endif
c        irn = ROC number obtained from CODA
         irn(i) = ishft(iand(evbuffer(n1+1),'ff0000'x),-16)
         if(irn(i).le.0.or.irn(i).gt.maxroc) then
            write(6,*) 'fatal error in decoding'
            stop
         endif
c        Store pointers and lengths
         if(i.eq.1) then
            n1roc(irn(1)) = n1
            lenroc(irn(1)) = evbuffer(n1)
            lentot = n1 + evbuffer(n1)
         else
            n1roc(irn(i)) = n1roc(irn(i-1))+lenroc(irn(i-1))+1
            lenroc(irn(i)) = evbuffer(n1roc(irn(i)))
            lentot = lentot + lenroc(irn(i)) + 1
         endif
         if(lentot.ge.len-1) goto 200
 150  end do
 200  numroc = i
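For readers of the C++ analyzer, the same bank-walking logic can be sketched as follows (0-based indexing; struct and function names are mine, not from Podd). As in the Fortran, each ROC bank begins with a length word, the ROC number sits in bits 16-23 of the following word, and the bank occupies (length + 1) words in total.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of walking the ROC banks in a CODA event buffer, following
// the Fortran above.  Names are illustrative, not from the analyzer.
struct RocBank {
  int roc;        // ROC number from CODA
  size_t start;   // index of the bank's length word in evbuffer
  uint32_t len;   // number of words following the length word
};

std::vector<RocBank> find_rocs(const uint32_t* evbuffer, size_t evlen,
                               int maxroc) {
  std::vector<RocBank> rocs;
  // Fortran: n1 = evbuffer(3) + 4, converted to 0-based indexing.
  size_t n1 = evbuffer[2] + 3;
  while (n1 + 1 < evlen) {
    uint32_t blen = evbuffer[n1];                    // bank length
    int roc = (evbuffer[n1 + 1] & 0xff0000) >> 16;   // ROC number
    if (roc <= 0 || roc > maxroc) break;             // decoding error
    rocs.push_back({roc, n1, blen});
    n1 += blen + 1;                                  // next ROC bank
  }
  return rocs;
}
```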
To read CODA files with primitive code, try the CODA classes (C++ classes for the CODA interface). These and the hana decoder have been integrated into the new Podd analyzer, so you probably don't have to worry about it.
Each crate adds some "event flag" data, which are extra words that don't come from a module. The set of flags that exist may change over time. At present (Feb 1, 2006), the following data are kept for each crate.

Scalers count raw hits on phototubes, as well as important quantities like charge and triggers which are used to normalize the experiment. See also the scaler schematic. Scalers appear in the raw data as event type 140 (so-called "scaler events") as well as in the BBVME1 data periodically (typically every 200 events, or at synch events). In addition, the scalers are displayed online using the "xscaler" code, and the meaning of the channels is evident from the scaler configuration file "scaler.map" which that code uses. To run "xscaler", log in to adaql4 (or 5, 6, ...) as the "adaq" account, type "xscaler", and do what it says. Also, scalers are processed by ~a-onl/scripts/halla_scaler_process.tcl for convenient halog display at end-of-run. Note that to run the analyzer you also need the "scaler.map" file. Usage within the analyzer is explained at hallaweb.jlab.org/equipment/daq/THaScaler.html.

Structure of the raw data: the headers are of the form 0xb0dN00XY for GeN scalers, with other headers if you ever use the HRS (explained in the standard DAQ pages). Here, N is the "software slot" and XY encodes the number of scaler channels in the lowest 6 bits (16 or 32 channels). The software slot is closely related to the physical slot, but there is one complication. The first physical slot (on the far left) is the helicity-gated scaler. These data are sorted by helicity on the fly and packaged into two virtual scalers corresponding to slots 0 and 1. The next slot (2) is the normalization scaler, not gated by helicity. Subsequent physical slots correspond to software slots 3, 4, 5, etc. The helicity-gated scalers are FIFO scalers run in G0 helicity mode; see also www.jlab.org/~rom/g0helicity.html.
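The 0xb0dN00XY header just described can be unpacked with a few bit operations. A minimal sketch (the masks follow the description above; the function names are mine, not from the real decoder):

```cpp
#include <cstdint>

// Unpacking the GeN scaler bank header 0xb0dN00XY described above:
// N is the "software slot" (one hex digit) and the lowest 6 bits of XY
// give the number of scaler channels (16 or 32).  Names illustrative.
bool is_scaler_header(uint32_t w) {
  return (w & 0xfff00000u) == 0xb0d00000u;  // matches 0xb0dN00XY
}
int software_slot(uint32_t w) { return (w >> 16) & 0xf; }
int num_channels(uint32_t w)  { return w & 0x3f; }
```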
The helicity scalers appear in the data stream in event type 140 already sorted by helicity (the sorting is done online) but also in raw form; see the section below on BigBite scalers.
The scalers are cleared at the beginning of a run and read out a final time at the end of the run. The timing of the gates, which enable the scalers to count, is such that the end-of-run scaler readout is done after the gates are turned off; after this final readout the gates are re-enabled, so that the scalers can count in between runs. The end-of-run scaler data are written automatically to the file ~a-onl/scaler/scaler_history.dat. Routines exist for reading this file and pulling out a particular scaler channel. Scaler data are also inserted into the data stream as event type 140, typically every 8 seconds, but not synchronized to any other event. The capability to read scalers at 30 Hz with zero deadtime is also supported; these data appear in BBVME1 (ROC23), see the section below.

The ROC23 data contain the normalization scaler data (helicity-gated and not) once per 200 events, or once per synch event (if we run in buffered mode). They also contain a full flushing of the ring buffer for a subset of channels (we can't flush all 32 because of deadtime). This scheme allows us to run with the helicity delayed in G0 helicity mode. See also www.jlab.org/~rom/g0helicity.html. An event type 100, which has the same format as ROC23, is written at the end of the run; it is the final scaler reading from ROC23.

Here is the format of the ROC23 data. There is the usual stuff from QDCs and TDCs (unless someone drops them); then, starting with the header 0xfb0b0002, comes the scaler data, e.g.

  0xfb0b0002 0xb0d00000 0x6c80910a 0x00115144 0x00023455

The header 0xb0d00000 denotes the helicity MINUS (bit = 0) scaler and 0xb0d10000 the helicity PLUS (bit = 1) scaler, 32 channels each; the data have been summed for you. After these 64 channels you get 3 words corresponding to the number of readings in the FIFO for MINUS, PLUS, and "BAD".
Here "BAD" means the online algorithm had trouble predicting the helicity (usually a cable fell out); of course there should be zero bad readings. After these, you get the RING BUFFER data. This starts with 0xfb1b0090, where 0xfb1b0000 is a header and the lower bits (here 0x90 = 144) give the number of readings in the buffer. Subsequent readings are groups of 6 words corresponding to the clock, qrt&helicity, trigger 3, upstream BCM with gain 3, L1A, and trigger 1. The qrt&helicity word is encoded as follows: qrt = (data & 0x10)>>4 and hel = (data & 0x1). See also the code tscalroc23_main.C and tscalring23_main.C in hana_scaler. These data allow the user to check that at least the vital online-summed data above are correct. After the ring buffer data, starting with header 0xb0d20000, come the 32 channels of non-helicity-gated normalization data.

The trigger supervisor TS1 appears in the datastream as ROC28. The first word in the hexadecimal dump is a header 0xfadcb0b9, followed by the event type (2 here), then the TIR data (an I/O register) containing the helicity, gate, and QRT info as decoded by THaHelicity. After that is an event counter (0x4cc4 here). Then we read out two ADCs, with headers 0xfadc1182 and 0xfadd1182; these are related to the BPM/raster data. If the header 0xfb0b0020 appears (as here), it means the next 32 words are the scaler in that crate. Finally, 0xfabc0007 marks the synch data (see the section on Event Flags). See the example dump of ROC28.

Data from various EPICS databases are periodically inserted into the datastream, as well as placed in text files that are saved at the start of run and end of run. These text files are stored as, for example, ~a-onl/epics/runfiles/Start_of_Run_1047.epics and End_of_Run_1047.epics, where 1047 is the run number. These data are also written to the electronics logbook HALOG.
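Returning to the ROC23 scaler block: the helicity-bank headers and the ring-buffer words lend themselves to a few small helpers. This is a hedged sketch; the masks and the qrt/hel extractions follow the text above, but the function names are mine, and the real decoding lives in tscalroc23_main.C / tscalring23_main.C in hana_scaler.

```cpp
#include <cstdint>

// Helpers for the ROC23 scaler data described above.  Masks follow the
// text; names are illustrative, not from hana_scaler.
int hel_of_bank(uint32_t hdr) { return (hdr >> 16) & 0x1; }  // 0xb0d00000 -> 0 (MINUS), 0xb0d10000 -> 1 (PLUS)
int ring_count(uint32_t hdr)  { return hdr & 0xffff; }       // 0xfb1b0090 -> 0x90 = 144 readings
// Each ring-buffer reading is 6 words: clock, qrt&helicity, trigger 3,
// upstream BCM (gain 3), L1A, trigger 1.  The qrt&helicity word:
int qrt_of(uint32_t w) { return (w & 0x10) >> 4; }
int hel_of(uint32_t w) { return w & 0x1; }
```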
Each experiment is permitted to modify the script that controls this (ask me), but please let only ONE experienced, qualified person modify the script (and it is probably good enough already). Incidentally, nothing else dealing with the DAQ should ever be modified without asking me. I leave things open, but I must be informed.

The data which are inserted periodically include the beam current, position monitors, magnetic fields in our spectrometers, some beamline elements, the beam energy, collimator positions, and several other things. Approximately every 30 seconds, a long list of information from the accelerator and from Hall A is inserted as event type 131. Approximately every 5 seconds, a shorter list of information for which rapid updates are desirable is inserted as event type 131. The shorter list contains the BPMs near the target and the charge monitor data. The synchronization of these data relative to our datastream is good to perhaps 1 second. Here is a fictitious example of the rapidly updated EPICS data which are inserted approximately every 5 seconds:

  Tue Aug 27 12:59:43 EDT 2002

There is some code in the C++ analyzer to pull out the data; see for example THaOutput. However, since the EPICS "event" data are characters, they are viewable simply with grep. From a Linux box you can type "grep -a IPM1H0 e01012_1455.dat.0" on the data file.

Event types less than 15 are physics triggers of various types. There are also prestart, go, and end "events". Event types greater than 99 are "special events", which are not really "events" and not particularly "special" either -- they are data inserted into the CODA datastream which are useful to record with the event data. The following special events exist: