Bob Michaels, v3.1, Sep 13 2004
pager: (757)-584-7410, e-mail: rom@jlab.org
This file: hallaweb.jlab.org/equipment/daq/guide.html
This is information about how to run the DAQ and how to configure the trigger in Hall A. See also hallaweb.jlab.org for useful links such as the ROOT/C++ analyzer. Warning: this is NOT for the HAPPEX DAQ, which is a separate system; see the HAPPEX web page and the HAPPEX shift worker manual for more information about the HAPPEX DAQ.
I. Where to Run Things
Below is a table showing where to run the different codes using the public accounts adaq, atrig, a-onl, and (for HAPPEX) apar. The run coordinators should know the passwords.
Code | Computer | Public Account
Any HAPPEX stuff | See the HAPPEX "How-to". | apar
CODA (runcontrol) | adaql2 | a-onl
ANALYZER or ESPACE | adaql1, 3, 4, or 5 (not l2) | adaq
trigsetup | adaql1 or 2 (Linux) | atrig
xscaler (two versions) | adaql1 | adaq
The spectrometer DAQ runs on the a-onl account on Linux computer adaql2.
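For example, one might log in to the machines in the table like this (using ssh is my assumption about how the counting-house machines are reached; the hostnames and accounts are those in the table above, and the run coordinator has the passwords):

    ssh a-onl@adaql2     # CODA (runcontrol)
    ssh adaq@adaql3      # ANALYZER or ESPACE (also adaql1, l4, l5; not adaql2)
    ssh atrig@adaql1     # trigsetup (adaql2 also works)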
II. General Computer Information
Computers: adaqs1 and 'compton' belong to the Compton DAQ and should be avoided. Typically adaql2 is used for the spectrometer DAQ, with adaql1 as a backup; adaql1 is also used for the Parity and Moller DAQ, and adaqep for the eP energy-measurement DAQ. adaql1, l3, l4, or l5 may be used for running analysis with the ROOT/C++ analyzer or ESPACE, but avoid any computer on which CODA is running. Note the large number of ``work'' disks where you may keep scratch files such as hbook files; the disks are /adaqlN/workM, where N=1,2,... and M=1,2,3,... (an example of using them is sketched below). adaqlr3 is a Linux PC that can be used as an X-terminal, and some PCs are also installed in the 2nd-floor cubicles; priority for their use goes to running experiments. There are 3 Suns (adaqs1, s2, and s3), but they are being phased out. For the time being xscaler can still be run on the adaqs3 screen, but see Section V: the Linux versions normally run on adaql1.
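An example of using the work disks mentioned above (the particular disk and file name are only an illustration of the /adaqlN/workM pattern; run1234.hbook is a hypothetical scratch file):

    ls /adaql3/                        # see which workM disks this machine has
    cp run1234.hbook /adaql3/work1/    # park a scratch hbook file on a work disk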
How to reboot workstations: For Linux, hit Ctrl-Alt-F1 to go to console mode, then Ctrl-Alt-Del; or see Ole's instructions, which may be posted near the terminal.
III. CODA
Detailed information about running the spectrometer DAQ in Hall A may be found at hallaweb.jlab.org/equipment/daq/guide2.html, which covers the CODA 2.x setup (read this if nothing else).
Documentation about the raw data structure (dstruct.html) and FAQs about deadtime (dtime_faq.html), including electronics deadtime, are also available.
IV. Trigger
The spectrometer trigger is described in some detail in the OPS manual. Here I give a brief overview and describe how to download a new setup. First, here are some simplified instructions to download and check the trigger:
hallaweb.jlab.org/equipment/daq/trigger.html.
Overview of the High Resolution Spectrometer (HRS) trigger: Scintillators make the main trigger in each spectrometer arm, and a coincidence is formed between the spectrometer arms. The main trigger is formed by requiring that scintillator planes S1 and S2 both fired (both phototubes in each paddle) in a simple overlap; thus, the main trigger requires four PMTs. The coincidence between spectrometers is formed in an overlap AND circuit. The Right Spectrometer singles triggers are called T1, the Left Spectrometer singles triggers are called T3, and the coincidence triggers are T5. Other triggers may be formed which require other detectors. The most important are T2 and T4, which require 2 out of 3 from among S1, S2, and another detector (this other detector is S0 on the Left arm and the Gas Cerenkov on the Right arm for the e94107 experiment, but it has been different at other times; ask me for details). These "loose" or "majority logic" triggers allow one to measure the efficiency of the main trigger. The experiment should always keep about 5 - 10 Hz of these loose triggers. During e94107 we also plan to have triggers involving the aerogel; see the above link (trigger.html) for details. A schematic summary of this logic is given below.
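Schematically, the logic just described can be summarized as follows (this is only a summary of the text above, not the actual electronics diagram; see the OPS manual and trigger.html for the real circuit):

    T1 (Right arm singles) = (S1 paddle, both PMTs) AND (S2 paddle, both PMTs), in overlap
    T3 (Left arm singles)  = (S1 paddle, both PMTs) AND (S2 paddle, both PMTs), in overlap
    T5 (coincidence)       = T1 AND T3   (overlap AND circuit)
    T2, T4 ("loose")       = at least 2 of { S1, S2, third detector }
                             (for e94107 the third detector is the Gas Cerenkov on the
                              Right arm and S0 on the Left arm)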
Downloading the trigger: During coincidence experiments, the only changes between kinematic settings that affect the trigger are the delays, which change with the momentum and particle ID. Of course, if you only care about single-arm triggers, you may use the default settings. To change the trigger, log in to a Linux PC such as adaql1 or l2 in the "atrig" account (ask the run coordinator for the password) and type "trigsetup" from anywhere; this starts a GUI whose usage is obvious (an example is sketched below). Further details are at the link above (trigger.html).
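For example (the ssh login is my assumption about how you reach the machine; the account, hosts, and the trigsetup command are those given above):

    ssh atrig@adaql1     # or adaql2; ask the run coordinator for the password
    trigsetup            # works from any directory; starts the trigger-setup GUI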
V. Scalers and Scaler Display
There are two versions of ``xscaler'', the GUI that displays scaler data online. The ``old'' version, which used to run on SunOS, was ported to Linux (thanks to Calvin Howell) and normally runs on an adaql1 screen. If it is not running, log in as adaq and go to the appropriate directory: (1) ~adaq/$EXPERIMENT/right/scaler for the right arm, or (2) ~adaq/$EXPERIMENT/left/scaler for the left arm. ($EXPERIMENT is an environment variable, e.g. e94107.) These directories can normally be reached by typing "golscaler" or "gorscaler", where "l" and "r" stand for the left and right HRS respectively. Once in the correct directory, type "./xscaler" -- don't forget the dot (.) and the slash (/) -- and remember to push the "Start" button in the bottom left corner; an example session is sketched below. For experts: the configuration of the scaler information is controlled by the file scaler.config in the appropriate directory.
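A typical session for the left arm might look like this (the commands are the ones quoted above; the right arm is the same with "gorscaler"):

    # on adaql1, logged in as adaq
    golscaler      # goes to ~adaq/$EXPERIMENT/left/scaler
    ./xscaler      # note the leading dot and slash
    # then push the "Start" button in the bottom left corner of the GUI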
New ROOT/GUI version: Using the adaq account on adaql1, go to ~adaq/$EXPERIMENT/scaler and type "./xscaler Left", "./xscaler Right", or "./xscaler dvcs"; these start xscaler for the Left, Right, and dvcs crates respectively. Again, don't forget the dot (.) and the slash (/); an example is sketched below. For more information see the README file there.
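For example, to watch the Left-arm crate with the new version (paths and commands as quoted above; the explicit cd is my way of getting to the directory, since the text gives no alias for it):

    # on adaql1, logged in as adaq
    cd ~adaq/$EXPERIMENT/scaler     # $EXPERIMENT is e.g. e94107
    ./xscaler Left                  # or: ./xscaler Right, ./xscaler dvcs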
The ``old'' asynchronous readout of scalers is event type 140; these events are injected into the datastream every few seconds. A ``new'' readout also exists, in which the data are read at every synch event (every 100 events) if the DAQ is in buffered mode, and every 200 events regardless of mode. This readout also contains the helicity and timestamp information necessary for G0 mode. There is also an event type 100, which is the last reading of the ring buffers. Details about these new readouts are at www.jlab.org/~rom/scaler_roc10.html, and the G0 helicity scheme is described at www.jlab.org/~rom/g0helicity.html.
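As a rough illustration of what those intervals mean in time (the trigger rate here is invented purely for the arithmetic): at an accepted event rate of 1 kHz, a readout every 100 events corresponds to about 10 scaler readings per second in buffered mode, and a readout every 200 events to about 5 per second.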
Scalers are also read and injected into the datastream at the end of the run. A file scaler_history.dat is maintained, which is a complete history of the scaler readings at the end of each run. These files should be in ~a-onl/scaler.
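To glance at that history one might, for example (ls and tail are just an illustration; the directory and file name are the ones given above, and the experiment may keep more than one history file):

    ls -l ~a-onl/scaler/                      # see which history files exist
    tail ~a-onl/scaler/scaler_history.dat     # most recent end-of-run readings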
R. Michaels -- e-mail: rom@jlab.org