Bob Michaels, v4.1, Feb 11, 2011
paging Bob e-mail: firstname.lastname@example.org
This file: Information about how to run DAQ and how to configure the trigger in Hall A. See also hallaweb.jlab.org for useful links, such as the ROOT/C++ analyzer.
I. Where to Run Things
Below is a table showing where to run the different codes using the public accounts adaq, atrig, and a-onl. The run coordinators should know the passwords.
|Program       ||Where to run          ||Account|
|Podd ANALYZER ||adaql3-10 (but not 2) ||adaq   |
|trigsetup     ||adaql1 or 2           ||atrig  |
|xscaler       ||adaql1 or 4           ||adaq   |
For more info on running the spectrometer DAQ, see the Hall A DAQ guide.
II. General Computer Information
adaql2 is for running the spectrometer DAQ. adaql1 is for general use and serves as a backup DAQ computer; it is also sometimes used for the Parity and Moller DAQ. The 'compton' computer belongs to the Compton DAQ and should be avoided for general use. The controls computer is hacsbc2. adaqep is for the eP energy-measurement DAQ. adaql3 and adaql10 house large work disks. adaql3, l4, l5, l6, and so on are for running data analysis with the ROOT/C++ analyzer ``Podd'' or ESPACE, but avoid any computer where CODA is running. The work disks are /adaqlN/workM, where N=1,2,... and M=1,2,3,.... Keep ROOT output and other big temporary files on those work disks, NOT on the adaqfs fileserver (i.e. /adaqfs/halla...); if adaqfs fills up it causes a lot of problems.
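Since big ROOT output should land on whichever work disk has room, the choice of disk can be scripted. Below is a minimal sketch, assuming the /adaqlN/workM naming above; the helper name and the candidate list are illustrative, not an official Hall A tool:

```python
import shutil
from pathlib import Path

def pick_work_disk(candidates):
    """Return the candidate work disk with the most free space,
    or None if none of the candidates exist."""
    best, best_free = None, -1
    for d in candidates:
        p = Path(d)
        if not p.is_dir():          # skip unmounted or missing disks
            continue
        free = shutil.disk_usage(p).free
        if free > best_free:
            best, best_free = p, free
    return best

# e.g. pick_work_disk(["/adaql3/work1", "/adaql10/work2"])
```

Pointing analyzer output at the chosen work disk (rather than at /adaqfs) keeps the fileserver from filling up.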
Some PCs are also installed in the 2nd-floor cubicles. Priority for their use goes to running experiments. The passwords for the public accounts on these computers are kept on a sheet of paper on the wall near the whiteboard in the front counting room.
Most of the computers are maintained by Ole Hansen with Bob Michaels as a backup. hacsbc2 is maintained by Javier Gomez.
How to reboot workstations: Hit Ctrl-Alt-F1 to go to console mode, then Ctrl-Alt-Del (i.e. hold down "Ctrl", "Alt", and "Del" simultaneously).
III. DAQ Documentation
Detailed information about running the spectrometer DAQ in Hall A may be found at hallaweb.jlab.org/equipment/daq/guide2.html for the CODA 2.x setup (read this if nothing else).
Also available is documentation about the raw data structure (dstruct.html) and FAQs about deadtime (dtime_faq.html), including electronics deadtime.
IV. HRS Trigger
This information pertains to the HRS. You'll need to look elsewhere for Bigbite info ... sorry.
The HRS spectrometer trigger is described in some detail in the OPS manual. Here I give a brief overview and describe how to download a new setup. Simplified instructions for downloading and checking the trigger, as well as further details (diagrams, etc.), are available from the Hall A DAQ web pages (e.g. trigger.html).
Overview of the High Resolution Spectrometer (HRS) trigger: Scintillators make the main trigger in each spectrometer arm, and a coincidence is formed between the arms. The main trigger requires that scintillator planes S1 and S2 both fired (both phototubes in each paddle) in a simple overlap; thus the main trigger requires four PMTs. The coincidence between spectrometers is formed in an overlap AND circuit. The Right Spectrometer singles triggers are called T1, the Left Spectrometer singles triggers are called T3, and the coincidence triggers are T5. Other triggers may be formed which require other detectors. The most important are T2 and T4, which require 2 out of 3 from among S1, S2, and a third detector (this other detector may be, e.g., S0 or the gas Cherenkov). Note: during e04018, T2 is the ``or'' of S1 and S2 because there is no third detector on the R-arm. These "loose" or "majority logic" triggers allow one to measure the efficiency of the main trigger. The experiment should always keep about 5 - 10 Hz of these loose triggers.
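The loose-trigger efficiency measurement above amounts to counting, among events taken with the majority-logic trigger, the fraction that also satisfy the main-trigger condition. A minimal sketch of the arithmetic (the function name and the binomial error estimate are mine, not part of any Hall A software):

```python
from math import sqrt

def trigger_efficiency(n_loose, n_loose_passing_main):
    """Main-trigger efficiency from loose (majority-logic) triggers:
    the fraction of loose-trigger events that also satisfy the main
    condition (S1 AND S2, all four PMTs), with a binomial error."""
    eff = n_loose_passing_main / n_loose
    err = sqrt(eff * (1.0 - eff) / n_loose)
    return eff, err

# e.g. 9900 of 10000 loose events also pass the main trigger:
# trigger_efficiency(10000, 9900) -> (0.99, ~0.001)
```

Keeping the recommended 5 - 10 Hz of loose triggers is what makes this statistics accumulate quickly enough to be useful.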
Downloading the trigger: During coincidence experiments, the only changes between kinematic settings that affect the trigger are delays, which change with the momentum and particle ID. Of course, if you only care about single-arm triggers, you may use the default settings. To change the trigger, log in to a Linux PC such as adaql1 or adaql2 in the "atrig" account (ask the run coordinator for the password) and type "trigsetup" from anywhere. This starts a GUI whose usage is self-explanatory. Further details are at the link above (trigger.html).
V. Scalers and Scaler Display
``xscaler'' is the GUI that displays scaler data online. Normally xscaler is running on the hapc5 screen above the adaql2 console (in rack CH01A09). If it is not running, log in as adaq, type "xscaler", and follow the instructions it prints out. There are two versions (old and new); the new version is recommended, but that's up to you.
The ``old'' asynchronous readout of scalers is event type 140, injected into the datastream every few seconds. A ``new'' readout also exists, in which the data are read at every synch event (every 100 events) if the DAQ is in buffered mode, and every 200 events regardless of mode. This readout also contains the helicity and timestamp information needed for G0 mode. There is also an event type 100, which is the last reading of the ring buffers. Details about these new readouts are at www.jlab.org/~rom/scaler_roc10.html, and for the Qweak helicity scheme see hallaweb.jlab.org/equipment/daq/qweak_helicity.html.
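Assuming the cadence above means one ``new'' scaler read per 100 events in buffered mode and one per 200 events otherwise (my reading of the wording; check the roc10 page if it matters), the number of scaler events expected in a run can be estimated:

```python
def expected_scaler_reads(n_events, buffered):
    """Estimate the number of 'new'-readout scaler events in a run:
    one read per synch event (every 100 events) in buffered mode,
    one per 200 events otherwise.  Illustrative only."""
    interval = 100 if buffered else 200
    return n_events // interval

# e.g. a 1M-event buffered run -> about 10000 scaler events
```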
Scalers are also read and injected into the datastream at the end of each run. A file scaler_history.dat is maintained, which is a complete history of end-of-run scaler readings. These files should be in ~a-onl/scaler.
R. Michaels -- e-mail: email@example.com