Robert Michaels, rom@jlab.org, Jefferson Lab Hall A

This file:     hallaweb.jlab.org/equipment/daq/dtime_faq.html

•  QUESTION:   What is a good amount of deadtime to tolerate? Is 30% ok?     ANSWER:   Traditionally, people doing analysis have preferred deadtime below 30%. It is a commonly held belief, whether true or not, that the accuracy of the deadtime correction is of order 10% of itself (or better), so a 30% deadtime leaves a residual 3% normalization error.
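The arithmetic behind that 3% figure can be sketched as follows (a minimal illustration, assuming the 10%-of-itself rule of thumb quoted above):

```python
# Rule-of-thumb estimate of the residual normalization error left
# after correcting for deadtime. Assumption (from the FAQ): the
# correction itself is accurate to ~10% of the deadtime.

deadtime = 0.30                # 30% measured deadtime
correction_accuracy = 0.10     # correction known to ~10% of itself

residual_error = deadtime * correction_accuracy
print(f"residual normalization error: {residual_error:.1%}")  # prints 3.0%
```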

•  QUESTION:   How do we correct for deadtime?     ANSWER:   The short answer: use the scalers and the event stream to compute the ratio of accepted triggers to input triggers. The long answer: it's best to ask one of the experts on cross-section measurements who have run in Hall A. If you use the Podd analyzer, take a look at the THaNormAna class, which probably needs to be adapted to your particular experiment.
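The short answer above can be sketched numerically (hypothetical counts; in a real analysis the per-run scaler reads come from the data stream, e.g. via THaNormAna):

```python
# Sketch of the deadtime computation from scaler counts.
# The trigger counts below are invented for illustration only.

input_triggers = 1_000_000    # triggers counted by the ungated scaler
accepted_triggers = 820_000   # triggers actually read out by the DAQ

livetime = accepted_triggers / input_triggers
deadtime = 1.0 - livetime
correction = 1.0 / livetime   # multiply raw yields by this factor

print(f"deadtime   = {deadtime:.1%}")     # prints 18.0%
print(f"correction = {correction:.3f}")   # prints 1.220
```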

•  QUESTION:   Is there deadtime when we write to MSS?     ANSWER:   Usually not; since the advent of faster disks, writing to MSS does not normally add deadtime.

•  QUESTION:   We see large deadtimes for the given rate! Why? What do we do about it?     ANSWER:   There are several possibilities. Pick among the following:

•  Possibility 1:   Abnormally large event sizes (a common problem).   Solution: The first thing to check is the event size. Sometimes one of the chambers, e.g. the VDC or FPP, develops a lot of noise, which slows down CODA. This happens, for example, if the thresholds are too low or off. Check the thresholds! A typical normal event size is 1 kbyte for two-spectrometer operation. If the event size is not the culprit and this FAQ does not answer your question, you may call me.
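A quick way to check the event size is to divide the run's data volume by its event count. A minimal sketch (the helper and the numbers are hypothetical; in practice the run size and event count come from CODA or the logbook):

```python
# Hypothetical helper to estimate the average event size for a run,
# to compare against the ~1 kbyte typical of 2-spectrometer operation.

def average_event_size_kb(run_bytes: float, n_events: int) -> float:
    """Average event size in kbytes = run size / number of events."""
    return run_bytes / n_events / 1024.0

# Example: a 2 GB run containing 2 million events
size_kb = average_event_size_kb(2e9, 2_000_000)
print(f"average event size: {size_kb:.2f} kB")  # ~1 kB is normal

if size_kb > 2.0:
    print("event size abnormally large -- check chamber thresholds")
```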

•  Possibility 2:   A scaler is double pulsing (or multiple pulsing) due to a reflection on a cable or some other cabling problem (e.g. a bad connector). A typical example is a 50% apparent deadtime at cosmics rates caused by a reflection on a cable.
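The 50% figure follows directly from the deadtime arithmetic: a double-pulsing scaler counts each input trigger twice, so even with no real deadtime the computed value comes out near 50%. A minimal illustration with invented counts:

```python
# Illustration (hypothetical counts): a reflection makes the scaler
# count every trigger twice, inflating the apparent input rate.

real_triggers = 10_000
accepted = 10_000                 # DAQ keeps up easily at cosmics rate
scaler_count = 2 * real_triggers  # each pulse counted twice

apparent_deadtime = 1.0 - accepted / scaler_count
print(f"apparent deadtime: {apparent_deadtime:.0%}")  # prints 50%
```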