Robert Michaels, firstname.lastname@example.org, Jefferson
Lab Hall A
I get asked a lot about deadtime,
so here I assemble some of the frequently asked questions and answers.
What is a good amount of deadtime to tolerate? Is 30% OK?
Traditionally people doing analysis have preferred deadtime
less than 30%.
It is a commonly held belief, whether true
or not, that the accuracy of the
deadtime correction is of order 10% of itself (or better),
so this leaves a residual 3% normalization error.
How do we correct for deadtime?
The short answer is: use the scalers and event stream
to compute the ratio of accepted triggers to input triggers.
The long answer: it's best to ask one of the experts on cross-section
measurements who have run in Hall A. If you use the Podd analyzer,
take a look at the THaNormAna class, which will probably need to
be adapted to your particular experiment.
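The short answer above can be sketched in a few lines. This is a minimal illustration, not the THaNormAna implementation; the names `triggers_in` and `triggers_accepted` are hypothetical stand-ins for the scaler count of input triggers and the count of events actually recorded in the data stream.

```python
def livetime(triggers_in, triggers_accepted):
    """Fraction of time the DAQ was alive: accepted / input triggers."""
    if triggers_in <= 0:
        raise ValueError("no input triggers recorded")
    return triggers_accepted / triggers_in

def deadtime_correction(triggers_in, triggers_accepted):
    """Factor by which to scale raw yields to correct for deadtime."""
    return 1.0 / livetime(triggers_in, triggers_accepted)

# Example: 1,000,000 input triggers, 800,000 accepted ->
# 20% deadtime, so raw yields are scaled up by a factor 1.25.
```

The correction factor multiplies the measured yield; a 30% deadtime thus inflates yields by about 1.43, which is why large deadtimes make the residual normalization error uncomfortable.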
Is there deadtime when we write to MSS?
Usually not, now that disks are fast enough.
We see large deadtimes for the given rate! Why?
What do we do about it?
There are several possibilities.
Pick among the following:
Abnormally large event sizes (a common problem).
One thing to check immediately is the event size.
Sometimes one of the chambers, e.g. the VDC or FPP, develops
a lot of noise, and this slows down CODA. This happens,
for example, if the thresholds are too low or off.
Check the thresholds! A typical
normal event size is 1 kbyte for 2-spectrometer operation.
If the event size is not the culprit and this FAQ does
not have the answer, you may call me.
A scaler is double-pulsing (or multiple-pulsing) due to
a reflection on a cable or another cabling problem (e.g. a bad connector).
A typical example is a 50% apparent deadtime at cosmics rate
caused by a reflection on a cable.
Beam is in pulsed mode (only a 60 Hz DAQ rate, but high deadtime).
I get asked about this about twice a year.
Of course, it's not a real problem once you think about it:
all the triggers arrive during the short beam pulses, so the
instantaneous rate -- and hence the deadtime -- is much higher
than the low average rate would suggest.
Workstation is overloaded with processes, or a critical disk partition is full.
Reduce the number of things running on the workstation that runs runcontrol.
For example, it's been observed that running "xscaler"
or Firefox can cause deadtime.
Also, look at the swap space
to see if it's full; the computer gets really
slow if that happens. A local disk
might also be 100% full -- especially bad if it's the root partition --
check with `df -k`.
Generally, these computers are fast enough for CODA, but if the
resources are exhausted they will slow down.
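A quick check for the full-partition case can be scripted. This is a small sketch using Python's standard `shutil.disk_usage` (the Python analogue of `df -k`); the mount points listed are examples, so substitute the partitions your CODA setup actually writes to:

```python
import shutil

def partition_usage(path):
    """Percent of the partition at `path` that is in use."""
    total, used, _free = shutil.disk_usage(path)
    return 100.0 * used / total

# Example mount points -- replace with your root and CODA data partitions.
for mount in ("/",):
    pct = partition_usage(mount)
    if pct > 95:
        print(f"WARNING: {mount} is {pct:.0f}% full")
```

Running something like this periodically, or just `df -k` by hand, catches the 100%-full root partition before it shows up as mysterious deadtime.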
Reading rapidly from the same disk to which we are writing
may cause deadtime.
Do not run rapid-I/O processes on the same disk to
which CODA is writing data.
This causes the disk head
to jump around and creates deadtime.
We avoid this
for MSS writing, for example.
Try to avoid it with the analyzer as well, e.g. by
analyzing the previous run instead of the current run.
For sufficiently low rates, however, this is not an issue.
Workstation is sluggish to respond to any operation
(e.g. changing the work space).
Possibly related to #4, but in any case rebooting
the workstation and restarting CODA may fix this.
It may also
happen that some foreign disk mounted by the automounter
has died, for example the disk where we write halog entries.
If a foreign disk dies, we can still run CODA, but the
workstation will be sluggish until we reboot, or until
the automounter unmounts the foreign disk.
You can give me a call if this problem persists.
This page maintained by email@example.com