G2p Analysis Minutes
Minutes of the weekly analysis meetings
6/19/2013
Present: Karl, JP, Vince, Jixie, Ellie, Kalyan, Chao, Jie, Min, Melissa
By-Phone: Ryan, Toby, Pengjia, Alex
Feature presentations:
- Ryan
- Gave a recap of the y-target calibration for the LHRS. Last time he showed that he got strange results
during the optimization, specifically for the (0,0) and (-4,0) settings. By using a different run for
the (-4,0) setting he was able to improve the results. He then tried maximizing the number of events used
per hole while keeping the counts approximately equal across holes. This improved the resolution of
the results. He also tried increasing the order of the optimization in the database. This also improved
the resolution, but may not be a reasonable solution, since increasing the order can over-constrain the matrix.
His presentation can be found here.
- Pengjia
- Gave an update on BPM analysis. He is currently rewriting the insert and calibration piece of the
beampackage to include the new filter method presented last time.
- Also working on a beam-move check that uses RMS detection for fast beam moves, looks at current trip
information, and splits events if the beam moves more than 0.3 mm (a sketch of this splitting idea follows
this entry). He is still debugging this.
- He also discussed pedestal subtraction for the BPM calibration. The pedestal shown was quite wide and had
multiple peaks. Pengjia will study this further to determine whether this behavior is consistent across all
runs, or was limited to a specific period when conditions were changing. His presentation can be found here.
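For reference, the following is a minimal sketch of how such an event split might look, assuming per-event BPM positions in mm are already available. The function name, window size, and rolling-RMS criterion are illustrative assumptions, not Pengjia's actual beampackage code.

```python
# Hypothetical sketch: flag fast beam moves with a block-wise RMS/mean check and
# split the run into event ranges whenever the beam position shifts by > 0.3 mm.
import numpy as np

def split_on_beam_moves(x_bpm_mm, window=500, move_threshold_mm=0.3):
    """Return a list of (start, stop) event-index ranges with a stable beam."""
    n = len(x_bpm_mm)
    boundaries = [0]
    ref_mean = np.mean(x_bpm_mm[:window])          # reference position for the current block
    for i in range(window, n, window):
        block = x_bpm_mm[i:i + window]
        block_mean = np.mean(block)
        block_rms = np.std(block)                  # large RMS within a block hints at a fast move
        if abs(block_mean - ref_mean) > move_threshold_mm or block_rms > move_threshold_mm:
            boundaries.append(i)                   # start a new event range here
            ref_mean = block_mean
    boundaries.append(n)
    return list(zip(boundaries[:-1], boundaries[1:]))

# Example with a simulated 1 mm beam jump halfway through:
x = np.concatenate([np.random.normal(0.0, 0.05, 5000),
                    np.random.normal(1.0, 0.05, 5000)])
print(split_on_beam_moves(x))   # roughly [(0, 5000), (5000, 10000)]
```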
- Toby
- Worked with Josh and James to determine that there was in fact a bug in the online code used to determine
target polarizations. This means the 2.5T polarization is actually ~50% of what was reported online.
This does not affect the 5T results.
- Showed details of his method for determining the TE points (a sketch of the window search follows this
entry). He takes the time reported online for the TE as the approximate start time, does a 0th-order
(constant) fit of the points, and looks at the chi-squared value. He then adds points until he finds the
end point of the TE. Finally, he adds points at the beginning of the TE to minimize the chi-squared value.
The polarization is then averaged over each run. He questioned how the polarization decay contributes to
the uncertainty for each run, since the polarization is actually decaying over each run. JP suggested using
the average of the decay curve to determine this. Toby's presentation can be found here.
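The following is a minimal sketch of the window-growing idea described above: fit a constant (0th-order polynomial) to the polarization-signal points near the online TE time and extend the window point by point as long as the reduced chi-squared improves. Function names, the greedy search, and the use of per-point uncertainties are assumptions for illustration, not Toby's actual code.

```python
import numpy as np

def reduced_chi2_const(y, sigma):
    """Reduced chi-squared of a 0th-order (constant) fit to the points y."""
    mean = np.average(y, weights=1.0 / sigma**2)
    chi2 = np.sum(((y - mean) / sigma) ** 2)
    ndf = len(y) - 1
    return chi2 / ndf if ndf > 0 else np.inf

def find_te_window(y, sigma, i_start, i_stop):
    """Grow the window [i_start, i_stop) one point at a time while the fit keeps improving."""
    best = reduced_chi2_const(y[i_start:i_stop], sigma[i_start:i_stop])
    improved = True
    while improved:
        improved = False
        # try adding one point at the end, then one at the beginning
        for lo, hi in ((i_start, i_stop + 1), (i_start - 1, i_stop)):
            if lo < 0 or hi > len(y):
                continue
            trial = reduced_chi2_const(y[lo:hi], sigma[lo:hi])
            if trial < best:
                best, i_start, i_stop = trial, lo, hi
                improved = True
                break
    return i_start, i_stop, best
```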
- Jie
- Has completed the efficiency study for multi-track events. This information will be in the MySQL
database soon.
- Also working on data quality checks for optics variables (a sketch of a per-run stability check follows
this entry). He looked at the t0 calibration first. One concern is how "negative time" events are being
dealt with; Jie will check that they aren't being cut out. He also looked at the transport and rotated x, y,
theta and phi variables. Within each kinematic setting, these variables look stable over all production runs.
Jie will take a closer look at the elastic settings next. His presentation can be found here.
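The minutes only state that the variables "look stable"; one simple way to quantify that is to compare per-run means against the setting-wide average. The sketch below assumes the variable values and run numbers have already been extracted from the replayed data; the names and the 3-sigma flag are illustrative assumptions, not Jie's actual check.

```python
import numpy as np

def run_stability(run_numbers, values, n_sigma=3.0):
    """Per-run mean/RMS of an optics variable; flag runs far from the setting-wide mean."""
    run_numbers = np.asarray(run_numbers)
    values = np.asarray(values)
    runs = np.unique(run_numbers)
    means = np.array([values[run_numbers == r].mean() for r in runs])
    rmss  = np.array([values[run_numbers == r].std()  for r in runs])
    spread = means.std()                                   # run-to-run spread of the means
    flagged = runs[np.abs(means - means.mean()) > n_sigma * spread]
    return dict(zip(runs, zip(means, rmss))), flagged
```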
6/12/2013
Present: Karl, JP, Vince, Jixie, Chao, Jie, Min, Alexandre, Kalyan, Melissa
By-Phone: Ellie, Toby, Ryan, Moshe, Pengjia
Feature presentations:
- Chao
- Has fixed the bit-shift problem in the offline helicity decoder. The next round of farm replay will
have correct helicity information.
- Melissa
- Showed a comparison between her and Pengjia's results for beam charge asymmetries. The results
show a ~0.5% difference. It's possible that the beam trip cuts that Pengjia used are the reason for this
difference (a sketch of the asymmetry definition and trip cut follows this entry). Her presentation can be
found here.
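As context for the comparison, the sketch below shows the charge asymmetry as it is commonly defined, with a simple low-current (beam-trip) cut applied before accumulating the charge. The variable names, the cut value, and the sample time are placeholder assumptions; they are not Melissa's or Pengjia's actual code.

```python
import numpy as np

def charge_asymmetry(q_plus, q_minus):
    """A_Q = (Q+ - Q-) / (Q+ + Q-), the usual beam charge asymmetry definition."""
    return (q_plus - q_minus) / (q_plus + q_minus)

def accumulate_charge(currents_uA, helicity, dt_s, trip_cut_uA=0.5):
    """Sum charge per helicity state, dropping low-current (beam-trip) samples."""
    currents_uA = np.asarray(currents_uA)
    helicity = np.asarray(helicity)
    good = currents_uA > trip_cut_uA                       # different trip cuts change the result
    q_plus  = np.sum(currents_uA[good & (helicity > 0)]) * dt_s
    q_minus = np.sum(currents_uA[good & (helicity < 0)]) * dt_s
    return q_plus, q_minus
```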
- Min
- Showed an update of the angle and vertex matrix calibration results using previous BPM calibrations. The
results did not improve. She is still working on improving them by applying beam current cuts and beam
position cuts. She will also try optimizing with a different run that has the same conditions but a more
stable current, and will check that the beam position is also stable during this run. Her slides can be
found here.
6/5/2013
Present: Karl, Vince, Kalyan, Chao, Jie, Min, Melissa
By-Phone: Ellie, Toby, Ryan, Moshe, Guy, Pengjia
General discussion:
- There will be a practice talk for Min's Hall A Collaboration Meeting talk sometime soon (Friday or Monday?)
Feature presentations:
- Moshe
- Gave an update on the status of the GEp analysis. He did a first round of analysis without calibrations
to establish a general analysis procedure. He showed preliminary results for elastic peak identification,
binning optimization, dilution factors, and asymmetry extraction. He is also working on writing a GEp event
generator for HRSMC. His presentation can be found here.
- Details of the status of the GEp analysis can be found in this document, written by Moshe.
- Pengjia
- Working on improving the BPM resolution. He added a low-pass software FIR filter when processing
the data, which seems to work very well: the previous results were 7-8 times larger than the results
processed with the filter (a sketch of such a filter follows this entry). He is working on incorporating
this into his beampackage code, and will repeat his BPM noise study using the filter. He will also check
whether the central value changes at all as a result of the filter. His presentation can be found here.
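The following is a minimal sketch of a low-pass software FIR filter of the kind described above, using scipy. The tap count, cutoff frequency, and sampling rate are placeholder assumptions, not the values used in the actual beampackage code.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def lowpass_fir(raw_adc, fs_hz=1.0e6, cutoff_hz=5.0e4, numtaps=101):
    """Apply a windowed-sinc low-pass FIR filter to a raw BPM ADC waveform."""
    taps = firwin(numtaps, cutoff_hz, fs=fs_hz)   # linear-phase low-pass design
    return lfilter(taps, 1.0, raw_adc)            # filtered waveform (delayed by ~numtaps/2 samples)

# Quick check on a noisy constant signal: the RMS should shrink after filtering.
x = 100.0 + np.random.normal(0.0, 5.0, 20000)
y = lowpass_fir(x)
print(np.std(x), np.std(y[200:]))   # skip the filter's settling region
```

Checking that the central value (the mean of the filtered waveform) is unchanged, as mentioned above, amounts to comparing np.mean(x) and np.mean(y[200:]) in this toy example.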
- Ryan
- Working on the LHRS y-target calibration. He optimized 3 runs in the 2.2 GeV (0T, 6 deg) setting
with different beam positions: (4,0), (0,0), and (-4,0). The results seemed strange, specifically for the (0,0)
and (-4,0) settings. He first tried optimizing each run individually, which gives results that make sense.
He then tried optimizing them in pairs, and got reasonable results except for the combination of (0,0)
and (-4,0). Finally, he tried to check these results using other runs with the same beam positions, but
found that during these optics runs, the beam position was never moved back to (0,0). Vince pointed
out that the typical resolution for y-target is ~1mm, so these results may be ok. Ryan's presentation
can be found here.
- Toby
- Showed updated target polarization calibration constants. He decided to use a 3rd-order polynomial
to fit the wings of the baseline-subtracted signal (a sketch of the wing fit follows this entry). It's
possible that the discrepancy between using a 2nd- and 3rd-order polynomial could result from the signal
"bleeding" into the wings. He described his method for minimizing the reduced chi-squared to find the TE
points: he starts with a set of 15 points where the target is most thermalized, and adds points from the
beginning/end of the TE to further reduce the chi-squared.
- He also found that the 2.5T calibration constants found offline are ~50% smaller than what was determined
online. He will work closely with James/Josh to confirm that this really is a problem. His presentation can
be found here.
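The sketch below illustrates the wing-fit step described above: fit the points outside the signal region of the baseline-subtracted sweep with a 3rd-order polynomial, subtract that fit, and integrate what remains. The array names and the definition of the wing region are assumptions for illustration, not Toby's actual code.

```python
import numpy as np

def subtract_wing_polynomial(freq, signal, signal_lo, signal_hi, order=3):
    """Fit the wings (points outside [signal_lo, signal_hi]) and subtract the fit."""
    wings = (freq < signal_lo) | (freq > signal_hi)
    coeffs = np.polyfit(freq[wings], signal[wings], order)   # 3rd-order wing fit
    baseline = np.polyval(coeffs, freq)
    return signal - baseline

def signal_area(freq, signal, signal_lo, signal_hi):
    """Integrate the wing-subtracted signal over the signal region."""
    cleaned = subtract_wing_polynomial(freq, signal, signal_lo, signal_hi)
    region = (freq >= signal_lo) & (freq <= signal_hi)
    return np.trapz(cleaned[region], freq[region])
```

Comparing the result with order=2 and order=3 gives a direct handle on the discrepancy mentioned above: if the signal bleeds into the wings, the fitted baseline (and hence the integrated area) depends on the polynomial order.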
- Jie
- Showed an updated version of multi-track efficiency analysis. He determined that his new method of
including background cuts is not as reliable as his previous method for determining the uncertainty. By including
background cuts, the systematic uncertainty is decreased by about 20%, but it is very position dependent, so
he suggests staying with his previous results. His presentation can be found here.