December 3rd 2021

From Hall A Wiki

Latest revision as of 11:21, 3 December 2021

  • Updates
    • HCal
      • latency shift observed during the evening
      • needs investigation
    • coincidence trigger
      • 60 ns coincidence window
      • a shift may have been introduced while setting up the EDTM
      • veto for the BigBite-low trigger
    • Bigbite Shower
      • trying to take a cosmics run with the SBS field off: HV looks pretty good
      • can use this HV file for SBS-off runs
      • want to take another run with SBS on and the same HV
      • HV GUI error message: fixed by closing unused GUIs
    • Hodoscope
      • working ok
      • still HV issues
    • GRINCH
    • CDet
      • almost finished with the last cosmic run for the efficiency analysis
      • second half next week
    • GEM Bigbite
      • MPD reports errors but the run still starts; need to add more diagnostics; rebooting the MPD fixed it; Ben will make the change
      • an APV may be disabled
    • GEM INFN clean room
    • Helicity
      • David and Paul working on analysis
      • possible issue when writing two streams? FADC and scaler information is recorded
      • RP detectors with GEn ?
      • helicity decoder board would give the seed for the reported helicity for the trigger
      • feedback ?
      • could try to take time to set up PITA; use beam charge asymmetries (prompt and delayed) to check the system
    • Beamline/scalers
      • David working on BCM/Unser analysis
      • Charge script by Bob
    • HV
      • RPi on a remote switch
      • send instructions to Steve on how to reboot the RPis
      • Steve will put an SBS EPICS page on the wiki; want to add instructions
      • hodoscope HV uses a proprietary CAEN library (issue possibly radiation or too much traffic); suggest pulling out unused cards to reduce traffic
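The PITA / beam-charge-asymmetry check noted under Helicity reduces to comparing the integrated charge accumulated in the two helicity states. A minimal sketch of that arithmetic (function name and example numbers are illustrative, not from the DAQ):

```python
def charge_asymmetry(q_plus, q_minus):
    """Beam charge asymmetry A = (Q+ - Q-) / (Q+ + Q-) from
    helicity-gated integrated charges (same units for both)."""
    total = q_plus + q_minus
    if total == 0:
        raise ValueError("no accumulated charge")
    return (q_plus - q_minus) / total

# Example: slightly more charge in the + helicity state, A about 5e-4
a = charge_asymmetry(1.0005, 0.9995)
```

Running the same computation on prompt and delayed helicity windows, as the notes suggest, would show whether the reported-helicity delay is handled consistently.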


  • On-going tasks
    • EPICS / start/end-of-run logging script (fixed, Bob, 100%)
      • just need to add configuration; will make a test configuration
      • need to make sure we have all variables, plus variables to add at the beginning/end of run and during the run (EPICS event type 131 needs to be added to the analyzer)
      • issue with ET: could try running ER and EB on the same node; is there a way to restart ET?
      • Carl looking at still-unresolved ER/ET communication issues; code updated, which might fix them; could try updating the EMU and ET to test
    • (database issue fixed, 100% working) David: readout scalers are fine and are in the tree from the LHRS; still need the SBS side, will check with Steve (done; looks good on the SBS side)
    • script to check MPD sync (Ben, Andrew, 20%): code to unpack the information is already written; need a run with those data; timestamps need some work (event counts are aligned, timestamps look odd; will check with Andrew)
    • sync-check scripts for the other modules, F1, 1190, FADC (Andrew, Alex, Mark, 20%); Mark will work to have them available
    • reference channel to all crates: need HCal and Shower + RF; need to add RF to HCal (3 cables, in the hodoscope crate); if a shower FADC channel is available it can be added separately; need an RC box for shower and hodoscope if possible (to be checked); module will be added (Mark)

    • Trigger latch TS to TDC (Bob, Ole, 100%): trigger bank has the latch
    • event type decoding in the analyzer (done, 100%); need to update the CH analyzer
      • Possible new tasks
        • User event inserted at each split with timestamp and spectrometer information - to be discussed
        • auto-analysis at end of run (Paul has something set up, 75%, but it still needs testing): some issues launching jobs automatically; could run 3, one on each machine; not a priority; still issues with non-interactive shells; could try for the next production

        • set up a laptop for CH BlueJeans and Slack; need to fix its hard drive
    • filling up the Google sheet with run info
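The MPD sync-check task above boils down to two tests: event counters should agree across crates event by event, and timestamps within a crate should never decrease. A rough sketch of such a check (the data layout and names are assumptions, not the actual script):

```python
def check_sync(crates):
    """crates: dict of crate name -> list of (event_count, timestamp)
    per event.  Returns human-readable problems: event-count mismatch
    across crates, or a non-monotonic timestamp within a crate."""
    problems = []
    names = list(crates)
    n_events = min(len(crates[n]) for n in names)
    # event counts must agree across all crates, event by event
    for i in range(n_events):
        counts = {n: crates[n][i][0] for n in names}
        if len(set(counts.values())) != 1:
            problems.append(f"event {i}: counts differ {counts}")
    # timestamps within a crate must be non-decreasing
    for n in names:
        ts = [t for _, t in crates[n]]
        for i in range(1, len(ts)):
            if ts[i] < ts[i - 1]:
                problems.append(f"{n}: timestamp decreases at event {i}")
    return problems
```

The "event counts aligned, timestamps look odd" observation in the notes corresponds to the first loop passing while the second flags entries.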


  • Unresolved issues
    • Scaler gating (need a cable and to go through a TTL/ECL converter) (fixed)
    • TI shower in William's office (fixed)
    • (need to test) FADC bad channel: FADC replaced but the problem could not be reproduced; need to test more (ongoing, low priority); FADC in the test lab with the DAQ set up; need to feed it a signal
    • (still there) PEB ROC warning: Bryan checking with Dave; might be an incorrect flag, not sure why, but data are still in sync; still happening; not sure it is an issue
    • HV: can put up documentation for the other non-EPICS controls
    • (50%) near the 100 MB/s limit for the FADC crates: Shower window was halved; HCal windows could go to 150 ns; move FADCs to the top crate? can reach 5.2 kHz now
    • (cannot be fixed; been that way forever) Raster X1 vs Raster X2 anticorrelation
    • (fixed) EPICS insertion: working better but still hangs for long runs
    • (other issues) ROC19 busy fiber: issue became worse; MPD23 disabled; firmware fixes for the reset still to be implemented
    • (new) ER1 hangup, related to the ET issue: the VTP can get stuck; restarting the run usually fixes it
    • (new) TDC errors, in just one run though: https://logbooks.jlab.org/entry/3923168
    • (new) HCal flag file crashing ROCs: file too long? a maximum file length needs to be added
    • (fixed) Raster saturation (fixed with a 20 dB attenuator)
    • no clear signal in GRINCH: latency was adjusted; needs a check with beam (a signal is seen now)
    • HCal FIFO overflow https://logbooks.jlab.org/entry/3925327; will check with the HCal people (maybe empty channels / trigger info)
    • (new) issue with the 50k script making log entries: the logentry script puts a limit on Java memory; the 50k and 5k scripts were changed to use it; might want to use this new script to avoid issues (HALOG from Paul)

Logentry script: https://logbooks.jlab.org/entry/3933875 - is that a ulimit issue ?
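If the logentry problem is indeed a Java heap / ulimit issue, one common fix is launching the JVM with an explicit heap cap instead of relying on shell limits. A hypothetical sketch (the helper names and the 512m default are illustrative; only the standard JVM `-Xmx` flag is assumed):

```python
def heap_bytes(spec):
    """Parse a Java-style heap size like '512m' or '2g' into bytes."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    s = spec.strip().lower()
    if s[-1] in units:
        return int(s[:-1]) * units[s[-1]]
    return int(s)

def java_cmd(jar, heap="512m"):
    """Build a java invocation with an explicit heap cap (hypothetical
    wrapper around the logentry call; -Xmx is the standard JVM flag)."""
    return ["java", f"-Xmx{heap}", "-jar", jar]
```

With a cap like this, a runaway entry fails with an OutOfMemoryError inside the JVM rather than tripping a system ulimit in an opaque way.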

    • 3820 scalers: issue to address; will get more as spares - and need -
    • tftp issue (fixed): network switch transceiver
    • (new) 75 ns timing shift
    • (new) DAQ running with APV and MPD in a bad state
    • disk filling up
    • helicity with several streams
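The 100 MB/s FADC limit above is a simple bandwidth budget: per-event payload times trigger rate. A back-of-envelope sketch (channel count, sample size, and header overhead are assumed numbers chosen to illustrate why halving the readout window roughly halves the rate; the 5.2 kHz figure is from the notes):

```python
def crate_rate_mb_s(trigger_hz, channels, samples_per_window,
                    bytes_per_sample=2, header_bytes=200):
    """Rough FADC crate data rate in MB/s: per-event payload
    (samples plus fixed header) times the trigger rate."""
    event_bytes = channels * samples_per_window * bytes_per_sample + header_bytes
    return trigger_hz * event_bytes / 1e6

# with these assumed parameters, a 50-sample window at 5.2 kHz sits
# right at the ~100 MB/s limit, and a 25-sample window roughly halves it
full = crate_rate_mb_s(5200, 192, 50)
half = crate_rate_mb_s(5200, 192, 25)
```

This is why the notes list both window shortening and redistributing FADCs across crates as mitigations: the first shrinks the event, the second splits the rate.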


GEn RP

  • GEN RP DAQ
    • both VTPs working: 10 GigE link from VTP to computer
    • working on a VTP update to handle more than 32 MPDs
    • debugging fiber connection
    • will bring VME crate
    • new MPDs might arrive by Thanksgiving: 15 MPDs
    • new distribution box ready
    • received LEMO cables
    • GEM layers can see clusters and cosmics
    • DAQ still very unstable
    • enpcamsonne cannot ssh out
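Seeing clusters in the GEM layers amounts to grouping contiguous fired strips above threshold and taking a charge-weighted centroid. A minimal 1D sketch (threshold value and data format are assumptions, not the actual decoder):

```python
def find_clusters(adc, threshold=50):
    """Group contiguous strips with ADC above threshold into clusters.
    Returns (first_strip, last_strip, charge-weighted centroid) tuples."""
    clusters = []
    start = None
    # trailing 0 acts as a sentinel that closes a cluster at the edge
    for i, a in enumerate(list(adc) + [0]):
        if a > threshold and start is None:
            start = i
        elif a <= threshold and start is not None:
            strips = range(start, i)
            charge = sum(adc[s] for s in strips)
            centroid = sum(s * adc[s] for s in strips) / charge
            clusters.append((start, i - 1, centroid))
            start = None
    return clusters
```

Cosmic tracks would then appear as one such cluster per layer, roughly aligned across layers.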