ECal DAQ

Main Computer

sbs1

Located at the Test Lab.

Telephone

Ext: 6912

CODA

  • et_start
  • platform
  • coda_eb_rc3
  • coda_er_rc3
  • rcgui
  • coda_roc_rc3

Simply use "startcoda" to start CODA and "kcoda" to kill an existing CODA session.

crl files

Located at /home/adaq/crl.

Port Server

telnet rasterts 200x

For the ECal fastbus crates:

telnet rasterts2 200x

Note: Every time the crate is power-cycled, the boot parameters appear to get corrupted, so we use the following Python scripts to reprogram them (a sketch of the idea follows the list).

  • ./progROC1 (for ROC1 - TOP crate)
  • ./progROC2 (for ROC2 - MIDDLE crate)
  • ./progROC3 (for ROC3 - BOTTOM crate)
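
For reference, here is a minimal sketch of what such a reprogramming script can look like: it drives the VxWorks "c" (change boot parameters) dialog through the port server. The port number and all parameter values below are placeholders, not the actual RR-3 settings.

import telnetlib
import time

tn = telnetlib.Telnet("rasterts", 2001)   # 200x = console port of the crate (assumed)

def send(line):
    # one answer per VxWorks boot-parameter prompt; an empty line keeps the current value
    tn.write((line + "\r\n").encode("ascii"))
    time.sleep(0.2)

send("c")                    # enter the "change boot params" dialog
send("")                     # boot device      : keep current
send("")                     # processor number : keep current
send("sbs1")                 # host name        (placeholder)
send("/home/adaq/vxworks")   # boot file        (placeholder path)
send("")                     # accept the remaining prompts unchanged
tn.close()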

Booting scripts

The following are the boot parameters used for the 3 crates in rack RR-3 (2015/06/15). The boot scripts have since changed (will be updated soon).

HV setup for ECal

Start the server

ssh rpi3 -l pi
password: 
./start_hv

In another terminal, log in to sbs1 as adaq:

ssh sbs1 -l adaq
cd slowc
./hvs_rpi3
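
If the client cannot connect, first verify that the server on rpi3 is reachable. A quick check in Python, assuming the HV server listens on a TCP port (the port number below is hypothetical):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(3.0)
try:
    s.connect(("rpi3", 2001))   # port number is an assumption
    print("HV server reachable")
except (socket.timeout, socket.error) as err:
    print("HV server not reachable: %s" % err)
finally:
    s.close()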

VXS crate

In the VXS crate, we have a new CPU, a new TS, a TD, an SD, and a scaler.

sbsvme2

ssh sbsvme2 -l root
password: 

Scaler (to determine the dead time)

/root/scaler/scread
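
The dead-time fraction follows from two scaler channels, one counting trigger requests and one counting accepted triggers: dead time = 1 - accepted/requested. A minimal Python sketch, assuming scread prints one integer count per line and that the first two lines carry the request and accept counts (the channel assignment is an assumption):

import subprocess

out = subprocess.check_output(["/root/scaler/scread"]).decode()
counts = [int(line.split()[-1]) for line in out.splitlines() if line.strip()]

requested, accepted = counts[0], counts[1]   # channel mapping assumed
dead_time = 1.0 - float(accepted) / requested
print("dead time = %.2f%%" % (100.0 * dead_time))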

Current plan for the module flipping test (as of 2015/06/15)

[Image: Mfplan.png]

Status update

06/17/2015

  • According to William Gu, the following are the latest firmware files needed for the TS, TD, and TI. The files are located at /group/da/distribution/coda/Firmware/.
    • The TI firmware is: tip18.svf
    • The TD firmware is: tdp17.svf
    • The TS firmware is: tsp17.svf
  • The new TS already has the latest firmware.
  • No firmware update is needed on the SD.
  • Updated the firmware on the TD.
  • Updated the firmware on 3 TIs.
    • HALLA TI#353
    • HALLA TI#360
    • HALLA TI#361


06/22/2015

  • Received a 25 µCi Sr-90 source for DAQ testing (JLab serial number: 135).
    • Paperwork done.
    • Training done.
    • Work area is posted as "Radiologically controlled area + Radioactive material".


06/26/2015

  • Had trouble with the latest TI libraries (not accepting triggers). Bryan found the bug and fixed it.
  • All libraries/firmware/examples we are currently using are cloned from /site/coda/contrib/devel/


07/01/2015

  • After a few modifications to tiLib.h and the .crl (an example that comes with the new TI: /site/coda/contrib/devel/ti/crl/ti_sfi_master_1881.crl), we were able to read the ADCs.
  • Modifications are posted at https://logbooks.jlab.org/book/superbigbite
  • Both the TOP and MIDDLE crates were tested.
    • 8 ADCs in each crate
    • Multi-block readout
    • Decoder is ready (see the sketch after this list).
    • Readouts are fine.
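
As a reference for that decoder, here is a sketch of unpacking the ADC data words from one of these buffers. The bit layout (5-bit slot, 7-bit channel, 13-bit ADC value) follows the usual LeCroy 1881M convention, but the shifts and masks are assumptions that should be checked against the actual decoder.

def decode_1881(words):
    # yield (slot, channel, adc) for each 32-bit fastbus data word
    for w in words:
        slot = (w >> 27) & 0x1f    # geographic address (assumed field)
        chan = (w >> 17) & 0x7f    # channel number     (assumed field)
        adc = w & 0x1fff           # 13-bit ADC value   (assumed field)
        yield slot, chan, adc

# usage: for slot, chan, adc in decode_1881(event_words): ...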

07/09/2015

07/22/2015

07/27/2015

We received 3 new CPUs from Hall B. They were transferred to the 36 subnet and given new host names.

08003E2DE56F - sbsvme4 - 129.57.37.26
08003E2A7B3A - sbsvme5 - 129.57.37.27
08003E2DE58D - sbsvme6 - 129.57.37.28

Tested on RR2; they are all working fine.


Updated the SFI inventory; we now have 6 broken SFIs. Need to send those to STRUCK for repair.

09/11/2015

Four more CPUs were registered on the 36 subnet.

08003E261407 - sbsvme7 - 129.57.37.39
08003E27C100 - sbsvme8 - 129.57.37.40
08003E265AB4 - sbsvme9 - 129.57.37.47
08003E265A7C - sbsvme10 - 129.57.37.48

09/11/2015

Two more CPUs were registered on the 36 subnet.

MVME 2306

00:01:af:00:b0:b7 : sbsvme11.jlab.org : 129.57.36.199

MVME 6100 (has ETH1 and ETH2)

00:01:af:2f:66:33 : sbsvme12.jlab.org : 129.57.37.45
00:01:af:2f:66:32 : sbsvme13.jlab.org : 129.57.37.51

Note: When booting MVME 6100 (in the boot parameters)

12/01/2015

Three more CPUs were added to the 36 subnet.

00:01:af:2f:6b:88 : sbsvme14 : 129.57.37.53
00:01:af:00:b0:b9 : sbsvme15 : 129.57.37.59
00:01:af:0d:d6:42 : sbsvme16 : 129.57.37.61

12/04/2015

One more CPU

00:01:AF:13:EC:1C : sbsvme17 : 129.57.37.6
  • This CPU was already registered on the 36 subnet as "sbsfbdaq1b". We changed the host name to sbsvme17.

02/22/2016

  • Received two more MVME 2431 CPUs from Bogdan. Requested that they be registered on the 36 subnet.
08:00:3E:2F:5F:52  :  sbsvme18  :  129.57.36.97
08:00:3E:26:13:DB  :  sbsvme19  :  129.57.37.78
  • Even though the frame on sbsvme19 says it is an MVME 2431, we found out that the board is actually a 23xx, so it will not be used in the ECal DAQ.

05/05/2016

  • Received 3 new Linux VME CPUs for ECal DAQ.
00:20:38:06:53:2F :  sbsvme20  : 129.57.36.154
00:20:38:06:51:0E :  sbsvme21  : 129.57.36.157
00:20:38:06:53:1A :  sbsvme22  : 129.57.36.206