From Hall A Wiki

Main Computer


Located in the Test Lab.


Ext: 6912


  • et_strat
  • platform
  • coda_eb_rc3
  • coda_er_rc3
  • rcgui
  • coda_roc_rc3

Use "startcoda" to start CODA and "kcoda" to kill any existing CODA processes.
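The real wrapper scripts live in the adaq account; as a rough illustration only, a kcoda-style cleanup would kill each of the CODA components listed above. The DRYRUN flag and function name below are hypothetical, added so the sketch can be exercised safely:

```shell
# Hypothetical sketch of a kcoda-style cleanup: kill every CODA component
# listed above. With DRYRUN=1 set, it only prints what it would kill.
kcoda_sketch() {
    for p in et_strat platform coda_eb_rc3 coda_er_rc3 rcgui coda_roc_rc3; do
        if [ -n "$DRYRUN" ]; then
            echo "would kill: $p"
        else
            pkill -f "$p" 2>/dev/null || true
        fi
    done
}
```

Run `DRYRUN=1 kcoda_sketch` first to see which processes would be targeted before killing anything.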

crl files

located at /home/adaq/crl

Port Server

telnet rasterts 200x

For the ECal fastbus crates: telnet rasterts2 200x
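The trailing digit in "200x" selects which crate console you land on. Assuming the usual port-server convention (port = 2000 + crate number, which is our reading, not stated above), a small helper makes the mapping explicit:

```shell
# Assumption: the port server maps console ports as 2000 + crate number,
# i.e. "200x" means port 2001 for crate 1, 2002 for crate 2, and so on.
console_port() {
    echo $((2000 + $1))
}

# Usage (sketch): connect to the crate-1 console on rasterts
# telnet rasterts "$(console_port 1)"
```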

Note: Every time a crate is power-cycled, its boot parameters appear to get corrupted, so we use the following Python scripts to reprogram them.

  • ./progROC1 (for ROC1 - TOP crate)
  • ./progROC2 (for ROC2 - MIDDLE crate)
  • ./progROC3 (for ROC3 - BOTTOM crate)
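In shell terms, the crate-to-script mapping above is simply the following (a sketch; the helper function name is ours, and the progROC scripts themselves live with the other adaq scripts):

```shell
# Map a crate position (TOP/MIDDLE/BOTTOM) to the corresponding
# boot-parameter reprogramming script listed above.
prog_for_crate() {
    case "$1" in
        TOP)    echo ./progROC1 ;;
        MIDDLE) echo ./progROC2 ;;
        BOTTOM) echo ./progROC3 ;;
        *)      echo "unknown crate: $1" >&2; return 1 ;;
    esac
}
```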

Booting scripts

The following are the boot parameters used for the 3 crates in rack RR-3 (2015/06/15). The boot scripts have changed (will be updated soon).

HV setup for ECal

Start the server

ssh rpi3 -l pi

In another terminal, log in to sbs1 as adaq:

ssh sbs1 -l adaq
cd slowc

VXS crate

In the VXS crate, we have a new CPU, a new TS, a TD, an SD, and a scaler.


ssh sbsvme2 -l root

Scaler (to determine the dead time)


Current plan for the module flipping test (as of 2015/06/15)


Status update


  • According to William Gu, following are the latest firmware needed for TS, TD and TI. The files are located at /group/da/distribution/coda/Firmware/.
    • The TI firmware is: tip18.svf
    • The TD firmware is: tdp17.svf
    • The TS firmware is: tsp17.svf
  • The new TS already has the latest firmware.
  • No firmware update is needed on the SD.
  • Updated the firmware on the TD.
  • Updated the firmware on the 3 TIs.
    • HALLA TI#353
    • HALLA TI#360
    • HALLA TI#361
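Before flashing, it is worth confirming that the firmware files named above are actually present in the distribution directory. A minimal sketch (the directory and file names come from the list above; the helper function itself is ours):

```shell
# Check that the TI/TD/TS firmware files listed above exist under the
# distribution directory (default path taken from the note above).
check_fw() {
    dir="${1:-/group/da/distribution/coda/Firmware}"
    for f in tip18.svf tdp17.svf tsp17.svf; do
        if [ -f "$dir/$f" ]; then
            echo "found:   $f"
        else
            echo "missing: $f"
        fi
    done
}

check_fw   # run against the default distribution directory
```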


  • Received a 25 uCi Sr-90 source for DAQ testing (JLab serial number: 135)
    • Paperwork done.
    • Training done.
    • Work area is posted as "Radiologically controlled area + Radioactive material".


  • Had trouble with the latest TI libraries (not accepting triggers). Bryan found the bug and fixed it.
  • All libraries/firmware/examples we are currently using are cloned from /site/coda/contrib/devel/


  • After a few modifications to tiLib.h and the .crl file (the example that comes with the new TI - /site/coda/contrib/devel/ti/crl/ti_sfi_master_1881.crl), we were able to read the ADCs.
  • Modifications are posted at
  • Both TOP crate and MIDDLE crate were tested.
    • 8 ADCs in each crate
    • Multi-block readout
    • Decoder is ready.
    • Readouts are fine.




We received 3 new CPUs from Hall B. They were transferred to the 36 subnet and given new host names.

08003E2DE56F -sbsvme4 -
08003E2A7B3A -sbsvme5 -
08003E2DE58D -sbsvme6 -

Tested on RR2; they are all working fine.

Updated the SFI inventory; we now have 6 broken SFIs. We need to send them to STRUCK for repair.


Four more CPUs registered on the 36 subnet.

08003E261407 - sbsvme7 -
08003E27C100 - sbsvme8 -
08003E265AB4 - sbsvme9 -
08003E265A7C - sbsvme10 -


Two more CPUs registered on the 36 subnet.

MVME 2306

00:01:af:00:b0:b7 : :

MVME 6100 (has ETH1 and ETH2)

00:01:af:2f:66:33 : :
00:01:af:2f:66:32 : :

Note: When booting MVME 6100 (in the boot parameters)


Three more CPUs were added to the 36 subnet.

00:01:af:2f:6b:88 : sbsvme14 :
00:01:af:00:b0:b9 : sbsvme15 :
00:01:af:0d:d6:42 : sbsvme16 :


One more CPU

00:01:AF:13:EC:1C : sbsvme17 :
  • This CPU was already registered on the 36 subnet as "sbsfbdaq1b". We changed the host name to sbsvme17.


  • Received two more MVME 2431 CPUs from Bogdon. Requested that they be registered on the 36 subnet.
08:00:3E:2F:5F:52  :  sbsvme18  :
08:00:3E:26:13:DB  :  sbsvme19  :
  • Even though the frame on sbsvme19 says it is an MVME 2431, we found out that the board is actually a 23xx, so it will not be used in the ECal DAQ.


  • Received 3 new Linux VME CPUs for the ECal DAQ.
00:20:38:06:53:2F :  sbsvme20  :
00:20:38:06:51:0E :  sbsvme21  :
00:20:38:06:53:1A :  sbsvme22  :