- 1 Main Computer
- 2 Telephone
- 3 CODA
- 4 crl files
- 5 Port Server
- 6 Booting scripts
- 7 HV setup for ECal
- 8 VXS crate
- 9 Current plan for the module flipping test (as of 2015/06/15)
- 10 Status update
- 11 06/17/2015
- 12 06/22/2015
- 13 06/26/2015
- 14 07/01/2015
- 15 07/09/2015
- 16 07/22/2015
- 17 07/27/2015
- 18 09/11/2015
- 19 09/11/2015
- 20 12/01/2015
- 21 12/04/2015
- 22 02/22/2016
- 23 05/05/2016
Main Computer
Located at the Test Lab.
CODA
Simply use "startcoda" to start CODA and "kcoda" to kill an existing CODA session.
crl files
Located at /home/adaq/crl.
Port Server
telnet rasterts 200x
For ECal fastbus: telnet rasterts2 200x
Booting scripts
Note: Every time the crate is power-cycled, the boot parameters appear to get corrupted, so we use the following Python scripts to reprogram them:
- ./progROC1 (for ROC1 - TOP crate)
- ./progROC2 (for ROC2 - MIDDLE crate)
- ./progROC3 (for ROC3 - BOTTOM crate)
Following are the boot parameters used for the 3 crates in rack RR-3 (2015/06/15). The boot scripts have changed (this section will be updated soon).
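The reprogrammed parameters follow the usual VxWorks boot-line format. As a rough illustration, such a line can be composed as below; all field values (boot device, IPs, file path, user, target name) are placeholders, not the actual RR-3 parameters.

```python
# Sketch: compose a VxWorks-style boot line like the ones the progROC
# scripts reprogram. Every value here is a placeholder for illustration.

def boot_line(boot_dev, host_ip, target_ip, vxworks_file, user, target_name):
    """Build a VxWorks-style boot parameter string:
    <dev>host:<file> e=<target IP> h=<boot host IP> u=<user> tn=<name>"""
    return (f"{boot_dev}host:{vxworks_file} "
            f"e={target_ip} h={host_ip} u={user} tn={target_name}")

line = boot_line("dc(0,0)", "129.57.0.1", "129.57.0.2",
                 "/site/vxworks/vxWorks", "adaq", "ROC1")
print(line)
```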
HV setup for ECal
Start the server
ssh rpi3 -l pi   (password prompt)
./start_hv
In another terminal, log in to sbs1 as adaq:
ssh sbs1 -l adaq
cd slowc
./hvs_rpi3
VXS crate
In the VXS crate, we have a new CPU, a new TS, TD, SD, and a scaler.
ssh sbsvme2 -l root   (password prompt)
Scaler (to determine the dead time)
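As a sketch of how scaler counts translate into a dead-time estimate — assuming one scaler channel counts trigger requests and another counts accepted triggers (the channel assignment here is hypothetical):

```python
# Sketch: estimating DAQ dead time from two scaler counts.
# "requested" and "accepted" stand for two hypothetical scaler channels;
# the real channel mapping is defined by the cabling, not by this code.

def dead_time(requested, accepted):
    """Fraction of trigger requests lost while the DAQ was busy."""
    if requested == 0:
        return 0.0
    return 1.0 - accepted / requested

print(dead_time(10000, 9200))  # about 8% dead time for these example counts
```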
Current plan for the module flipping test (as of 2015/06/15)
- According to William Gu, the following are the latest firmware files needed for the TS, TD, and TI. The files are located at /group/da/distribution/coda/Firmware/.
- The TI firmware is: tip18.svf
- The TD firmware is: tdp17.svf
- The TS firmware is: tsp17.svf
- The new TS already has the latest firmware.
- No firmware update needed on SD.
- Updated the firmware on the TD.
- Updated the firmware on 3 TIs.
- HALLA TI#353
- HALLA TI#360
- HALLA TI#361
- Received 25 uCi Sr-90 source for DAQ testing (JLab serial number: 135)
- Paperwork done.
- Training done.
- Work area is posted as "Radiologically controlled area + Radioactive material".
- Had trouble with the latest TI libraries (they were not accepting triggers). Bryan found the bug and fixed it.
- All libraries/firmware/examples we are currently using are cloned from /site/coda/contrib/devel/
- After a few modifications to tiLib.h and the .crl (the example that comes with the new TI: /site/coda/contrib/devel/ti/crl/ti_sfi_master_1881.crl), we were able to read the ADCs.
- Modifications are posted at https://logbooks.jlab.org/book/superbigbite
- Both TOP crate and MIDDLE crate were tested.
- 8 ADCs in each crate
- Multi-block readout
- Decoder is ready.
- Readouts are fine.
- We are currently trying to set up the FAST CLEAR. Find more info at https://logbooks.jlab.org/book/superbigbite [log entries: 3345490,3345491,3345507]
- After dealing with a few hardware issues, we were finally able to set up Fast Clear on both the top and middle crates. For details of the hardware issue, see the logbook: https://logbooks.jlab.org/book/superbigbite.
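The multi-block readout mentioned above can be illustrated with a toy unpacker. The word layout here (a block header 0xB10C0000 | word count, followed by that many data words) is invented purely for illustration; the real ADC block format is defined by the hardware and the CODA readout list.

```python
# Sketch: unpacking a flat multi-block buffer into per-block word lists.
# The 0xB10Cnnnn header convention below is HYPOTHETICAL, chosen only to
# show the bookkeeping; it is not the actual fastbus ADC data format.

def split_blocks(words):
    """Group a flat word list into per-block slices using header markers."""
    blocks, i = [], 0
    while i < len(words):
        header = words[i]
        assert header & 0xFFFF0000 == 0xB10C0000, "expected block header"
        n = header & 0xFFFF                # payload word count for this block
        blocks.append(words[i + 1 : i + 1 + n])
        i += 1 + n
    return blocks

buf = [0xB10C0002, 0x111, 0x222, 0xB10C0001, 0x333]
print(split_blocks(buf))  # [[273, 546], [819]]
```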
We received 3 new CPUs from Hall B. They were transferred to the 36 subnet and given new host names.
- 08003E2DE56F - sbsvme4 - 22.214.171.124
- 08003E2A7B3A - sbsvme5 - 126.96.36.199
- 08003E2DE58D - sbsvme6 - 188.8.131.52
Tested on RR2 and they are all working fine.
Updated the SFI inventory; we now have 6 broken SFIs. Need to send them to STRUCK for repair.
Four more CPUs registered at 36 sub-network.
- 08003E261407 - sbsvme7 - 184.108.40.206
- 08003E27C100 - sbsvme8 - 220.127.116.11
- 08003E265AB4 - sbsvme9 - 18.104.22.168
- 08003E265A7C - sbsvme10 - 22.214.171.124
Two more CPUs registered at 36 sub-network.
00:01:af:00:b0:b7 : sbsvme11.jlab.org : 126.96.36.199
MVME 6100 (has ETH1 and ETH2)
- 00:01:af:2f:66:33 : sbsvme12.jlab.org : 188.8.131.52
- 00:01:af:2f:66:32 : sbsvme13.jlab.org : 184.108.40.206
Note: When booting MVME 6100 (in the boot parameters)
Three more CPUs were added to 36 subnet
- 00:01:af:2f:6b:88 : sbsvme14 : 220.127.116.11
- 00:01:af:00:b0:b9 : sbsvme15 : 18.104.22.168
- 00:01:af:0d:d6:42 : sbsvme16 : 22.214.171.124
One more CPU
00:01:AF:13:EC:1C : sbsvme17 : 126.96.36.199
- This CPU was already registered in the 36 subnet as "sbsfbdaq1b". We changed the host name to sbsvme17.
- Received two more MVME 2431 CPUs from Bogdon. Requested that they be registered in the 36 subnet.
- 08:00:3E:2F:5F:52 : sbsvme18 : 188.8.131.52
- 08:00:3E:26:13:DB : sbsvme19 : 184.108.40.206
- Even though the frame on sbsvme19 says it is an MVME 2431, we found that the board is actually a 23xx, so it will not be used in the ECal DAQ.
- Received 3 new Linux VME CPUs for the ECal DAQ.
- 00:20:38:06:53:2F : sbsvme20 : 220.127.116.11
- 00:20:38:06:51:0E : sbsvme21 : 18.104.22.168
- 00:20:38:06:53:1A : sbsvme22 : 22.214.171.124
- I summarized the process of installing Linux, CODA, etc. on these new CPUs for future reference: Installing Linux and other drivers in GE XVB602 CPUs.
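The registration lists above mix two MAC-address notations (bare hex like "08003E2DE56F" and colon-separated lowercase like "00:01:af:00:b0:b7"). A small helper, purely illustrative and not part of the DAQ code, can normalize both to one canonical form for inventory keeping:

```python
# Sketch: normalize the two MAC notations used in the lists above
# to colon-separated uppercase. Hypothetical helper for bookkeeping only.

def normalize_mac(mac):
    """Accept '08003E2DE56F', '08:00:3E:2D:E5:6F', or dashed forms."""
    hexdigits = mac.replace(":", "").replace("-", "").upper()
    if len(hexdigits) != 12 or any(c not in "0123456789ABCDEF" for c in hexdigits):
        raise ValueError(f"bad MAC: {mac}")
    return ":".join(hexdigits[i:i + 2] for i in range(0, 12, 2))

print(normalize_mac("08003E2DE56F"))       # 08:00:3E:2D:E5:6F
print(normalize_mac("00:01:af:00:b0:b7"))  # 00:01:AF:00:B0:B7
```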