HV HowTo for Experts
High Voltage in Hall A
First of all, please be aware of the simple instructions for users:
Overview of Architecture
The HV crates sit in the hall, e.g. on the detector stacks. A set of cards with (usually) 12 channels of negative or positive HV are inserted into the crates. A custom "serial board" (built by Javier Gomez and Jack Segal) talks to the cards. This "serial board" replaces an old, obsolete motherboard. (There are still a few crates with this motherboard, however -- e.g. the beamline crate.) A Perl "Shim" server (written by Brad Sawatzky) runs on a PC near the HV crate. The "Shim" server uses (via a socket connection) low-level C code written by Javier to talk to his serial card in the HV crate. On the User end, a Java GUI (written by Roman Pomatsalyuk) displays the HV information and provides User control. This Java GUI talks to the Shim server. Alternatively, the Java GUI can talk to the motherboard via a Portserver.
In the summer of 2014, the nearby PC running the Shim server became a Raspberry PI (rPI) board, a small PC that sits in the crate. This work, done by Javier Gomez, Roman Pomatsalyuk, and Chuck Long, has made the server much faster and more stable.
The portserver/motherboard alternative is being phased out, but still exists in at least one place at the moment: the beamline HV crate. It may be in some older setups elsewhere, too.
Here is a list of HV crates in Hall A as of Sept 2014:
The crates that use portservers are talking to the old motherboard. The crates that use an rPI PC with Shim have had their motherboards removed. See Architecture above.
Location                 | Portserver, or PC for Shim | Config in ~/slowc | How to start on aslow
-------------------------|----------------------------|-------------------|----------------------
Left HRS (1 crate)       | rpi8                       | LEFT              | cd ~aslow/slowc ; ./hvs LEFT
Right HRS (bottom crate) | rpi7                       | RIGHT             | (started together with the top crate)
Right HRS (top crate)    | rpi4                       | RIGHT             | cd ~aslow/slowc ; ./hvs RIGHT (this starts both R-HRS crates)
Beamline                 | portserver hatsv4: 2003    | BEAMLINE          | cd ~aslow/slowc ; ./hvs BEAMLINE
Restarting the Servers
The servers can run on a PC with a serial connection. As of Sept 2014, all the HRS servers are running on Raspberry PI PCs.
A cron script ensures that the server is running on pi@rpi
$ crontab -l
# cron jobs
5,10,15,20,25,30,35,40,45,50,53,55,58 * * * * /home/pi/scripts/start_hv_cron
The start_hv_cron script checks whether the server is running. If it is, the script does nothing; if it is not, the script starts it. This means the server should always be running (within about 5 minutes of restarting the rPI), and you should not have to do anything.
A simple way to restart the server is to reboot the rpi. The cron script will restart it.
If you prefer to be faster, here's how:
ssh into the rPI of interest (rpi4 or rpi7 on the R-HRS, or rpi8 on the L-HRS). The user is "pi"; sorry, I can't give the password here. I'll put it near the whiteboard.
These should be typed at the command prompt:

ps -awx | grep -i shim
kill -9 27913          (assuming the PID of the process was 27913)
/home/pi/scripts/start_hv_cron
exit
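If you would rather not look up the PID by hand, the same kill-and-restart can be done by command-line pattern with pkill. This assumes the Shim process name actually contains "shim", as the grep above suggests:

```shell
# Kill any running Shim process by pattern, then restart it via the same
# helper script that cron uses. The [s] trick keeps pkill's regex from
# matching this command line itself.
pkill -9 -f '[s]him'
/home/pi/scripts/start_hv_cron
```

This is just a shortcut for the ps/kill sequence above; if in doubt, use the explicit PID.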
Next, on adaqsc (or wherever you run the Java GUI), using the aslow account, kill the Java GUI and restart it:
cd /adaqfs/home/aslow/slowc
./hvs LEFT     (or ./hvs RIGHT)
Setting Up A New Installation
For the rPI, the minimum installation kit is in /mss/home/rom/rpi_minimal_9sep14.tar, but there are more complete versions which include Perl, etc.; then you might need something like rpi_all_9sep14.tar. There is typically some documentation (README) with the tar file which explains how to proceed.
Probably a better way to obtain the distribution is from github. There are two packages.
You should always pick the latest date (the date is usually written in the filename) because we tend to update / improve things.
The rest of these instructions pertain to running the Shim server on a nearby PC like a laptop (not an rPI board).
If you install on an Intel PC in Hall A, note that these PCs share a root partition, so they all see the same software. Suppose, however, that you want to install on a new PC such as a laptop; that is what these instructions are for.
1. You need Red Hat 5 or greater in order to have a proper glibc.
2. You need Perl 5.8.8 or later installed. On an Intel PC this appears as /perl after being put (on adaql1) into the shared area.
3. Set write permission on /dev/ttyS0 (or is your PC using /dev/ttyS2?). Typically the permissions get reset when the PC is rebooted, such that users cannot write there. A wrong write permission causes a silent failure of the software.
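Since the failure is silent, it is worth checking the device explicitly after every reboot. A minimal sketch (the device name is an assumption -- substitute /dev/ttyS2 if that is what your PC uses, and run as root to actually change the mode):

```shell
# Inspect and open up write permission on the serial device the HV
# software talks through. /dev/ttyS0 is an assumption; your PC may
# use /dev/ttyS2 instead.
DEV="${DEV:-/dev/ttyS0}"
if [ -e "$DEV" ]; then
    ls -l "$DEV"                       # show current owner and mode
    chmod o+rw "$DEV" 2>/dev/null || echo "need root: chmod o+rw $DEV"
    if [ -w "$DEV" ]; then
        echo "$DEV is writable"
    else
        echo "$DEV is NOT writable -- expect silent failures"
    fi
else
    echo "$DEV not present on this machine"
fi
```

Putting a check like this in a boot-time or cron script avoids rediscovering the problem after each reboot.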
4. You need a telnet server, because the network connection is via telnet.
Install the telnet server by typing "yum install telnet-server", and allow telnet as follows: /etc/xinetd.d/telnet needs "disable no", and then you need to restart xinetd:

/sbin/service xinetd restart
Note: if telnet mysteriously stops working, kill cfengine, the Computer Center's rather rude security script that deletes /etc/xinetd.d/telnet. Yes, telnet is an old, insecure protocol, but this software needs it: the Java code uses telnet to communicate with the Perl server.
On ahut1 or ahut2, I have a cron script /root/scripts/prepHV which takes care of running the server automatically. It also periodically restores the "telnet" file mentioned above and restarts xinetd.
Here is a simple test of the telnet server: if you are on, for example, ahut2 and can "telnet ahut2" (i.e. telnet into yourself), then yes, the server is running.
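The restore-and-restart idea behind prepHV can be sketched as below. This is a hypothetical sketch, not the actual /root/scripts/prepHV; the telnet service entry shown is a plausible minimal one, not the site's real file:

```shell
#!/bin/sh
# Hypothetical sketch of the prepHV idea: if cfengine has deleted the
# xinetd telnet entry, recreate it and restart xinetd. The file contents
# below are an assumed minimal telnet service stanza.
restore_telnet_conf() {
    conf="$1"
    if [ ! -f "$conf" ]; then
        cat > "$conf" <<'EOF'
service telnet
{
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
        disable         = no
}
EOF
        # make xinetd pick up the restored file
        if [ -x /sbin/service ]; then
            /sbin/service xinetd restart || echo "could not restart xinetd"
        fi
    fi
}

# On the real machine, run as root:
# restore_telnet_conf /etc/xinetd.d/telnet
```

Run from cron, this keeps telnet alive even when cfengine periodically deletes the config.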
5. Install the software. Assuming you don't need the Java code (because it runs on an adaq machine in the counting room), there are two pieces: the Shim Perl server and the low-level front-end C code; see "Architecture" above. You'll need to find the tar file for this, which is in the MSS.
What can go wrong ?
Common troubleshooting items are listed in the User Guide (search for "troubleshooting").
Below is a list of other problems I've seen and their solutions.
1. Power-cycling an HV crate has been known to help, especially if more than one HV card does not respond; in that case it's probably a crate problem, not a card problem.
2. No connection to the server. Try restarting the server; instructions are in "Restarting the Servers" above.
3. Less seems to go wrong now that we're using the rPI boards, but we need to accumulate more experience with them and then write about it here.