HV HowTo for Experts

Revision as of 16:40, 4 May 2022

High Voltage in Hall A

First of all, please be aware of the simple instructions for users: [[HV HowTo for Users|HowTo for Users]]

For the SBS experiments, the high voltage is controllable with EPICS: see [[SBS EPICS]]

== Overview of Architecture ==

The HV crates sit in the hall, e.g. on the detector stacks.  A set of cards with (usually) 12 channels of negative or positive HV are inserted into the crates.

A custom "serial board" (built by Javier Gomez and Jack Segal) talks to the cards.  This "serial board" replaces an old, obsolete motherboard.  (There are still a few crates with this motherboard, however -- e.g. the beamline crate.)

A Perl "Shim" server (written by Brad Sawatzky) runs on a PC (usually a Raspberry Pi) near the HV crate.  The "Shim" server uses (via a socket connection) low-level C code written by Javier to talk to his serial card in the HV crate.  On the User end, a Java GUI (written by Roman Pomatsalyuk) displays the HV information and provides User control.  This Java GUI talks to the Shim server.  Alternatively, the Java GUI can talk to the motherboard via a Portserver.

A relatively new development (2021) is an EPICS layer.  The Shim server can talk to the Java GUI as well as the EPICS layer.  The EPICS support (based on the old LeCroy HV EPICS support developed by Chris Slominski and modernized by Stephen Wood) has been added to the rPI boards in the crates used for SBS.  The EPICS server on each rPI communicates with the shim, which has been upgraded to support simultaneous connections from EPICS and the Java GUI.  The addition of EPICS support allows the use of the broad range of EPICS tools for GUIs, stripcharts and archiving.  See [[SBS EPICS]] for more information.
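
As a rough sketch of the chain described above (which piece runs where varies a little by crate; port numbers and host names are omitted):

<pre>
 Java GUI                     EPICS clients (GUIs, stripcharts, archiver)
    |                                        |
    |                          EPICS server (on the rPI, SBS crates)
    |                                        |
    +-------------> Perl "Shim" server (on the rPI)
                              |
                    low-level C code (on the rPI)
                              |
                    custom serial board (in the HV crate)
                              |
                    HV cards (usually 12 channels each)
</pre>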

==  Existing Crates ==

Here is a list of HV crates in Hall A as of May 2021.

The crates that use portservers are talking to the old motherboard. The crates that use an rPI PC with Shim have had their motherboards removed. See Architecture above.

<pre>
Location                   rPi or Portserver for Shim    Config on ~/slowc    How to start on aslow

Left HRS (1 crate)         rpi8                          LEFT                 cd ~aslow/slowc ; ./hvs LEFT

BigBite detectors          rpi17
Ecal, Grinch               rpi18

SBS Hcal                   rpi20
                           rpi21

Beamline                   portserver hatsv4: 2003       BEAMLINE             cd ~aslow/slowc ; ./hvs BEAMLINE
                                                                              (Not working 5/26/2021)
</pre>

The following list, from September 2014, is obsolete:

<pre>
Location                   Portserver, or PC for Shim    Config on ~/slowc    How to start on aslow

Left HRS (1 crate)         rpi8                          LEFT                 cd ~aslow/slowc ; ./hvs LEFT

Right HRS (bottom crate)   rpi7                          RIGHT

Right HRS (top crate)      rpi4                          RIGHT                cd ~aslow/slowc ; ./hvs RIGHT
                                                                              (this starts both R-HRS crates)

Beamline                   portserver hatsv4: 2003       BEAMLINE             cd ~aslow/slowc ; ./hvs BEAMLINE
</pre>

== Restarting the Servers that include EPICS (2021) ==

This section applies to those crates that have been updated to include the EPICS server.

Three servers run on each Raspberry Pi PC connected to a LeCroy HV mainframe.  They are: a low-level server that communicates with the HV cards in the mainframe, a "shim" that communicates with the low-level server and presents an interface that mimics the original mainframe interface, and an EPICS server that allows control and monitoring by EPICS applications.

A cron script ensures that the servers are running under the pi account on each rPI:

<pre>
$ crontab -l

2,6,10,14,18,22,26,30,34,38,42,46,50,54,58 * * * * /home/pi/scripts/start_hv_cron
</pre>

The start_hv_cron script checks that each of the three servers is running.  If they are running, the script does nothing.  If any of the servers is not running, the script starts them.  This means that the servers should always be running (within about 5 min of restarting the rPI), and you should not have to do anything.

The start_hv_cron script also configures the EPICS setup so that EPICS PVs (process variables) are defined for each HV channel in each slot that is installed in the crate.
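
Once the PVs exist, the standard EPICS command-line tools can be used to read and monitor the HV channels.  A minimal sketch -- the PV names below are purely illustrative; see [[SBS EPICS]] for the actual naming scheme used for the SBS crates:

<pre>
caget     SOMECRATE:SOMESLOT:SOMECHAN:VMon    # read a (hypothetical) measured-voltage PV once
camonitor SOMECRATE:SOMESLOT:SOMECHAN:VMon    # watch the same PV continuously
</pre>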

If the servers need to be restarted, that can be done in one of three ways.

1. Power cycle the mainframe.
2. Reboot the rPI by logging on to the pi account and typing "sudo reboot".
3. Kill the servers by logging on to the pi account and typing "~/scripts/kill_hv_servers".

In each of these cases, the servers will be restarted by cron within four to five minutes.
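
For example, method 3 from a counting-house machine looks like this (rpi20 is just an example -- use the rPI that serves the crate you care about, per the table above):

<pre>
ssh pi@rpi20
~/scripts/kill_hv_servers
exit
</pre>

Cron runs start_hv_cron every four minutes, so the servers come back on their own shortly afterwards.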


== Restarting the Servers ==

The servers can run on a PC with a serial connection. As of Sept 2014, all the HRS servers are running on Raspberry PI PCs.

A cron script ensures that the server is running on pi@rpi

<pre>
$ crontab -l

# cron jobs
5,10,15,20,25,30,35,40,45,50,53,55,58 * * * * /home/pi/scripts/start_hv_cron
</pre>

The start_hv_cron script checks if the server is running.  If it is running, the script does nothing.  If it is not running, the script starts it.  This means that the server should always be running (within about 5 min of restarting the rPI), and you should not have to do anything.

A simple way to restart the server is to reboot the rPI; the cron script will then restart the server.

If you prefer a faster way, here's how:

ssh into the rPI of interest (rpi4 or rpi7 on R-HRS, or rpi8 on L-HRS).  The user is "pi"; sorry, I can't say the password here -- I'll put it near the whiteboard.

These should be typed at the command prompt:

<pre>
ps -awx | grep -i shim
kill -9 27913                    # assuming the PID of the shim process was 27913
/home/pi/scripts/start_hv_cron
exit
</pre>
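
If you would rather not look up the PID by hand, something like the following should also work (this assumes the shim process name still matches LecroyHV_shim, as in the file names listed in the May 2022 section below):

<pre>
pkill -f LecroyHV_shim
/home/pi/scripts/start_hv_cron
</pre>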

Next, on adaqsc, or wherever you run the Java GUI, and using the aslow account, kill the Java GUI and restart it:

<pre>
cd /adaqfs/home/aslow/slowc
./hvs LEFT        # or ./hvs RIGHT, depending on the spectrometer
</pre>

== May 2022 updates to the Raspberry Pi ==

Some goals of the recent work on the HV control software were: 1) update the software so that the HV controls can run on the newer rPI 3 and 4 boards; 2) make several backup SD cards so that we can rapidly swap them to fix broken SD cards (presumed to be zapped by radiation).

Generally the most recent version is in a directory like /soft1 (could be /soft2, /soft3, etc)

Within /soft1 are the following directories:

/scripts

  contains the script(s) that launch the whole thing.
  Mainly you need start_hv_cron, as explained above.
  
  configure_epics.py configures the EPICS layer.  Currently there is a hack to force the crate # to be 8 instead
  of deriving it from the host name (rpi22.jlab.org).  See "hack" in that script.

/lecroy_servers

  Contains the C code that talks to the crate and the Perl "shim" server.
   -- pay attention to the GPIO memory offset; it depends on the rPI version
  20220504_i2lchv_rPI-linux  driver to talk to HV crate
  20220504_i2lchv_rPI-linux.c  source of driver
  LecroyHV_shim-01May2022-epics  Perl "shim" server
  

/epics

  The EPICS layer

/pkg_to_install

  See the README.  But it was probably already installed.

In /home/pi there are links to these, e.g. lecroy_servers -> ./soft1/lecroy_servers/

== Setting Up A New Installation ==

For the rPI, the installation kit is in /mss/home/rom/rpi*tar*; use the tar file with the most recent date.

There is typically a READMEpi or README in each directory. See also the notes above (updates to rpi).

I will try to update this on GitHub.  There are two repositories.

1. [https://github.com/rwmichaels/HV-control Frontend C code and Perl Shim server] (credit: Javier Gomez wrote the C code and Brad Sawatzky wrote the Perl server.)

2. [https://github.com/romanivanovi4/hvg_software Java GUI] (credit: Roman Pomatsalyuk wrote the Java GUI.)

You should always pick the tar file with the latest date, or use Git.  For the tar files, the date is usually written in the filename.
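
If you go the Git route, cloning the two repositories listed above gets you the frontend/shim code and the Java GUI:

<pre>
git clone https://github.com/rwmichaels/HV-control
git clone https://github.com/romanivanovi4/hvg_software
</pre>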


== What can go wrong? ==

Common troubleshooting items are listed in the User Guide (search for troubleshooting):

https://hallaweb.jlab.org/wiki/index.php?title=HV_HowTo_for_Users

Below is a list of other problems I've seen and their solutions.

1. Power-cycling an HV crate has been known to help, especially if more than one HV card does not respond; in that case it is probably a crate problem, not a card problem.

2. No connection to server.  Try restarting the server.  Instructions are in "Restarting the Servers" above.

3. I think that less goes wrong now that we're using the rPI boards, but we need to accumulate more experience with those and then write here about it.