HV HowTo for Experts

High Voltage in Hall A

First of all, please be aware of the simple instructions for users: HowTo for Users (https://hallaweb.jlab.org/wiki/index.php?title=HowTo_for_Users)

For the SBS experiments, the high voltage is controllable with EPICS: See SBS EPICS

Overview of Architecture

The HV crates sit in the hall, e.g. on the detector stacks. A set of cards with (usually) 12 channels of negative or positive HV are inserted into the crates. A custom "serial board" (built by Javier Gomez and Jack Segal) talks to the cards. This "serial board" replaces an old, obsolete motherboard. (There are still a few crates with this motherboard, however -- e.g. the beamline crate.)

A Perl "Shim" server (written by Brad Sawatzky) runs on a PC (usually a Raspberry Pi) near the HV crate. The "Shim" server uses (via a socket connection) low-level C code written by Javier to talk to his serial card in the HV crate. On the User end, a Java GUI (written by Roman Pomatsalyuk) displays the HV information and provides User control. This Java GUI talks to the Shim server. Alternatively, the Java GUI can talk to the motherboard via a Portserver.

A relatively new development (2021) is an EPICS layer. The Shim server can talk to the Java GUI as well as the EPICS layer. The EPICS support (based on the old LeCroy HV EPICS support developed by Chris Slominski and modernized by Stephen Wood) has been added to the rPI boards in the crates used for SBS. The EPICS server on each rPI communicates with the shim, which has been upgraded to support simultaneous connections from EPICS and the Java GUI. The addition of EPICS support allows the use of the broad range of EPICS tools for GUIs, stripcharts and archiving. See SBS EPICS for more information.
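Putting that together, the chain looks roughly like this (a sketch of the components named above, for orientation only):

  Java GUI ------------------------------+
                                         +--> Perl "Shim" server --> low-level C code --> serial board --> HV cards
  EPICS clients --> EPICS server (rPi) --+

  Old-style crates (e.g. the beamline crate):

  Java GUI --> Portserver --> motherboard --> HV cards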

Existing Crates

Here is a list of HV crates in Hall A as of May 2021.

The crates that use portservers are talking to the old motherboard. The crates that use an rPI PC with Shim have had their motherboards removed. See Architecture above.

Location                rPi or Portserver for Shim           Config on ~/slowc     How to start on aslow

Left HRS (1 crate)         rpi8                                 LEFT               cd ~aslow/slowc ; ./hvs LEFT

BigBite detectors          rpi17
Ecal, Grinch               rpi18

SBS Hcal                   rpi20
                           rpi21

Beamline                portserver hatsv4: 2003                 BEAMLINE           cd ~aslow/slowc ; ./hvs BEAMLINE
                                                                                   (Not working 5/26/2021)

The following list, from September 2014, is obsolete:

Location                Portserver, or PC for Shim            Config on ~/slowc     How to start on aslow

Left HRS (1 crate)       rpi8                                     LEFT               cd ~aslow/slowc ; ./hvs LEFT

Right HRS (bottom crate)    rpi7                                RIGHT 
 
Right HRS (top crate)       rpi4                                 RIGHT              cd ~aslow/slowc ; ./hvs RIGHT  
                                                                                    (this starts both R-HRS crates)

Beamline                portserver hatsv4: 2003                  BEAMLINE           cd ~aslow/slowc ; ./hvs BEAMLINE

Restarting the Servers that include EPICS (2021)

This section applies to those crates that have been updated to include the EPICS server.

Servers run on each Raspberry Pi PC connected to a LeCroy HV mainframe. They are: a low-level server that communicates with the HV cards in the mainframe, a "shim" that communicates with the low-level server and presents an interface that mimics the original mainframe interface, and an EPICS server that allows control and monitoring by EPICS applications.

A cron script running under the pi account on each rPi ensures that the servers are running.

$ crontab -l

2,6,10,14,18,22,26,30,34,38,42,46,50,54,58 * * * * /home/pi/scripts/start_hv_cron

The start_hv_cron script checks that the servers are running. If they are running, the script does nothing. If any of the servers are not running, the script starts them. This means that the servers should always be running (within about 5 minutes of restarting the rPi), and you should not have to do anything.

The start_hv_cron script also configures the EPICS setup so that EPICS PVs (process variables) are defined for each HV channel in each slot installed in the crate.
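For example, once the PVs are defined they can be read with the standard EPICS command-line tools from any machine with channel access to the rPi. The PV name below is taken from the installation notes further down this page (the number in the name follows the rPi's number, e.g. rpi22):

  caget HAHV22:Pw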

If the servers need to be restarted, that can be done in one of three ways.

1. Power cycle the mainframe.

2. Reboot the rPi by logging on to the pi account and typing "sudo reboot".

3. Kill the servers by logging on to the pi account and typing "~/scripts/kill_hv_servers".

In each of these cases, the servers will be restarted by cron within four to five minutes.
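For example, option 3 looks roughly like this (rpi20 is just an illustrative crate; use the rPi for your crate from the table above):

  ssh pi@rpi20
  ~/scripts/kill_hv_servers
  exit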

Restarting the Servers (crates without EPICS)

The servers can run on a PC with a serial connection. As of Sept 2014, all the HRS servers are running on Raspberry PI PCs.

A cron script running under the pi account on the rPi ensures that the server is running:

$ crontab -l

# cron jobs
5,10,15,20,25,30,35,40,45,50,53,55,58 * * * * /home/pi/scripts/start_hv_cron

The start_hv_cron script checks if the server is running. If it is running, the script does nothing. If it is not running, the script starts it. This means that the server should always be running (within about 5 minutes of restarting the rPi), and you should not have to do anything.

A simple way to restart the server is to reboot the rpi. The cron script will restart it.
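For example (rpi8 here stands for whichever rPi serves your crate; see the list above):

  ssh pi@rpi8
  sudo reboot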

If you prefer to be faster, here's how:

ssh into the rpi of interest (rpi4 or rpi7 on R-HRS or rpi8 on L-HRS). User is "pi" and sorry I can't say the password here. I'll put it near the whiteboard.

These should be typed at the command prompt:

ps awx | grep -i shim
kill -9 27913                   (assuming the PID of the process was 27913)
/home/pi/scripts/start_hv_cron
exit

Next, on adaqsc, or wherever you run the Java GUI, kill the Java GUI and restart it from the aslow account:

cd /adaqfs/home/aslow/slowc
./hvs LEFT
or
./hvs RIGHT

May 2022 updates to the Raspberry Pi

Some goals of the recent work on the software for HV controls were: 1) update the software so that HV control works on the newer rpi3 and rpi4; 2) make several backup SD cards so that we can rapidly swap them to replace broken SD cards (presumed to be zapped by radiation).

Generally the most recent version is in a directory like /soft1 (could be /soft2, /soft3, etc.).

Within /soft1 are some directories:

/scripts

  contains the script(s) that launch the whole thing.
  Mainly you need start_hv_cron, as explained above.
  
  configure_epics.py configures the EPICS layer.  Currently there is a hack to force the crate number to be 8
  instead of deriving it from the hostname (e.g. rpi22.jlab.org).  See "hack" in that script.

/lecroy_servers

  Contains the C code that talks to the crate and the Perl "shim" server.
   -- pay attention to the GPIO memory offset; it depends on the version of the rpi
  20220504_i2lchv_rPI-linux  driver to talk to HV crate
  20220504_i2lchv_rPI-linux.c  source of driver
  LecroyHV_shim-01May2022-epics  Perl "shim" server
  

/epics

  The EPICS layer

/pkg_to_install

  See the README.  But it was probably already installed.

In /home/pi there are links to these, e.g. lecroy_servers -> ./soft1/lecroy_servers/
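A quick way to see which /softN area is in use:

  ls -l /home/pi

(the links, e.g. lecroy_servers -> ./soft1/lecroy_servers/, show which /softN the servers run from)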

Setting Up A New Installation

For the rPI, the installation kit is in /mss/home/rom/rpi*tar*.

There is typically a READMEpi or README in each directory. See also the notes above (updates to rpi).

I will update this on github. There are two repositories.

1. Frontend C code and Perl Shim server: https://github.com/rwmichaels/HV-control (credit: Javier Gomez wrote the C code and Brad Sawatzky wrote the Perl server.)

2. Java GUI: https://github.com/romanivanovi4/hvg_software (credit: Roman Pomatsalyuk wrote the Java GUI.)

3. Not sure about EPICS. I have to learn this part (noted, May 2022).

You should always pick the latest date tar file, or use Git. For the tar file the date is usually written in the filename.
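For example, to get both repositories with Git (URLs from the list above):

  git clone https://github.com/rwmichaels/HV-control
  git clone https://github.com/romanivanovi4/hvg_software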

Detailed steps

1. Format SD card using formatter software for rpi.
2. Download the rpi image (operating system) onto the SD card using a saved image.
3. Enable rpi on network -- ask CST, need to give mac address and
   add to DHCP.  
4. Set the password for the "pi" account to our hall A/C value.
5. Enable sshd on the raspberry (Google how).
6. scp rom@jlabl2:/home/rom/rpi_setup_6May22.tar .
   or whatever is the latest version.
7. tar xvf this on rpi
8. Descend into each /pkg_to_install subdir and do this:
   sudo perl Makefile.PL ; sudo make ; sudo make test ; sudo make install
9. Might have to build and install some items like readline and ncurses "by hand" 
   from pkg_to_install
   cd readline-6.3/
   sudo make libreadline.a
   cd ncurses-5.9/lib
   sudo make libncurses.a
   cd /usr/lib
   sudo cp /home/pi/pkg_to_install/readline-6.3/libreadline.a .
   sudo cp /home/pi/pkg_to_install/ncurses-5.9/lib/libncurses.a . 
   cd /usr/include
   sudo ln -s /home/pi/pkg_to_install/readline-6.3 readline
   sudo ln -s /home/pi/pkg_to_install/ncurses-5.9 ncurses
10. For EPICS, in /home/pi/scripts you need to run configure_epics.py.  
    I think this is automatic.  You get EPICS variables like HAHV8:Pw
    where the "8" should be replaced by the IP number, e.g. rpi22 produces
    HAHV22:Pw.
    To check:
    From rpi22, you can do “telnet localhost 20000” and type the command “dbl”.  
    That should list all the EPICS variables 
11. Create crontab entry as per /home/pi/scripts/cron.tab
   Done !  (gulp)

What can go wrong?

Common troubleshooting items are listed in the User Guide (search for troubleshooting):

https://hallaweb.jlab.org/wiki/index.php?title=HV_HowTo_for_Users

Below is a list of other problems I've seen and their solution.

1. Power-cycling an HV crate has been known to help, especially if more than one HV card does not respond; in that case it's probably a crate problem, not a card problem.

2. No connection to server. Try restarting the server. Instructions are in "Restarting the Servers" above.

3. It happened to me a couple times that, after power-cycling etc, I could not connect to the LeCroy cards. When I ran the code in /home/pi/hvctrl it somehow "woke up" the crate and then the software listed here worked. I know this sounds like voodoo. Anyway, see the README in /home/pi/hvctrl. It is an alternative, very primitive and simple control system.