High Voltage in Hall A

First of all, please be aware of the simple instructions for users: HowTo for Users (https://hallaweb.jlab.org/wiki/index.php?title=HowTo_for_Users)

For the SBS experiments, the high voltage is controllable with EPICS: see SBS EPICS.

Overview of Architecture

The HV crates sit in the hall, e.g. on the detector stacks. A set of cards with (usually) 12 channels of negative or positive HV are inserted into the crates. A custom "serial board" (built by Javier Gomez and Jack Segal) talks to the cards. This "serial board" replaces an old, obsolete motherboard. (There might be a few of these old crates somewhere, but I doubt it.)

A Perl "Shim" server (written by Brad Sawatzky) runs on a PC (usually a Raspberry Pi) near the HV crate. The "Shim" server uses (via a socket connection) low-level C code written by Javier to talk to his serial card in the HV crate. On the User end, a Java GUI (written by Roman Pomatsalyuk) displays the HV information and provides User control. This Java GUI talks to the Shim server. Alternatively, the Java GUI can talk to the motherboard via a Portserver.

A relatively new development (2021) is an EPICS layer. The Shim server can talk to the Java GUI as well as the EPICS layer. The EPICS support (based on the old LeCroy HV EPICS support developed by Chris Slominski and modernized by Stephen Wood) has been added to the rPI boards in the crates used for SBS. The EPICS server on each rPI communicates with the shim, which has been upgraded to support simultaneous connections from EPICS and the Java GUI. The addition of EPICS support allows the use of the broad range of EPICS tools for GUIs, stripcharts and archiving. See SBS EPICS for more information.
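
As an illustration of that last point, here is a minimal sketch of reading the crate state with the standard EPICS command-line clients. This assumes the EPICS base tools are installed on your console machine; the PV name pattern HAHV<n>:Pw comes from the installation notes below (here for the crate on rpi22), and the meaning of Pw is my assumption:

caget HAHV22:Pw        # read the Pw PV of the crate on rpi22 (presumably the power on/off state)
camonitor HAHV22:Pw    # watch it update, stripchart-style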

Existing Crates

Here is a list of HV crates in Hall A as of May 2021.

The crates that use portservers are talking to the old motherboard. The crates that use an rPI PC with Shim have had their motherboards removed. See Architecture above.

Location                rPi or Portserver for Shim           Config on ~/slowc     How to start on aslow

Left HRS (1 crate)         rpi8                                 LEFT               cd ~aslow/slowc ; ./hvs LEFT

BigBite detectors          rpi17
Ecal, Grinch               rpi18

SBS Hcal                   rpi20
                           rpi21

Beamline                portserver hatsv4: 2003                 BEAMLINE           cd ~aslow/slowc ; ./hvs BEAMLINE
                                                                                   (Not working 5/26/2021)

The following list, from September 2014, is obsolete:

Location                Portserver, or PC for Shim            Config on ~/slowc     How to start on aslow

Left HRS (1 crate)       rpi8                                     LEFT               cd ~aslow/slowc ; ./hvs LEFT

Right HRS (bottom crate)    rpi7                                RIGHT 
 
Right HRS (top crate)       rpi4                                 RIGHT              cd ~aslow/slowc ; ./hvs RIGHT  
                                                                                    (this starts both R-HRS crates)

Beamline                portserver hatsv4: 2003                  BEAMLINE           cd ~aslow/slowc ; ./hvs BEAMLINE

Restarting the Servers that include EPICS (2021)

This section applies to those crates that have been updated to include the EPICS server.

Servers run on each Raspberry Pi connected to a LeCroy HV mainframe. They are: a low-level server that communicates with the HV cards in the mainframe, a "shim" that communicates with the low-level server and presents an interface that mimics the original mainframe interface, and an EPICS server that allows control and monitoring by EPICS applications.

A cron script ensures that the servers are running under the pi account on each rPI.

$ crontab -l

2,6,10,14,18,22,26,30,34,38,42,46,50,54,58 * * * * /home/pi/scripts/start_hv_cron

The start_hv_cron script checks that the servers are running. If they are running, the script does nothing. If any of the servers are not running, the script starts them. This means that the servers should always be running (within about 5 minutes of restarting the rPI), and you should not have to do anything.

The start_hv_cron script also configures the EPICS setup so that EPICS PVs (process variables) are defined for each HV channel in each slot that is installed in the crate.
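
To see which PVs got defined, the same check described in the installation notes below works from the rPI itself:

telnet localhost 20000    # connect to the EPICS server console on the rPI (port from step 10 below)
dbl                       # at the prompt: list all the EPICS variables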

If the servers need to be restarted, that can be done in one of three ways:

1. Power cycle the mainframe.
2. Reboot the rPI by logging on to the pi account and typing "sudo reboot".
3. Kill the servers by logging on to the pi account and typing "~/scripts/kill_hv_servers".

In each of these cases, the servers will be restarted by cron within four to five minutes.
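
If you would rather not wait for cron, the same scripts can be run by hand on the rPI (paths as given above):

~/scripts/kill_hv_servers         # stop the servers (as in option 3 above)
/home/pi/scripts/start_hv_cron    # restart them now instead of waiting for the cron pass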

Restarting the Servers

The servers can run on a PC with a serial connection. As of Sept 2014, all the HRS servers are running on Raspberry PI PCs.

A cron script ensures that the server is running under pi@rpi:

$ crontab -l

# cron jobs
5,10,15,20,25,30,35,40,45,50,53,55,58 * * * * /home/pi/scripts/start_hv_cron

The start_hv_cron script checks if the server is running. If it is running, the script does nothing. If it is not running, the script starts it. This means that the server should always be running (within about 5 minutes of restarting the rPI), and you should not have to do anything.

A simple way to restart the server is to reboot the rpi. The cron script will restart it.

If you prefer to be faster, here's how.

ssh into the rpi of interest (rpi4 or rpi7 on R-HRS or rpi8 on L-HRS). User is "pi" and sorry I can't say the password here. I'll put it near the whiteboard.
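
For example (host names from the crate list above):

ssh pi@rpi8    # L-HRS crate; use rpi4 or rpi7 for the R-HRS crates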

Once you are logged in, type these at the command prompt:

ps -awx | grep -i shim
kill -9 27913                     # assuming the PID of the process was 27913
/home/pi/scripts/start_hv_cron
exit
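
An equivalent shortcut, if you don't want to hunt for the PID by hand, is pkill, which matches the process name and kills it in one step (this kills only the shim, same as the ps/kill pair above):

pkill -9 -f shim    # match any process whose command line contains "shim" and kill it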

Next, on adaqsc (or wherever you run the Java GUI), using the aslow account, kill the Java GUI and restart it:

cd /adaqfs/home/aslow/slowc
./hvs LEFT
or
./hvs RIGHT

May 2022 updates to the Raspberry Pi

Some goals of the recent work on the HV control software were: 1) update the software so that HV control works on the newer rpi3 and rpi4; 2) make several backup SD cards so that we can rapidly swap them to replace broken SD cards (presumed to be zapped by radiation).

Generally the most recent version is in a directory like /soft1 (could be /soft2, /soft3, etc.).

Within /soft1 are some directories:

/scripts

  contains the script(s) that launch the whole thing.
  Mainly you need start_hv_cron, as explained above.
  
  configure_epics.py configures EPICS.  Currently there is a hack to force the crate number to be 8 instead
  of deriving it from the hostname (e.g. rpi22.jlab.org).  See "hack" in that script.

/lecroy_servers

  Contains the C code that talks to the crate and the Perl "shim" server.
   -- pay attention to the GPIO memory offset; it depends on the version of the rpi
  20220504_i2lchv_rPI-linux  driver to talk to HV crate
  20220504_i2lchv_rPI-linux.c  source of driver
  Note about rpi3:  need to use /dev/serial0 instead of /dev/ttyAMA0.
  Also, don't forget to enable the serial lines (raspi-config --> Interface options --> Serial interface --> enable, also explained below)
  LecroyHV_shim-01May2022-epics  Perl "shim" server
  

/epics

  The EPICS layer

/pkg_to_install

  See the README.  But it was probably already installed.

In /home/pi there are links to these, e.g. lecroy_servers -> ./soft1/lecroy_servers/
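
For instance, to inspect such a link, or recreate one by hand if it is missing (the scripts link here is an assumption, by analogy with lecroy_servers):

cd /home/pi
ls -l lecroy_servers             # lecroy_servers -> ./soft1/lecroy_servers/
ln -s ./soft1/scripts scripts    # hypothetical: recreate a missing link the same way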

Setting Up A New Installation

For the rPI, the installation kit is in /mss/home/rom/rpi*tar*.

There is typically a READMEpi or README in each directory. See also the notes above (updates to rpi).

I will update this on github. There are two repositories.

1. Frontend C code and Perl Shim server: https://github.com/rwmichaels/HV-control (credit: Javier Gomez wrote the C code and Brad Sawatzky wrote the Perl server.)

2. Java GUI: https://github.com/romanivanovi4/hvg_software (credit: Roman Pomatsalyuk wrote the Java GUI.)

3. Not sure about EPICS. I have to learn this part (noted, May 2022).

You should always pick the latest date tar file, or use Git. For the tar file the date is usually written in the filename.

Detailed steps

1. Format the SD card using the formatter software for the rpi.
2. Download the rpi image (operating system) onto the SD card using a saved image.
3. Enable the rpi on the network -- ask CST; you need to give the MAC address and
   have it added to DHCP.
4. Set the password for the "pi" account to our Hall A/C value.
5. Enable sshd on the raspberry (Google how).  In the same config menu, you can enable the serial port (step 12).
6. scp rom@jlabl2:/home/rom/rpi_setup_6May22.tar .
   or whatever is the latest version.
7. tar xvf this on the rpi.
8. Descend into each /pkg_to_install subdir and do this:
   sudo perl Makefile.PL ; sudo make ; sudo make test ; sudo make install
9. You might have to build and install some items like readline and ncurses "by hand"
   from pkg_to_install
   cd readline-6.3/
   sudo make libreadline.a
   cd ncurses-5.9/lib
   sudo make libncurses.a
   cd /usr/lib
   sudo cp /home/pi/pkg_to_install/readline-6.3/libreadline.a .
   sudo cp /home/pi/pkg_to_install/ncurses-5.9/lib/libncurses.a . 
   cd /usr/include
   sudo ln -s /home/pi/pkg_to_install/readline-6.3 readline
   sudo ln -s /home/pi/pkg_to_install/ncurses-5.9 ncurses
10. For EPICS, configure_epics.py in /home/pi/scripts needs to run.
    I think this is automatic.  You get EPICS variables like HAHV8:Pw,
    where the "8" is replaced by the host number, e.g. rpi22 produces
    HAHV22:Pw.
    To check: from rpi22, you can do "telnet localhost 20000" and type the command "dbl".
    That should list all the EPICS variables.
11. Create crontab entry as per /home/pi/scripts/cron.tab
12. (edit: June 2024)  It seems necessary to enable the serial connection.  Log in to the rpi and then do this:

    sudo raspi-config
        3. Interface Options
        I6 Serial interface

    First question (login shell?):  No
    Second question (enable serial port hardware?):  Yes

After the above, you should reboot.  Then /dev/serial0 will appear.
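
A quick sanity check after the reboot (on Raspberry Pi OS, serial0 is a symlink to the hardware UART):

ls -l /dev/serial0    # should point at ttyAMA0 or ttyS0, depending on the rpi model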

13. It seems to be a good idea to sync the system clock (and stop the clock errors when compiling) by running the following command once when connected to the network:
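# one-shot clock set: parses the Date: header from an HTTP response from google.com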
sudo date -s "$(wget -qSO- --max-redirect=0 google.com 2>&1 | grep Date: | cut -d' ' -f5-8)Z" 

   Done !  (gulp)

What can go wrong?

Common troubleshooting items are listed in the User Guide (search for troubleshooting):

https://hallaweb.jlab.org/wiki/index.php?title=HV_HowTo_for_Users

Below is a list of other problems I've seen and their solution.

1. Power-cycling an HV crate has been known to help, especially if more than one HV card does not respond; in that case it's probably a crate problem, not a card problem.

2. No connection to the server. Try restarting the server; instructions are in "Restarting the Servers" above.

3. It happened to me a couple of times that, after power-cycling etc., I could not connect to the LeCroy cards. When I ran the code in /home/pi/hvctrl, it somehow "woke up" the crate and then the software listed here worked. I know this sounds like voodoo. Anyway, see the README in /home/pi/hvctrl. It is an alternative, very primitive and simple control system.