Note about container
- warning
- If you are running singularity-2.4 on a CentOS 7.2 host, you can get a kernel panic if you use ROOT's TMD5 class while your host directory is mounted into the container.
- installation
- Refer to the installation instructions; the latest version is recommended.
- singularity-2.4 can run singularity-2.3 images and has many new features, so use it instead of 2.3 whenever possible.
- On Red Hat-like Linux, build an RPM first and then install it (a rough sketch follows this list).
- On Windows and Mac, the official installation method is cumbersome and does not run images with graphics well. A better alternative is to use a Linux virtual machine (see below).
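- A rough sketch of the RPM route on a Red Hat-like host, following the generic singularity-2.x build procedure; the tarball name, version, and dependency list here are illustrative, so adjust them to the release you actually download:
    # assuming a release tarball such as singularity-2.4.tar.gz has already been downloaded
    sudo yum install -y gcc make rpm-build libarchive-devel     # typical build dependencies
    rpmbuild -ta singularity-2.4.tar.gz                         # build binary RPMs from the spec file inside the tarball
    sudo yum install -y ~/rpmbuild/RPMS/x86_64/singularity-2.4*.rpm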
- a Linux virtual machine to run singularity
- (It is convenient, but not required, to keep the singularity image in a shared folder, because that keeps the virtual machine small. VMware's shared folders do not work for this, but VirtualBox's do.)
- Download and install VirtualBox: https://www.virtualbox.org/wiki/Downloads
- Download the Linux virtual machine at http://www.phy.duke.edu/~zz81/package/CentOS7_x86_64_20171030.ova (CentOS 7.4, fully updated as of 2017-10-30, with singularity-2.4 installed).
- Import the virtual machine into VirtualBox, set up a shared folder named "share" in its settings, and put the singularity image into that shared folder on the host.
- Boot the Linux virtual machine and log in; the password for both "root" and the user "user" is "111111".
- Install the latest singularity, or whichever version you want.
- Mount the shared folder with "sudo mount -t vboxsf -o uid=$uid,gid=$gid share share" and use "/home/user/share" as your working directory; then you can test singularity (see the sketch below).
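- Putting the last two steps together, a minimal sketch of a session inside the virtual machine (assuming the shared folder is named "share", you are logged in as "user", and the image you want to run has been copied into the shared folder; the image name below is a placeholder):
    mkdir -p /home/user/share                                   # mount point inside the VM
    sudo mount -t vboxsf -o uid=$(id -u),gid=$(id -g) share /home/user/share
    cd /home/user/share
    singularity --version                                       # confirm singularity is installed
    singularity run your_image.img                              # placeholder image name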
- singularity at jlab
- The jlab farm and ifarm have singularity-2.3.1 installed by default.
- On ifarm1402 you can also test singularity-2.4.2 and singularity-2.3.2 through modules, using "module load singularity-2.4.2" or "module load singularity-2.3.2" and "module rm singularity-2.4.2" or "module rm singularity-2.3.2" (see the example below). These modules are not available on the farm nodes, so we cannot run jobs with them yet.
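- For example, to try the newer version on ifarm1402 (module names as listed above):
    module load singularity-2.4.2
    singularity --version      # should now report 2.4.2
    module rm singularity-2.4.2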
- test singularity
- cd some_where_with_enough_space
- "setenv http_proxy http://jprox.jlab.org:8082" "setenv https_proxy http://jprox.jlab.org:8082" if you are on jlab ifarm
- setenv SINGULARITY_CACHEDIR ./ (changes the cache dir from the default ~/.singularity; this is a MUST on the jlab ifarm, where home directories have very limited space)
- singularity pull docker://godlovedc/lolcow
- singularity run lolcow.img
- setenv PYTHONHTTPSVERIFY 0 (sometimes needed to bypass the singularity hub certificate check)
- singularity pull shub://GodloveD/lolcow
- singularity run GodloveD-lolcow-master-latest.simg
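- For reference, the steps above collected into one tcsh session on the jlab ifarm (the initial cd is a placeholder; use any directory with enough space):
    cd some_where_with_enough_space                 # placeholder: pick a dir with enough space
    setenv http_proxy http://jprox.jlab.org:8082
    setenv https_proxy http://jprox.jlab.org:8082
    setenv SINGULARITY_CACHEDIR ./                  # keep the cache out of the small home dir
    # pull and run an image from docker hub
    singularity pull docker://godlovedc/lolcow
    singularity run lolcow.img
    # pull and run the same image from singularity hub
    setenv PYTHONHTTPSVERIFY 0                      # sometimes needed to bypass the hub certificate check
    singularity pull shub://GodloveD/lolcow
    singularity run GodloveD-lolcow-master-latest.simg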
Questions and comments should go to Zhiwen Zhao (zwzhao at jlab.org).