Installation via GNU Guix
spongebob asked 2 weeks ago



Hi there,
I am trying to install the package SingleM (https://github.com/wwood/singlem#installation) onto Cedar. According to the GitHub page, the simplest way is as follows:
Installation via GNU Guix
The most straightforward way of installing SingleM is to use the GNU Guix package which is part of the ACE Guix package collection. This method installs not just the Python libraries required but the compiled bioinformatics tools needed as well. Once you have installed Guix, clone the ACE collection and install:

git clone https://github.com/Ecogenomics/ace-guix
GUIX_PACKAGE_PATH=ace-guix guix package --install singlem

However, from looking online I am unclear how this works on Cedar. Any pointers in the right direction would be much appreciated.

Thanks

5 Answers
Rob Syme Staff answered 2 weeks ago



Hi Spongebob(?)
While GNU Guix is certainly an excellent package manager, it is not installed on the Compute Canada systems. However, I notice that SingleM is also available as a Docker container, which means that you can run it inside a Singularity container on any of the Compute Canada facilities.
Full instructions are available in the Compute Canada Singularity documentation, but the basic idea would be to:

  1. Pull the image down from docker hub and convert it into a singularity image with
    • singularity pull docker://wwood/singlem:v0.13.0
  2. Run an executable inside the container, either through an interactive shell or by calling the executable directly, binding some of the folders so that they are available inside the container:
    • singularity exec -B /home -B /project -B /scratch -B /localscratch singlem_v0.13.0.sif singlem pipe ...

Be sure to check out the Compute Canada docs mentioned above for a fuller tutorial on running Singularity containers.
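
For example, a complete session might look like the following (the image filename is singularity 3.x's default output for this pull, and the singlem arguments are only an illustration; check singlem pipe -h for the real options):

$ singularity pull docker://wwood/singlem:v0.13.0
$ singularity exec -B /home -B /project -B /scratch -B /localscratch \
    singlem_v0.13.0.sif \
    singlem pipe --sequences my_reads.fastq.gz --otu_table otu_table.csv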

spongebob replied 2 weeks ago

great thanks!

I tried to pull down from docker but got this:

INFO: Starting build...
Getting image source signatures
Skipping fetch of repeat blob sha256:5bcdc10480c772636ba756abfe1c1c27350946a71ed1f2454e44cdcf9e1e935f
Copying config sha256:e6da18da680b639ff1583eb51941d2bb0a1df069718f8b09df4e75d4d8f60df7
613 B / 613 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file...
FATAL: Unable to pull docker://wwood/singlem:v0.13.0: While running mksquashfs: exit status 1: FATAL ERROR:Failed to create thread

Rob Syme Staff answered 2 weeks ago



It’s possible that you have exceeded your disk quota on the filesystem you happen to be on.
You can check your quotas with:

$ diskusage_report

I’ve pulled down the same image; it consumes about 620MB:

[robsyme@beluga2 robsyme]$ singularity pull singlem.img docker://wwood/singlem:v0.13.0
INFO: Starting build...
Getting image source signatures
Copying blob sha256:5bcdc10480c772636ba756abfe1c1c27350946a71ed1f2454e44cdcf9e1e935f
626.64 MiB / 626.64 MiB [=================================================] 36s
Copying config sha256:e6da18da680b639ff1583eb51941d2bb0a1df069718f8b09df4e75d4d8f60df7
613 B / 613 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file...
INFO: Build complete: singlem.img
[robsyme@beluga2 robsyme]$ ls -lh singlem.img
-rwxr-xr-x 1 robsyme rrg-bourqueg-ad 621M Nov 27 12:22 singlem.img
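
Note also that singularity caches the downloaded docker layers, by default under ~/.singularity in your home directory (unless SINGULARITY_CACHEDIR is set), so failed or repeated pulls can quietly eat into your home quota. You can check the cache size with:

$ du -sh ~/.singularity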

In which directory/filesystem were you running the singularity command?

spongebob replied 2 weeks ago

checked disk usage, looks fine – running from /home/username

Rob Syme Staff answered 2 weeks ago



Hi Spongebob
The ‘ERROR:Failed to create thread’ message suggests to me that the mksquashfs command is being run on a node that is being heavily used. If you’re running the command on the login nodes (rather than the compute nodes), there may be some uncharitable users running CPU-intensive code on the same machine, causing your command to fail.
Which Cedar node are you running on (what is the output of hostname)?
If you run top, do you see processes consuming a lot of CPU?
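
For example, something like:

$ hostname
$ top -b -n 1 | head -20

will show which node you are on and the busiest processes (top’s -b batch flag just makes the output pipe-friendly).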
-Rob

spongebob replied 2 weeks ago

cedar1.cedar.computecanada.ca
there are about 4 jobs with 80-90% CPU each

spongebob replied 2 weeks ago

I got a bit further along this time:

[rsimiste@cedar1 ~]$ singularity pull singlem.img docker://wwood/singlem:v0.13.0
INFO: Starting build...
Getting image source signatures
Skipping fetch of repeat blob sha256:5bcdc10480c772636ba756abfe1c1c27350946a71ed1f2454e44cdcf9e1e935f
Copying config sha256:e6da18da680b639ff1583eb51941d2bb0a1df069718f8b09df4e75d4d8f60df7
613 B / 613 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
INFO: Creating SIF file...
FATAL: Unable to pull docker://wwood/singlem:v0.13.0: While running mksquashfs: exit status 1: FATAL ERROR:Failed to create thread
[rsimiste@cedar1 ~]$

Is there a better way to pull it down?

Rob Syme Staff answered 2 weeks ago



The issue is that building singularity containers requires more resources than are available on the login nodes.

To build the image, try running the singularity pull command inside an interactive session:

$ salloc --ntasks=1 --mem-per-cpu=4G --cpus-per-task=2
$ singularity pull singlem.v0.13.0.img docker://wwood/singlem:v0.13.0
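
If home quota is tight, it can also help to point singularity’s layer cache and temporary build space at scratch before pulling. SINGULARITY_CACHEDIR and SINGULARITY_TMPDIR are standard singularity 3.x environment variables; the paths below are only examples:

$ mkdir -p /scratch/$USER/singularity/{cache,tmp}
$ export SINGULARITY_CACHEDIR=/scratch/$USER/singularity/cache
$ export SINGULARITY_TMPDIR=/scratch/$USER/singularity/tmp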

Let me know if you run into any trouble.

spongebob replied 2 weeks ago

Hey sorry – this has turned into something more complex than I imagined!

I don’t have permission in /home/username, and in /project I get the following:

$ salloc --ntasks=1 --mem-per-cpu=4G --cpus-per-task=2
salloc: Pending job allocation 31857903
salloc: job 31857903 queued and waiting for resources
salloc: job 31857903 has been allocated resources
salloc: Granted job allocation 31857903
salloc: Waiting for resource configuration
salloc: Nodes cdr768 are ready for job
$ singularity pull singlem.v0.13.0.img docker://wwood/singlem:v0.13.0
ERROR: pull is only supported for shub URIs
$ singularity pull singlem.img docker://wwood/singlem:v0.13.0
ERROR: pull is only supported for shub URIs
$ exit
exit
srun: error: cdr768: task 0: Exited with exit code 255
srun: Terminating job step 31857903.0
salloc: Relinquishing job allocation 31857903

Thanks again for your assistance

Rob Syme Staff replied 2 weeks ago

Try running with the latest version of singularity:

$ salloc --ntasks=1 --mem-per-cpu=4G --cpus-per-task=2
$ module load singularity/3.4
$ singularity pull singlem.img docker://wwood/singlem:v0.13.0
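
If that exact version isn’t available, you can list the versions installed on the cluster with:

$ module spider singularity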

spongebob answered 2 weeks ago



It starts to work and then times out:
$ module load singularity/3.4
The following have been reloaded with a version change:
1) singularity/3.2 => singularity/3.4
$ singularity pull singlem.img docker://wwood/singlem:v0.13.0
WARN[0000] "/run/user/3067299" directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: stat /run/user/3067299: no such file or directory: Trying to pull image in the event that it is a public image.
INFO: Converting OCI blobs to SIF format
INFO: Starting build...
WARN[0001] "/run/user/3067299" directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: stat /run/user/3067299: no such file or directory: Trying to pull image in the event that it is a public image.
WARN[0002] "/run/user/3067299" directory set by $XDG_RUNTIME_DIR does not exist. Either create the directory or unset $XDG_RUNTIME_DIR.: stat /run/user/3067299: no such file or directory: Trying to pull image in the event that it is a public image.
Getting image source signatures
Skipping fetch of repeat blob sha256:5bcdc10480c772636ba756abfe1c1c27350946a71ed1f2454e44cdcf9e1e935f
Copying config sha256:e6da18da680b639ff1583eb51941d2bb0a1df069718f8b09df4e75d4d8f60df7
613 B / 613 B [============================================================] 0s
Writing manifest to image destination
Storing signatures
2019/11/27 14:56:44 info unpack layer: sha256:5bcdc10480c772636ba756abfe1c1c27350946a71ed1f2454e44cdcf9e1e935f
INFO: Creating SIF file...
salloc: Job 31857831 has exceeded its time limit and its allocation has been revoked.
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
srun: error: cdr768: task 0: Killed
srun: Terminating job step 31857831.0

Rob Syme Staff replied 2 weeks ago

I’ve got an image that I built on beluga. I’ve opened up the read permissions, so you should be able to see it on Cedar at /home/robsyme/singlem.v0.13.0.img
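
To use it, bind the filesystems as before and point singularity exec at that path; singlem -h here is just a smoke test:

$ singularity exec -B /home -B /project -B /scratch \
    /home/robsyme/singlem.v0.13.0.img singlem -h

If you’d still rather build your own copy, requesting a longer interactive allocation (for example, adding --time=3:00:00 to the salloc line) should avoid the timeout.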