FORUM: Problems running Trinity
ariute asked 1 year ago



Hello everyone, I’ve been trying to run Trinity on Compute Canada to assemble my transcriptome, but the following error keeps coming back:

Error:

cmd: /cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/Compiler/gcc5.4/trinity/2.5.0/trinityrnaseq-Trinity-v2.5.0/util/..//trinity-plugins/jellyfish/bin/jellyfish count -t 2 -m 25 -s 6200785198 both.fa died with ret 9 at
/cvmfs/soft.computecanada.ca/easybuild/software/2017/avx2/Compiler/gcc5.4/trinity/2.5.0/trinityrnaseq-Trinity-v2.5.0/util/insilico_read_normalization.pl line 758.

I’ve done some googling and some people said the problem is with memory/disk space. I changed it (lower and higher) but didn’t have any success. Has anybody ever had the same issue? Thank you all in advance.

[edited by Eloi for clarity]

Replies
jrosner (staff) replied 1 year ago



Hi ariute,

I did some digging around as well and it does appear this is a memory issue.
The authors recommend using ~1GB per ~2M PE reads.
In addition to this, they recommend allocating slightly more memory to the sbatch request than to the Trinity command.

e.g.
in sbatch:
#SBATCH --mem=120G

in Trinity cmd:
Trinity … --max_memory 100G
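
If you’re not sure what number to use, a rough estimate from the read count might help (a sketch; assumes standard 4-line FASTQ records and a file named R1.fastq):

# count read pairs in R1.fastq (4 lines per record in standard FASTQ)
pairs=$(( $(wc -l < R1.fastq) / 4 ))
# rule of thumb above: ~1GB per ~2M PE reads
echo "${pairs} pairs -> roughly $(( pairs / 2000000 + 1 ))G for --max_memory"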

perhaps you can give this a try and see if it works?

ariute replied 1 year ago

Thank you for your answer.

I have already tried this as well, with no success. Is there any way to "organize" or better allocate the memory available at Compute Canada?
I’ve been having oom-kill events that are also related to problems with memory.

Thanks,

jrosner (staff) replied 1 year ago

hmmm, an oom-kill event means the kernel is running out of memory.
so, there are in fact different node types/sizes on Cedar, ranging from 128G to 3T of memory.
https://docs.computecanada.ca/wiki/Cedar#Node_types_and_characteristics

how much memory are you requesting? Perhaps you can post your sbatch script?
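
Also, it can help to check how much memory the failed job actually used. Something like this (a sketch; 12345678 is a placeholder job ID):

seff 12345678
sacct -j 12345678 --format=JobID,State,ReqMem,MaxRSS

Comparing MaxRSS against ReqMem should show whether you’re hitting the ceiling.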
thanks

ariute replied 1 year ago

#!/bin/bash
#SBATCH --account=user
#SBATCH --reboot n
#SBATCH --mem=100M
#SBATCH --time=24:00:00
#SBATCH --job-name=test
#SBATCH --output=trinity.out
module load rsem/1.3.0
module load samtools
module load bowtie2/2.3.4.1
module load nixpkgs/16.09 gcc/5.4.0 openmpi/2.1.1
module load salmon/0.9.1
module load transdecoder/3.0.1
module load jellyfish/2.2.6
module load trinity/2.6.5
cd $dir
Trinity --seqType fq \
--left R1.fastq \
--right R2.fastq \
--SS_lib_type RF \
--no_normalize_reads \
--max_memory 128G \
--CPU 2

jrosner (staff) replied 1 year ago

I see on line 4 you have
#SBATCH --mem=100M

Considering you are asking Trinity for 128G, you should change this to something like
#SBATCH --mem=156G

If this is how you were launching the job, it would certainly explain why it was getting killed.
Also, if you’re requesting 156G, this will limit you to the larger nodes on Cedar, which is fine but it might just take longer to get through the queue. If you think Trinity will use less, e.g. 110G, then you could use that number for Trinity and use 128G for the SBATCH statement. This should put you in the main pool for available nodes.
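
In other words, something like this pairing (a sketch, keeping the rest of your script unchanged):

#SBATCH --mem=128G              # what slurm reserves; keeps you in the main node pool

Trinity … --max_memory 110G     # leave some headroom below the sbatch request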

jrosner (staff) replied 1 year ago

Just following up… is it working now?

ariute replied 1 year ago

Hello, thank you for your kind advice.
I’m trying to work on it now and will share the results as soon as possible.

ariute replied 1 year ago

So now I’m getting past the Jellyfish phase, but my job is being cancelled (still getting the oom-kill event error) at the Inchworm phase.
That’s the only thing going wrong now.
Considering I couldn’t get through Jellyfish before, it’s progress!

jrosner (staff) replied 1 year ago

Progress!!!
So the fact that we were able to advance past the jellyfish stage by increasing the sbatch memory request makes me think we’re encountering the same problem on the inchworm stage.
So you can try increasing the sbatch memory some more, and it’s probably worth assessing beforehand how much memory it needs, via the web or the literature.
I assume the script you used was the same as above with the exception of the increased sbatch, but let me know if there is anything different.
Also, if it fails again, let me know any error messages you get back.
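One more thing: if you want to see how much memory the step is using while the job is still running, sstat can do that (a sketch; the job ID is a placeholder):

sstat -j 12345678.batch --format=JobID,MaxRSS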
good luck!

ariute replied 1 year ago

Hello!
If you’re able to help me again: now the error shows that Trinity is not recognizing the numpy module (even though at the beginning of the job, slurm says it was loaded without errors).
Would you (or someone else) have any advice?
Thank you in advance

jrosner (staff) replied 1 year ago

sure thing… can you point me to the location of the log file on Cedar?
Then I can get a better idea of what’s going on.
thx

ariute replied 1 year ago

Hey, thank you.
It’s slurm-10390408.out (juanariu/scratch/fastq_files)

jrosner (staff) replied 1 year ago

ok, looks like you need to load the following in your sbatch before launching trinity:

module load python35-scipy-stack/2017a
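
To confirm numpy is actually importable before Trinity starts, you could add a quick check right after the module loads (a sketch):

python -c "import numpy; print(numpy.__version__)" || echo "numpy still not found"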

give it a try and let me know if that works.

ariute replied 1 year ago



Hey there
So, now the error about python is gone (!), but I’m getting the following:
RuntimeError: dictionary changed size during iteration
Trinity run failed. Must investigate error above.
I did some googling and it seems this occurs with certain Python versions when running Trinity. Would you mind helping me again, please?
Thanks a lot.

jrosner (staff) replied 1 year ago



Indeed, this looks like a compatibility issue between Python 2 and Python 3.
Try using a python2 version…

module load python/2.7.14

this should fix the problem
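
For reference, the module section of your script would then look something like this (a sketch; I’m assuming the python/2.7.14 load replaces the earlier python35-scipy-stack line):

module load nixpkgs/16.09 gcc/5.4.0 openmpi/2.1.1
module load python/2.7.14    # python2 avoids the py3 dict-iteration error in Trinity's scripts
module load jellyfish/2.2.6
module load trinity/2.6.5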