Hello everyone! I'm trying to use the FASTX-Toolkit collapser on my data. The script runs (almost) perfectly, except that the output files are all empty. I have already checked the input files (they are not empty) and tried different formats for the output file, with no success. Has anybody ever had the same problem? I'm open to suggestions. Thank you.
Hi, I just ran a test which worked fine. In order for us to help you further, could you provide a small reproducible example:
1- A description of the working environment (cluster system, PC with Windows, Linux, macOS, etc.)
2- the version of fastx
3- A small input file on which the problem occurs (as a path on a Compute Canada system, or otherwise as a Dropbox/Google Drive link, for instance)
4- the exact command used
Hi, thank you for your answer!
So, I have been running the script on modax/compute.ca, so it's a Linux environment.
Here's the script:
#PBS -A def-[username]
#PBS -l walltime=24:00:00
#PBS -l nodes=2:ppn=8
#PBS -r n
module load nixpkgs gcc
module load fastx-toolkit/0.0.14
fastx_collapser -v -i [infile.fastq] > [outputfile.fa]
I run it using the sbatch command.
Hi, unfortunately I cannot figure out what "modax/compute.ca" is, and an example input file would have helped to fully reproduce the issue.
At first sight, here are some observations:
1) Slurm (sbatch) will likely understand PBS files, but you should probably convert the file to one with proper Slurm directives (#SBATCH).
2) Requesting nodes=2 and ppn=8 seems a bit of overkill.
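For point 1, a minimal Slurm version of your script might look like the sketch below. The account and walltime are copied from your PBS directives, and I have reduced the resource request to one CPU on one node (adjust as needed; [username], [infile.fastq] and [outputfile.fa] are still placeholders):

```shell
#!/bin/bash
#SBATCH --account=def-[username]
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1

module load nixpkgs gcc
module load fastx-toolkit/0.0.14

fastx_collapser -v -i [infile.fastq] > [outputfile.fa]
```

You would still submit it with sbatch, but now the scheduler directives are native Slurm rather than translated PBS.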
I think you should try to reproduce the issue on a smaller fastq (say, with head -n 400) on the login node, i.e. without submitting a job. If this test succeeds, it will point to something wrong with your job submission rather than with the FASTX-Toolkit itself.
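To illustrate the subsampling step, here is a small sketch. The file names demo.fastq and small.fastq are made up for the example; with your real data you would just run head -n 400 on your input fastq:

```shell
# Build a tiny two-read FASTQ purely for demonstration (hypothetical data)
printf '@r1\nACGT\n+\nIIII\n@r2\nTTTT\n+\nIIII\n' > demo.fastq

# Keep the first complete record (4 lines); on real data, head -n 400 keeps 100 reads
head -n 4 demo.fastq > small.fastq

# Sanity check: a valid FASTQ has a line count divisible by 4
wc -l < small.fastq
```

You could then run `fastx_collapser -v -i small.fastq > small.fa` directly on the login node and check whether small.fa is still empty.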