Using SLURM to pass stdin variables
mttcrown asked 9 months ago



Hello, I am trying to migrate my bioinformatics pipeline from my lab's server to Cedar. I normally use Python's argparse module in my own scripts; is there an equivalent for SLURM? I tried using the --export option but had no luck. For example:

sbatch YourFavouriteScript.sh variable1 
#!/bin/bash
#SBATCH --time=02:00:00
#SBATCH --mem-per-cpu=2G
#SBATCH --cpus-per-task=2
#SBATCH --account=test
#SBATCH --output=/out/%A-%a.out 
# %A = array job ID, %a = array task index
#SBATCH --error=/out/%A-%a.err
#SBATCH --array=1-8 

#######################
# Defining variables
MyFavouriteVariable=$1
echo "...${MyFavouriteVariable}..."   # should say ...variable1...

Thank you!
2 Answers
Best Answer
ghoad (staff) replied 8 months ago



Hi mttcrown,
Could you try the following:

#!/bin/bash
#SBATCH --time=00:01:00
#SBATCH --cpus-per-task=1
#SBATCH --account=def-mttcrown
#######################
echo "...${MyFavouriteVariable}... should say ...variable1... Thank you!"

…and on the command line:
export MyFavouriteVariable=variable1
sbatch --export=MyFavouriteVariable slurm-command-line-variable.sh

Let me know how it goes.

mttcrown replied 8 months ago

Woot! Amazing, it fixed it.

May I ask why? Is it because the environment needs to be propagated into the SLURM script, and because of where the variable is set?

ghoad (staff) replied 8 months ago

Apologies for the delay in replying. The "--export" option passes the "MyFavouriteVariable" variable into the environment of the script that is run by Slurm.
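Note that you can also set the value directly on the sbatch command line instead of exporting it in your shell first; a rough sketch (using the same script name as above):

sbatch --export=MyFavouriteVariable=variable1 slurm-command-line-variable.sh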

jrosner (staff) replied 9 months ago



Hi mttcrown,
your sbatch command should look something like this for passing in variables

sbatch --export=A=5,b='test' jobscript.sbatch
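Inside the job script those then show up as ordinary environment variables; as a rough sketch (the directives here are just illustrative), the script side might look like:

#!/bin/bash
#SBATCH --time=00:01:00
# A and b are set by the --export option on the sbatch command line
echo "A is ${A} and b is ${b}"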

have a look here for a full description
https://help.rc.ufl.edu/doc/Using_Variables_in_SLURM_Jobs

give that a try and let me know if it works for you,
Cheers

jrosner (staff) replied 9 months ago

found this too..
https://vsoch.github.io/lessons/sherlock-jobs/

more detailed, gives a step by step

mttcrown replied 9 months ago

Hi Jamie,

I tried using the --export option again. No luck. Perhaps I'm doing it wrong.

Here is what I tried and what I got in return

Attempt 1:
#!/bin/bash
#SBATCH --time=00:01:00
#SBATCH --job-name=TEST
#SBATCH --mem-per-cpu=1M
#SBATCH --cpus-per-task=1
#SBATCH --output=./testvariables.out
#SBATCH --error=./testvariables.err

#Define Variables
echo 'Passing variables A is ${A} and b is ${b}'
sleep 30
_____________________________________
sbatch --export=A=5,b='test' simple_job.sh
_____________________________________
cat testvariables.out
Passing variables A is ${A} and b is ${b}
______
______
Attempt 2:
#!/bin/bash
#SBATCH --time=00:01:00
#SBATCH --job-name=TEST
#SBATCH --mem-per-cpu=1M
#SBATCH --cpus-per-task=1
#SBATCH --output=./testvariables.out
#SBATCH --error=./testvariables.err

#Define Variables

echo 'Passing variables A is $A and b is $b'
sleep 30

_______________
sbatch --export=A=5,b='test' simple_job.sh
_______________

cat testvariables.out
Passing variables A is $A and b is $b

** I removed the account info for posting purposes.

I like the 2nd page, thank you!
Where should I execute the wrapper bash script? Login node or interactive mode?
As a "wrapper" slurm script? I do worry that if I use the following example (copied and pasted from the website below), my overall job submission priority will drop because it would take a long time to complete the whole for-loop.

#!/bin/bash

# We assume running this from the script directory
job_directory=$PWD/.job
data_dir="${SCRATCH}/project/LizardLips"

lizards=("LizardA" "LizardB")

for lizard in "${lizards[@]}"; do

job_file="${job_directory}/${lizard}.job"

echo "#!/bin/bash
#SBATCH --job-name=${lizard}.job
#SBATCH --output=.out/${lizard}.out
#SBATCH --error=.out/${lizard}.err
#SBATCH --time=2-00:00
#SBATCH --mem=12000
#SBATCH --qos=normal
#SBATCH --mail-type=ALL
#SBATCH --mail-user=$USER@stanford.edu
Rscript $HOME/project/LizardLips/run.R tomato potato shiabato" > $job_file
sbatch $job_file

done

Thank you for your prompt reply! 🙂
-mttcrown

jrosner (staff) replied 9 months ago

I tried running your script on both Cedar and Graham and for both I got the expected output
i.e.
[jrosner@gra-login1 ~]$ cat testvariables.out
'Passing variables A is 5 and b is test'

So I'm not sure why it's not working on your end. Perhaps you could try removing the #SBATCH lines from the script and adding them to the command line instead.
e.g. sbatch --export=blah --time=blah2 ... simple_job.sh

If this works, then you can add the #SBATCH lines back into your script one by one to see if and when it stops working again.
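One other thing worth double checking, just in case it's the culprit: the echo line in your script is wrapped in single quotes, and bash does not expand variables inside single quotes, so the literal ${A} text would be printed even when --export is working. A quick sketch of the difference:

A=5
echo 'A is ${A}'   # single quotes: prints the literal text  A is ${A}
echo "A is ${A}"   # double quotes: expands the variable and prints  A is 5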

You could also try logging into Graham and testing it there as well, just to make sure it's not an issue with your environment.

As for the wrapper script, you would run it on the head node, since each 'sbatch' command within the script simply submits a job to the scheduler. Also, the loop executes quickly: it doesn't wait for each job to complete; all the jobs are submitted immediately and the script exits. You then wait for your submitted jobs to complete on the cluster. Make sense?
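For example, right after the wrapper script exits you can confirm that the jobs were all submitted and are queued or running with:

squeue -u $USER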

Getting an interactive node is good for development/testing, but in your case, you’re testing sbatch, so an interactive session doesn’t really help.

mttcrown replied 9 months ago

Hi Jamie,

That is really helpful. I will try that and see if I can figure out why it doesn’t work.

-mttcrown

mttcrown replied 8 months ago

Hello!

I was able to get that to work! But whenever I use the "--export=" option, the "module load" command fails.
I get this error:
/var/spool/slurmd/job18374540/slurm_script: line 14: module: command not found

Module load works just fine if I don't use the --export option. How do I get both --export and module load to work? I found a bug report discussing this:
https://bugs.schedmd.com/show_bug.cgi?id=1124
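From skimming that report, it sounds like one possible workaround is to include ALL in the export list so the rest of the submission environment (which provides the module function) is still propagated, e.g. something like:

sbatch --export=ALL,A=5,b='test' simple_job.sh

...but I haven't confirmed that this works here.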

Thank you in advance!