HTCondor works differently from many other batch systems, so it is advisable to have a look at the User Manual. We currently only support the "Vanilla" universe.
Submitting a Job
To submit a job to the Condor batch system you first need to write a "submit description file" describing the job to the system. A very simple file would look like this:
####################
#
# Example 1
# Simple HTCondor submit description file
#
####################
Executable = myexe
Log = myexe.log
Input = inputfile
Output = outputfile
Queue
That runs myexe on the batch machine (after copying it and inputfile to a temporary directory on the machine) and copies the standard output of the job back to a file called outputfile.
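Save the description to a file and pass it to condor_submit to submit the job (the file name myjob.sub here is just an example):
condor_submit myjob.sub
condor_submit prints the cluster ID assigned to the job, which you can use with the monitoring commands described below.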
A more complex submit description would look like this:
####################
#
# Example 2
# More Complex HTCondor submit description file
#
####################
Universe = vanilla
Executable = my_analysis.sh
Arguments = input-$(Process).txt result/output-$(Process).txt
Log = log/my_analysis-$(Process).log
Input = input/input-$(Process).txt
Output = output/my_analysis-$(Process).out
Error = output/my_analysis-$(Process).err
Request_memory = 2 GB
Transfer_output_files = result/output-$(Process).txt
Transfer_output_remaps = "output-$(Process).txt = results/output-$(Process).txt"
Notification = complete
Notify_user = your.name@stfc.ac.uk
Getenv = True
Queue 20
This submit description runs 20 copies (Queue 20) of my_analysis.sh input-$(Process).txt result/output-$(Process).txt, where $(Process) is replaced by a number from 0 to 19. It copies my_analysis.sh and input-$(Process).txt to each of the worker nodes (taking the input file from the local input directory). The standard output and error from the job are copied back to the local output directory at the end of the job, and the file result/output-$(Process).txt is copied back to the local results directory. It copies the local environment over to the worker node (Getenv = True) and requests 2 GB of memory on the worker node. Finally, it e-mails the user when each job completes.
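Note that HTCondor does not create the local directories named in the description, so they need to exist (with the input-*.txt files already in the input directory) before you submit; the job script is also expected to create its result directory in the job's scratch area on the worker node. A minimal setup, assuming the description above has been saved as my_analysis.sub (an example name), might be:
mkdir -p log input output results
condor_submit my_analysis.sub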
Monitoring Your Jobs
The basic command for monitoring jobs is condor_q. By default this only shows jobs submitted to the "schedd" (essentially the submit node) you are using; to see all the jobs in the system, run condor_q -global.
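For example (1234.0 below is a placeholder <cluster>.<process> job ID):
condor_q              # your jobs on this schedd
condor_q -global      # all jobs in the pool
condor_q 1234.0       # a single job, selected by its ID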
If jobs have been idle for a while, you can use condor_q -analyze <job_id> to look at the resources requested by the job and how they match the available resources on the cluster.
Failed jobs often go into a "Held" state rather than disappearing; condor_q -held <job_id> will often give some information on why the job failed.
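For example (again with a placeholder job ID; condor_release is the standard command for releasing a held job once the underlying problem has been fixed):
condor_q -analyze 1234.0
condor_q -held 1234.0
condor_release 1234.0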
condor_userprio will give you an idea of the current usage and fair shares on the cluster.
-- ChrisBrew - 2014-03-26