Frequently Asked Questions

For Users

  1. Why is my job/node in COMPLETING state?
  2. Why are my resource limits not propagated?
  3. Why is my job not running?
  4. Why does the srun --overcommit option not permit multiple jobs to run on nodes?
  5. Why is my job killed prematurely?
  6. Why are my srun options ignored?
  7. Why is the SLURM backfill scheduler not starting my job?
  8. How can I run multiple jobs from within a single script?
  9. Why do I have job steps when my job has already COMPLETED?
  10. How can I run a job within an existing job allocation?
  11. How does SLURM establish the environment for my job?
  12. How can I get shell prompts in interactive mode?
  13. How can I get the task ID in the output or error file name for a batch job?
  14. Can the make command utilize the resources allocated to a SLURM job?
  15. Can tasks be launched with a remote terminal?
  16. What does "srun: Force Terminated job" indicate?
  17. What does this mean: "srun: First task exited 30s ago" followed by "srun Job Failed"?
  18. Why is my MPI job failing due to the locked memory (memlock) limit being too low?
  19. Why is my batch job that launches no job steps being killed?
  20. How do I run specific tasks on certain nodes in my allocation?
  21. How can I temporarily prevent a job from running (e.g. place it into a hold state)?
  22. Why are jobs not getting the appropriate memory limit?
  23. Is an archive available of messages posted to the slurm-dev mailing list?
  24. Can I change my job's size after it has started running?
  25. Why is my MPICH2 or MVAPICH2 job not running with SLURM? Why does the DAKOTA program not run with SLURM?

For Administrators

  1. How is job suspend/resume useful?
  2. How can I configure SLURM to use the resources actually found on a node rather than what is defined in slurm.conf?
  3. Why is a node shown in state DOWN when the node has registered for service?
  4. What happens when a node crashes?
  5. How can I control the execution of multiple jobs per node?
  6. When the SLURM daemon starts, it prints "cannot resolve X plugin operations" and exits. What does this mean?
  7. Why are user tasks intermittently dying at launch with SIGPIPE error messages?
  8. How can I dry up the workload for a maintenance period?
  9. How can PAM be used to control a user's limits on or access to compute nodes?
  10. Why are jobs allocated nodes and then unable to initiate programs on some nodes?
  11. Why does slurmctld log that some nodes are not responding even if they are not in any partition?
  12. How should I relocate the primary or backup controller?
  13. Can multiple SLURM systems be run in parallel for testing purposes?
  14. Can slurm emulate a larger cluster?
  15. Can SLURM emulate nodes with more resources than physically exist on the node?
  16. What does a "credential replayed" error in the SlurmdLogFile indicate?
  17. What does "Warning: Note very large processing time" in the SlurmctldLogFile indicate?
  18. How can I add support for lightweight core files?
  19. Is resource limit propagation useful on a homogeneous cluster?
  20. Do I need to maintain synchronized clocks on the cluster?
  21. Why are "Invalid job credential" errors generated?
  22. Why are "Task launch failed on node ... Job credential replayed" errors generated?
  23. Can SLURM be used with Globus?
  24. Can SLURM time output format include the year?
  25. What causes the error "Unable to accept new connection: Too many open files"?
  26. Why does the setting of SlurmdDebug fail to log job step information at the appropriate level?
  27. Why isn't the auth_none.so (or other file) in a SLURM RPM?
  28. Why should I use the slurmdbd instead of the regular database plugins?
  29. How can I build SLURM with debugging symbols?
  30. How can I easily preserve drained node information between major SLURM updates?
  31. Why doesn't the HealthCheckProgram execute on DOWN nodes?
  32. What is the meaning of the error "Batch JobId=# missing from master node, killing it"?
  33. What does the message "srun: error: Unable to accept connection: Resources temporarily unavailable" indicate?
  34. How could I automatically print a job's SLURM job ID to its standard output?
  35. I run SLURM with the Moab or Maui scheduler. How can I start a job under SLURM without the scheduler?
  36. Why are user processes and srun running even though the job is supposed to be completed?
  37. How can I prevent the slurmd and slurmstepd daemons from being killed when a node's memory is exhausted?
  38. I see the host of my calling node as 127.0.1.1 instead of the correct IP address. Why is that?
  39. How can I stop SLURM from scheduling jobs?
  40. Can I update multiple jobs with a single scontrol command?
  41. Can SLURM be used to run jobs on Amazon's EC2?
  42. If a SLURM daemon core dumps, where can I find the core file?
  43. How can TotalView be configured to operate with SLURM?

For Users

1. Why is my job/node in COMPLETING state?
When a job is terminating, both the job and its nodes enter the COMPLETING state. As the SLURM daemon on each node determines that all processes associated with the job have terminated, that node changes state to IDLE or some other appropriate state for use by other jobs. When every node allocated to a job has determined that all processes associated with it have terminated, the job changes state to COMPLETED or some other appropriate state (e.g. FAILED). Normally, this happens within a second. However, if the job has processes that cannot be terminated with a SIGKILL signal, the job and one or more nodes can remain in the COMPLETING state for an extended period of time. This may be indicative of processes hung waiting for a core file to complete I/O or operating system failure. If this state persists, the system administrator should check for processes associated with the job that cannot be terminated then use the scontrol command to change the node's state to DOWN (e.g. "scontrol update NodeName=name State=DOWN Reason=hung_completing"), reboot the node, then reset the node's state to IDLE (e.g. "scontrol update NodeName=name State=RESUME"). Note that setting the node DOWN will terminate all running or suspended jobs associated with that node. An alternative is to set the node's state to DRAIN until all jobs associated with it terminate before setting it DOWN and re-booting.

Note that SLURM has two configuration parameters that may be used to automate some of this process. UnkillableStepProgram specifies a program to execute when non-killable processes are identified. UnkillableStepTimeout specifies how long to wait for processes to terminate. See the "man slurm.conf" for more information about these parameters.
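
For example, a minimal slurm.conf sketch (the program path and timeout value are placeholders, not recommendations):

# Run a site-specific script when non-killable processes are detected
UnkillableStepProgram=/usr/local/sbin/notify_unkillable
# Wait this many seconds for processes to terminate before running it
UnkillableStepTimeout=120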

2. Why are my resource limits not propagated?
When the srun command executes, it captures the resource limits in effect at submit time. These limits are propagated to the allocated nodes before initiating the user's job. The SLURM daemon running on that node then tries to establish identical resource limits for the job being initiated. There are several possible reasons for not being able to establish those resource limits.

NOTE: This may produce the error message "Can't propagate RLIMIT_...". The error message is printed only if the user explicitly specifies that the resource limit should be propagated or the srun command is running with verbose logging of actions from the slurmd daemon (e.g. "srun -d6 ...").

3. Why is my job not running?
The answer to this question depends upon the scheduler used by SLURM. Executing the command

scontrol show config | grep SchedulerType

will supply this information. If the scheduler type is builtin, then jobs will be executed in the order of submission for a given partition. Even if resources are available to initiate your job immediately, it will be deferred until no previously submitted job is pending. If the scheduler type is backfill, then jobs will generally be executed in the order of submission for a given partition with one exception: later submitted jobs will be initiated early if doing so does not delay the expected execution time of an earlier submitted job. In order for backfill scheduling to be effective, users' jobs should specify reasonable time limits. If jobs do not specify time limits, then all jobs will receive the same time limit (that associated with the partition), and the ability to backfill schedule jobs will be limited. The backfill scheduler does not alter job specifications of required or excluded nodes, so jobs which specify nodes will substantially reduce the effectiveness of backfill scheduling. See the backfill section for more details. If the scheduler type is wiki, this represents The Maui Scheduler or Moab Cluster Suite. Please refer to its documentation for help. For any scheduler, you can check priorities of jobs using the command scontrol show job.

4. Why does the srun --overcommit option not permit multiple jobs to run on nodes?
The --overcommit option is a means of indicating that a job or job step is willing to execute more than one task per processor in the job's allocation. For example, consider a cluster of two processor nodes. The srun execute line may be something of this sort

srun --ntasks=4 --nodes=1 a.out

This will result in not one, but two nodes being allocated so that each of the four tasks is given its own processor. Note that the srun --nodes option specifies a minimum node count and optionally a maximum node count. A command line of

srun --ntasks=4 --nodes=1-1 a.out

would result in the request being rejected. If the --overcommit option is added to either command line, then only one node will be allocated for all four tasks to use.

More than one job can execute simultaneously on the same nodes through the use of srun's --shared option in conjunction with the Shared parameter in SLURM's partition configuration. See the man pages for srun and slurm.conf for more information.
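
For example, a partition definition along these lines (the partition and node names are placeholders) permits node sharing:

# slurm.conf: let more than one job run per node in this partition
PartitionName=debug Nodes=tux[0-15] Shared=YES State=UP Default=YES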

5. Why is my job killed prematurely?
SLURM has a job purging mechanism to remove inactive jobs (resource allocations) before they reach their time limit, which could be infinite. This inactivity time limit is configurable by the system administrator. You can check its value with the command

scontrol show config | grep InactiveLimit

The value of InactiveLimit is in seconds. A zero value indicates that job purging is disabled. A job is considered inactive if it has no active job steps or if the srun command creating the job is not responding. In the case of a batch job, the srun command terminates after the job script is submitted. Therefore batch job pre- and post-processing is limited to the InactiveLimit. Contact your system administrator if you believe the InactiveLimit value should be changed.

6. Why are my srun options ignored?
Everything after the command srun is examined to determine if it is a valid option for srun. The first token that is not a valid option for srun is considered the command to execute and everything after that is treated as an option to the command. For example:

srun -N2 hostname -pdebug

srun processes "-N2" as an option to itself. "hostname" is the command to execute and "-pdebug" is treated as an option to the hostname command. This will change the name of the computer on which SLURM executes the command - Very bad, Don't run this command as user root!

7. Why is the SLURM backfill scheduler not starting my job?
The most common problem is failing to set job time limits. If all jobs have the same time limit (for example the partition's time limit), then backfill will not be effective. Note that partitions can have both default and maximum time limits, which can be helpful in configuring a system for effective backfill scheduling.
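
For example, a partition might be configured with both a default and a maximum time limit (the values below are only illustrative):

# slurm.conf: 30 minute default, 12 hour maximum
PartitionName=batch Nodes=tux[0-127] DefaultTime=30 MaxTime=12:00:00 State=UP

Users can then request shorter limits (e.g. "sbatch --time=10 ...") to improve their chances of being backfilled.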

In addition, there are significant limitations in the current backfill scheduler plugin. It was designed to perform backfill node scheduling for a homogeneous cluster. It does not manage scheduling on individual processors (or other consumable resources). It does not update the required or excluded node list of individual jobs. It does not support jobs with constraints/features unless the exclusive OR operator is used in the constraint expression. You can use the scontrol show command to check if these conditions apply.

If a partition's specifications differ from those listed above, no jobs in that partition will be scheduled by the backfill scheduler; they will only be scheduled on a First-In-First-Out (FIFO) basis.

Jobs failing to satisfy the requirements above (i.e. with specific node requirements) will not be considered candidates for backfill scheduling and other jobs may be scheduled ahead of these jobs. These jobs are subject to starvation, but will not block other jobs from running when sufficient resources are available for them.

8. How can I run multiple jobs from within a single script?
A SLURM job is just a resource allocation. You can execute many job steps within that allocation, either in parallel or sequentially. Some jobs actually launch thousands of job steps this way. The job steps will be allocated nodes that are not already allocated to other job steps. This essentially provides a second level of resource management within the job for the job steps.
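
For example, a batch script along these lines (the program names are placeholders) runs two job steps in parallel, each on its own node of a two-node allocation:

#!/bin/bash
# Submit with something like: sbatch -N2 this_script
srun -N1 -n1 ./app_one &
srun -N1 -n1 ./app_two &
# Wait for both job steps to complete before the job exits
wait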

9. Why do I have job steps when my job has already COMPLETED?
NOTE: This only applies to systems configured with SwitchType=switch/elan or SwitchType=switch/federation. All other systems will purge all job steps on job completion.

SLURM maintains switch (network interconnect) information within the job step for Quadrics Elan and IBM Federation switches. This information must be maintained until we are absolutely certain that the processes associated with the switch have been terminated to avoid the possibility of re-using switch resources for other jobs (even on different nodes). SLURM considers jobs COMPLETED when all nodes allocated to the job are either DOWN or confirm termination of all its processes. This enables SLURM to purge job information in a timely fashion even when there are many failing nodes. Unfortunately the job step information may persist longer.

10. How can I run a job within an existing job allocation?
There is a srun option --jobid that can be used to specify a job's ID. For a batch job or within an existing resource allocation, the environment variable SLURM_JOB_ID has already been defined, so all job steps will run within that job allocation unless otherwise specified. The one exception to this is when submitting batch jobs. When a batch job is submitted from within an existing batch job, it is treated as a new job allocation request and will get a new job ID unless explicitly set with the --jobid option. If you specify that a batch job should use an existing allocation, that job allocation will be released upon the termination of that batch job.
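
For example, assuming an existing allocation whose job ID is known, a job step can be launched into it from another shell:

$ srun --jobid=<jobid> -N1 hostname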

11. How does SLURM establish the environment for my job?
SLURM processes are not run under a shell, but directly exec'ed by the slurmd daemon (assuming srun is used to launch the processes). The environment variables in effect at the time the srun command is executed are propagated to the spawned processes. The ~/.profile and ~/.bashrc scripts are not executed as part of the process launch.

12. How can I get shell prompts in interactive mode?
srun -u bash -i
Srun's -u option turns off buffering of stdout. Bash's -i option tells it to run in interactive mode (with prompts).

13. How can I get the task ID in the output or error file name for a batch job?

If you want separate output by task, you will need to build a script containing this specification. For example:

$ cat test
#!/bin/sh
echo begin_test
srun -o out_%j_%t hostname

$ sbatch -n7 -o out_%j test
sbatch: Submitted batch job 65541

$ ls -l out*
-rw-rw-r--  1 jette jette 11 Jun 15 09:15 out_65541
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_0
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_1
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_2
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_3
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_4
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_5
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_6

$ cat out_65541
begin_test

$ cat out_65541_2
tdev2

14. Can the make command utilize the resources allocated to a SLURM job?
Yes. There is a patch for GNU make version 3.81 available as part of the SLURM distribution in the file contribs/make.slurm.patch. This patch will use SLURM to launch tasks across a job's current resource allocation. Depending upon the size of modules to be compiled, this may or may not improve performance. If most modules are thousands of lines long, the use of additional resources should more than compensate for the overhead of SLURM's task launch. Use with make's -j option within an existing SLURM allocation. Outside of a SLURM allocation, make's behavior will be unchanged.
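
For example, with the patched make installed, something like the following should work (the allocation size and -j value are arbitrary):

$ salloc -N4 bash
$ make -j 16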

15. Can tasks be launched with a remote terminal?
In SLURM version 1.3 or higher, use srun's --pty option. Until then, you can accomplish this by starting an appropriate program or script. In the simplest case (X11 over TCP with the DISPLAY environment variable already set), executing srun xterm may suffice. In the more general case, the following scripts should work. NOTE: The pathnames of the additional scripts are set in the variables BS and IS of the first script; you must change them there to match your installation. Execute the script with the sbatch options desired. For example, interactive -N2 -pdebug.

#!/bin/bash
# -*- coding: utf-8 -*-
# Author: Pär Andersson (National Supercomputer Centre, Sweden)
# Version: 0.3 2007-07-30
#
# This will submit a batch script that starts screen on a node.
# Then ssh is used to connect to the node and attach the screen.
# The result is very similar to an interactive shell in PBS
# (qsub -I)

# Batch Script that starts SCREEN
BS=/INSTALL_DIRECTORY/_interactive
# Interactive screen script
IS=/INSTALL_DIRECTORY/_interactive_screen

# Submit the job and get the job id
JOB=`sbatch --output=/dev/null --error=/dev/null $@ $BS 2>&1 \
    | egrep -o -e "\b[0-9]+$"`

# Make sure the job is always canceled
trap "{ /usr/bin/scancel -q $JOB; exit; }" SIGINT SIGTERM EXIT

echo "Waiting for JOBID $JOB to start"
while true;do
    sleep 5s

    # Check job status
    STATUS=`squeue -j $JOB -t PD,R -h -o %t`

    if [ "$STATUS" = "R" ];then
	# Job is running, break the while loop
	break
    elif [ "$STATUS" != "PD" ];then
	echo "Job is not Running or Pending. Aborting"
	scancel $JOB
	exit 1
    fi

    echo -n "."

done

# Determine the first node in the job:
NODE=`srun --jobid=$JOB -N1 hostname`

# SSH to the node and attach the screen
sleep 1s
ssh -X -t $NODE $IS slurm$JOB
# The trap will now cancel the job before exiting.

NOTE: The above script executes the script below, named _interactive.

#!/bin/sh
# -*- coding: utf-8 -*-
# Author: Pär Andersson  (National Supercomputer Centre, Sweden)
# Version: 0.2 2007-07-30
#
# Simple batch script that starts SCREEN.

exec screen -Dm -S slurm$SLURM_JOB_ID

The following script named _interactive_screen is also used.

#!/bin/sh
# -*- coding: utf-8 -*-
# Author: Pär Andersson  (National Supercomputer Centre, Sweden)
# Version: 0.3 2007-07-30
#

SCREENSESSION=$1

# If DISPLAY is set then set that in the screen, then create a new
# window with that environment and kill the old one.
if [ "$DISPLAY" != "" ];then
    screen -S $SCREENSESSION -X unsetenv DISPLAY
    screen -p0 -S $SCREENSESSION -X setenv DISPLAY $DISPLAY
    screen -p0 -S $SCREENSESSION -X screen
    screen -p0 -S $SCREENSESSION -X kill
fi

exec screen -S $SCREENSESSION -rd

16. What does "srun: Force Terminated job" indicate?
The srun command normally terminates when the standard output and error I/O from the spawned tasks end. This does not necessarily happen at the same time that a job step is terminated. For example, a file system problem could render a spawned task non-killable at the same time that I/O to srun is pending. Alternately a network problem could prevent the I/O from being transmitted to srun. In any event, the srun command is notified when a job step is terminated, either upon reaching its time limit or being explicitly killed. If the srun has not already terminated, the message "srun: Force Terminated job" is printed. If the job step's I/O does not terminate in a timely fashion thereafter, pending I/O is abandoned and the srun command exits.

17. What does this mean: "srun: First task exited 30s ago" followed by "srun Job Failed"?
The srun command monitors when tasks exit. By default, 30 seconds after the first task exits, the job is killed. This typically indicates some type of job failure and continuing to execute a parallel job when one of the tasks has exited is not normally productive. This behavior can be changed using srun's --wait=<time> option to either change the timeout period or disable the timeout altogether. See srun's man page for details.

18. Why is my MPI job failing due to the locked memory (memlock) limit being too low?
By default, SLURM propagates all of your resource limits at the time of job submission to the spawned tasks. This can be disabled by specifically excluding the propagation of specific limits in the slurm.conf file. For example, PropagateResourceLimitsExcept=MEMLOCK might be used to prevent the propagation of a user's locked memory limit from a login node to a dedicated node used for his parallel job. If the user's resource limit is not propagated, the limit in effect for the slurmd daemon will be used for the spawned job. A simple way to control this is to insure that user root has a sufficiently large locked memory limit on the compute nodes and that slurmd takes full advantage of that limit. For example, you can set user root's locked memory limit to unlimited on the compute nodes (see "man limits.conf") and have slurmd use it (e.g. by adding something like "ulimit -l unlimited" to the /etc/init.d/slurm script used to initiate slurmd). Related information about PAM is also available.

19. Why is my batch job that launches no job steps being killed?
SLURM has a configuration parameter InactiveLimit intended to kill jobs that do not spawn any job steps for a configurable period of time. Your system administrator may modify the InactiveLimit to satisfy your needs. Alternately, you can just spawn a job step at the beginning of your script to execute in the background. It will be purged when your script exits or your job otherwise terminates. A line of this sort near the beginning of your script should suffice:
srun -N1 -n1 sleep 999999 &

20. How do I run specific tasks on certain nodes in my allocation?
One of the distribution methods for srun, '-m or --distribution', is 'arbitrary'. This means you can tell SLURM to lay out your tasks in any fashion you want. For instance, if I had an allocation of 2 nodes and wanted to run 4 tasks on the first node and 1 task on the second, and my nodes allocated from SLURM_NODELIST were tux[0-1], my srun line would look like this:

srun -n5 -m arbitrary -w tux[0,0,0,0,1] hostname

If I wanted something similar but wanted the third task to be on tux 1 I could run this...

srun -n5 -m arbitrary -w tux[0,0,1,0,0] hostname

Here is a simple perl script named arbitrary.pl that can be run to easily lay out tasks on nodes as they appear in SLURM_NODELIST:

#!/usr/bin/perl
my @tasks = split(',', $ARGV[0]);
my @nodes = `scontrol show hostnames $ENV{SLURM_NODELIST}`;
my $node_cnt = $#nodes + 1;
my $task_cnt = $#tasks + 1;

if ($node_cnt < $task_cnt) {
	print STDERR "ERROR: You only have $node_cnt nodes, but requested layout on $task_cnt nodes.\n";
	$task_cnt = $node_cnt;
}

my $cnt = 0;
my $layout;
foreach my $task (@tasks) {
	my $node = $nodes[$cnt];
	last if !$node;
	chomp($node);
	for(my $i=0; $i < $task; $i++) {
		$layout .= "," if $layout;
		$layout .= "$node";
	}
	$cnt++;
}
print $layout;
We can now use this script in our srun line in this fashion.

srun -m arbitrary -n5 -w `arbitrary.pl 4,1` -l hostname

This will layout 4 tasks on the first node in the allocation and 1 task on the second node.

21. How can I temporarily prevent a job from running (e.g. place it into a hold state)?
The easiest way to do this is to change a job's earliest begin time (optionally set at job submit time using the --begin option). The example below places a job into a hold state (preventing its initiation for 30 days) and later permits it to start immediately.

$ scontrol update JobId=1234 StartTime=now+30days
... later ...
$ scontrol update JobId=1234 StartTime=now

22. Why are jobs not getting the appropriate memory limit?
This is probably a variation on the locked memory limit problem described above. Use the same solution for the AS (Address Space), RSS (Resident Set Size), or other limits as needed.
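
For example, the following slurm.conf line (the limit names are chosen for illustration) prevents these limits from being propagated so that the slurmd daemon's limits apply instead:

PropagateResourceLimitsExcept=MEMLOCK,AS,RSS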

23. Is an archive available of messages posted to the slurm-dev mailing list?
Yes, it is at http://groups.google.com/group/slurm-devel

24. Can I change my job's size after it has started running?
Support to decrease the size of a running job was added to SLURM version 2.2. The ability to increase the size of a running job was added to SLURM version 2.3. While the size of a pending job may be changed with few restrictions, several significant restrictions apply to changing the size of a running job as noted below:

  1. Support is not available on BlueGene or Cray systems due to limitations in the software underlying SLURM.
  2. Job(s) changing size must not be in a suspended state, including jobs suspended for gang scheduling. The jobs must be in a state of pending or running. We plan to modify the gang scheduling logic in the future to concurrently schedule a job to be used for expanding another job and the job to be expanded.

Use the scontrol command to change a job's size either by specifying a new node count (NumNodes=) for the job or by identifying the specific nodes (NodeList=) that you want the job to retain. Any job steps running on the nodes which are relinquished by the job will be killed unless initiated with the --no-kill option. After the job size is changed, some environment variables created by SLURM containing information about the job's environment will no longer be valid and should either be removed or altered (e.g. SLURM_NNODES, SLURM_NODELIST and SLURM_NPROCS). The scontrol command will generate a script that can be executed to reset local environment variables. You must retain the SLURM_JOBID environment variable in order for the srun command to gather information about the job's current state and specify the desired node and/or task count in subsequent srun invocations. A new accounting record is generated when a job is resized, showing the job to have been resubmitted and restarted at the new size. An example is shown below.

#!/bin/bash
srun my_big_job
scontrol update JobId=$SLURM_JOBID NumNodes=2
. slurm_job_${SLURM_JOBID}_resize.sh
srun -N2 my_small_job
rm slurm_job_${SLURM_JOBID}_resize.*

Increasing a job's size
Directly increasing the size of a running job would adversely affect the scheduling of pending jobs. For the sake of fairness in job scheduling, expanding a running job requires the user to submit a new job, but specify the option --dependency=expand:<jobid>. This option tells SLURM that the job, when scheduled, can be used to expand the specified jobid. Other job options would be used to identify the required resources (e.g. task count, node count, node features, etc.). This new job's time limit will be automatically set to reflect the end time of the job being expanded. This new job's generic resources specification will be automatically set equal to that of the job being merged to. This is due to the current SLURM restriction of all nodes associated with a job needing to have the same generic resource specification (i.e. a job cannot have one GPU on one node and two GPUs on another node), although this restriction may be removed in the future. This restriction can pose some problems when both jobs can be allocated resources on the same node, in which case the generic resources allocated to the new job will be released. If the jobs are allocated resources on different nodes, the generic resources associated with the resulting job allocation after the merge will be consistent as expected. Any licenses associated with the new job will be added to those available in the job being merged to. Note that partition and Quality Of Service (QOS) limits will be applied independently to the new job allocation so the expanded job may exceed size limits configured for an individual job.

After the new job is allocated resources, merge that job's allocation into that of the original job by executing:
scontrol update jobid=<jobid> NumNodes=0
The jobid above is that of the job to relinquish its resources. To provide more control over when the job expansion occurs, the resources are not merged into the original job until explicitly requested. These resources will be transferred to the original job and the scontrol command will generate a script to reset variables in the second job's environment to reflect its modified resource allocation (which would be no resources). One would normally exit this second job at this point, since it has no associated resources. In order to generate a script to modify the environment variables for the expanded job, execute:
scontrol update jobid=<jobid> NumNodes=ALL
Then execute the script generated. Note that this command does not change the original job's size, but only generates the script to change its environment variables. Until the environment variables are modified (e.g. the job's node count, CPU count, hostlist, etc.), any srun command will only consider the resources in the original resource allocation. Note that the original job may have active job steps at the time of its expansion, but they will not be affected by the change. An example of the procedure is shown below in which the original job allocation waits until the second resource allocation request can be satisfied. The job requesting additional resources could also use the sbatch command and permit the original job to continue execution at its initial size. Note that the development of additional user tools to manage SLURM resource allocations is planned in the future to make this process both simpler and more flexible.

$ salloc -N4 bash
salloc: Granted job allocation 65542
$ srun hostname
icrm1
icrm2
icrm3
icrm4

$ salloc -N4 --dependency=expand:$SLURM_JOBID bash
salloc: Granted job allocation 65543
$ scontrol update jobid=$SLURM_JOBID NumNodes=0
To reset SLURM environment variables, execute
  For bash or sh shells:  . ./slurm_job_65543_resize.sh
  For csh shells:         source ./slurm_job_65543_resize.csh
$ exit
exit
salloc: Relinquishing job allocation 65543

$ scontrol update jobid=$SLURM_JOBID NumNodes=ALL
To reset SLURM environment variables, execute
  For bash or sh shells:  . ./slurm_job_65542_resize.sh
  For csh shells:         source ./slurm_job_65542_resize.csh
$ . ./slurm_job_${SLURM_JOBID}_resize.sh

$ srun hostname
icrm1
icrm2
icrm3
icrm4
icrm5
icrm6
icrm7
icrm8
$ exit
exit
salloc: Relinquishing job allocation 65542

25. Why is my MPICH2 or MVAPICH2 job not running with SLURM? Why does the DAKOTA program not run with SLURM?
The SLURM library used to support MPICH2 or MVAPICH2 references a variety of symbols. If those symbols resolve to functions or variables in your program rather than the appropriate library, the application will fail. In the case of DAKOTA, it contains a function named regcomp, which will get used rather than the POSIX regex functions. Rename DAKOTA's function and references from regcomp to something else to make it work properly.

For Administrators

1. How is job suspend/resume useful?
Job suspend/resume is most useful to get particularly large jobs initiated in a timely fashion with minimal overhead. Say you want to get a full-system job initiated. Normally you would need to either cancel all running jobs or wait for them to terminate. Canceling jobs results in the loss of their work to that point from either their beginning or last checkpoint. Waiting for the jobs to terminate can take hours, depending upon your system configuration. A more attractive alternative is to suspend the running jobs, run the full-system job, then resume the suspended jobs. This can easily be accomplished by configuring a special queue for full-system jobs and using a script to control the process. The script would stop the other partitions, suspend running jobs in those partitions, and start the full-system partition. The process can be reversed when desired. One can effectively gang schedule (time-slice) multiple jobs using this mechanism, although the algorithms to do so can get quite complex. Suspending and resuming a job makes use of the SIGSTOP and SIGCONT signals respectively, so swap and disk space should be sufficient to accommodate all jobs allocated to a node, either running or suspended.
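
A rough sketch of such a script is shown below; the partition names are placeholders and error handling is omitted:

#!/bin/bash
# Stop new jobs from starting in the normal partition and suspend its running jobs
scontrol update PartitionName=batch State=DOWN
squeue -h -p batch -t R -o "scontrol suspend %i" | sh
# Enable the special full-system partition
scontrol update PartitionName=full State=UP
# ... submit and run the full-system job here ...
# Reverse the process when done
scontrol update PartitionName=full State=DOWN
squeue -h -p batch -t S -o "scontrol resume %i" | sh
scontrol update PartitionName=batch State=UP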

2. How can I configure SLURM to use the resources actually found on a node rather than what is defined in slurm.conf?
SLURM can either base its scheduling decisions upon the node configuration defined in slurm.conf or what each node actually returns as available resources. This is controlled using the configuration parameter FastSchedule. Set its value to zero in order to use the resources actually found on each node, but with a higher overhead for scheduling. A value of one is the default and results in the node configuration defined in slurm.conf being used. See "man slurm.conf" for more details.

3. Why is a node shown in state DOWN when the node has registered for service?
The configuration parameter ReturnToService in slurm.conf controls how DOWN nodes are handled. Set its value to one in order for DOWN nodes to automatically be returned to service once the slurmd daemon registers with a valid node configuration. A value of zero is the default and results in a node staying DOWN until an administrator explicitly returns it to service using the command "scontrol update NodeName=whatever State=RESUME". See "man slurm.conf" and "man scontrol" for more details.

4. What happens when a node crashes?
A node is set DOWN when the slurmd daemon on it stops responding for SlurmdTimeout as defined in slurm.conf. The node can also be set DOWN when certain errors occur or the node's configuration is inconsistent with that defined in slurm.conf. Any active job on that node will be killed unless it was submitted with the srun option --no-kill. Any active job step on that node will be killed. See the slurm.conf and srun man pages for more information.

5. How can I control the execution of multiple jobs per node?
There are two mechanisms to control this. If you want to allocate individual processors on a node to jobs, configure SelectType=select/cons_res. See Consumable Resources in SLURM for details about this configuration. If you want to allocate whole nodes to jobs, configure SelectType=select/linear. Each partition also has a configuration parameter Shared that enables more than one job to execute on each node. See "man slurm.conf" for more information about these configuration parameters.
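
For example, a sketch of the consumable resources configuration (CR_Core is one of several SelectTypeParameters options):

SelectType=select/cons_res
SelectTypeParameters=CR_Core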

6. When the SLURM daemon starts, it prints "cannot resolve X plugin operations" and exits. What does this mean?
This means that symbols expected in the plugin were not found by the daemon. This typically happens when the plugin was built or installed improperly or the configuration file is telling the plugin to use an old plugin (say from the previous version of SLURM). Restart the daemon in verbose mode for more information (e.g. "slurmctld -Dvvvvv").

7. Why are user tasks intermittently dying at launch with SIGPIPE error messages?
If you are using LDAP or some other remote name service for username and groups lookup, chances are that the underlying libc library functions are triggering the SIGPIPE. You can likely work around this problem by setting CacheGroups=1 in your slurm.conf file. However, be aware that you will need to run "scontrol reconfigure" any time your groups database is updated.

8. How can I dry up the workload for a maintenance period?
Create a resource reservation as described by SLURM's Resource Reservation Guide.

9. How can PAM be used to control a user's limits on or access to compute nodes?
First, enable SLURM's use of PAM by setting UsePAM=1 in slurm.conf.
Second, establish a PAM configuration file for slurm in /etc/pam.d/slurm. A basic configuration you might use is:

auth     required  pam_localuser.so
account  required  pam_unix.so
session  required  pam_limits.so

Third, set the desired limits in /etc/security/limits.conf. For example, to set the locked memory limit to unlimited for all users:

*   hard   memlock   unlimited
*   soft   memlock   unlimited

Finally, you need to disable SLURM's propagation of the limits from the session in which the srun command initiating the job was run. By default, all resource limits are propagated from that session. For example, adding the following line to slurm.conf will prevent the locked memory limit from being propagated: PropagateResourceLimitsExcept=MEMLOCK.

We also have a PAM module for SLURM that prevents users from logging into nodes that they have not been allocated (except for user root, which can always login). pam_slurm is available for download from https://sourceforge.net/projects/slurm/ or use the Debian package named libpam-slurm. The use of pam_slurm does not require UsePAM to be set; the two uses of PAM are independent.

10. Why are jobs allocated nodes and then unable to initiate programs on some nodes?
This typically indicates that the time on some nodes is not consistent with the node on which the slurmctld daemon executes. In order to initiate a job step (or batch job), the slurmctld daemon generates a credential containing a time stamp. If the slurmd daemon receives a credential containing a time stamp later than the current time or more than a few minutes in the past, it will be rejected. If you check in the SlurmdLog on the nodes of interest, you will likely see messages of this sort: "Invalid job credential from <some IP address>: Job credential expired." Make the times consistent across all of the nodes and all should be well.

11. Why does slurmctld log that some nodes are not responding even if they are not in any partition?
The slurmctld daemon periodically pings the slurmd daemon on every configured node, even if not associated with any partition. You can control the frequency of this ping with the SlurmdTimeout configuration parameter in slurm.conf.

12. How should I relocate the primary or backup controller?
If the cluster's computers used for the primary or backup controller will be out of service for an extended period of time, it may be desirable to relocate them. In order to do so, follow this procedure:

  1. Stop all SLURM daemons
  2. Modify the ControlMachine, ControlAddr, BackupController, and/or BackupAddr in the slurm.conf file
  3. Distribute the updated slurm.conf file to all nodes
  4. Restart all SLURM daemons

There should be no loss of any running or pending jobs. Insure that any nodes added to the cluster have a current slurm.conf file installed. CAUTION: If two nodes are simultaneously configured as the primary controller (two nodes on which ControlMachine specify the local host and the slurmctld daemon is executing on each), system behavior will be destructive. If a compute node has an incorrect ControlMachine or BackupController parameter, that node may be rendered unusable, but no other harm will result.

13. Can multiple SLURM systems be run in parallel for testing purposes?
Yes, this is a great way to test new versions of SLURM. Just install the test version in a different location with a different slurm.conf. The test system's slurm.conf should specify different pathnames and port numbers to avoid conflicts. The only problem is if more than one version of SLURM is configured with switch/elan or switch/federation. In that case, there can be conflicting switch window requests from the different SLURM systems. This can be avoided by configuring the test system with switch/none. MPI jobs started on an Elan or Federation switch system without the switch windows configured will not execute properly, but other jobs will run fine. Another option for testing on Elan or Federation systems is to use a different set of nodes for the different SLURM systems. That will permit both systems to allocate switch windows without conflicts.
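
For example, a test system's slurm.conf might override ports and pathnames along these lines (all values are examples only):

SlurmctldPort=7817
SlurmdPort=7818
StateSaveLocation=/var/spool/slurm-test/state
SlurmdSpoolDir=/var/spool/slurm-test/slurmd
SlurmctldPidFile=/var/run/slurmctld-test.pid
SlurmdPidFile=/var/run/slurmd-test.pid
SlurmctldLogFile=/var/log/slurm-test/slurmctld.log
SlurmdLogFile=/var/log/slurm-test/slurmd.log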

14. Can slurm emulate a larger cluster?
Yes, this can be useful for testing purposes. It has also been used to partition "fat" nodes into multiple SLURM nodes. There are two ways to do this. The best method for most conditions is to run one slurmd daemon per emulated node in the cluster as follows.

  1. When executing the configure program, use the option --enable-multiple-slurmd (or add that option to your ~/.rpmmacros file).
  2. Build and install SLURM in the usual manner.
  3. In slurm.conf define the desired node names (arbitrary names used only by SLURM) as NodeName along with the actual address of the physical node in NodeHostname. Multiple NodeName values can be mapped to a single NodeHostname. Note that each NodeName on a single physical node needs to be configured to use a different port number. You will also want to use the "%n" symbol in slurmd related path options in slurm.conf.
  4. When starting the slurmd daemon, include the NodeName of the node that it is supposed to serve on the execute line (e.g. "slurmd -N hostname").

It is strongly recommended that SLURM version 1.2 or higher be used for this due to its improved support for multiple slurmd daemons. See the Programmers Guide for more details about configuring multiple slurmd support.
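
For example, a sketch of the relevant slurm.conf entries for two emulated nodes on one physical host (the names, ports and CPU counts are placeholders):

NodeName=tux1 NodeHostname=realhost Port=17001 CPUs=4
NodeName=tux2 NodeHostname=realhost Port=17002 CPUs=4
SlurmdLogFile=/var/log/slurm/slurmd.%n.log
SlurmdSpoolDir=/var/spool/slurmd.%n
SlurmdPidFile=/var/run/slurmd.%n.pid

The daemons would then be started with "slurmd -N tux1" and "slurmd -N tux2".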

In order to emulate a really large cluster, it can be more convenient to use a single slurmd daemon. That daemon will not be able to launch many tasks, but can suffice for developing or testing scheduling software. Do not run job steps with more than a couple of tasks each or execute more than a few jobs at any given time. Doing so may result in the slurmd daemon exhausting its memory and failing. Use this method with caution.

  1. Execute the configure program with your normal options plus --enable-front-end (this will define HAVE_FRONT_END in the resulting config.h file).
  2. Build and install SLURM in the usual manner.
  3. In slurm.conf define the desired node names (arbitrary names used only by SLURM) as NodeName along with the actual name and address of the one physical node in NodeHostName and NodeAddr. Up to 64k nodes can be configured in this virtual cluster.
  4. Start your slurmctld and one slurmd daemon. It is advisable to use the "-c" option to start the daemons without trying to preserve any state files from previous executions. Be sure to use the "-c" option when switching from this mode too.
  5. Create job allocations as desired, but do not run job steps with more than a couple of tasks.
$ ./configure --enable-debug --enable-front-end --prefix=... --sysconfdir=...
$ make install
$ grep NodeHostName slurm.conf
NodeName=dummy[1-1200] NodeHostName=localhost NodeAddr=127.0.0.1
$ slurmctld -c
$ slurmd -c
$ sinfo
PARTITION AVAIL  TIMELIMIT NODES  STATE NODELIST
pdebug*      up      30:00  1200   idle dummy[1-1200]
$ cat tmp
#!/bin/bash
sleep 30
$ srun -N200 -b tmp
srun: jobid 65537 submitted
$ srun -N200 -b tmp
srun: jobid 65538 submitted
$ srun -N800 -b tmp
srun: jobid 65539 submitted
$ squeue
JOBID PARTITION  NAME   USER  ST  TIME  NODES NODELIST(REASON)
65537    pdebug   tmp  jette   R  0:03    200 dummy[1-200]
65538    pdebug   tmp  jette   R  0:03    200 dummy[201-400]
65539    pdebug   tmp  jette   R  0:02    800 dummy[401-1200]

15. Can SLURM emulate nodes with more resources than physically exist on the node?
Yes in SLURM version 1.2 or higher. In the slurm.conf file, set FastSchedule=2 and specify any desired node resource specifications (CPUs, Sockets, CoresPerSocket, ThreadsPerCore, and/or TmpDisk). SLURM will use the resource specification for each node that is given in slurm.conf and will not check these specifications against those actually found on the node.
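
For example (the resource values are arbitrary and need not match the hardware):

FastSchedule=2
NodeName=tux[0-31] CPUs=64 RealMemory=65536 TmpDisk=102400 State=UNKNOWN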

16. What does a "credential replayed" error in the SlurmdLogFile indicate?
This error is indicative of the slurmd daemon not being able to respond to job initiation requests from the srun command in a timely fashion (a few seconds). Srun responds by resending the job initiation request. When the slurmd daemon finally starts to respond, it processes both requests. The second request is rejected and the event is logged with the "credential replayed" error. If you check the SlurmdLogFile and SlurmctldLogFile, you should see signs of the slurmd daemon's non-responsiveness. A variety of factors can be responsible for this problem, including a heavily loaded node or network problems that delay the slurmd daemon's response.

In SLURM version 1.2, this can be addressed with the MessageTimeout configuration parameter by setting a value higher than the default 5 seconds. In earlier versions of SLURM, the --msg-timeout option of srun serves a similar purpose.

17. What does "Warning: Note very large processing time" in the SlurmctldLogFile indicate?
This error is indicative of some operation taking an unexpectedly long time to complete, over one second to be specific. Setting the SlurmctldDebug configuration parameter to a value of six or higher should identify which operation(s) are experiencing long delays. This message typically indicates long delays in file system access (writing state information or getting user information). Another possibility is that the node on which the slurmctld daemon executes has exhausted memory and is paging. Try running the program top to check for this possibility.

18. How can I add support for lightweight core files?
SLURM supports lightweight core files by setting environment variables based upon the srun --core option. Of particular note, it sets the LD_PRELOAD environment variable to load new functions used to process a core dump. First you will need to acquire and install a shared object library with the appropriate functions. Then edit the SLURM code in src/srun/core-format.c to specify a name for the core file type, add a test for the existence of the library, and set environment variables appropriately when it is used.

19. Is resource limit propagation useful on a homogeneous cluster?
Resource limit propagation permits a user to modify resource limits and submit a job with those limits. By default, SLURM automatically propagates all resource limits in effect at the time of job submission to the tasks spawned as part of that job. System administrators can utilize the PropagateResourceLimits and PropagateResourceLimitsExcept configuration parameters to change this behavior. Users can override defaults using the srun --propagate option. See "man slurm.conf" and "man srun" for more information about these options.
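
For example, a user could raise a limit in the shell and explicitly propagate only that limit to the job's tasks (a.out is a placeholder for the user's program):

$ ulimit -c unlimited
$ srun --propagate=CORE a.out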

20. Do I need to maintain synchronized clocks on the cluster?
In general, yes. Having inconsistent clocks may cause nodes to be unusable. SLURM log files should contain references to expired credentials. For example:

error: Munge decode failed: Expired credential
ENCODED: Wed May 12 12:34:56 2008
DECODED: Wed May 12 12:01:12 2008

21. Why are "Invalid job credential" errors generated?
This error is indicative of SLURM's job credential files being inconsistent across the cluster. All nodes in the cluster must have the matching public and private keys as defined by JobCredPrivateKey and JobCredPublicKey in the slurm configuration file slurm.conf.

22. Why are "Task launch failed on node ... Job credential replayed" errors generated?
This error indicates that a job credential generated by the slurmctld daemon corresponds to a job that the slurmd daemon has already revoked. The slurmctld daemon selects job ID values based upon the configured value of FirstJobId (the default value is 1) and each job gets a value one larger than the previous job. On job termination, the slurmctld daemon notifies the slurmd on each allocated node that all processes associated with that job should be terminated. The slurmd daemon maintains a list of the jobs which have already been terminated to avoid replay of task launch requests. If the slurmctld daemon is cold-started (with the "-c" option or "/etc/init.d/slurm startclean"), it starts job ID values over based upon FirstJobId. If the slurmd is not also cold-started, it will reject job launch requests for jobs that it considers terminated. The solution to this problem is to cold-start all slurmd daemons whenever the slurmctld daemon is cold-started.

23. Can SLURM be used with Globus?
Yes. Build and install SLURM's Torque/PBS command wrappers along with the Perl APIs from SLURM's contribs directory and configure Globus to use those PBS commands. Note there are RPMs available for both of these packages, named torque and perlapi respectively.

24. Can SLURM time output format include the year?
The default SLURM time format output is MM/DD-HH:MM:SS. Define "ISO8601" at SLURM build time to get the time format YYYY-MM-DDTHH:MM:SS. Note that this change in format will break anything that parses SLURM output expecting the old format (e.g. LSF, Maui or Moab).

25. What causes the error "Unable to accept new connection: Too many open files"?
The srun command automatically increases its open file limit to the hard limit in order to process all of the standard input and output connections to the launched tasks. It is recommended that you set the open file hard limit to 8192 across the cluster.
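
For example, on systems using /etc/security/limits.conf, entries of this form would do it:

*   hard   nofile   8192
*   soft   nofile   8192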

26. Why does the setting of SlurmdDebug fail to log job step information at the appropriate level?
There are two programs involved here. One is slurmd, which is a persistent daemon running at the desired debug level. The second program is slurmstepd, which executes the user's job; its debug level is controlled by the user. Submitting the job with an option of --debug=# will result in the desired level of detail being logged in the SlurmdLogFile plus the output of the program.

27. Why isn't the auth_none.so (or other file) in a SLURM RPM?
The auth_none plugin is in a separate RPM and not built by default. Using the auth_none plugin means that SLURM communications are not authenticated, so you probably do not want to run in this mode of operation except for testing purposes. If you want to build the auth_none RPM then add --with auth_none on the rpmbuild command line or add %_with_auth_none to your ~/.rpmmacros file. See the file slurm.spec in the SLURM distribution for a list of other options.

28. Why should I use the slurmdbd instead of the regular database plugins?
While the normal storage plugins will work fine without the added layer of the slurmdbd, there are some great benefits to using the slurmdbd.

  1. Added security. Using the slurmdbd you can have an authenticated connection to the database.
  2. Off loading processing from the controller. With the slurmdbd there is no slow down to the controller due to a slow or overloaded database.
  3. Keeping enterprise wide accounting from all slurm clusters in one database. The slurmdbd is multi-threaded and designed to handle all the accounting for the entire enterprise.
  4. With the new database plugins (version 1.3 and higher) you can use sacct to query accounting statistics from any node on which SLURM is installed. With the slurmdbd you can also query any cluster using the slurmdbd from any other cluster's nodes.

29. How can I build SLURM with debugging symbols?
Set your CFLAGS environment variable before building. You want the "-g" option to produce debugging information and "-O0" to set the optimization level to zero (off). For example:
CFLAGS="-g -O0" ./configure ...

30. How can I easily preserve drained node information between major SLURM updates?
Major SLURM updates generally have changes in the state save files and communication protocols, so a cold-start (without state) is generally required. If you have nodes in a DRAIN state and want to preserve that information, you can easily build a script to preserve that information using the sinfo command. The following command line will report the Reason field for every node in a DRAIN state and write the output in a form that can be executed later to restore state.

sinfo -t drain -h -o "scontrol update nodename='%N' state=drain reason='%E'"

31. Why doesn't the HealthCheckProgram execute on DOWN nodes?
Hierarchical communications are used for sending this message. If there are DOWN nodes in the communications hierarchy, messages will need to be re-routed. This limits SLURM's ability to tightly synchronize the execution of the HealthCheckProgram across the cluster, which could adversely impact performance of parallel applications. The use of CRON or node startup scripts may be better suited to insure that HealthCheckProgram gets executed on nodes that are DOWN in SLURM. If you still want to have SLURM try to execute HealthCheckProgram on DOWN nodes, apply the following patch:

Index: src/slurmctld/ping_nodes.c
===================================================================
--- src/slurmctld/ping_nodes.c  (revision 15166)
+++ src/slurmctld/ping_nodes.c  (working copy)
@@ -283,9 +283,6 @@
		node_ptr   = &node_record_table_ptr[i];
		base_state = node_ptr->node_state & NODE_STATE_BASE;

-               if (base_state == NODE_STATE_DOWN)
-                       continue;
-
 #ifdef HAVE_FRONT_END          /* Operate only on front-end */
		if (i > 0)
			continue;

32. What is the meaning of the error "Batch JobId=# missing from master node, killing it"?
A shell is launched on node zero of a job's allocation to execute the submitted program. The slurmd daemon executing on each compute node will periodically report to the slurmctld what programs it is executing. If a batch program is expected to be running on some node (i.e. node zero of the job's allocation) and is not found, the message above will be logged and the job cancelled. This typically is associated with exhausting memory on the node or some other critical failure that cannot be recovered from. The equivalent message in earlier releases of slurm is "Master node lost JobId=#, killing it".

33. What does the messsage "srun: error: Unable to accept connection: Resources temporarily unavailable" indicate?
This has been reported on some larger clusters running SUSE Linux when a user's resource limits are reached. You may need to increase limits for locked memory and stack size to resolve this problem.

34. How could I automatically print a job's SLURM job ID to its standard output?
The configured TaskProlog is the only thing that can write to the job's standard output or set extra environment variables for a job or job step. To write to the job's standard output, precede the message with "print ". To export environment variables, output a line of this form "export name=value". The example below will print a job's SLURM job ID and allocated hosts for a batch job only.

#!/bin/sh
#
# Sample TaskProlog script that will print a batch job's
# job ID and node list to the job's stdout
#

if [ X"$SLURM_STEP_ID" = "X" -a X"$SLURM_PROCID" = "X"0 ]
then
  echo "print =========================================="
  echo "print SLURM_JOB_ID = $SLURM_JOB_ID"
  echo "print SLURM_NODELIST = $SLURM_NODELIST"
  echo "print =========================================="
fi

35. I run SLURM with the Moab or Maui scheduler. How can I start a job under SLURM without the scheduler?
When SLURM is configured to use the Moab or Maui scheduler, all submitted jobs have their priority initialized to zero, which SLURM treats as a held job. The job only begins when Moab or Maui decide where and when to start the job, setting the required node list and setting the job priority to a non-zero value. To circumvent this, submit your job using a SLURM or Moab command then manually set its priority to a non-zero value (must be done by user root). For example:

$ scontrol update jobid=1234 priority=1000000

Note that changes in the configured value of SchedulerType only take effect when the slurmctld daemon is restarted (reconfiguring SLURM will not change this parameter). You will also need to manually modify the priority of every pending job. When changing to Moab or Maui scheduling, set every job priority to zero. When changing from Moab or Maui scheduling, set every job priority to a non-zero value (preferably fairly large, say 1000000).

36. Why are user processes and srun running even though the job is supposed to be completed?
SLURM relies upon a configurable process tracking plugin to determine when all of the processes associated with a job or job step have completed. Those plugins relying upon a kernel patch can reliably identify every process. Those plugins dependent upon process group IDs or parent process IDs are not reliable. See the ProctrackType description in the slurm.conf man page for details. We rely upon the sgi_job plugin for most systems.

37. How can I prevent the slurmd and slurmstepd daemons from being killed when a node's memory is exhausted?
You can set the value used in /proc/self/oom_adj for slurmd and slurmstepd by initiating the slurmd daemon with the SLURMD_OOM_ADJ and/or SLURMSTEPD_OOM_ADJ environment variables set to the desired values. A value of -17 typically will disable killing.
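
For example, a sketch of how the environment might be set in the script that starts slurmd (the init script path is a placeholder for your own startup mechanism):

export SLURMD_OOM_ADJ=-17
export SLURMSTEPD_OOM_ADJ=-17
/etc/init.d/slurm start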

38. I see the host of my calling node as 127.0.1.1 instead of the correct IP address. Why is that?
Some systems by default will put your host in the /etc/hosts file as something like

127.0.1.1	snowflake.llnl.gov	snowflake
This will cause srun and other commands to use 127.0.1.1 as the node's address instead of the correct address, so communication will not work. The solution is to either remove this line or set a different NodeAddr that is known by your other nodes.

39. How can I stop SLURM from scheduling jobs?
You can stop SLURM from scheduling jobs on a per partition basis by setting that partition's state to DOWN. Set its state UP to resume scheduling. For example:

$ scontrol update PartitionName=foo State=DOWN
$ scontrol update PartitionName=bar State=UP

40. Can I update multiple jobs with a single scontrol command?
No, but you can probably use squeue to build the script taking advantage of its filtering and formatting options. For example:

$ squeue -tpd -h -o "scontrol update jobid=%i priority=1000" >my.script

41. Can SLURM be used to run jobs on Amazon's EC2?

Yes, here is a description of SLURM use with Amazon's EC2, courtesy of Ashley Pittman:

I do this regularly and have no problem with it. The approach I take is to start as many instances as I want and have a wrapper around ec2-describe-instances that builds a /etc/hosts file with fixed hostnames and the actual IP addresses that have been allocated. The only other step then is to generate a slurm.conf based on how many nodes you've chosen to boot that day. I run this wrapper script on my laptop and it generates the files and then rsyncs them to all the instances automatically.

One thing I found is that SLURM refuses to start if any nodes specified in the slurm.conf file aren't resolvable. I initially tried to specify cloud[0-15] in slurm.conf, but then if I configure fewer than 16 nodes in /etc/hosts this doesn't work, so I dynamically generate the slurm.conf as well as the hosts file.

As a comment about EC2: I just run generic AMIs and have a persistent EBS storage device which I attach to the first instance when I start up. This contains a /usr/local which has my software like SLURM, pdsh and MPI installed, which I then copy over the /usr/local on the first instance and NFS export to all other instances. This way I have persistent home directories and a very simple first-login script that configures the virtual cluster for me.

42. If a SLURM daemon core dumps, where can I find the core file?

For slurmctld, the core file will be in the same directory as its log file (SlurmctldLogFile) if configured using a fully qualified pathname (starting with "/"). Otherwise it will be found in the directory used for saving state (StateSaveLocation).

For slurmd, the core file will be in the same directory as its log file (SlurmdLogFile) if configured using a fully qualified pathname (starting with "/"). Otherwise it will be found in the directory used for saving state (SlurmdSpoolDir).

For slurmstepd, the location of the core file depends upon when the failure occurs. It will either be in the spawned job's working directory or in the same location as that described above for the slurmd daemon.

43. How can TotalView be configured to operate with SLURM?

The following lines should also be added to the global .tvdrc file for TotalView to operate with SLURM:

dset TV::parallel_configs {
	name: SLURM;
	description: SLURM;
	starter: srun %s %p %a;
	style: manager_process;
	tasks_option: -n;
	nodes_option: -N;
	env: ;
	force_env: false;
}

Last modified 4 September 2011