Resource Limits
SLURM scheduling policy support was significantly changed in version 2.0 in order to take advantage of the database integration used for storing accounting information. This document describes the capabilities available in SLURM version 2.0. New features are under active development. Familiarity with SLURM's Accounting web page is strongly recommended before using this document.
Note for users of Maui or Moab schedulers:
Maui and Moab are not integrated with SLURM's resource limits,
but should use their own resource limits mechanisms.
Configuration
Scheduling policy information must be stored in a database as specified by the AccountingStorageType configuration parameter in the slurm.conf configuration file. Information can be recorded in either MySQL or PostgreSQL. For security and performance reasons, the use of SlurmDBD (SLURM Database Daemon) as a front-end to the database is strongly recommended. SlurmDBD uses a SLURM authentication plugin (e.g. MUNGE). SlurmDBD also uses an existing SLURM accounting storage plugin to maximize code reuse. SlurmDBD uses data caching and prioritization of pending requests in order to optimize performance. While SlurmDBD relies upon existing SLURM plugins for authentication and database use, the other SLURM commands and daemons are not required on the host where SlurmDBD is installed. Only the slurmdbd and slurm-plugins RPMs are required for SlurmDBD execution.
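For example, a slurm.conf fragment that routes accounting storage through SlurmDBD might look like the following sketch (the host name and port shown are illustrative values, not requirements):

    # Store accounting and scheduling policy information via SlurmDBD
    AccountingStorageType=accounting_storage/slurmdbd
    # Host running the slurmdbd daemon (example name) and the default SlurmDBD port
    AccountingStorageHost=dbd.example.com
    AccountingStoragePort=6819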
Both accounting and scheduling policy are configured based upon an association. An association is a 4-tuple consisting of the cluster name, bank account, user and (optionally) the SLURM partition. In order to enforce scheduling policy, set the value of AccountingStorageEnforce. This option contains a comma-separated list of options you may want to enforce. The valid options are:
- associations - This will prevent users from running jobs if their association is not in the database. This option will prevent users from accessing invalid accounts.
- limits - This will enforce limits set to associations. By setting this option, the 'associations' option is also set.
- qos - This will require all jobs to specify (either overtly or by default) a valid qos (Quality of Service). QOS values are defined for each association in the database. By setting this option, the 'associations' option is also set.
- wckeys - This will prevent users from running jobs under a wckey that they don't have access to. By using this option, the 'associations' option is also set. The 'TrackWCKey' option is also set to true.
(NOTE: The association is a combination of cluster, account,
user names and optional partition name.)
Without AccountingStorageEnforce being set (the default behavior)
jobs will be executed based upon policies configured in SLURM on each
cluster.
It is advisable to run without the 'limits' option set when running a scheduler
on top of SLURM, such as Moab, that does not update its limits per association
in real time.
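As a sketch, a cluster that wants jobs from unknown associations rejected and the configured limits and QOS values enforced could add the following line to slurm.conf (drop 'limits' if, as noted above, an external scheduler such as Moab manages the limits):

    AccountingStorageEnforce=associations,limits,qos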
Tools
The tool used to manage accounting policy is sacctmgr. It can be used to create and delete cluster, user, bank account, and partition records plus their combined association record. See man sacctmgr for details on this tool and examples of its use.
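For example, a minimal sequence that creates a cluster, a bank account, and a user association might look like the following sketch (the names 'snowflake', 'physics', and 'adam' are hypothetical):

    sacctmgr add cluster snowflake
    sacctmgr add account physics cluster=snowflake description="physics" organization="science"
    sacctmgr add user adam defaultaccount=physics
    sacctmgr list associations cluster=snowflake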
A web interface with graphical output is currently under development.
Changes made to the scheduling policy are uploaded to the SLURM control daemons on the various clusters and take effect immediately. When an association is deleted, all running or pending jobs which belong to that association are immediately canceled. When limits are lowered, running jobs will not be canceled to satisfy the new limits, but the new lower limits will be enforced.
Policies supported
A limited subset of scheduling policy options are currently supported. The available options are expected to increase as development continues. Most of these scheduling policy options are available not only for a user association, but also for each cluster and account. If a new association is created for some user and a scheduling policy option is not specified, the default will be the value set for the cluster/account pair; if that is not set, then the value set for the cluster; and if that is also not set, then no limit will apply.
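As a sketch of that precedence (the account, cluster, and user names are hypothetical), a value set on an account is what a user association under it falls back to when no user-level value is given:

    # Serves as the default MaxJobs for user associations under the 'physics' account
    sacctmgr modify account where name=physics cluster=snowflake set MaxJobs=20
    # An explicit user-level value takes precedence for this user
    sacctmgr modify user where name=adam cluster=snowflake set MaxJobs=5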
Currently available scheduling policy options:
- Fairshare= Integer value used for determining priority. Essentially this is the amount of claim this association and its children have to the above system. It can also be the string "parent", in which case the parent association is used for fairshare.
- GrpCPUMins= A hard limit of CPU minutes to be used by jobs running from this association and its children. If this limit is reached all jobs running in this group will be killed, and no new jobs will be allowed to run.
- GrpCPUs= The total count of CPUs able to be used at any given time from jobs running from this association and its children. If this limit is reached new jobs will be queued but only allowed to run after resources have been relinquished from this group.
- GrpJobs= The total number of jobs able to run at any given time from this association and its children. If this limit is reached new jobs will be queued but only allowed to run after previous jobs complete from this group.
- GrpNodes= The total count of nodes able to be used at any given time from jobs running from this association and its children. If this limit is reached new jobs will be queued but only allowed to run after resources have been relinquished from this group.
- GrpSubmitJobs= The total number of jobs able to be submitted to the system at any given time from this association and its children. If this limit is reached new submission requests will be denied until previous jobs complete from this group.
- GrpWall= The maximum wall clock time any job submitted to this group can run for. If this limit is reached submission requests will be denied.
- MaxCPUsPerJob= The maximum size in CPUs any given job can have from this association. If this limit is reached the job will be denied at submission.
- MaxJobs= The total number of jobs able to run at any given time from this association. If this limit is reached new jobs will be queued but only allowed to run after previous jobs complete from this association.
- MaxNodesPerJob= The maximum size in nodes any given job can have from this association. If this limit is reached the job will be denied at submission.
- MaxSubmitJobs= The maximum number of jobs able to be submitted to the system at any given time from this association. If this limit is reached new submission requests will be denied until previous jobs complete from this association.
- MaxWallDurationPerJob= The maximum wall clock time any job submitted to this association can run for. If this limit is reached the job will be denied at submission.
- QOS= Comma-separated list of QOSes (Quality of Service values) this association is able to run under.
The MaxNodes and MaxWall options already exist in SLURM's configuration on a per-partition basis, but the above options provide the ability to impose limits on a per-user basis. The MaxJobs option provides an entirely new mechanism for SLURM to control the workload any individual may place on a cluster in order to achieve some balance between users.
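For instance, a per-user limit such as MaxJobs or MaxNodesPerJob can be attached to an individual user's association with sacctmgr (the user name, cluster name, and values shown are only illustrative):

    sacctmgr modify user where name=adam cluster=snowflake set MaxJobs=10 MaxNodesPerJob=4 MaxWallDurationPerJob=01:00:00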
Fair-share scheduling is based upon the hierarchical bank account data maintained in the SLURM database. More information can be found in the priority/multifactor plugin description.
Last modified 10 June 2011