Quality of Service (QOS)

One can specify a Quality of Service (QOS) for each job submitted to Slurm. The quality of service associated with a job will affect the job in three ways:

  • Job Scheduling Priority
  • Job Preemption
  • Job Limits

The QOS's are defined in the Slurm database using the sacctmgr utility.

Jobs request a QOS using the "--qos=" option to the sbatch, salloc, and srun commands.
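
For example, assuming a QOS named "zebra" has been defined (as in the examples below) and the user's association permits it, a job can request that QOS at submission time ("my_job.sh" is just a placeholder batch script):

$ sbatch --qos=zebra my_job.sh
$ salloc --qos=zebra -N1
$ srun --qos=zebra -N1 hostname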

Job Scheduling Priority

Job scheduling priority is made up of a number of factors as described in the priority/multifactor plugin. One of the factors is the QOS priority. Each QOS is defined in the Slurm database and includes an associated priority. Jobs that request and are permitted a QOS will incorporate the priority associated with that QOS in the job's multi-factor priority calculation.

To enable the QOS priority component of the multi-factor priority calculation, the "PriorityWeightQOS" configuration parameter must be defined in the slurm.conf file and assigned an integer value greater than zero.

A job's QOS only affects its scheduling priority when the multi-factor plugin is loaded.
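
For example, a minimal slurm.conf sketch that enables the QOS priority factor might look like the following; the weight shown is an arbitrary example and should be tuned relative to the other PriorityWeight* parameters:

PriorityType=priority/multifactor
PriorityWeightQOS=1000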

Job Preemption

Slurm offers two ways for a queued job to preempt a running job, freeing up the running job's resources and allocating them to the queued job. See the Preemption description for details.

The preemption method is determined by the "PreemptType" configuration parameter defined in slurm.conf. When the "PreemptType" is set to "preempt/qos", a queued job's QOS will be used to determine whether it can preempt a running job.

The QOS can be assigned (using sacctmgr) a list of other QOS's that it can preempt. When there is a queued job with a QOS that is allowed to preempt a running job of another QOS, the Slurm scheduler will preempt the running job.
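
For example, assuming hypothetical QOS's named "high" and "standby", a sketch of QOS-based preemption would set the following in slurm.conf (REQUEUE is only one of the available preemption modes):

PreemptType=preempt/qos
PreemptMode=REQUEUE

and then grant the "high" QOS the right to preempt "standby" jobs:

$ sacctmgr modify qos high set preempt=standby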

Job Limits

Each QOS is assigned a set of limits which will be applied to the job. The limits mirror the limits imposed by the user/account/cluster/partition association defined in the Slurm database and described in the Resource Limits section. When limits for a QOS have been defined, they will take precedence over the association's limits.
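
For example, the following sketch (with arbitrary example values) sets a wall-clock limit and a per-user job count limit on the "zebra" QOS used in the examples below; jobs running with this QOS would then be bound by these values rather than the corresponding association limits:

$ sacctmgr modify qos zebra set MaxWall=24:00:00 MaxJobsPerUser=8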

Partition QOS

Beginning in version 15.08, a QOS can be attached to a partition, which gives the partition all the same limits a QOS supports. This also makes a true 'floating' partition possible: if you assign all the nodes to a partition and then limit GrpCPUs or GrpNodes in the partition's QOS, the partition will have access to all the nodes but will only be able to run on the limited number of resources.

The Partition QOS will override the job's QOS. If the opposite is desired, the job's QOS must have the 'OverPartQOS' flag, which reverses the order of precedence.
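
A minimal sketch of a floating partition, assuming a hypothetical QOS named "part_float" and a partition named "floater" (the names and the GrpCPUs value are arbitrary illustrations):

$ sacctmgr add qos part_float
$ sacctmgr modify qos part_float set GrpCPUs=128

In slurm.conf, attach the QOS to the partition:

PartitionName=floater Nodes=ALL QOS=part_float

If a job's own QOS limits should take precedence instead, set the OverPartQOS flag on the job's QOS:

$ sacctmgr modify qos zebra set flags=OverPartQOS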

Other QOS Options

  • Flags Used by the slurmctld to override or enforce certain characteristics (a usage sketch follows this list). Valid options are:
    • DenyOnLimit If set, jobs using this QOS will be rejected at submission time if they do not conform to the QOS 'Max' limits. Group limits will also be treated as 'Max' limits and will cause the same denial if exceeded. By default, jobs that exceed these limits will pend until they conform. This currently only applies to QOS and Association limits.
    • EnforceUsageThreshold If set, and the QOS also has a UsageThreshold, any jobs submitted with this QOS that fall below the UsageThreshold will be held until their Fairshare Usage goes above the Threshold.
    • NoReserve If this flag is set and backfill scheduling is used, jobs using this QOS will not reserve resources in the backfill schedule's map of resources allocated through time. This flag is intended for use with a QOS that may be preempted by jobs associated with all other QOS (e.g. use with a "standby" QOS). If this flag is used with a QOS which cannot be preempted by all other QOS, it could result in starvation of larger jobs.
    • PartitionMaxNodes If set, jobs using this QOS will be able to override the requested partition's MaxNodes limit.
    • PartitionMinNodes If set, jobs using this QOS will be able to override the requested partition's MinNodes limit.
    • OverPartQOS If set, jobs using this QOS will be able to override any limits imposed by the requested partition's QOS.
    • PartitionTimeLimit If set, jobs using this QOS will be able to override the requested partition's TimeLimit.
    • RequiresReservation If set, jobs using this QOS must designate a reservation when submitting a job. This option can be useful in restricting usage of a QOS that may have greater preemptive capability or additional resources to be allowed only within a reservation.
  • GraceTime Preemption grace time to be extended to a job which has been selected for preemption.
  • UsageFactor Usage factor when running with this QOS (e.g., 0.5 would make the job use only half the normal time in accounting, and 2 would make it use twice as much). This only applies to the usage used in determining priority and doesn't modify the real usage in the database.
  • UsageThreshold A float representing the lowest fairshare of an association allowable to run a job. If an association falls below this threshold and has pending jobs or submits new jobs those jobs will be held until the usage goes back above the threshold. Use sshare to see current shares on the system.
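
A sketch of setting some of these options on the "zebra" QOS used in the examples below (the specific values are arbitrary illustrations):

$ sacctmgr modify qos zebra set flags=DenyOnLimit,NoReserve
$ sacctmgr modify qos zebra set UsageFactor=0.5 UsageThreshold=0.1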

Configuration

To summarize the above, the QOS's and their associated limits are defined in the Slurm database using the sacctmgr utility. The QOS will only influence job scheduling priority when the multi-factor priority plugin is loaded and a non-zero "PriorityWeightQOS" has been defined in the slurm.conf file. The QOS will only determine job preemption when the "PreemptType" is defined as "preempt/qos" in the slurm.conf file. Limits defined for a QOS (and described above) will override the limits of the user/account/cluster/partition association.

QOS examples

QOS manipulation examples. All QOS operations are done using the sacctmgr command. The default output of 'sacctmgr show qos' is very long given the large number of limits and options available, so it is best to use the format option to select which fields are displayed.

By default when a cluster is added to the database a default qos named normal is created.

$ sacctmgr show qos format=name,priority
      Name   Priority
---------- ----------
    normal          0

Add a new qos

$ sacctmgr add qos zebra
 Adding QOS(s)
  zebra
 Settings
  Description    = QOS Name

$ sacctmgr show qos format=name,priority
      Name   Priority
---------- ----------
    normal          0
     zebra          0

Set QOS priority

$ sacctmgr modify qos zebra set priority=10
 Modified qos...
  zebra

$ sacctmgr show qos format=name,priority
      Name   Priority
---------- ----------
    normal          0
     zebra         10

Set some other limits:

$ sacctmgr modify qos zebra set GrpCPUs=24
 Modified qos...
  zebra

$ sacctmgr show qos format=name,priority,GrpCPUs
      Name   Priority  GrpCPUs
---------- ---------- --------
    normal          0
     zebra         10       24

Add a qos to a user account

$ sacctmgr modify user crock set qos=zebra

$ sacctmgr show assoc format=cluster,user,qos
   Cluster       User                  QOS
---------- ---------- --------------------
canis_major                          normal
canis_major      root                normal
canis_major                          normal
canis_major     crock                zebra

Users can belong to multiple qos

$ sacctmgr modify user crock set qos+=alligator
$ sacctmgr show assoc format=cluster,user,qos
   Cluster       User                  QOS
---------- ---------- --------------------
canis_major                          normal
canis_major      root                normal
canis_major                          normal
canis_major     crock       alligator,zebra

Finally delete a qos

$ sacctmgr delete qos alligator
 Deleting QOS(s)...
  alligator

Last modified 15 March 2016