This form can be used to create a Slurm configuration file,
giving you control over many of the important configuration parameters.
This is a simplified version of the Slurm configuration tool, offering
fewer options for creating a Slurm configuration file. The full version
of the Slurm configuration tool is available at configurator.html.
This tool supports Slurm version 18.08 only.
Configuration files for other versions of Slurm should be built
using the tool distributed with that version in doc/html/configurator.html.
Some parameters will be set to default values, but you can
manually edit the resulting slurm.conf as desired
for greater flexibility. See man slurm.conf for more
details about the configuration parameters.
Note that while the Slurm daemons create log files and other files as needed,
they treat the lack of parent directories as a fatal error.
This prevents the daemons from running if critical file systems are
not mounted, and minimizes the risk of cold-starting (starting
without preserving jobs).
Note that this configuration file must be installed on all nodes
in your cluster.
After you have filled in the fields of interest, use the
"Submit" button at the bottom of the page to build the slurm.conf
file. It will appear on your web browser. Save the file in text format
as slurm.conf for use by Slurm.
For more information about Slurm, see the Slurm web site.
Master Controller Hostname
NodeAddr: Compute node addresses
Name of the one partition to be created
Maximum time limit of jobs in minutes or INFINITE
The following parameters describe a node's configuration.
Set a value for CPUs.
The other parameters are optional, but provide more control over scheduled resources:
CPUs: Count of processors on each compute node.
If CPUs is omitted, it will be inferred from:
Sockets, CoresPerSocket, and ThreadsPerCore.
Number of physical processor sockets/chips on the node.
If Sockets is omitted, it will be inferred from:
CPUs, CoresPerSocket, and ThreadsPerCore.
Number of cores in a single physical processor socket.
The CoresPerSocket value describes physical cores, not
the logical number of processors per socket.
Number of logical threads in a single physical core.
RealMemory: Amount of real memory on the node, in megabytes.
This parameter is required when specifying Memory as a
consumable resource with the select/cons_res plug-in. See below
under Resource Selection.
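As an illustration, the node and partition parameters described above might be combined in slurm.conf as follows (the node names, counts, and memory size are hypothetical):

```
# Hypothetical compute nodes: 2 sockets x 8 cores x 2 threads = 32 CPUs.
# CPUs could be omitted here; Slurm would infer it from the other values.
NodeName=node[01-04] Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 CPUs=32 RealMemory=64000 State=UNKNOWN
# The single partition, with no time limit on jobs.
PartitionName=debug Nodes=node[01-04] Default=YES MaxTime=INFINITE State=UP
```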
Slurmctld state save directory
Slurmd state save directory
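For example, the state save directories might be set as follows (these paths are only typical choices, not requirements):

```
# Directory where slurmctld saves its state; the parent directory must exist
# before the daemon starts.
StateSaveLocation=/var/spool/slurmctld
# Directory where slurmd saves its state on each compute node.
SlurmdSpoolDir=/var/spool/slurmd
```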
Define when a non-responding (DOWN) node is returned to service.
Select one value for ReturnToService:
0: When explicitly restored to service by an administrator.
1: Upon registration with a valid configuration, and only if it was set DOWN
due to being non-responsive.
2: Upon registration with a valid configuration.
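In slurm.conf this is a single line; for example, to return nodes that were DOWN only because they stopped responding:

```
# Return a non-responsive node to service once it registers with a valid configuration.
ReturnToService=1
```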
Define what node configuration (sockets, cores, memory, etc.) should be used.
Using values defined in the configuration file will provide faster scheduling.
Select one value for FastSchedule:
0: Use node configuration values actually found on each node
(if configured with gang scheduling or allocation of individual
processors to jobs rather than only whole node allocations, the processor
count on the node should match the configured value to avoid having extra
processors left idle)
1: Use node configuration values defined in configuration file.
Mark the node DOWN if it registers with fewer resources than configured.
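In slurm.conf, the common choice of scheduling on the configured node values looks like:

```
# Schedule using the node definitions in this file rather than
# the values reported by each node.
FastSchedule=1
```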
SlurmctldLogFile (default is none, log goes to syslog)
SlurmdLogFile (default is none, log goes to syslog;
the string "%h" in the name is replaced with the hostname)
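As an example, the log files might be configured as follows (the paths are hypothetical):

```
# Controller log on the head node.
SlurmctldLogFile=/var/log/slurm/slurmctld.log
# Per-node daemon log; "%h" is replaced with each node's hostname.
SlurmdLogFile=/var/log/slurm/slurmd.%h.log
```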
The options below are used with a database, to specify where the database is running and how to connect to it.
Name to be recorded in database for jobs from this cluster.
This is important if a single database is used to record information
from multiple Slurm-managed clusters.
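A minimal sketch of these database-related settings, assuming accounting is recorded through the slurmdbd daemon on a host named dbhost (both names here are placeholders):

```
# Unique name for this cluster in the shared accounting database.
ClusterName=mycluster
# Record accounting through the slurmdbd daemon running on dbhost.
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=dbhost
```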