slotd is a Rust-built single-node, single-user Slurm-style job scheduler for a personal workstation.
It keeps the familiar Slurm command names and many common flags, but runs everything on one local machine with a small Rust codebase and:
- one daemon
- one SQLite database
- one execution host
Start here: Installation
Current user-facing commands:
`sbatch`, `srun`, `salloc`, `squeue`, `sacct`, `scontrol`, `scancel`, `sinfo`
Online documentation is available in English, Japanese, and Chinese.
slotd is designed for local batch and interactive workloads such as:
- long-running experiments
- GPU jobs on a single workstation
- local resource reservation
- queueing work without a full Slurm cluster
It is not a multi-node scheduler, and it does not implement account/QoS/fairshare/federation features from full Slurm.
The implementation is intentionally Rust-first:
- one compiled Rust binary
- no Python runtime dependency
- SQLite for local persistent state
- direct process and signal handling from native code
- Slurm-style command aliases through `argv[0]`
- local daemon and Unix socket IPC
- SQLite-backed durable job state
- CPU, memory, and GPU reservation-based scheduling
- host-detected CPU and memory capacity with GPU autodetection
- true single-node multi-task execution for `--ntasks`
- optional cgroup v2 CPU/memory enforcement when `SLOTD_CGROUP_BASE` is set
- batch jobs, interactive runs, allocations, and steps
- dependencies and job arrays
- `--constraint`, `--begin`, `--exclusive`, `--requeue`
- `sbatch --export`, `--export-file`, `--open-mode`, `--signal`
- `srun --cpu-bind`, `--label`, `--unbuffered`
- `squeue --start`, `squeue --array`
- `sinfo -l`
- lightweight completion hooks with `SLOTD_NOTIFY_CMD`
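The `argv[0]` aliasing can be illustrated outside slotd with a plain shell sketch (nothing below is slotd code; the `dispatch.sh` script and temp directory are made up for the demonstration): one program changes behavior based on the name it was invoked as.

```shell
#!/bin/sh
# Illustrative sketch (not slotd code): one script that dispatches on the
# name it was invoked as -- the same argv[0] mechanism behind slotd's aliases.
set -eu
dir=$(mktemp -d)

cat > "$dir/dispatch.sh" <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
  sbatch) echo "would submit a batch job" ;;
  squeue) echo "would list the queue" ;;
  *)      echo "unknown alias" ;;
esac
EOF
chmod +x "$dir/dispatch.sh"

# Slurm-style names are just extra links to the same file.
ln -s dispatch.sh "$dir/sbatch"
ln -s dispatch.sh "$dir/squeue"

out_sbatch=$("$dir/sbatch")
out_squeue=$("$dir/squeue")
echo "$out_sbatch"   # prints: would submit a batch job
echo "$out_squeue"   # prints: would list the queue
rm -rf "$dir"
```

slotd's installer achieves the same effect by installing its aliases as links to the single compiled binary.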
- Linux or WSL
- Rust toolchain with `cargo`
- `systemd --user` if you want managed background startup
- `nvidia-smi` if you want automatic GPU detection
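A quick way to check these prerequisites before installing is a short shell sketch (this script is not part of slotd; it only reports what exists on the host):

```shell
#!/bin/sh
# Report which prerequisites are present. Missing optional tools only
# disable the corresponding features; they do not block installation.
for tool in cargo systemctl nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found at $(command -v "$tool")"
  else
    echo "$tool: not found"
  fi
done
```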
```
git clone https://github.com/ymgaq/slotd.git
cd slotd
```

The repository includes a Rust-oriented installer that builds the project and installs the resulting binary:

```
./scripts/install.sh
```

By default it will:
- build `slotd` in `release` mode
- install binaries under `~/.local/bin`
- create Slurm-style aliases such as `sbatch` and `squeue`
- create a runtime root under `~/.local/share/slotd`
- write configuration to `~/.config/slotd/slotd.env`
- install and start a `systemd --user` service
| Option | Description | Default |
|---|---|---|
| `--repo-root PATH` | Build from a different repository root | current repo |
| `--profile NAME` | Cargo profile to build | `release` |
| `--install-bin-dir PATH` | Install location for `slotd` and aliases | `~/.local/bin` |
| `--runtime-root PATH` | Runtime root used as `SLOTD_ROOT` | `~/.local/share/slotd` |
| `--config-dir PATH` | Configuration directory for `slotd.env` | `~/.config/slotd` |
| `--systemd-user-dir PATH` | `systemd --user` unit directory | `~/.config/systemd/user` |
| `--cpu-partitions VALUE` | Value for `SLOTD_CPU_PARTITIONS` | `cpu` |
| `--gpu-partitions VALUE` | Value for `SLOTD_GPU_PARTITIONS` | `gpu` |
| `--features VALUE` | Value for `SLOTD_FEATURES` | unset |
| `--notify-cmd VALUE` | Value for `SLOTD_NOTIFY_CMD` | unset |
| `--cgroup-base PATH` | Value for `SLOTD_CGROUP_BASE` | unset |
| `--skip-build` | Reuse an existing cargo build output | off |
| `--skip-systemd` | Do not install or start a user service | off |
| `--uninstall` | Remove the installed setup | off |
| `--purge-runtime` | With `--uninstall`, also remove persisted state | off |
Example:
```
./scripts/install.sh \
  --runtime-root "$HOME/.local/share/slotd" \
  --cpu-partitions cpu \
  --gpu-partitions gpu \
  --features cpu,gpu \
  --notify-cmd 'notify-send "slotd" "$SLOTD_JOB_ID $SLOTD_JOB_STATE"'
```

If `--cgroup-base` is left unset, CPU and memory remain reservation-only. If it is set, it must point at a writable cgroup v2 subtree, or job launch fails with a clear error.
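Whether a candidate path would satisfy that requirement can be checked before installing; a minimal sketch (the fallback path below is only an example, not a slotd convention):

```shell
#!/bin/sh
# Check whether a candidate SLOTD_CGROUP_BASE looks like a writable
# cgroup v2 subtree. The fallback path here is an example only.
base="${SLOTD_CGROUP_BASE:-/sys/fs/cgroup/slotd}"

if [ -d "$base" ] && [ -f "$base/cgroup.controllers" ] && [ -w "$base" ]; then
  echo "cgroup v2 enforcement looks possible at $base"
else
  echo "no writable cgroup v2 subtree at $base; jobs stay reservation-only"
fi
```

The `cgroup.controllers` file is a reliable marker of a cgroup v2 directory, so its absence is a quick way to spot a misconfigured base path.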
Remove binaries, aliases, config, and the user service:
```
./scripts/install.sh --uninstall
```

Also remove persisted jobs and runtime state:

```
./scripts/install.sh --uninstall --purge-runtime
```

If you installed through `scripts/install.sh`, the daemon should already be running under `systemd --user`.
Basic checks:
```
sinfo
squeue
sacct
```

Typical output:
- `sinfo` shows one row per configured partition, for example `cpu` and `gpu`
- CPU partitions show only `cpu` in `FEATURES`
- GPU partitions show `cpu` plus detected GPU model features such as `rtx3090`
- CPU and GPU partitions are virtual convenience views over the same local host
- CPU capacity and memory are shared across those partitions; they are not separate resource pools
- `squeue` is usually empty immediately after a fresh install
- `sacct` is usually empty until you submit jobs
Submit a simple batch job:
```
sbatch --wrap 'echo hello from slotd'
```

Typical output:

```
Submitted batch job 1
```
Watch the queue:
```
squeue
```

Typical output while the job is waiting or running:

| JOBID | PARTITION | NAME | USER | ST | TIME | NODELIST(REASON) |
|---|---|---|---|---|---|---|
| 1 | cpu | wrap | ... | R | 0:00 | localhost |

See completed jobs:

```
sacct
```

Typical output after completion:

| JobID | Partition | JobName | User | State | ExitCode |
|---|---|---|---|---|---|
| 1 | cpu | wrap | ... | COMPLETED | 0:0 |
Show detailed job info:
```
scontrol show job 1
```

Typical output:
- job identity such as `JobId=1` and `JobName=wrap`
- current or final state such as `JobState=COMPLETED`
- requested resources, working directory, command, and output paths
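Because the output is `Key=Value` pairs, it is easy to script against. A sketch that extracts the state from scontrol-style text (the sample is hard-coded here; in practice it would come from `scontrol show job <id>`):

```shell
#!/bin/sh
# Extract JobState from scontrol-style "Key=Value" output.
# The sample text is illustrative, shaped like the fields described above.
sample='JobId=1 JobName=wrap
JobState=COMPLETED Reason=None
WorkDir=/home/user'

# Split space-separated pairs onto their own lines, then strip the key.
state=$(printf '%s\n' "$sample" | tr ' ' '\n' | sed -n 's/^JobState=//p')
echo "state=$state"   # prints: state=COMPLETED
```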
```
sbatch \
  -J hello \
  -p cpu \
  -c 1 \
  --mem 512M \
  -t 00:05:00 \
  -o logs/%j.out \
  --wrap 'echo hello'
```

Typical output:

```
Submitted batch job 2
```

Expected result:
- `logs/2.out` is created
- the file contains `hello`
```
sbatch \
  -J gpu-demo \
  -p gpu \
  -c 4 \
  --mem 8G \
  -G 1 \
  -t 01:00:00 \
  -o logs/%j.out \
  --wrap 'nvidia-smi'
```

Typical output:

```
Submitted batch job 3
```

Expected result:
- the job is scheduled on the `gpu` partition
- `logs/3.out` contains `nvidia-smi` output
Create a batch script:
```
cat > /tmp/slotd-demo.sh <<'EOF'
#!/usr/bin/env bash
#SBATCH -J script-demo
#SBATCH -p cpu
#SBATCH -c 2
#SBATCH --mem 1G
#SBATCH -t 00:05:00
#SBATCH -o logs/%j.out

echo "hello from script mode"
echo "job=$SLURM_JOB_ID cpus=$SLURM_CPUS_PER_TASK"
EOF
```

Submit it:

```
sbatch /tmp/slotd-demo.sh
```

Typical output:

```
Submitted batch job 4
```

Expected result:
- the script header is parsed for resource settings such as job name, partition, CPUs, memory, and output path
- `logs/4.out` contains the echoed lines from the script body
```
srun \
  -p cpu \
  -c 2 \
  --mem 1G \
  --label \
  --unbuffered \
  -- echo hello
```

Typical output:

```
0: hello
```
```
salloc \
  -p gpu \
  -c 4 \
  --mem 8G \
  -G 1 \
  -t 00:30:00
```

Typical output:

```
Granted job allocation 4
```

Expected result:
- your shell starts inside the allocation
- follow-up `srun` commands run as steps under that allocation
```
sbatch \
  -J array-demo \
  -a 0-9%2 \
  -o logs/%A_%a.out \
  --wrap 'echo task=$SLURM_ARRAY_TASK_ID'
```

Typical output:

```
Submitted batch job 5
```

Expected result:
- multiple task records are created
- files such as `logs/5_0.out`, `logs/5_1.out`, and so on are written
```
sbatch \
  -J flaky \
  --requeue \
  --wrap 'exit 1'
```

Typical output:

```
Submitted batch job 6
```

Expected result:
- the first failed run returns to `PENDING`
- after the second failure, `sacct` shows the final state as `FAILED`
```
sbatch \
  -J later \
  --begin now+00:10:00 \
  --wrap 'echo delayed'
```

Typical output:

```
Submitted batch job 7
```

Expected result:
- `squeue` shows the job in `PENDING`
- `squeue --start` shows an estimated future start time
```
sbatch \
  --export FOO=bar,HELLO=world \
  --wrap 'echo "$FOO $HELLO"'
```

Typical output:

```
Submitted batch job 8
```

Expected result:
- the job output contains `bar world`
| Command | Purpose |
|---|---|
| `slotd daemon` | Start the local scheduler daemon |
| `sbatch` | Submit a batch job or wrapped command |
| `srun` | Run a foreground command, or submit a daemon-managed run with `--no-wait` |
| `salloc` | Request an allocation, then run a command inside it |
| `squeue` | Show queued and running top-level jobs |
| `sacct` | Show accounting data, including completed jobs and steps |
| `scontrol` | Show, hold, release, or update a job |
| `scancel` | Cancel a job or send a signal |
| `sinfo` | Show local partition and resource state |
| Option | Meaning |
|---|---|
| `--wrap <command>` | Submit an inline shell command instead of a script file |
| `-J, --job-name` | Set the job name |
| `-p, --partition` | Choose a configured partition |
| `-c, --cpus-per-task` | CPUs per task |
| `-n, --ntasks` | Number of concurrently launched local tasks |
| `--mem` | Requested memory, such as `512M` or `8G` |
| `-t, --time` | Time limit |
| `-G, --gpus` | Requested GPU slots |
| `-o, --output` | Stdout path pattern |
| `-e, --error` | Stderr path pattern |
| `-D, --chdir` | Working directory |
| `--constraint` | Require local features such as `cpu` or `gpu` |
| `-d, --dependency` | Dependency expression |
| `-a, --array` | Array specification |
| `--export` | Export environment values into the job |
| `--export-file` | Load environment variables from a file |
| `--open-mode append\|truncate` | Control output file append/truncate behavior |
| `--signal` | Configure a warning signal before the time limit |
| `--begin` | Delay job eligibility |
| `--exclusive` | Do not share the host with other top-level jobs |
| `--requeue` | Requeue once after FAILED, TIMEOUT, or OUT_OF_MEMORY |
| `--parsable` | Print only the job ID |
| `-W, --wait` | Wait for completion |
| Option | Meaning |
|---|---|
| `-J, --job-name` | Set the job name |
| `-p, --partition` | Choose a partition |
| `-c, --cpus-per-task` | CPUs per task |
| `-n, --ntasks` | Number of concurrently launched local tasks |
| `--mem` | Requested memory |
| `-t, --time` | Time limit |
| `-G, --gpus` | Requested GPU slots |
| `-o, --output` | Foreground stdout path |
| `-e, --error` | Foreground stderr path |
| `-D, --chdir` | Working directory |
| `--immediate` | Fail if resources are not available immediately |
| `--pty` | Reserved for PTY support; currently rejected with a clear error |
| `--constraint` | Require matching local features |
| `--cpu-bind` | CPU binding mode: `none`, `cores`, `map_cpu:<ids>` |
| `--label` | Prefix output lines with `<task_id>:` |
| `--unbuffered` | Flush forwarded output eagerly |
| `--no-wait` | Submit a daemon-managed run job instead of waiting |
| Option | Meaning |
|---|---|
| `-J, --job-name` | Set the allocation name |
| `-p, --partition` | Choose a partition |
| `-c, --cpus-per-task` | CPUs per task |
| `-n, --ntasks` | Number of concurrently launched local tasks |
| `--mem` | Requested memory |
| `-t, --time` | Time limit |
| `-G, --gpus` | Requested GPU slots |
| `-D, --chdir` | Working directory |
| `--constraint` | Require matching local features |
| `--immediate` | Fail if the allocation cannot start immediately |
| Option | Meaning |
|---|---|
| `--all` | Show all job states instead of only PENDING and RUNNING |
| `-t, --states` | Filter by state |
| `-j, --jobs` | Filter by job IDs |
| `-u, --user` | Filter by user |
| `-p, --partition` | Filter by partition |
| `-o, --format` | Choose output fields |
| `-S, --sort` | Sort rows |
| `-l, --long` | Use the long default view |
| `--start` | Show estimated start times |
| `--array` | Show array-style job IDs |
| `--noheader` | Omit the table header |
| Option | Meaning |
|---|---|
| `-j, --jobs` | Filter by job IDs |
| `-s, --state` | Filter by state |
| `-S, --starttime` | Filter by start time |
| `-E, --endtime` | Filter by end time |
| `-u, --user` | Filter by user |
| `-p, --partition` | Filter by partition |
| `-o, --format` | Choose output fields |
| `-P, --parsable2` | Use `\|`-delimited parsable output |
| `-n, --noheader` | Omit the table header |
| Option | Meaning |
|---|---|
| `-s, --signal` | Send a specific signal instead of cancelling normally |
| Key | Meaning |
|---|---|
| `JobName` / `Name` | Change the job name while PENDING |
| `Partition` | Change the partition while PENDING |
| `TimeLimit` / `Time` | Change the time limit before the job is terminal |
| `Priority` | Change local pending-job priority |
Implemented states:
`PENDING`, `RUNNING`, `COMPLETING`, `COMPLETED`, `FAILED`, `CANCELLED`, `TIMEOUT`, `OUT_OF_MEMORY`
- single-node only
- reservation-based CPU, memory, and GPU admission
- `ntasks` launches one local process per task rank for `sbatch`, foreground `srun`, and `salloc` commands
- pending jobs are ordered primarily by submission order
- explicit local `Priority` can override that order
- array tasks are interleaved by array group
If SLOTD_GPU_COUNT is not set, slotd tries to detect GPUs automatically from nvidia-smi.
The current implementation checks common locations including:
- `nvidia-smi` from `PATH`
- `/usr/bin/nvidia-smi`
- `/usr/lib/wsl/lib/nvidia-smi`
- `/bin/nvidia-smi`
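For troubleshooting, the documented lookup order can be approximated in shell (slotd's own detection is native Rust code; this sketch only mimics the search described above):

```shell
#!/bin/sh
# Approximate the documented nvidia-smi search order and count GPUs.
find_nvidia_smi() {
  command -v nvidia-smi 2>/dev/null && return 0
  for p in /usr/bin/nvidia-smi /usr/lib/wsl/lib/nvidia-smi /bin/nvidia-smi; do
    if [ -x "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  return 1
}

if smi=$(find_nvidia_smi); then
  count=$("$smi" --list-gpus 2>/dev/null | wc -l | tr -d ' ')
else
  count=0
fi
echo "detected GPUs: $count"
```

On a host without `nvidia-smi` this prints `detected GPUs: 0`, matching the reservation-only CPU behavior.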
If `SLOTD_NOTIFY_CMD` is set, slotd runs it whenever a top-level job reaches a terminal state, with these variables exported:
- `SLOTD_JOB_ID`
- `SLOTD_JOB_NAME`
- `SLOTD_JOB_STATE`
- `SLOTD_JOB_PARTITION`
- `SLOTD_JOB_REASON`
Example:
```
./scripts/install.sh \
  --notify-cmd 'notify-send "slotd" "$SLOTD_JOB_ID $SLOTD_JOB_STATE"'
```

Expected result:
- when a top-level job reaches a terminal state, the configured notification command is executed
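The notify command is not limited to desktop notifications; any executable that reads the exported variables works. A sketch of a file-logging hook, together with a simulated invocation of the kind slotd would perform (the script and log paths here are illustrative):

```shell
#!/bin/sh
# Illustrative notify hook: append one line per finished job to a log file.
set -eu
hook="${TMPDIR:-/tmp}/slotd-hook.sh"
log="${TMPDIR:-/tmp}/slotd-jobs.log"

cat > "$hook" <<'EOF'
#!/bin/sh
# slotd exports SLOTD_JOB_* before running the configured notify command.
echo "job=$SLOTD_JOB_ID name=$SLOTD_JOB_NAME state=$SLOTD_JOB_STATE" \
  >> "${TMPDIR:-/tmp}/slotd-jobs.log"
EOF
chmod +x "$hook"

# Simulate a terminal job completion the way slotd would invoke the hook:
SLOTD_JOB_ID=42 SLOTD_JOB_NAME=demo SLOTD_JOB_STATE=COMPLETED "$hook"

tail -n 1 "$log"   # prints: job=42 name=demo state=COMPLETED
```

Wiring such a script in would then be a matter of passing its path to `--notify-cmd` at install time.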
If you do not want to use systemd --user, run the daemon yourself:
```
cargo build --release
SLOTD_ROOT="$HOME/.local/share/slotd" ./target/release/slotd daemon
```

Then, in another shell:

```
SLOTD_ROOT="$HOME/.local/share/slotd" ./target/release/slotd sbatch --wrap 'echo hello'
```

slotd is primarily covered by Rust integration tests under `tests/`.
Each test boots an isolated runtime under a temporary SLOTD_ROOT, starts its own daemon, and exercises the public Slurm-style commands without touching your normal local state.
Run the full suite:
```
cargo test
```

Run one integration test file while iterating on a feature:

```
cargo test --test scheduling
```

Run one named test case:

```
cargo test dependency_job_waits_for_prerequisite_before_running --test scheduling
```

Main areas covered by the current suite:
- command basics and CLI output such as `sbatch`, `srun`, `salloc`, `sinfo`, `squeue`, `sacct`, and `scontrol`
- scheduling behavior including dependencies, arrays, delayed start, resource flags, constraints, and requeue handling
- interactive and foreground execution paths such as `srun`, `--label`, `--unbuffered`, and allocation/step flows
- persistence and lifecycle behavior including cancellation, recovery, update processing, warning signals, and output file handling
- notification and accounting behavior such as `SLOTD_NOTIFY_CMD` hooks and parsable query output

For a quick manual smoke test, run the daemon in one shell:

```
cargo run -- daemon
```

Then submit a simple job from another shell that uses the same `SLOTD_ROOT`:

```
cargo run -- sbatch --wrap 'echo hello'
```

slotd intentionally does not try to be full Slurm.
Notable limits:
- no multi-node support
- no accounts, QoS, reservations, or fairshare
- no federation or cluster administration features
- `scontrol` is limited to job operations
- no full `sstat` or `sattach`
- only a subset of Slurm formatting tokens is implemented
