WorkloadHandle
WorkloadHandle is the RAII handle to spawned worker processes. It
manages the lifecycle of forked workers: spawning, start signaling,
stop/collection, and cleanup.
use ktstr::prelude::*;
#[must_use = "dropping a WorkloadHandle immediately kills all worker processes"]
pub struct WorkloadHandle { /* ... */ }
Spawning
let config = WorkloadConfig {
num_workers: 4,
work_type: WorkType::Mixed,
..Default::default()
};
let mut handle = WorkloadHandle::spawn(&config)?;
Set only the fields that matter for the test and let
..Default::default() fill in the rest. The spread-default form is the
canonical style in the ktstr codebase — it keeps examples pinned to
intent (num_workers, work_type) and has already absorbed additions
to WorkloadConfig (the NUMA memory-policy fields) without rotting.
Consult the WorkloadConfig rustdoc for the current field list.
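The spread-default style can be sketched with a stand-in struct. Everything below is illustrative: the real WorkloadConfig has different fields; only the `..Default::default()` update syntax is the point. New fields added to the struct pick up their defaults without touching this call site.

```rust
// Hypothetical stand-in for ktstr's WorkloadConfig; the field set is
// invented, the `..Default::default()` pattern is what matters.
#[derive(Debug, Clone, PartialEq)]
enum WorkType { Idle, Mixed }

#[derive(Debug, Clone)]
struct WorkloadConfig {
    num_workers: usize,
    work_type: WorkType,
    duration_ms: u64, // stands in for "all the other fields"
}

impl Default for WorkloadConfig {
    fn default() -> Self {
        Self { num_workers: 1, work_type: WorkType::Idle, duration_ms: 1_000 }
    }
}

fn main() {
    // Pin only the fields the test cares about; Default covers the rest.
    let config = WorkloadConfig {
        num_workers: 4,
        work_type: WorkType::Mixed,
        ..Default::default()
    };
    assert_eq!(config.num_workers, 4);
    assert_eq!(config.duration_ms, 1_000); // filled in by Default
}
```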
spawn() forks num_workers child processes. Each child installs a
SIGUSR1 handler, then blocks on a pipe waiting for the start signal.
Workers do not begin their workload until start() is called.
For grouped work types (PipeIo, CachePipe, FutexPingPong,
FutexFanOut), spawn() validates that num_workers is divisible by
the group size and sets up inter-worker communication (pipes for
PipeIo/CachePipe, shared mmap pages for FutexPingPong/FutexFanOut).
Methods
worker_pids() -> Vec<libc::pid_t> – PIDs of all worker
processes. Used with CgroupManager::move_task() or move_tasks()
to place workers in cgroups before starting them.
start() – signals all workers to begin their workload by writing
to their start pipes. Idempotent: calling it twice has no effect.
Call this after moving workers into their target cgroups.
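One common way to get this idempotence is to take the start-signal endpoints out of an `Option` on first use. A thread-level analogy (std channels stand in for the per-worker start pipes; nothing here is ktstr's actual implementation):

```rust
// Hypothetical sketch: start() takes the senders out of an Option, so a
// second call finds None and does nothing.
use std::sync::mpsc::{channel, Sender};

struct Handle {
    start_txs: Option<Vec<Sender<()>>>,
}

impl Handle {
    /// Returns how many workers were signaled (0 on repeat calls).
    fn start(&mut self) -> usize {
        match self.start_txs.take() {
            Some(txs) => txs.iter().filter(|tx| tx.send(()).is_ok()).count(),
            None => 0, // already started: no-op
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    let mut h = Handle { start_txs: Some(vec![tx]) };
    assert_eq!(h.start(), 1);       // signals the worker once
    assert_eq!(h.start(), 0);       // idempotent: second call is a no-op
    assert!(rx.try_recv().is_ok()); // exactly one start signal arrived
    assert!(rx.try_recv().is_err());
}
```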
set_affinity(idx, cpus) -> Result<()> – sets CPU affinity for
the worker at index idx via sched_setaffinity. Use this for
per-worker pinning outside any cgroup, or when you need to change one
worker’s affinity without disturbing the rest. When all workers in a
cgroup should share the same CPU set, prefer
CgroupGroup::add_cgroup — it creates the cgroup,
writes cpuset.cpus once for the whole cgroup, and RAII-removes the
cgroup on drop (including error paths). Reach for
CgroupManager::set_cpuset directly only when
the cgroup’s lifetime must outlive the current scope; the RAII
wrapper is the default because it cleans up on every error path.
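Under the hood, sched_setaffinity takes a cpu_set_t bit mask. The std library has no binding for the syscall itself, so this sketch covers only the pure-logic part: building a mask from a CPU list, assuming CPU ids below 64 (the real cpu_set_t is wider).

```rust
// Hypothetical helper: build a 64-bit affinity mask from a CPU list.
// Assumes cpu ids < 64; the actual syscall uses a larger cpu_set_t.
fn cpu_mask(cpus: &[usize]) -> Result<u64, String> {
    let mut mask = 0u64;
    for &cpu in cpus {
        if cpu >= 64 {
            return Err(format!("cpu {cpu} out of range for a 64-bit mask"));
        }
        mask |= 1 << cpu;
    }
    if mask == 0 {
        // An empty CPU set is always an error for sched_setaffinity.
        return Err("empty CPU set".into());
    }
    Ok(mask)
}

fn main() {
    assert_eq!(cpu_mask(&[0, 2, 3]).unwrap(), 0b1101);
    assert!(cpu_mask(&[]).is_err());
    assert!(cpu_mask(&[64]).is_err());
}
```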
snapshot_iterations() -> Vec<u64> – reads all workers’ current
iteration counts from a shared memory region (MAP_SHARED). Each count
is monotonically increasing and is read with relaxed atomic ordering. Returns an
empty vec if no workers were spawned. Call periodically during the
workload’s run window to sample forward progress (e.g. to detect stalls
or compute instantaneous rates); the final per-worker totals come back
through stop_and_collect().
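The snapshot pattern can be sketched at the thread level: in ktstr the counters live in a MAP_SHARED region across processes, but AtomicU64s shared across threads show the same relaxed-ordering reads and delta-between-samples computation.

```rust
// Thread-level analogy of the shared iteration counters; not ktstr code.
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn snapshot(counters: &[AtomicU64]) -> Vec<u64> {
    // Relaxed suffices: each counter is monotonic and we only need a
    // recent value per worker, not ordering between workers.
    counters.iter().map(|c| c.load(Ordering::Relaxed)).collect()
}

fn main() {
    let counters: Arc<Vec<AtomicU64>> =
        Arc::new((0..4).map(|_| AtomicU64::new(0)).collect());

    let workers: Vec<_> = (0..4).map(|i| {
        let c = Arc::clone(&counters);
        thread::spawn(move || {
            for _ in 0..1_000 {
                c[i].fetch_add(1, Ordering::Relaxed);
            }
        })
    }).collect();
    for w in workers { w.join().unwrap(); }

    // Per-worker progress since the previous sample: counts are
    // monotonic, so the difference is the work done in the window.
    let before = vec![0u64; 4];
    let after = snapshot(&counters);
    let delta: Vec<u64> = after.iter().zip(&before).map(|(a, b)| a - b).collect();
    assert_eq!(delta, vec![1_000; 4]);
}
```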
stop_and_collect(self) -> Vec<WorkerReport> – sends SIGUSR1 to
all workers, reads their serialized WorkerReport from report pipes,
and waits for exit. Auto-starts workers if start() was not called.
Workers that do not respond within a shared 5-second deadline are
killed with SIGKILL. Consumes the handle.
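The "shared" part of the deadline matters: one Instant is computed up front and every sequential wait gets only the time remaining, so N slow workers cannot each consume a fresh 5 seconds. A minimal sketch of that arithmetic (not ktstr's actual code):

```rust
// Sketch of a shared deadline across sequential per-worker waits.
use std::time::{Duration, Instant};

fn remaining(deadline: Instant, now: Instant) -> Duration {
    // Saturates at zero once the deadline has passed.
    deadline.saturating_duration_since(now)
}

fn main() {
    let start = Instant::now();
    let deadline = start + Duration::from_secs(5);

    // After 2s spent waiting on the first worker, the next wait gets 3s...
    let after_first = start + Duration::from_secs(2);
    assert_eq!(remaining(deadline, after_first), Duration::from_secs(3));

    // ...and once the deadline passes, later waits get zero time, which
    // is the point where unresponsive workers are killed.
    let late = start + Duration::from_secs(6);
    assert_eq!(remaining(deadline, late), Duration::ZERO);
}
```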
Typical usage
// 1. Spawn workers (blocked, waiting for start signal)
let mut handle = WorkloadHandle::spawn(&config)?;
// 2. Move workers into their target cgroup. `cgroup.procs` is
// tgid-scoped, so use `worker_pids_for_cgroup_procs()` — it
// bails for Thread-mode workers (whose pids share the harness's
// tgid) and points at `cgroup.threads` instead. Plain
// `worker_pids()` returns the raw pid set without the
// cgroup-procs safety check.
ctx.cgroups.move_tasks("cg_0", &handle.worker_pids_for_cgroup_procs()?)?;
// 3. Signal workers to start
handle.start();
// 4. Wait for workload duration
std::thread::sleep(ctx.duration);
// 5. Stop workers and collect telemetry
let reports: Vec<WorkerReport> = handle.stop_and_collect();
Drop behavior
Dropping a WorkloadHandle without calling stop_and_collect() sends
SIGKILL to all child processes and waits for them. This prevents
orphaned worker processes on error paths. Shared mmap regions (futex
pages and iteration counters) are unmapped on drop.
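The kill-on-drop idea can be sketched at the thread level: the handle owns its workers and its Drop impl stops and joins them, so an early return or `?` cannot leak running workers. A stop flag plus join() stands in for the real SIGKILL-and-waitpid sequence; nothing here is ktstr's actual implementation.

```rust
// Thread-level analogy of WorkloadHandle's kill-on-drop behavior.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread::{self, JoinHandle};

struct Handle {
    stop: Arc<AtomicBool>,
    workers: Vec<JoinHandle<()>>,
}

impl Handle {
    fn spawn(n: usize) -> Self {
        let stop = Arc::new(AtomicBool::new(false));
        let workers = (0..n).map(|_| {
            let stop = Arc::clone(&stop);
            thread::spawn(move || {
                while !stop.load(Ordering::Relaxed) {
                    thread::yield_now(); // stand-in for the real workload
                }
            })
        }).collect();
        Self { stop, workers }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // The process version sends SIGKILL and waits; a flag plus
        // join() plays the same "no orphans on any path" role here.
        self.stop.store(true, Ordering::Relaxed);
        for w in self.workers.drain(..) {
            let _ = w.join();
        }
    }
}

fn main() {
    let stop = {
        let h = Handle::spawn(3);
        Arc::clone(&h.stop)
    }; // h dropped here: workers signaled and joined
    assert!(stop.load(Ordering::Relaxed));
}
```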
See also: CgroupManager for cgroup operations, CgroupGroup for RAII cleanup, TestTopology for cpuset generation, Worker Processes for the two-phase start protocol and telemetry details.