CiS submits fix for SLURM memory allocation >4TB

April 2016

In early 2016, CiS specified, procured and installed two SGI UV300™ systems for one of its customers (the Earlham Institute, previously known as TGAC), and implemented the open-source SLURM job scheduler in place of the closed-source commercial scheduler normally supported on UV systems.  Each of these systems has 256 cores and 12TB of RAM, and they were purchased for running multi-TB large-memory jobs, such as DNA assembly, alongside more traditional HPC workloads.

During integration testing we encountered a scenario in which jobs requesting more than 4TB of memory received unpredictable memory allocations, usually resulting in the job being automatically killed.  The implication of this discovery was that, at the time, no one else in the SLURM user community was attempting to allocate more than 4TB of RAM on a single host to a SLURM job.
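The 4TB threshold is characteristic of a 32-bit overflow: a memory quantity counted in kilobytes wraps at exactly 2^32 KB = 4TB. The short C program below is purely illustrative (the names and units are our assumptions, not SLURM source) and shows how a 5TB request can silently become a 1TB limit, which a multi-TB job then exceeds and is killed for.

```c
/* Illustrative only: hypothetical names, not SLURM source. Shows how a
 * kilobyte count truncated to 32 bits wraps for requests above 4TB. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t requested_mb = 5ULL * 1024 * 1024;       /* a 5TB request, in MB */

    /* Hypothetical buggy path: the MB value is converted to KB but the
     * result is kept in 32 bits, so anything above 4TB wraps around. */
    uint32_t enforced_kb = (uint32_t)(requested_mb * 1024);

    printf("requested: %llu KB\n", (unsigned long long)(requested_mb * 1024));
    printf("enforced:  %llu KB (1TB - the job is killed once it exceeds this)\n",
           (unsigned long long)enforced_kb);
    return 0;
}
```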

CiS traced the problem to an incorrectly sized variable in jobacct_gather_set_mem_limit (in slurm_jobacct_gather.c).  We patched and tested the fix, applied it to our own installation, and then submitted the update to SchedMD for inclusion in future community releases of SLURM.
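For context, below is a minimal sketch of that class of fix, assuming the limit was being narrowed to 32 bits during a megabyte-to-kilobyte conversion; it is not the actual SchedMD patch, which can be seen via the bug report linked below.

```c
/* Minimal sketch of the class of fix, not the actual SchedMD patch.
 * Function names and units are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

/* Before: the converted limit is held in 32 bits, so the MB-to-KB
 * conversion wraps for any value above 4TB. */
static uint32_t mem_limit_kb_before(uint32_t mem_limit_mb)
{
    return mem_limit_mb * 1024;   /* 32-bit multiply: wraps above 4TB */
}

/* After: the variable and the arithmetic are widened to 64 bits,
 * so multi-TB limits survive the conversion intact. */
static uint64_t mem_limit_kb_after(uint64_t mem_limit_mb)
{
    return mem_limit_mb * 1024;   /* 64-bit multiply: no wraparound */
}

int main(void)
{
    uint64_t req_mb = 12ULL * 1024 * 1024;   /* a 12TB limit, in MB */
    /* The buggy path prints 0 KB - a 12TB limit wraps to nothing. */
    printf("before: %llu KB\n",
           (unsigned long long)mem_limit_kb_before((uint32_t)req_mb));
    printf("after:  %llu KB\n",
           (unsigned long long)mem_limit_kb_after(req_mb));
    return 0;
}
```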

The contribution was accepted by SchedMD in April 2016, was included in releases 14.11.12, 15.08.11 and 16.05.0-pre3, and is carried forward into v17.  Details are available here: https://bugs.schedmd.com/show_bug.cgi?id=2660

Sam Gallop, Paul Fretter (CiS)

CiS is responsible for developing and managing the advanced research computing infrastructure at the Norwich Bioscience Institutes, including High Performance Computing (HPC), storage, middleware and application platforms, enabling world-class research in biological science.
