Affected
Degraded performance from 5:57 PM to 4:30 PM
- Resolved
An emergency patch to the scheduler has resolved the Multiple Partition issue.
- Investigating
Since mid-January we have been seeing some strange issues with the scheduler that caused periodic stalls or unresponsiveness. We had hoped that the Slurm upgrade to 24.11.1 would resolve those issues due to various architecture changes in the communications backend. Unfortunately it did not, and we have since opened an issue with SchedMD (our service vendor for the scheduler). That investigation has since uncovered several other issues with the scheduler, which we are working to remediate. Below is a status report on these issues:
1. High Agent Load Stall (RESOLVED): Reported in https://support.schedmd.com/show_bug.cgi?id=21975. The scheduler would stall because it was oversaturated with blocking requests. This turned out to be due to a new Slurm feature called stepmgr, which we had enabled to handle jobs with many steps. Unfortunately this feature also increased the load on the scheduler when many array job tasks exited at the same time, which caused the stall. Since few of our users run jobs with many steps, we opted to disable stepmgr globally, which resolved the High Agent Load issue. Users whose jobs do have many steps can still turn on stepmgr for their specific job by adding #SBATCH --stepmgr (https://slurm.schedmd.com/sbatch.html#OPT_stepmgr); a minimal example job script is included after this list.
2. Scheduler Thrashing (MONITORING): We discovered this while working on the previous bug and have continued to track it in the same report: https://support.schedmd.com/show_bug.cgi?id=21975. Under high load, the scheduler would get into a thrashing state in which it effectively went heads-down and ignored incoming requests in order to focus on scheduling jobs. To users this looked like the scheduler was unresponsive, because it was deferring their requests to deal with higher-priority traffic. To remediate this we increased the scheduler's thread count and implemented a throttle to slow things down, so that the scheduler can respond to all requests without impacting scheduling throughput. This is in place now and appears to have resolved the issue; we are continuing to monitor the scheduler and tune the throttle.
3. --test-only requeue crash (RESOLVED): During this investigation we also ran into another bug, reported by another group, related to jobs submitted with --test-only that would in theory preempt other jobs (see https://support.schedmd.com/show_bug.cgi?id=21997). These caused the scheduler to crash. Given the severity of the bug, we emergency patched the scheduler on Feb 12th to resolve this issue. A brief note on what --test-only does is included after this list.
4. Multiple Partition Jobs Labelled with Wrong Partition (IN PROGRESS): This is a new issue, identified on 2/13, affecting jobs that are submitted to multiple partitions at once (https://support.schedmd.com/show_bug.cgi?id=22076). When such a job starts, it may run in one partition but be labelled as being in another. This can lead to preemption problems: a job may be labelled as being in a partition that cannot be preempted even though it was actually scheduled in a partition that can be. This was identified earlier by another group, and SchedMD is working on a patch. Depending on the timing, FASRC will either emergency patch the scheduler for this issue or wait for the formal release of 24.11.2. Note that this issue only impacts preemption; the scheduler is otherwise working fine. If you see jobs that you think should be preempted but are not, and they are blocking your work, please let us know and we will investigate. An example of a multiple-partition submission, and how to check which partition a job was assigned, follows this list.
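Example for item 1: a minimal sketch of a job script that opts back into per-job step management with #SBATCH --stepmgr. The job name, task count, and step commands are placeholders and should be adapted to your own workflow:

    #!/bin/bash
    #SBATCH --job-name=many_steps   # placeholder job name
    #SBATCH --ntasks=4
    #SBATCH --stepmgr               # enable per-job step management for this job only

    # Each srun below launches a separate job step; ./step_a and ./step_b are placeholders.
    srun -n 4 ./step_a
    srun -n 4 ./step_b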
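Note for item 3: --test-only asks the scheduler to validate a job and report an estimated start time without actually submitting it, for example (the script name is a placeholder):

    sbatch --test-only my_job.sh   # prints an estimated start time; no job is submitted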
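Example for item 4: the affected submissions are those that list more than one partition, as in the sketch below (partition names and job ID are placeholders). You can check which partition the scheduler actually recorded for a job with squeue or sacct:

    #SBATCH --partition=serial_requeue,shared   # job may be scheduled in either partition

    squeue -j 12345678 -o "%i %P %T"                    # job ID, partition, and state for a pending/running job
    sacct -j 12345678 --format=JobID,Partition,State    # partition recorded in accounting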
Thank you for your patience as we work through these issues.