FAS Research Computing - Slurm security patch causing node unavailability – Incident details

Status page for the Harvard FAS Research Computing cluster and other resources.


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



Slurm security patch causing node unavailability

Resolved
Degraded performance
Started about 3 years ago. Lasted about 3 hours

Affected

Cannon Cluster

Degraded performance from 1:47 PM to 4:17 PM

SLURM Scheduler - Cannon

Degraded performance from 1:47 PM to 4:17 PM

Cannon Compute Cluster (Holyoke)

Degraded performance from 1:47 PM to 4:17 PM

Boston Compute Nodes

Degraded performance from 1:47 PM to 4:17 PM

VDI/OpenOnDemand

Degraded performance from 1:47 PM to 4:17 PM

Cannon Open OnDemand/VDI

Degraded performance from 1:47 PM to 4:17 PM

Updates
  • Resolved

    The scheduler and node states appear to be stable. Thank you for your patience and understanding.

    Please note that the intermittent deadlock issue is still not resolved; we are actively monitoring it and intervening as necessary until we receive a solution.

  • Monitoring

    The patch has been deployed and the scheduler restarted. Paused jobs are resuming.

    Any jobs that did not start or were stuck may have been flushed, so please check any pending jobs you had (see the example commands at the bottom of this page).

    Related doc: https://docs.rc.fas.harvard.edu/kb/running-jobs/.

  • Identified

    We tested the patch on our test cluster before release and are now deploying it to the cluster. Thanks for your patience.

    UPDATE: Jobs are suspended, the scheduler is down for patching, and nodes are updating.

  • Investigating

    The emergency Slurm security patch introduced a bug that is causing many of our nodes to be set to 'not responding'. The vendor has already identified the issue and issued another patch.

    We are deploying this new patch after testing it. Jobs will be paused, and the scheduler and cluster will be unavailable while the patch is deployed. Watch here for updates.
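
For reference, here is a minimal way to check on your jobs after the flush. This assumes the standard Slurm client tools available on the login nodes; the fields and time window shown are illustrative, not specific to this incident:

    # List your currently pending jobs
    squeue -u $USER -t PENDING

    # Review recent job records (including any cancelled or flushed jobs) from the past day
    sacct -X -u $USER -S now-1days -o JobID,JobName,State,Elapsed

If a job was flushed, resubmit it as usual; the running-jobs documentation linked above covers submission and monitoring in more detail.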