FAS Research Computing - A cooling incident at the MGHPCC Datacenter has affected a number of systems – Incident details

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



A cooling incident at the MGHPCC Datacenter has affected a number of systems

Resolved
Major outage
Started almost 3 years ago · Lasted about 3 hours

Affected

Cannon Cluster

Major outage from 7:13 PM to 10:33 PM

SLURM Scheduler - Cannon

Major outage from 7:13 PM to 10:33 PM

Cannon Compute Cluster (Holyoke)

Major outage from 7:13 PM to 10:33 PM

Login Nodes

Major outage from 7:13 PM to 10:33 PM

Login Nodes - Holyoke

Major outage from 7:13 PM to 10:33 PM

VDI/OpenOnDemand

Major outage from 7:13 PM to 10:33 PM

Updates
  • Resolved

    All cooling, including the water cooling for water-cooled compute nodes, is back online. All partitions are open for jobs. Some compute nodes in various partitions may still require individual attention, so not every compute node is back online, but we will work to bring them all online in the coming hours.

  • Monitoring

    Most storage in Holyoke is back up.

    The Slurm scheduler is back up and accepting jobs. However, most public partitions are down as the water cooling systems for those compute racks require in-person attention. RC staff are already en route to the datacenter to address this.

    The Academic Cluster is back up.

  • Investigating

    A cooling failure caused temperatures in the MGHPCC datacenter to exceed the safe range of operation for many systems, causing them to power down to prevent permanent damage.

    The cooling issue has been resolved and we are beginning to power systems back on. Expect outages on various systems until everything is back online.
