FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down for details on any incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



Cannon Cluster - Operational

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational

FASSE Cluster - Operational

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Kempner Cluster - Operational

Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational

Login Nodes - Operational

Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational

Open OnDemand/VDI - Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Storage - Operational

Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Apr 2022

A cooling incident at the MGHPCC datacenter has affected a number of systems
  • Resolved

    All cooling, including the water cooling for water-cooled compute nodes, is back online, and all partitions are open for jobs. Some compute nodes in various partitions still require individual attention, so not every node is back online yet, but we will work to bring them all online in the coming hours.

  • Monitoring

    Most storage in Holyoke is back up.

    The Slurm scheduler is back up and accepting jobs. However, most public partitions are down as the water cooling systems for those compute racks require in-person attention. RC staff are already en route to the datacenter to address this.

    The Academic Cluster is back up.

  • Investigating

    A cooling failure caused temperatures in the MGHPCC datacenter to exceed the safe range of operation for many systems, causing them to power down to prevent permanent damage.

    The cooling issue has been resolved and we are beginning to power systems back on. Expect outages on various systems until everything is back online.


Mar 2022

Feb 2022
