FAS Research Computing - Notice history

Experiencing partial outage

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
Documentation: https://docs.rc.fas.harvard.edu | Account Portal: https://portal.rc.fas.harvard.edu
Email: rchelp@rc.fas.harvard.edu | Support Hours


The colors shown in the status bars below were chosen to increase visibility for color-blind visitors.
If the background is dark and the colors appear muted, switch to light mode at the bottom of this page for higher contrast.

SLURM Scheduler - Cannon - Major outage

Cannon Compute Cluster (Holyoke) - Major outage

Boston Compute Nodes - Major outage

GPU nodes (Holyoke) - Major outage

seas_compute - Major outage

SLURM Scheduler - FASSE - Major outage

FASSE Compute Cluster (Holyoke) - Major outage

Kempner Cluster CPU - Major outage

Kempner Cluster GPU - Major outage

FASSE login nodes - Major outage

Cannon Open OnDemand/VDI - Major outage

FASSE Open OnDemand/VDI - Major outage

Netscratch (Global Scratch) - Major outage

Home Directory Storage - Boston - Operational

Tape (Tier 3) - Major outage

Holylabs - Major outage

Isilon Storage Holyoke (Tier 1) - Major outage

Holystore01 (Tier 0) - Major outage

HolyLFS04 (Tier 0) - Major outage

HolyLFS05 (Tier 0) - Major outage

HolyLFS06 (Tier 0) - Major outage

Holyoke Tier 2 NFS (new) - Major outage

Holyoke Specialty Storage - Major outage

holECS - Major outage

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Apr 2024

Mar 2024

FASRC maintenance update - All jobs requeued (Cannon and FASSE)
  • Resolved
    This incident has been resolved.
  • Monitoring

    Informational Notice

    The Slurm upgrade to 23.11.4 was completed successfully during maintenance. However, a complication with the automation of Slurm's cryptographic keys occurred during the upgrade, causing nodes to lose the ability to communicate with the Slurm master. The Slurm master therefore viewed those nodes as down and requeued their jobs.

    All jobs on Cannon and FASSE were requeued.

    This is deeply regrettable, but the chain of events that caused it could not have been foreseen.

    To check the status of your jobs, see the common Slurm commands at:

    https://docs.rc.fas.harvard.edu/kb/convenient-slurm-commands/#Information_on_jobs
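
    For example, these standard Slurm commands show a job's state (a minimal sketch; <jobid> is a placeholder for your own job ID):

        squeue -u $USER            # list your pending and running jobs
        sacct -j <jobid>           # accounting record and final state for a specific job
        scontrol show job <jobid>  # full details for a pending or running job

    A requeued job will appear as PENDING in squeue until the scheduler restarts it.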

    FAS Research Computing

    https://docs.rc.fas.harvard.edu/

    rchelp@rc.fas.harvard.edu

Feb 2024

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu (a quick accessibility check is sketched after this notice).

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.
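
A quick way to check whether a share is reachable again (a minimal sketch; fs-[labname] stands in for your lab's actual share name):

    ls /net/fs-[labname]     # should list the share's contents without hanging
    df -h /net/fs-[labname]  # should show the mount point and its usage

If either command hangs or returns an error, the share is likely still impacted; contact rchelp@rc.fas.harvard.edu.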
