Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

Cannon Cluster

Operational

SLURM Scheduler - Cannon

Operational

Cannon Compute Cluster (Holyoke)

Operational

Boston Compute Nodes

Operational

GPU nodes (Holyoke)

Operational

SEAS compute partition

Operational

Academic Cluster Compute

Operational

FASSE Cluster

Operational

SLURM Scheduler - FASSE

Operational

FASSE Compute Cluster (Holyoke)

Operational

Kempner Cluster

Operational

Kempner Cluster CPU

Operational

Kempner Cluster GPU

Operational

Login Nodes

Operational

Login Nodes - Boston

Operational

Login Nodes - Holyoke

Operational

FASSE login nodes

Operational

VDI/OpenOnDemand

Operational

Cannon VDI (Open OnDemand)

Operational

FASSE VDI (Open OnDemand)

Operational

Storage

Operational

Holyscratch01 (Global Scratch)

Operational

Home Directory Storage - Boston

Operational

HolyLFS03 (Tier 0)

Operational

HolyLFS04 (Tier 0)

Operational

HolyLFS05 (Tier 0)

Operational

Holystore01 (Tier 0)

Operational

Holylabs

Operational

BosLFS02 (Tier 0)

Operational

Isilon Storage Boston (Tier 1)

Operational

Isilon Storage Holyoke (Tier 1)

Operational

CEPH Storage Boston (Tier 2)

Operational

Tape - (Tier 3)

Operational

Boston Specialty Storage

Operational

Holyoke Specialty Storage

Operational

Samba Cluster

Operational

Globus Data Transfer

Operational

bosECS

Operational

holECS

Operational

Notice history

Mar 2024

FASRC maintenance update - All jobs requeued (Cannon and FASSE)
  • Resolved
    This incident has been resolved.
  • Monitoring

    Informational Notice

    The Slurm upgrade to 23.11.4 was completed successfully during maintenance. However, a complication with the automation of Slurm's cryptographic keys occurred during the upgrade, causing nodes to lose the ability to communicate with the Slurm master. The Slurm master therefore viewed those nodes as down and requeued their jobs.

    All jobs on Cannon and FASSE were requeued.

    This is deeply regrettable, but the chain of events that caused it could not have been foreseen.

    To check the status of your jobs, see the common Slurm commands at the link below; a short example also follows this notice.

    https://docs.rc.fas.harvard.edu/kb/convenient-slurm-commands/#Information_on_jobs

    FAS Research Computing

    https://docs.rc.fas.harvard.edu/

    rchelp@rc.fas.harvard.edu
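
    A minimal sketch of those commands, run from a Cannon or FASSE login node (the two-day lookback window is an arbitrary example, not an FASRC recommendation):

      squeue -u $USER                                             # jobs still pending or running after the requeue
      sacct -u $USER -S now-2days -o JobID,JobName,State,Elapsed  # recent job history, including any REQUEUED jobs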

Feb 2024

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu (a quick self-check is sketched below).

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.
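
    A quick self-check from a login node, assuming a hypothetical share name "mylab" in place of your group's /net/fs-[labname]:

      ls /net/fs-mylab     # a successful listing means the automounted Ceph share is reachable again
      df -h /net/fs-mylab  # confirms the share is mounted and reports its capacity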

Jan 2024

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.
