FAS Research Computing - Status Page

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the status bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, use the light/dark mode toggle at the bottom of this page if the background is dark and the colors appear muted.

Rolling cluster OS upgrades July 7 - 10
Scheduled for July 07, 2025 at 1:00 PM – July 10, 2025 at 9:01 PM
  • Update – In progress
    July 07, 2025 at 1:00 PM

    UPDATE 7/7/25, 6 PM: FASSE is operational.

    Please be aware that FASSE jobs cannot be launched at this time due to the upgrades.
    We will return all FASSE nodes to normal service as soon as possible.

    https://www.rc.fas.harvard.edu/blog/2025-compute-os-upgrade/

  • In progress
    July 07, 2025 at 1:00 PM

    Cannon rolling upgrades are in progress. Not all nodes are available.

    https://www.rc.fas.harvard.edu/blog/2025-compute-os-upgrade/

  • Planned
    July 07, 2025 at 1:00 PM

    Cluster OS upgrades - July 7 - 10

    • Audience: All cluster users

    • Impact: Over 4 days, July 7 through 10, we will upgrade the OS on 25% of the cluster each day.
      During that time, total cluster capacity will be reduced by roughly 1/4 each day.
      This requires draining each subset of nodes ahead of time (see the sketch at the end of this
      notice for one way to check which nodes are drained).

    Work begins during the July 7 maintenance window (login nodes will be upgraded at that time) and continues through July 10.

    Additional details and a breakdown of each phase: 2025 Compute OS Upgrade
    (https://www.rc.fas.harvard.edu/blog/2025-compute-os-upgrade/)
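
    Because nodes will be drained in phases, it can help to check which nodes are currently out of
    service and why your own jobs are pending. The following is a minimal, unofficial sketch (not an
    FASRC-provided script) that wraps the standard Slurm commands sinfo and squeue; it assumes
    Python 3 and that you run it on a Cannon or FASSE login node where Slurm is on the PATH.

        #!/usr/bin/env python3
        """Unofficial sketch: summarize drained nodes and the reasons your
        pending jobs are waiting during the rolling OS upgrade."""

        import getpass
        import subprocess

        def run(cmd):
            """Run a Slurm command and return its non-empty stdout lines."""
            try:
                out = subprocess.run(cmd, capture_output=True, text=True, check=True)
                return [line for line in out.stdout.splitlines() if line.strip()]
            except (OSError, subprocess.CalledProcessError):
                return []

        # Nodes taken out of service (drained/draining), with the admin-supplied reason.
        drained = run(["sinfo", "-h", "-N", "-t", "drain,draining,drained",
                       "-o", "%N %P %T %E"])
        print(f"{len(drained)} node entries drained or draining:")
        for line in drained:
            print(" ", line)

        # Why the current user's pending jobs are not starting (for example,
        # required nodes unavailable or held by a maintenance reservation).
        pending = run(["squeue", "-h", "-u", getpass.getuser(), "-t", "PD",
                       "-o", "%i %P %j %T %R"])
        print(f"\n{len(pending)} of your jobs are pending:")
        for line in pending:
            print(" ", line)

    The same information is available directly from "sinfo -R" and "squeue -u $USER" if you prefer
    the raw commands.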

Under maintenance

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Under maintenance

Boston Compute Nodes - Under maintenance

GPU nodes (Holyoke) - Under maintenance

seas_compute - Under maintenance

Operational

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Under maintenance

Kempner Cluster CPU - Under maintenance

Kempner Cluster GPU - Under maintenance

Operational

Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational

Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Operational

Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape - (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Recent notices
