FAS Research Computing - Status Page

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
Documentation: https://docs.rc.fas.harvard.edu | Account Portal: https://portal.rc.fas.harvard.edu
Email: rchelp@rc.fas.harvard.edu | Support Hours



FASRC monthly maintenance Monday February 2nd, 2026 9am-1pm
Scheduled for February 02, 2026 at 2:00 PM – 6:00 PM UTC (about 4 hours)
  • Planned
    February 02, 2026 at 2:00 PM UTC

    Monthly maintenance will take place on Monday February 2nd, 2026. Our maintenance tasks should be completed between 9am and 1pm ET.


    MAINTENANCE TASKS

    Will the Cannon cluster be paused during this maintenance? YES
    Will the FASSE cluster be paused during this maintenance? YES

    • MaxTime change

      • Audience: Cluster users

      • Impact: To improve scheduling efficiency and stability, we will set a MaxTime of 3 days on all partitions that currently have MaxTime set to UNLIMITED. The unrestricted partition will be set to 365 days. Partitions that already have a MaxTime set will retain their current setting. Partition owners who wish to set a different MaxTime for their partition should contact FASRC. Note that we do not guarantee uptime, so users should employ checkpointing to save state in case of node failure (see the sketch after this task list).

    • Slurm upgrade to 25.11.2

      • Audience: All cluster users

      • Impact: Jobs will be paused during maintenance

    • OOD node reboots

      • Audience: All Open OnDemand users

      • Impact: OOD nodes will reboot during the maintenance window

    • Login node reboots

      • Audience: All login node users

      • Impact: Login nodes will reboot during the maintenance window
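
    As an illustration of the MaxTime change above, the sketch below shows how to query the limits in effect and how a batch script can request a warning signal for checkpointing. This is a minimal sketch, not an FASRC-endorsed recipe: "example_partition", "my_app", and the ckpt/ path are placeholders, and the right checkpoint mechanism depends on your application.

      # Query partition time limits (the TIMELIMIT column reflects MaxTime)
      sinfo -o "%P %l"

      # Inspect a single partition in detail (replace example_partition)
      scontrol show partition example_partition | grep -i MaxTime

      # Confirm the scheduler version after the upgrade
      sinfo --version

    A minimal checkpoint-aware job script might look like this:

      #!/bin/bash
      # checkpoint_job.sh -- request a wall time under the new 3-day cap and ask
      # Slurm to signal the batch shell before the limit so the job can save state.
      #SBATCH --partition=example_partition   # placeholder partition name
      #SBATCH --time=2-00:00:00               # 2 days, under the new 3-day MaxTime
      #SBATCH --signal=B:USR1@300             # SIGUSR1 to the batch shell 300s before the limit

      mkdir -p ckpt

      # Write a checkpoint marker when the warning signal arrives
      trap 'echo "SIGUSR1 received, checkpointing"; touch ckpt/checkpoint.done' USR1

      # Run the application in the background so the trap can fire in this shell
      srun ./my_app &
      wait

    Submit with "sbatch checkpoint_job.sh". The B: prefix delivers the signal only to the batch shell, which is why the application runs in the background followed by a wait.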

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/


SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape - (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Recent notices

No notices reported for the past 7 days
