FAS Research Computing - Status Page

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
Documentation: https://docs.rc.fas.harvard.edu | Account Portal: https://portal.rc.fas.harvard.edu
Email: rchelp@rc.fas.harvard.edu | Support Hours


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

Monthly Maintenance and MGHPCC Power Work - Nov. 3, 2025 6am-6pm
Scheduled for November 03, 2025, 6:00 AM – 6:00 PM ET (about 12 hours)
  • Planned
    November 03, 2025 at 6:00 AM ET

    Monthly maintenance will take place on November 3rd. Additionally, MGHPCC will be performing power upgrades on the even side of Row 8A, where much of our compute resides. A further upgrade will take place on Dec. 8th on the odd side.

    A list of the affected partitions is provided at the bottom of this notice. The nodes in those partitions will be drained prior to the work and powered down; once the work is completed, they will be returned to service. The current estimate is a 12-hour window, and we will adjust as we know more.

    MAINTENANCE TASKS
    Will the Cannon cluster be paused during this maintenance? PARTIAL OUTAGE/YES
    Will the FASSE cluster be paused during this maintenance? PARTIAL OUTAGE/YES

    • Power work on Row 8A Even

      • Audience: Users of the partitions listed below

      • Impact: These nodes and partitions will be fully or partially down all day

    • Slurm upgrade to 25.05.4

      • Audience: All cluster users

      • Impact: Jobs will be paused during maintenance

    • Block repo.anaconda.com cluster-wide

      • Audience: Anyone attempting to use repo.anaconda.com

      • Impact: This change should not affect your Python workflow on the cluster. If it does, consider installing Python packages from the open-source conda-forge channel via the Miniforge distribution, following our instructions at https://docs.rc.fas.harvard.edu/kb/python-package-installation/. A short channel-check sketch appears after this task list.

    • Change Slurm User to Local User

      • Audience: All cluster users

      • Impact: Behind-the-scenes change; no impact to users

    • Login node reboots (morning)

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: All login nodes will be rebooted during this maintenance window

    • Netscratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window. A sketch for spotting your own affected files follows just below.
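
    If you want a rough sense of which of your netscratch files fall outside the 90-day window, the short Python sketch below may help. It is not an official FASRC tool: it assumes the retention policy keys on file modification time and that you pass your own netscratch directory as the first argument, so treat it as a guide and consult the policy page above for the exact criteria.

      import os
      import sys
      import time

      RETENTION_DAYS = 90
      cutoff = time.time() - RETENTION_DAYS * 24 * 60 * 60

      # Directory to scan; pass your own netscratch path as the first argument.
      scratch_dir = sys.argv[1] if len(sys.argv) > 1 else "."

      # Walk the tree and print files whose modification time predates the cutoff.
      for root, _dirs, files in os.walk(scratch_dir):
          for name in files:
              path = os.path.join(root, name)
              try:
                  mtime = os.stat(path).st_mtime
              except OSError:
                  continue  # file vanished or is unreadable; skip it
              if mtime < cutoff:
                  print(path)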

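    To check ahead of the repo.anaconda.com block whether your conda setup still relies on the blocked "defaults" channel, the minimal Python sketch below inspects your configured channels. It is an unofficial check and assumes a conda binary (for example, from a Miniforge install) is on your PATH.

      import subprocess

      # Ask conda which channels it is configured to use.
      result = subprocess.run(
          ["conda", "config", "--show", "channels"],
          capture_output=True, text=True, check=True,
      )
      channels = result.stdout

      if "conda-forge" in channels:
          print("conda-forge is configured; package installs should keep working.")
      if "defaults" in channels:
          # The "defaults" channel resolves against repo.anaconda.com, which will be blocked.
          print("The 'defaults' channel is still configured and may stop resolving.")
          print("See https://docs.rc.fas.harvard.edu/kb/python-package-installation/")
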
    AFFECTED PARTITIONS
    Nov. 3, 2025 - All Day Power Work
    Partial or Full Outage Applies to:

    arguelles_delgado_h100

    bigmem

    dvorkin

    eddy

    enos

    gpu

    gpu_h200

    gpu_requeue

    hsph

    hsph_gpu

    intermediate

    itc_cluster

    joonholee

    jshapiro

    kempner_dev

    kempner_eng

    kempner_requeue

    mweber_compute

    mweber_gpu

    olveczky_sapphire

    sapphire

    seas_compute

    seas_gpu

    serial_requeue

    yao

    yao_gpu

    yao_priority

    test

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational

FASSE login nodes - Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape - (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Recent notices

No notices reported for the past 7 days
