FAS Research Computing - Status Page

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



2025 MGHPCC power downtime June 2-4, 2025
Scheduled for June 02, 2025 at 1:00 PM – June 05, 2025 at 1:00 PM (3 days)
  • Planned
    June 02, 2025 at 1:00 PM

    The yearly power downtime at our Holyoke data center, MGHPCC, has been scheduled. 
    This year's power downtime will take place on Tuesday June 3, 2025. 

    This will require FASRC to begin shutting down our systems at 9AM on Monday, June 2nd.
    We have worked to reduce the total outage time this year.
    We will begin power-up on Wednesday June 4th with an expected return to full service by 9AM Thursday June 5th.

    • Monday June 2nd -  Power-down begins at 9AM

    • Tuesday June 3rd - Power out at MGHPCC

    • Wednesday June 4th - Maintenance tasks and then power-up begins

    • Thursday June 5th - Expected return to full service by 9AM

    Maintenance:
    During this downtime, Holylabs (/n/holylabs) will move to new hardware.
    Starfish, Coldfront, and the Portal will be unavailable during the downtime.

    For more details including a graphical timeline, please see: https://www.rc.fas.harvard.edu/events/2025-mghpcc-power-downtime/

    Updates will be posted here on our status page: https://status.rc.fas.harvard.edu/
    Note that you can subscribe to receive updates as they happen. On the status page, click Get Updates.

    Notices and reminders will also be sent to all users via our mailing lists.


SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape - (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Recent notices
