Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

Cannon Cluster

Operational

SLURM Scheduler - Cannon

Operational

Cannon Compute Cluster (Holyoke)

Operational

Boston Compute Nodes

Operational

GPU nodes (Holyoke)

Operational

SEAS compute partition

Operational

FASSE Cluster

Operational

SLURM Scheduler - FASSE

Operational

FASSE Compute Cluster (Holyoke)

Operational

Kempner Cluster

Operational

Kempner Cluster CPU

Operational

Kempner Cluster GPU

Operational

Login Nodes

Operational

Login Nodes - Boston

Operational

Login Nodes - Holyoke

Operational

FASSE login nodes

Operational

VDI/OpenOnDemand

Operational

Cannon VDI (Open OnDemand)

Operational

FASSE VDI (Open OnDemand)

Operational

Storage

Operational

Holyscratch01 (Global Scratch)

Operational

Home Directory Storage - Boston

Operational

HolyLFS03 (Tier 0)

Operational

HolyLFS04 (Tier 0)

Operational

HolyLFS05 (Tier 0)

Operational

Holystore01 (Tier 0)

Operational

Holylabs

Operational

BosLFS02 (Tier 0)

Operational

Isilon Storage Boston (Tier 1)

Operational

Isilon Storage Holyoke (Tier 1)

Operational

CEPH Storage Boston (Tier 2)

Operational

Tape - (Tier 3)

Operational

Boston Specialty Storage

Operational

Holyoke Specialty Storage

Operational

Samba Cluster

Operational

Globus Data Transfer

Operational

bosECS

Operational

holECS

Operational

Notice history

Apr 2024

Monthly Maintenance Monday April 1st, 2024 from 7am-11am
  • Completed
    April 01, 2024 at 3:00 PM
    Maintenance has completed successfully
  • In progress
    April 01, 2024 at 11:00 AM
    Maintenance is now in progress
  • Planned
    April 01, 2024 at 11:00 AM

    NOTICES

    • SURVEY: If you have not yet done so, we invite you to fill out our 2024 user survey (approx. 15 minutes) and give us your feedback. The survey is anonymous and asks questions about all of our services, including cluster, storage, and support. The survey will be available until April 12th.

      https://harvard.az1.qualtrics.com/jfe/form/SV_e3AmuOrrmBOHTCu

    • STATUS PAGE: You can subscribe to our status page to receive notifications of any issues and their resolution at https://status.rc.fas.harvard.edu/ (click Get Updates for options).

    MAINTENANCE TASKS

    Cannon cluster will be paused during this maintenance?: NO

    FASSE cluster will be paused during this maintenance?: NO

    Ticket System host move
    -- Audience: All users
    -- Impact: The FASRC ticket system will be unavailable during maintenance while we move it to a new host. Emails sent during this time will be held and should still reach the ticket system once it is back online.

    Login node and Open OnDemand (OOD/VDI) reboots
    -- Audience: Anyone logged into a login node or VDI/OOD node
    -- Impact: Login and VDI/OOD nodes will be rebooted during this maintenance window.

    Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
    -- Audience: Cluster users
    -- Impact: Files older than 90 days will be removed. Please note that retention cleanup can run at any time, not just during the maintenance window.  
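
    For reference, a minimal sketch of how you could spot files at risk before the cleanup runs, assuming scratch is mounted at /n/holyscratch01 and that your lab directory follows a <lab>/<user> layout (check the scratch policy page above for the exact path and retention rules; LAB below is a hypothetical placeholder):

        # Sketch only: list your files not modified in the last 90 days.
        LAB=your_lab_name
        find /n/holyscratch01/$LAB/$USER -type f -mtime +90 -print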

    Thanks,  

    FAS Research Computing
    Dept. Website: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Mar 2024

FASRC maintenance update - All jobs requeued (Cannon and FASSE)
  • Resolved
    This incident has been resolved.
  • Monitoring

    Informational Notice

    The Slurm upgrade to 23.11.4 was completed successfully during maintenance. However, a complication with the automation of Slurm's cryptographic keys occurred during the upgrade, causing nodes to lose the ability to communicate with the Slurm master. The Slurm master therefore viewed those nodes as down and requeued their jobs.

    All jobs on Cannon and FASSE were requeued.

    This is deeply regrettable, but the chain of events that caused it could not have been foreseen.

    To check the status of your jobs, see the common Slurm commands at:

    https://docs.rc.fas.harvard.edu/kb/convenient-slurm-commands/#Information_on_jobs
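
    As a quick reference, a sketch of the standard Slurm commands you might use to see whether your jobs were requeued or are still pending (see the linked page for the full list; the job ID below is a placeholder):

        squeue -u $USER                                  # jobs still pending or running
        sacct -u $USER --starttime=today --format=JobID,JobName,State,Elapsed   # today's job history
        sacct -j <jobid> --format=JobID,State,ExitCode   # one specific job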

    FAS Research Computing

    https://docs.rc.fas.harvard.edu/

    rchelp@rc.fas.harvard.edu

Feb 2024

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.
