Holyoke Specialty Storage experiencing degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

Cannon Cluster

Operational

SLURM Scheduler - Cannon

Operational

Cannon Compute Cluster (Holyoke)

Operational

Boston Compute Nodes

Operational

GPU nodes (Holyoke)

Operational

SEAS compute partition

Operational

FASSE Cluster

Operational

SLURM Scheduler - FASSE

Operational

FASSE Compute Cluster (Holyoke)

Operational

Kempner Cluster

Operational

Kempner Cluster CPU

Operational

Kempner Cluster GPU

Operational

Login Nodes

Operational

Login Nodes - Boston

Operational

Login Nodes - Holyoke

Operational

FASSE login nodes

Operational

VDI/OpenOnDemand

Operational

Cannon VDI (Open OnDemand)

Operational

FASSE VDI (Open OnDemand)

Operational

Storage

Degraded performance

Holyscratch01 (Global Scratch)

Operational

Home Directory Storage - Boston

Operational

HolyLFS03 (Tier 0)

Operational

HolyLFS04 (Tier 0)

Operational

HolyLFS05 (Tier 0)

Operational

Holystore01 (Tier 0)

Operational

Holylabs

Operational

BosLFS02 (Tier 0)

Operational

Isilon Storage Boston (Tier 1)

Operational

Isilon Storage Holyoke (Tier 1)

Operational

CEPH Storage Boston (Tier 2)

Operational

Tape - (Tier 3)

Operational

Boston Specialty Storage

Operational

Holyoke Specialty Storage

Degraded performance

Samba Cluster

Operational

Globus Data Transfer

Operational

bosECS

Operational

holECS

Operational

Notice history

Jan 2024

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Dec 2023

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Nov 2023

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Partial Cannon outage
  • Resolved

    Cooling and power have been restored to the affected racks. The compute nodes have been resumed in Slurm and are now accepting jobs again.

    This incident has been resolved.

  • Identified

    We have identified the partitions impacted by the loss of cooling to the holy7c[02-12] compute nodes. Some of these partitions are fully down; others are partially down.

    blackhole
    blackholepriority
    davies
    desai
    hucecascade
    hucecascadepriority
    huttenhower
    janson
    jansoncascade
    joonholee
    lukin
    seascompute
    shared
    tambe
    test
    vishwanath
    whipple

    Please submit to other partitions in order to run jobs.

    The spart command will show you all partitions you have access to, and our Running Jobs page provides a list of publicly available partitions for all cluster users. Please see our docs page for other helpful Slurm commands.
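    For reference, a minimal sketch of checking partitions and resubmitting from a login node (this assumes the standard Slurm client tools and the spart utility are on your PATH; the partition and script names below are placeholders):

        spart                                # summarize the partitions your account can submit to
        sinfo -p shared                      # check node states in a specific partition
        sbatch -p <partition> my_job.sbatch  # resubmit a job script to an unaffected partition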

    The Holyoke MGHPCC data center is working to restore cooling, and FASRC staff are onsite to assist. No ETA.

  • Investigating

    Compute nodes in holy7c[02-12] have experienced a power loss and are currently down. GPUs are not impacted at this time.

    The 'shared' partition is significantly impacted. Other public partitions and lab-owned partitions may be down or running at reduced capacity. Jobs are still being accepted/running, but may need to wait longer in the queue due to fewer resources being available.

    We are in contact with the Holyoke MGHPCC data center to investigate further. Updates to come. No ETA at this time.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.

Ceph instability - Affects Boston VMs (Virtual Machines) and Tier2 Ceph shares
  • Resolved

    The Ceph instability has been resolved. Ceph Tier2 shares, VDI, and VMs should be back to their normal state.

    If your VM, /net/fs-[labname] share, or VDI session is still impacted, please contact rchelp@rc.fas.harvard.edu

  • Identified

    The infrastructure behind Tier2 Ceph shares and VMs is unstable.
    This also affects VDI/OOD, which relies on virtual machines.

    /net/fs-[labname] shares, new OOD/VDI sessions, and VMs are affected and may be inaccessible until this is resolved.

    Thanks for your patience.
