Holyoke Specialty Storage experiencing degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
For higher contrast, switch to light mode at the bottom of this page if the background is dark and colors are muted.

Cannon Cluster

Operational

SLURM Scheduler - Cannon

Operational

Cannon Compute Cluster (Holyoke)

Operational

Boston Compute Nodes

Operational

GPU nodes (Holyoke)

Operational

SEAS compute partition

Operational

FASSE Cluster

Operational

SLURM Scheduler - FASSE

Operational

FASSE Compute Cluster (Holyoke)

Operational

Kempner Cluster

Operational

Kempner Cluster CPU

Operational

Kempner Cluster GPU

Operational

Login Nodes

Operational

Login Nodes - Boston

Operational

Login Nodes - Holyoke

Operational

FASSE login nodes

Operational

VDI/OpenOnDemand

Operational

Cannon VDI (Open OnDemand)

Operational

FASSE VDI (Open OnDemand)

Operational

Storage

Degraded performance

Holyscratch01 (Global Scratch)

Operational

Home Directory Storage - Boston

Operational

HolyLFS03 (Tier 0)

Operational

HolyLFS04 (Tier 0)

Operational

HolyLFS05 (Tier 0)

Operational

Holystore01 (Tier 0)

Operational

Holylabs

Operational

BosLFS02 (Tier 0)

Operational

Isilon Storage Boston (Tier 1)

Operational

Isilon Storage Holyoke (Tier 1)

Operational

CEPH Storage Boston (Tier 2)

Operational

Tape - (Tier 3)

Operational

Boston Specialty Storage

Operational

Holyoke Specialty Storage

Degraded performance

Samba Cluster

Operational

Globus Data Transfer

Operational

bosECS

Operational

holECS

Operational

Notice history

Oct 2022

FASRC Monthly maintenance Oct 3rd, 2022 7am-11am + Important Notices
  • Completed
    October 03, 2022 at 3:00 PM

    Maintenance has completed successfully

  • In progress
    October 03, 2022 at 11:00 AM

    Maintenance is now in progress

  • Planned
    October 03, 2022 at 11:00 AM

    FASRC regular monthly maintenance will occur on Monday, October 3rd, 2022 from 7am-11am. See https://www.rc.fas.harvard.edu/monthly-maintenance. IMPORTANT: There are several important notices below.

    NETWORK MAINTENANCE TUES 9/27 6pm-8pm

    Network maintenance on the MGHPCC/Holyoke fibre links and VPN will take place Tuesday, Sept. 27th from 6pm until 8pm.

    • VPN updates will occur from 6pm-7pm. VPN connections may drop during this time.
    • Maintenance on the fibre links will take place from 7pm-8pm. Brief disconnects may occur, especially with login and access nodes.
    • See maintenance event: https://status.rc.fas.harvard.edu/cl898lt5b25823hkojo3eslk1x

    NOTICES

    GPU_TEST and REMOTEVIZ PARTITIONS

    Due to failed nodes, the gpu_test partition is down to 2 nodes at the moment. We are working with the vendor to revive these nodes, but there is no ETA at this time. We are also investigating the root cause of the multiple failures.

    • We had already planned to modify the QoS (job limitations) on the gpu_test partition due to misuse of the partition, but the failures forced us to implement this yesterday. Going forward, gpu_test is limited to 1 job per user. That job is limited to a maximum of 16 cores, 90GB memory, and 1 GPU (a sample job request is sketched below).

    • Please note: gpu_test should not be used to avoid the GPU queues and scheduling. See Running Jobs (https://docs.rc.fas.harvard.edu/kb/running-jobs/#Slurmpartitions) for a list of available partitions, including the gpu partition(s).

    • The remoteviz node/partition is also down for the same reason. The remoteviz partition is only one node; its QoS remains the same.

    Updates on our status page:  https://status.rc.fas.harvard.edu/cl8a94kcf17664hvoj8oksxanx
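
    For reference, the sketch below shows a batch script that stays within the new gpu_test QoS limits (1 job per user; at most 16 cores, 90GB memory, and 1 GPU). The wall time and script name are placeholders, not an FASRC-provided example; substitute your own short test workload.

      #!/bin/bash
      #SBATCH -p gpu_test        # short-test partition; use the regular gpu partition(s) for production work
      #SBATCH -c 16              # at or below the 16-core QoS cap
      #SBATCH --mem=90G          # at or below the 90GB memory cap
      #SBATCH --gres=gpu:1       # at or below the 1-GPU cap
      #SBATCH -t 0-01:00         # short wall time, appropriate for a quick test

      ./my_gpu_test.sh           # placeholder for your own test command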

    GLOBUS PERSONAL CLIENT - UPDATE BY DEC 17

    • If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater by December 17th, 2022. You will not be able to use version 3.1 or below after that date. https://docs.globus.org/ca-update-2022/#globusconnectpersonal

    TRAINING

    • New training sessions, including monthly new user training, are available. You can find a list and links to sign up here: https://www.rc.fas.harvard.edu/upcoming-training/

    REGULAR MAINTENANCE

    • Login node and VDI node reboots
      -- Audience: Anyone logged into a login node or VDI/OOD node
      -- Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting
       
    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      -- Audience: Cluster users
      -- Impact: Files older than 90 days will be removed.
      -- Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods (see the example command below for spotting files at risk).
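
    As a rough guide, a command like the one below lists files in a scratch directory that are older than 90 days and therefore eligible for the retention purge. The path is a placeholder based on an assumed lab/user layout under Holyscratch01; adjust it to your own scratch location.

      find /n/holyscratch01/<lab_name>/<user_name> -type f -mtime +90 -print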

    Thanks!
    FAS Research Computing
    https://www.rc.fas.harvard.edu
    https://status.rc.fas.harvard.edu

Sep 2022

NERC API maintenance Sept 21, 2022, 9am-5pm
  • Completed
    September 21, 2022 at 9:00 PM

    Maintenance has completed successfully

  • In progress
    September 21, 2022 at 1:00 PM

    Maintenance is now in progress

  • Planned
    September 21, 2022 at 1:00 PM

    Our service partner New England Research Cloud (NERC) has a planned maintenance event on Wednesday 9/21/22 from 9am to 5pm.

    If you are a NERC user, please see: https://nerc.instatus.com/cl6qt8sbr2993aemv5algl4ih Updates will be shown on their status page.

    You can also subscribe to their status page by following that link, clicking Get Updates, and choosing your notification method.

gpu_test queue slow - note on use
  • Resolved

    This issue has become a matter of hardware supply and availability. As such, we are changing this issue to Maintenance, as there is nothing further we can do until the vendor replaces the hardware. Please continue to use other GPU resources. We will notify the community once we have new hardware to replenish the gpu_test queue.

    Please see Running Jobs for other available resources.

  • Identified
    Update

    Difficulty obtaining replacement mainboards has stalled this issue. More information when we have it.

  • Identified
    Update

    Correction to first part of previous update:

    Currently only 2 of the nodes in gpu_test are working, and the single remoteviz node is down. These will require help from the vendor to revive. We are also investigating with them the root cause of the multiple node failures. No ETA at this time.

  • Identified
    Update

    Currently only 2 of the nodes in gpu_test are working, and the single remoteviz node is down. These will require help from the vendor to revive. We are also investigating with them the root cause of the multiple node failures. No ETA at this time.

    As a result of this, we are pre-emptively modifying the QoS for gpu_test now. We had planned to make a change on Thursday after announcing it in tomorrow's maintenance email, but this forces our hand.

    QoS changes for gpu_test, effective now: limited to 1 job per user. That job is limited to a maximum of 16 cores and 90GB of memory.

    See Running Jobs for a list of available partitions, including the gpu partition.

  • Identified

    Please note that this issue also affects the remoteviz partition.

    Updates as we have them.

  • Investigating

    Several hosts in the gpu_test queue have become unresponsive and will require a physical visit to reset. Staff are en route to the data center.

    On a related note, please do not use gpu_test as a workaround for the regular gpu queues.
    This is unfair to other users and ties up the gpu_test partition. This partition is not for general job use.

    We will be addressing this issue later this week by reducing the number of jobs allowed per user. This will be noted in tomorrow's maintenance email and then implemented Thursday.

Aug 2022

NESE tape (Tier 3) upgrades
  • Completed
    August 26, 2022 at 8:43 PM

    Maintenance has completed successfully.

  • In progress
    August 22, 2022 at 10:00 AM

    Maintenance is now in progress

  • Planned
    August 22, 2022 at 10:00 AM

    Our Tier 3 tape system is part of, and run by, NESE (the NorthEast Storage Exchange).

    We have been informed that they will be performing a significant upgrade of their Spectrum Scale archive system starting August 15th. This is a multi-day upgrade and will take approximately 3 days (potentially longer). Tier 3 tape allocations will be unavailable during this upgrade.

    NESE has informed us that this maintenance will be deferred to next week (the week of 8/22/22).
