Experiencing partially degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



gpu_test queue slow - note on use

Resolved (previously under maintenance)
Started over 1 year ago · Lasted 2 months

Affected

Cannon Cluster
GPU nodes (Holyoke)
Updates
  • Resolved

    This issue has become one of hardware supply and availability. As such, we are re-classifying it as Maintenance, since there is nothing further we can do until the vendor replaces the hardware. Please continue to use other GPU resources. We will notify the community once we have new hardware to replenish the gpu_test queue.

    Please see Running Jobs for other available resources.

  • Identified
    Update

    This issue is stalled due to difficulty obtaining replacement mainboards. We will share more information when we have it.

  • Identified
    Update

    Correction to first part of previous update:

    Currently only 2 of the nodes in gpu_test are working, and the single remoteviz node is down. These will require help from the vendor to revive. We are also investigating the root cause of the multiple node failures with the vendor. No ETA at this time.

  • Identified
    Update

    Currently only 2 of the nodes in gpu_test are working, and the single remoteviz node is down. These will require help from the vendor to revive. We are also investigating the root cause of the multiple node failures with the vendor. No ETA at this time.

    As a result, we are pre-emptively modifying the QoS for gpu_test now. We had planned to make this change on Thursday, after announcing it in tomorrow's maintenance email, but this outage forces our hand.

    QoS changes for gpu_test, effective now: limited to 1 job per user, and that job may use at most 16 cores and 90 GB of memory.

    See Running Jobs for a list of available partitions, including the gpu partition. A sample submission within the new limits is sketched at the end of this notice.

  • Identified

    Please note that this issue also affects the remoteviz partition.

    Updates as we have them.

  • Investigating

    Several hosts in the gpu_test queue have become unresponsive and will require a physical visit to reset. Staff are en route to the data center.

    On a related note, please do not use gpu_test as a workaround for the regular gpu queues.
    This is unfair to other users and ties up the gpu_test partition, which is not intended for general job use.

    We will be addressing this issue later this week by reducing the number of jobs allowed per user. This will be noted in tomorrow's maintenance email and then implemented Thursday.
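
For reference, below is a minimal sketch of a gpu_test submission that fits within the new QoS limits. It assumes standard Slurm sbatch directives; the time limit and the commented-out program line are illustrative placeholders, and the authoritative limits are those stated in the update above.

    #!/bin/bash
    #SBATCH -p gpu_test        # test/debug runs only, not production work
    #SBATCH --gres=gpu:1       # one GPU
    #SBATCH -c 16              # at most 16 cores under the new QoS
    #SBATCH --mem=90G          # at most 90 GB of memory under the new QoS
    #SBATCH -t 0-00:30         # short, illustrative time limit (D-HH:MM)

    # Replace with your own test command, e.g.:
    # ./my_test_program

For production GPU work, submit to the gpu partition (or another appropriate partition listed in Running Jobs) rather than gpu_test.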