FAS Research Computing - A100 GPU issue – Incident details
A100 GPU issue
Resolved
Operational
Started over 1 year ago; lasted about 1 month
Affected
Cannon Cluster: Operational from 3:00 PM to 5:51 PM
GPU nodes (Holyoke): Operational from 3:00 PM to 5:51 PM
Updates
Resolved
Firmware updates have resolved this issue.
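If you want to confirm which driver and GPU firmware (VBIOS) versions a node is reporting, a generic nvidia-smi query (not FASRC-specific) is:

    nvidia-smi --query-gpu=name,driver_version,vbios_version --format=csv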
Update
The prescribed driver update did not fix this issue.
We are working with NVIDIA to find a fix. As a stopgap, nodes that become stuck or are flagged will be marked in Slurm for reboot and will be rebooted once they are empty of jobs.
If an A100 GPU host owned by your lab is stuck in a bad state, please let us know and we will mark it for reboot.
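For reference, marking a node for reboot once it drains is typically done with Slurm's scontrol; this is an administrative action handled by FASRC staff, and the node name below is a placeholder:

    # Drain the node (no new jobs) and reboot it as soon as running jobs finish
    scontrol reboot ASAP nextstate=RESUME reason="A100 GPU stuck state" <nodename>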
Identified
A100s are open but still experiencing some instability. Infrequently, you may hit the issue we noted earlier.
We are continuing to work on a solution.
Investigating
An NVIDIA bug may be causing failures on A100 GPUs: the nvidia-smi command runs slowly, or outputs "ERR!" or "No GPUs are found".
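If you want to check whether the GPU node your job landed on is affected, a quick sanity check from within the job is shown below; timeout is used so a hung nvidia-smi does not stall a script (symptoms quoted from this incident):

    # Run nvidia-smi, but give up after 60 seconds if it hangs
    timeout 60 nvidia-smi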