FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down for details on incidents and maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu




SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Tape (Tier 3) - Operational

Holylabs - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holystore01 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

HolyLFS06 (Tier 0) - Operational

Holyoke Tier 2 NFS (new) - Operational

Holyoke Specialty Storage - Operational

holECS - Operational

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Jul 2022

NESE tape (Tier 3) hardware maintenance/install
  • Completed
    July 21, 2022 at 10:00 PM

    Maintenance has completed successfully.

  • In progress
    July 21, 2022 at 10:00 AM

    Maintenance is now in progress.

  • Planned
    July 21, 2022 at 10:00 AM

    Our tier 3 tape system is operated as part of NESE (the NorthEast Storage Exchange).

    NESE has informed us that they will be installing additional tape drives in the system on July 21st, which will require a full-day downtime. Tier 3 tape archives will not be available to our users on that date.

holylfs02 performance issues
  • Resolved

    The irreparable volume on holylfs02 is isolated to two labs, and both have been informed of next steps. This issue does not affect other areas of holylfs02, so we are closing this incident.

  • Update

    Recently we noticed an uptick in bad blocks on a RAID 6 disk volume that is part of the filesystem. Generally speaking, the operating system maps these out so that no data is written to them. During this period we also had to replace a number of drives due to failures; because they belong to a RAID 6 multi-disk set with dual parity, they rebuild with minimal impact on performance.

    What we believe happened is that, during the rebuild process, bad data was copied to the replacement disks and the filesystem became corrupted (a simplified sketch of how a rebuild can propagate corruption appears after this incident's timeline). A staff member ran a read-only, non-destructive repair on the volume in question and noted quite a few errors.

    So far, we have:
    1) contacted the vendor, who gave us a command to clear additional bad blocks;
    2) NOT run the actual repair command to "fix" the filesystem, which would delete data;
    3) contacted the vendor to see if they have any partners that might be able to assist.

    We are planning to meet with one of those partners early next week, but are not confident they will have a solution. At this point (and depending on how that meeting goes), the next step would be to run the repair, note the extent of loss, and attempt to get the volume remounted.

    Updates to follow when we have more information.

    Note that these shelves are covered under warranty support for another year. We see no need at present to replace the 3 PB allocation with new hardware; that replacement is planned for Q3-Q4 FY23.

  • Identified

    Users may experience issues connecting to holylfs02, or slow Samba performance.

    This is due to a hardware issue with holylfs02, and we have opened a ticket with the vendor. Updates to follow.

    No ETA at this time.
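
    Technical note: the following minimal Python sketch (hypothetical, not FASRC tooling) illustrates the rebuild-corruption scenario described in the update above. It uses a single-parity XOR rebuild for brevity; RAID 6 keeps two independent parity blocks so it survives two simultaneous disk failures, but a rebuild still trusts whatever bits the surviving disks return, which is how silent corruption can be copied onto a replacement disk.

    from functools import reduce

    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks (the parity computation)."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # Three data disks plus one parity disk (single-parity stripe, for brevity).
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Disk 2 fails and is replaced: rebuild its contents from the survivors.
    assert xor_blocks([data[0], data[1], parity]) == b"CCCC"  # clean rebuild

    # Suppose disk 1 had silently developed bad blocks before the rebuild.
    data[1] = b"B?BB"
    rebuilt = xor_blocks([data[0], data[1], parity])
    print(rebuilt != b"CCCC")  # True: the corruption propagates to the new disk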

Jun 2022

May 2022

Slurm security patch causing node unavailability
  • Resolved

    The scheduler and node states appear to be stable. Thank you for your patience and understanding.

    Please note that the intermittent deadlock issue is still not resolved, but we are actively monitoring it and intervening as necessary until we receive a solution.

  • Monitoring

    The patch has been deployed and the scheduler restarted. Paused jobs are resuming.

    Any jobs that did not start or were stuck may have been flushed, so please check any pending jobs you had (a sketch of one way to do this appears after this incident's timeline).

    Related doc: https://docs.rc.fas.harvard.edu/kb/running-jobs/.

  • Identified

    We tested the patch on our test cluster before release and are now deploying it to the production cluster. Thanks for your patience.

    UPDATE: jobs are suspended, the scheduler is down for patching, and nodes are updating.

  • Investigating

    The Slurm emergency security patch introduced a bug that causes many of our nodes to be marked 'not responding'. The vendor has already identified the issue and issued another patch.

    We are deploying this new patch after testing it. Jobs will be paused, and the scheduler and cluster will be unavailable while the patch is deployed. Watch here for updates.
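
    Technical note: one way to list pending jobs after the restart, as mentioned in the Monitoring update above. This is a hypothetical convenience sketch assuming a standard Slurm installation with squeue on the PATH; it is not an FASRC-specific tool. Running `squeue -u $USER -t PENDING` directly in a shell gives the same listing.

    import getpass
    import subprocess

    # Show this user's PENDING jobs: job id, name, state, and pending reason.
    user = getpass.getuser()
    out = subprocess.run(
        ["squeue", "-u", user, "-t", "PENDING", "-o", "%i %j %T %r"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)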
