FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu




SLURM Scheduler - Cannon - Under maintenance

Cannon Compute Cluster (Holyoke) - Under maintenance

Boston Compute Nodes - Under maintenance

GPU nodes (Holyoke) - Under maintenance

seas_compute - Under maintenance


SLURM Scheduler - FASSE - Under maintenance

FASSE Compute Cluster (Holyoke) - Under maintenance


Kempner Cluster CPU - Under maintenance

Kempner Cluster GPU - Under maintenance


Login Nodes - Boston - Under maintenance

Login Nodes - Holyoke - Under maintenance

FASSE login nodes - Under maintenance


Cannon Open OnDemand/VDI - Under maintenance

FASSE Open OnDemand/VDI - Under maintenance


Netscratch (Global Scratch) - Under maintenance

Home Directory Storage - Boston - Operational

Tape (Tier 3) - Operational

Holylabs - Under maintenance

Isilon Storage Holyoke (Tier 1) - Under maintenance

Holystore01 (Tier 0) - Under maintenance

HolyLFS04 (Tier 0) - Under maintenance

HolyLFS05 (Tier 0) - Under maintenance

HolyLFS06 (Tier 0) - Under maintenance

Holyoke Tier 2 NFS (new) - Under maintenance

Holyoke Specialty Storage - Under maintenance

holECS - Under maintenance

Isilon Storage Boston (Tier 1) - Operational

BosLFS02 (Tier 0) - Operational

Boston Tier 2 NFS (new) - Operational

CEPH Storage Boston (Tier 2) - Operational

Boston Specialty Storage - Operational

bosECS - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Aug 2022

NESE tape (Tier 3) upgrades
  • Completed
    August 26, 2022 at 8:43 PM

    Maintenance has completed successfully.

  • In progress
    August 22, 2022 at 10:00 AM

    Maintenance is now in progress

  • Planned
    August 22, 2022 at 10:00 AM

    Our tier 3 tape system is part of and run by NESE (the NorthEast Storage Exchange).

    We have been informed that they will be performing a significant upgrade of their Spectrum Scale archive system starting August 15th. This is a multi-day upgrade and will take approximately 3 days (potential for longer). Tier 3 tape allocations will be unavailable during this upgrade.

    NESE has informed us that this maintenance will be deferred to the following week (8/22/22).

Jul 2022

NESE tape (Tier 3) hardware maintenance/install
  • Completed
    July 21, 2022 at 10:00 PM

    Maintenance has completed successfully.

  • In progress
    July 21, 2022 at 10:00 AM

    Maintenance is now in progress

  • Planned
    July 21, 2022 at 10:00 AM

    Our tier 3 tape system is part of and run by NESE (the NorthEast Storage Exchange).

    We have been informed that they will be installing additional tape drives to the system on July 21st and this will require a whole day downtime to accomplish. Tier 3 tape archives will not be available to our users on that date.

holylfs02 performance issues
  • Resolved

    The irreparable volume on holylfs02 affects only two labs, and they have been informed of the next steps. This issue does not affect other areas of holylfs02, so we are closing this incident.

  • Update

    Recently we noticed an uptick in bad blocks on a RAID 6 disk volume that is one of the volumes making up the filesystem. Generally speaking, the operating system will vector these out so that no data is written there. During this period, we also had to replace a number of drives due to failures; these are part of a RAID 6 multi-disk set with dual parity and will rebuild with minimal impact on performance.

    What we believe happened is that, during the rebuild process, bad data was copied to the replacement disks and the filesystem became corrupted. One of our staff ran a read-only, non-destructive repair on the volume in question and noted quite a few errors.

    So far, we have:
    1) contacted the vendor, who gave us a command to clear additional bad blocks;
    2) NOT run the actual repair command to "fix" the filesystem (which would delete data);
    3) contacted the vendor to see if they have any partners that might be able to assist.

    We are planning to meet with one of those partners early next week, but are not confident they will have a solution. At this point (and depending on how that meeting goes), the next step would be to run the repair, note the extent of the loss, and attempt to get the volume remounted.

    Updates to follow when we have more information.

    Note that these shelves are covered under warranty support for another year. We see no need at present to replace the 3 PB allocation with new hardware (that replacement is planned for Q3-Q4 FY23).

  • Identified

    Users may experience issues connecting to holylfs02, or slow Samba performance.

    This is due to a hardware issue with holylfs02, and we have opened a ticket with the vendor. Updates to follow.

    No ETA.
