FAS Research Computing - Notice history

Cannon Compute Cluster (Holyoke) experiencing degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu




SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Degraded performance

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Holylabs - Operational

HolyLFS06 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

Holystore01 (Tier 0) - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holyoke Tier 2 NFS (new) - Operational

Uptime: 100% (Dec 2024 · 100.0%, Jan 2025 · 100.0%, Feb 2025 · 100.0%)

Holyoke Specialty Storage - Operational

holECS - Operational

BosLFS02 (Tier 0) - Operational

Isilon Storage Boston (Tier 1) - Operational

Boston Specialty Storage - Operational

Boston Tier 2 NFS (new) - Operational

Uptime: 100% (Dec 2024 · 100.0%, Jan 2025 · 100.0%, Feb 2025 · 100.0%)

CEPH Storage Boston (Tier 2) - Operational

bosECS - Operational

Tape (Tier 3) - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Feb 2025

Slurm performance issues - detailed report
  • Resolved

    An emergency patch of the scheduler has resolved the Multiple Partition issue

  • Investigating

    Since mid-January we have been seeing intermittent issues that caused periodic stalls or unresponsiveness in the scheduler. We had hoped that the Slurm upgrade to 24.11.1 would resolve them, given various architecture changes in its communications backend. Unfortunately it did not, and we have since opened an issue with SchedMD (our support vendor for the scheduler). That investigation has uncovered several other scheduler issues which we are working to remediate. Below is a status report on each:

    1. High Agent Load Stall (RESOLVED): This was reported in https://support.schedmd.com/show_bug.cgi?id=21975. The scheduler would stall because it was oversaturated with blocking requests. This turned out to be caused by a new Slurm feature called stepmgr, which we had enabled to handle jobs with many steps. Unfortunately this feature also increased the load on the scheduler when many array-job tasks exited at the same time, which caused the stall. Since few of our users run jobs with many steps, we opted to disable stepmgr cluster-wide, which resolved the High Agent Load issue. Users whose jobs do have many steps can still enable stepmgr for a specific job by adding #SBATCH --stepmgr (https://slurm.schedmd.com/sbatch.html#OPT_stepmgr); a short example is sketched after this list.

    2. Scheduler Thrashing (MONITORING): We discovered this while working on the previous bug and continued to track it in the same bug report: https://support.schedmd.com/show_bug.cgi?id=21975. Under high load the scheduler would enter a thrashing state in which it ignored incoming requests in order to focus exclusively on scheduling jobs. To users this looked like an unresponsive scheduler, since their requests were being deferred in favor of higher-priority traffic. To remediate this we increased the scheduler's thread count and implemented a throttle to pace incoming requests so that the scheduler can respond to all of them without impacting scheduling throughput. This is in place now and appears to have resolved the issue. We are continuing to monitor the scheduler and tune the throttle.

    3. --test-only requeue crash (RESOLVED): During this investigation we also ran into another bug, reported by another site, involving jobs submitted with --test-only that would in theory preempt other jobs (see: https://support.schedmd.com/show_bug.cgi?id=21997). This caused the scheduler to crash. Given the severity of the bug, we applied an emergency patch to the scheduler on Feb 12th to resolve the issue.

    4. Multiple Partition Jobs Labelled with Wrong Partition (IN PROGRESS): This is a new issue, identified on 2/13, affecting jobs submitted to multiple partitions at once (https://support.schedmd.com/show_bug.cgi?id=22076). When such a job is scheduled it may run in one partition but be labelled as belonging to another. This can lead to preemption problems, because jobs may be labelled as being in partitions that cannot be preempted even though they were actually scheduled in partitions that can. This was identified earlier by another site and SchedMD is working on a patch; depending on the timing, FASRC will either emergency patch the scheduler or wait for the formal 24.11.2 release. Note that this issue only affects preemption; the scheduler is working fine otherwise. If you see jobs that you think should be preempted but are not, and they are blocking your work, please let us know and we will investigate (a sketch for checking a job's labelled partition follows this report).
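
    For users whose workflows launch many job steps, the per-job opt-in mentioned in item 1 can go directly in the batch script. Below is a minimal sketch, not a recommended configuration; the job name, partition, resource requests, and the per-step command are hypothetical placeholders.

        #!/bin/bash
        # Hypothetical job name, partition, and resources; adjust for your own work.
        #SBATCH --job-name=many_steps_demo
        #SBATCH --partition=shared
        #SBATCH --ntasks=1
        #SBATCH --time=00:30:00
        # Opt this job back in to per-job step management (now disabled cluster-wide).
        #SBATCH --stepmgr

        # Each srun launches a separate job step; with stepmgr enabled the job
        # manages its own steps rather than loading the central scheduler.
        for i in $(seq 1 100); do
            srun --ntasks=1 ./process_chunk.sh "$i"   # hypothetical per-step command
        done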


    Thank you for your patience as we work through these issues.
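
    Regarding item 4, the sketch below shows one way to check which partition a job is currently labelled with. The job ID 12345678 is a placeholder; both commands are standard Slurm client tools.

        # Job ID, state, labelled partition, and pending/scheduling reason
        squeue -j 12345678 -o "%i %T %P %r"

        # Full job record; the Partition= field shows the current label
        scontrol show job 12345678 | grep -i partition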

Jan 2025

Network issues affecting VPN, portal, and potentially other services
  • Resolved
    The network issues have been resolved.
  • Identified

    Most services have been restored. Some users may still experience VPN connectivity issues or lag.

    Networking expects to have this fully resolved very soon.

  • Investigating

    We are currently investigating this issue.

    We've identified that some users are unable to connect to the VPN.

    OOD/OpenOnDemand access is affected.

    Another symptom is that FASRC websites, including portal.rc.fas.harvard.edu and other internet-facing sites (coldfront, spinal, minilims, etc.), are not accessible.


    SSH to/from compute or login nodes may be affected or laggy.

    Networking is investigating.

Portal is partially unavailable
  • Resolved
    Portal is operating normally.
  • Monitoring

    Portal is online, but requires brief maintenance before approvers can use it.

  • Investigating

    portal.rc.fas.harvard.edu is unavailable. We are currently investigating this issue.

Dec 2024

Cluster Partially Degraded
  • Resolved
    Jobs have cleared overnight and a fix for the high load appears to be working. We will monitor for any recurrence, but all appears well at this time.
  • Investigating

    Low priority jobs are not getting scheduled despite being at the top of the queue. We are currently investigating this incident and have reached out to SchedMD regarding this.

    See https://support.schedmd.com/show_bug.cgi?id=21627 
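
    While this is being investigated, you can check why a specific pending job has not started using standard Slurm commands; the job ID below is a placeholder.

        # Current state and the reason Slurm reports for a pending job (e.g. Priority, Resources)
        squeue -j 12345678 -o "%i %T %r"

        # Breakdown of the factors contributing to the job's scheduling priority
        sprio -j 12345678 -l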

FASRC monthly maintenance - Monday December 2nd, 2024 7am-11am
  • Completed - December 02, 2024 at 4:00 PM
    Maintenance has completed successfully
  • Update (In progress) - December 02, 2024 at 12:52 PM

    Due to an urgent network issue which requires a restart of some network hardware, all jobs will need to be paused.

    Interactive jobs and the ability to write to some storage may be interrupted.

  • In progress - December 02, 2024 at 12:00 PM
    Maintenance is now in progress
  • Planned - December 02, 2024 at 12:00 PM

    FASRC monthly maintenance will occur Monday December 2nd, 2024 from 7am-11am

    IMPORTANT NOTICES

    • holyscratch01 will be set to read-only during this maintenance and will be decommissioned February 1, 2025. Please move any needed scratch data to netscratch and begin using it instead if you have not done so already. The global $SCRATCH variable will be changed to /n/netscratch (see the short migration sketch after this list).

    • FASRC will be switching to the Harvard ServiceNow ticket system on Dec. 2nd. Our email addresses remain the same and no action is required on your part.
      Please do not re-open old/closed tickets after Dec. 2nd and instead create a new ticket.

    • Cannon cluster: serial_requeue and gpu_requeue will be set to allow MPI/multinode jobs. Such jobs need to be able to handle preemption/being requeued. 
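
    As a reference for the holyscratch01 notice above, the sketch below copies one directory from the old scratch filesystem to netscratch. The lab, user, and project names are placeholders; adjust the paths to your own group's layout and verify the copy before removing anything.

        # Copy data from the old (soon read-only) scratch to the new global scratch.
        # "jharvard_lab", "jharvard", and "myproject" are placeholder names.
        rsync -av --progress \
            /n/holyscratch01/jharvard_lab/jharvard/myproject/ \
            /n/netscratch/jharvard_lab/jharvard/myproject/

        # After the maintenance, the $SCRATCH variable will point at /n/netscratch
        echo "$SCRATCH"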

    Training: Upcoming training from FASRC and other sources can be found on our Training Calendar at https://www.rc.fas.harvard.edu/upcoming-training/

    Status Page: You can subscribe to our status to receive notifications of maintenance, incidents, and their resolution at https://status.rc.fas.harvard.edu/ (click Get Updates for options).

    Upcoming holidays: Thanksgiving Nov. 28th and 29th; winter break Dec. 23rd through January 1st.

    MAINTENANCE TASKS
    Cannon cluster will be paused during this maintenance?: NO
    FASSE cluster will be paused during this maintenance?: NO

    • Set /n/holyscratch01 scratch filesystem to read-only

      • Audience: All cluster users

      • Impact: Please adopt the new scratch filesystem /n/netscratch prior to Dec. 2nd. The $SCRATCH variable will point to /n/netscratch after this maintenance.
        Data on holyscratch01 will still be readable, but not writable, and will be fully decommissioned on Feb. 1, 2025.

    • Switch ticketing system to ServiceNow. Our email addresses remain the same.

      • Audience: All FASRC users

      • Impact: All new tickets will go to Harvard's ServiceNow; our email addresses remain the same. Existing tickets will be migrated whenever someone replies to them.

      • NOTE: From Dec. 2nd on, please do not re-open any old tickets. Create a new one instead by emailing rchelp@rc.fas.harvard.edu

    • Login node reboots

      • Audience: Anyone logged into a FASRC Cannon or FASSE login node

      • Impact: Login nodes will be rebooted during this maintenance window.

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )

      • Audience: Cluster users

      • Impact: Files older than 90 days will be removed. Please note that retention cleanup can and does run at any time, not just during the maintenance window (a sketch for checking which files are at risk follows this list).
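
    As noted in the scratch cleanup item above, the sketch below lists files that have not been modified in more than 90 days and are therefore candidates for retention cleanup. The path is a placeholder; replace it with your own netscratch directory.

        # List files not modified in the last 90 days (subject to retention cleanup).
        find /n/netscratch/jharvard_lab/jharvard -type f -mtime +90 -print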

    Thank you,
    FAS Research Computing
    https://docs.rc.fas.harvard.edu/
    https://www.rc.fas.harvard.edu/upcoming-training/
