FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu




SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational


SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational


Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational


Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational


Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational


Netscratch (Global Scratch) - Operational

Holyscratch01 (Pending Retirement) - Operational

Home Directory Storage - Boston - Operational

HolyLFS06 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

Holystore01 (Tier 0) - Operational

Holylabs - Operational

BosLFS02 (Tier 0) - Operational

Isilon Storage Boston (Tier 1) - Operational

Isilon Storage Holyoke (Tier 1) - Operational

CEPH Storage Boston (Tier 2) - Operational

Tape (Tier 3) - Operational

Boston Specialty Storage - Operational

Holyoke Specialty Storage - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

bosECS - Operational

holECS - Operational

Notice history

May 2024

NESE Tape unavailable due to maintenance, ETA now Monday
  • Resolved

    NESE maintenance is complete and the tape service is back in production.

  • Monitoring

    A note from NESE: their maintenance has been delayed by hardware issues. The ETA is now Monday 6/3.

    Dear All,

    The NESE Tape system upgrade is currently in progress. While the IBM hardware team works on the TS4500 library tape frame expansion and the IBM software team works on ESS and Archive software and firmware upgrades, progress has been slowed by unforeseen hardware issues. We now expect to bring the tape service back into production on Monday morning. We apologize for any inconvenience caused by the delay.

  • Identified

    Due to maintenance at our tape partner, NESE (Northeast Storage Exchange), access to tape allocations will be unavailable until at least late Thursday (5/30). Normal operations are expected to resume by Friday (5/31).

    If you continue to have issues with a Globus tape endpoint on Friday, please contact FASRC or NESE.

Annual MGHPCC/Holyoke data center power downtime - May 21-24 2024
  • Completed
    May 24, 2024 at 9:50 PM

    2024 MGHPCC downtime complete

    DOWNTIME COMPLETE

    The annual multi-day power downtime at MGHPCC (https://www.rc.fas.harvard.edu/blog/2024-mghpcc-power-downtime/) is complete (with any exceptions noted below). Normal service resumes today (Friday May 24th) at 5pm.

    The cluster has been updated to Rocky Linux 8.9. Several network, InfiniBand, compute, and storage firmware updates were installed. Available security updates were also installed.

    CANNON NODES

    More than 90% of nodes are up and all partitions are enabled. If your specialty partition has a downed node, we will attend to this on Tuesday.

    FASSE OOD

    Some updates are still propagating. If your FASSE Open OnDemand/VDI session does not work initially, please wait or retry your job/session.

    POST-DOWNTIME SUPPORT

    If you have any further concerns or unanswered questions, please submit a help ticket (https://portal.rc.fas.harvard.edu/rcrt/submit_ticket) and we will do our best to respond quickly. Please bear in mind that it is a long weekend, so lingering issues may not be addressed until Tuesday.

    Also, have a good long Memorial Day weekend!

    Thanks,

    FAS Research Computing

    https://www.rc.fas.harvard.edu/

    https://docs.rc.fas.harvard.edu/

    https://status.rc.fas.harvard.edu/

    rchelp@rc.fas.harvard.edu  

  • Update
    May 24, 2024 at 9:08 PM
    In progress

    We are currently delayed in opening the cluster due to lingering issues.

    We will reopen as soon as possible, or post another update at 6pm.

  • Update
    May 24, 2024 at 1:47 PM
    In progress

    Power work completed by facility. Currently on schedule for powerup and return to service. ETA 5pm.

  • In progress
    May 21, 2024 at 1:00 PM
    Maintenance is now in progress
  • Planned
    May 21, 2024 at 1:00 PM

    The 2024 MGHPCC data center annual power downtime will take place May 21-24, 2024.

    We will begin our shutdown on Tuesday May 21st and expect a return to service by 5PM Friday May 24th.

    - Jobs: Please plan ahead, as all jobs still running on the morning of May 21st will be stopped and canceled and will need to be resubmitted after the downtime. Pending jobs will remain in the queue until the cluster returns to regular service on May 24th.

    - Access: The cluster, scheduler, login, and Open OnDemand nodes will be unavailable for the duration of the downtime. New lab and account requests should wait until after the downtime.

    - Storage: All Holyoke storage will be powered down and unavailable for the duration of the downtime. Boston storage will remain online, but your ability to access it may be impacted and network changes may briefly affect its availability.

    Further details, an explanation of this year's change in scheduling, a visual timeline, and an overview of maintenance tasks can be found at:

    https://www.rc.fas.harvard.edu/blog/2024-mghpcc-power-downtime/

    Progress of the downtime will be posted here on our status page during the event. Note that you can subscribe to receive updates as they happen: click Get Updates in the upper right.

    MAJOR TASK OVERVIEW

    • OS upgrade to Rocky 8.9 - Point upgrade, no code rebuilds will be required. Switch from system OFED to Mellanox OFED on nodes for improved performance

    • Infiniband (network) upgrades

    • BIOS updates (various)

    • Storage firmware updates

    • Network Maintenance

    • Decommission old nodes (targets contacted)

    • Additional minor one-off updates and maintenance (cable swap, reboots, etc.)

    Thanks,

    FAS Research Computing

    https://www.rc.fas.harvard.edu/

    https://docs.rc.fas.harvard.edu/

    https://status.rc.fas.harvard.edu/

Many nodes in 8A down - affects sapphire, test, bigmem, and other partitions
  • Resolved
    This incident has been resolved.
  • Investigating

    We are still unable to resolve the issue with these nodes and are working with the facility, networking, and our staff to find a solution. The affected partitions (noted in previous update below) will be resource-constrained and continue to be slow or unable to queue new jobs.


    If you are using a partition that cannot queue new jobs, please consider adding additional partitions to your job: https://docs.rc.fas.harvard.edu/kb/running-jobs/#Slurm_partitions

    Also, a reminder that the data center power downtime will begin Tuesday morning, so any new jobs requesting more than 3 days of runtime will not complete: https://www.rc.fas.harvard.edu/blog/2024-mghpcc-power-downtime/
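    As a sketch of the multi-partition suggestion above (partition names, resource requests, and the program name here are illustrative, not a specific recommendation), a batch script can list several partitions in one --partition directive and Slurm will start the job in the first one where resources become available:

    ```shell
    #!/bin/bash
    # Illustrative Slurm batch script: listing several partitions lets the
    # scheduler start the job in whichever one has free resources first.
    #SBATCH --job-name=example                       # hypothetical job name
    #SBATCH --partition=sapphire,shared,serial_requeue
    #SBATCH --time=1-00:00:00                        # under 3 days, so the job can finish before the downtime
    #SBATCH --ntasks=1
    #SBATCH --mem=4G

    srun ./my_analysis                               # placeholder for the actual workload
    ```

    Requeue-eligible partitions such as serial_requeue may preempt the job, so they suit workloads that checkpoint or can restart cleanly.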

  • Identified

    We are still working on the root cause and resolution for these downed nodes.

    Partitions with one or more affected nodes [involves multiple nodes unless denoted as (1)]:

    arguelles_delgado_gpu (1)

    hsph

    joonholee

    jshapiro_sapphire

    lichtmandce01

    bigmem

    gpu_requeue (1)

    intermediate

    sapphire

    serial_requeue (1)

    shared (1)

    test

    yao / yao_priority

    Use 'sinfo -p [partition name]' if you wish to see the down nodes in a particular partition.
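    For example (the partition name here is illustrative), sinfo can also filter by node state, which limits the listing to just the downed nodes:

    ```shell
    # List all nodes and their states in one partition
    sinfo -p sapphire

    # Show only nodes that are down or drained, one node per line
    sinfo -p sapphire --states=down,drain -N
    ```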

  • Investigating

    We are currently investigating this incident. An unknown outage has downed many nodes in row 8A of our data center. More information to follow.

    Affected partitions include sapphire, test, gpu, and others.

Apr 2024

Mar 2024

FASRC maintenance update - All jobs requeued (Cannon and FASSE)
  • Resolved
    This incident has been resolved.
  • Monitoring

    Informational Notice

    The Slurm upgrade to 23.11.4 was completed successfully during maintenance. However, a complication with the automation of Slurm's cryptographic keys during the upgrade caused nodes to lose the ability to communicate with the Slurm master. The Slurm master therefore viewed those nodes as down and requeued their jobs.

    All jobs on Cannon and FASSE were requeued.

    This is deeply regrettable, but the chain of events that caused it could not have been foreseen.

    To check the status of your jobs, see the common Slurm commands at:

    https://docs.rc.fas.harvard.edu/kb/convenient-slurm-commands/#Information_on_jobs
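    For example (the job ID below is illustrative), the most common checks after a requeue are listing your queued jobs and inspecting one job's state history:

    ```shell
    # List your own pending and running jobs
    squeue -u $USER

    # Show the state history of one job, including whether it was requeued
    sacct -j 12345678 --format=JobID,JobName,State,Elapsed
    ```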

    FAS Research Computing

    https://docs.rc.fas.harvard.edu/

    rchelp@rc.fas.harvard.edu
