How to collect diagnostic logs using the NetApp Log Collection Script

1. Purpose

This document describes the procedure to collect diagnostic logs using the NetApp Log Collection Script in environments running:

  • BeeGFS

  • NetApp E-Series backend storage

  • HA cluster using Pacemaker and Corosync

This script is typically requested by NetApp Support for storage-side or HA-related troubleshooting.


2. When to Use

Use this procedure when:

  • Storage path failures occur

  • Multipath errors are detected

  • Pacemaker resource failures happen

  • Cluster failover behaves unexpectedly

  • NetApp explicitly requests the log bundle


3. What This Script Collects

  • /var/log/beegfs*

  • /var/log/messages*

  • pcs status

  • NVMe device details

  • Multipath configuration

  • IP configuration

  • dmesg output

  • Journald logs

  • Pacemaker & Corosync logs

The collection focuses on operating system, cluster, and storage diagnostics.
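Before running the script, it can be useful to confirm that the tools it calls are installed on the node. This is an optional sketch (the command list is taken from the script in section 4); note that a missing tool does not abort the script, it just produces an empty log file in the bundle:

```shell
# Verify the commands used by the collection script are present.
# A missing tool results in an empty log file in the bundle, not an abort.
for cmd in tar pcs nvme lsblk multipath ip dmesg journalctl; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "OK: $cmd"
  else
    echo "MISSING: $cmd"
  fi
done
```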


4. Script

#!/bin/bash
# Set up variables
TS="$(date '+%F_%H-%M-%S-%Z')-$(hostname)"
WORKDIR="/tmp/${TS}-logbundle"
ARCHIVE="${TS}-support-bundle.tar.gz"

# Create working directory
mkdir -p "$WORKDIR"

# Collect log files
tar -zcvf "$WORKDIR/beegfs-logs.tar.gz" /var/log/beegfs* /var/log/messages* 2>/dev/null

# Collect pcs status
pcs status > "$WORKDIR/pcs_status.log"

# Collect NVMe devices
nvme list > "$WORKDIR/nvme_device.log"

# Collect NVMe connections
nvme list-subsys > "$WORKDIR/nvme_connections.log"

# Collect lsblk
lsblk > "$WORKDIR/lsblk.log"

# Collect multipath -ll
multipath -ll > "$WORKDIR/multipath.log"

# Collect IPs
ip a > "$WORKDIR/ip_address.log"

# Collect IP rules
ip rule show > "$WORKDIR/ip_rules.log"

# Collect dmesg output
dmesg -T > "$WORKDIR/dmesg.log"

# Collect system logs from journald
journalctl --no-pager -x > "$WORKDIR/journal.log"

# Collect pacemaker and corosync logs
tar -zcvf "$WORKDIR/pacemaker-logs.tar.gz" /var/log/pacemaker/pacemaker* /var/log/netapp/* /var/log/cluster/corosync* 2>/dev/null

# Bundle everything up
tar -zcvf "$ARCHIVE" -C "$WORKDIR" .

# Clean up
rm -rf "$WORKDIR"
echo "Support bundle created: $ARCHIVE"

5. Procedure

Step 1 – Create Script

Place the script file on the OSS node:

  1. Create the file manually on the node and paste the script contents into it
  2. Or upload an existing script file with a file-transfer tool such as MobaXterm or scp
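As a sketch of option 1, the file can be created directly on the node with a heredoc; paste the full script from section 4 in place of the sample line between the markers (the filename `netapp-log-collect.sh` matches the steps below):

```shell
# Create the script file on the node; paste the full script
# from section 4 in place of the sample comment line below
cat > netapp-log-collect.sh <<'EOF'
#!/bin/bash
# ... full script body from section 4 goes here ...
EOF

# Confirm the file was written with the expected shebang
head -1 netapp-log-collect.sh
```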

Step 2 – Make Executable

chmod +x netapp-log-collect.sh


Step 3 – Execute

./netapp-log-collect.sh
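If execution fails immediately (for example after a copy-paste error), bash can parse the file without running any of its commands. This is an optional sanity check, not part of the required procedure:

```shell
# Parse the script without executing any of its commands;
# a clean parse prints nothing and returns exit status 0
if bash -n ./netapp-log-collect.sh; then
  echo "syntax OK"
else
  echo "syntax error - re-check the pasted contents"
fi
```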


6. Output

The script generates:

<timestamp>-<hostname>-support-bundle.tar.gz

Location: Current working directory

Example:

2026-02-17_22-45-01-UTC-<hostname>-support-bundle.tar.gz
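Before sending the bundle to support, you can list its contents to confirm the expected files were captured. This sketch assumes the archive is in the current working directory, as produced by the script:

```shell
# List the contents of the newest support bundle in the current
# directory without extracting it
ARCHIVE=$(ls -t ./*-support-bundle.tar.gz 2>/dev/null | head -1)
if [ -n "$ARCHIVE" ]; then
  tar -tzf "$ARCHIVE"
else
  echo "no support bundle found in $(pwd)"
fi
```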


7. Scope

Run on:

  • All affected OSS nodes

  • Metadata nodes (if involved)

  • All cluster nodes in the HA setup
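The script must be run on each node individually; a simple loop over ssh can help. The node names and the script path `/tmp/netapp-log-collect.sh` below are placeholders for your environment, and the loop prints the commands as a dry run; remove the leading `echo` to actually execute them:

```shell
# Placeholder node names; replace with your OSS, metadata, and cluster hosts
NODES="oss-node1 oss-node2 meta-node1"
for node in $NODES; do
  # Dry run: print the command that would collect logs on each node.
  # Remove 'echo' to run the script over ssh; the bundle is created
  # in the remote working directory.
  echo ssh "$node" 'bash /tmp/netapp-log-collect.sh'
done
```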
