How do you use health check commands effectively across storage array tools?
Large service delivery accounts find it difficult to run health checks and keep track of hundreds of arrays and switches scattered across the globe, in different sites and environments. These may be cloud delivery accounts that support multiple SAN environments, either dedicated to one customer or shared by several, or multiple SAN environments inherited through a merger or acquisition. A health check report is prepared several times a day to meet Service Level Agreements (SLAs), so that the arrays are closely monitored and every fault is handled properly. This is a repetitive, time-consuming job that ties up several dedicated engineers, and it gets harder as the fabrics spread across environments and locations. Moreover, because the arrays and switches themselves are complex, monitoring the entire environment cannot be reduced to watching any single device.
The basic health check commands for various arrays are discussed in this section.
HNAS CLI Health check commands (latest models: HNAS 4060, HNAS 4080, HNAS 4100)
The HNAS CLI (command-line interface) can be accessed as follows:
1. Install PuTTY (or any SSH client).
2. Use SSH to connect to the NAS server and start a CLI session (see the example below).
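On a Linux or macOS workstation, the same connection can be made with the stock OpenSSH client instead of PuTTY. The address below is a placeholder; `supervisor` is the usual HNAS CLI account, but use whatever your site has configured:

```bash
# Connect to the HNAS admin services EVS over SSH
# (replace the address and user with your environment's values)
ssh supervisor@192.0.2.10

# Once logged in, run the health check commands from the table below, e.g.:
# Hostname:$ evs list
```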
| CLI command | Summary |
| --- | --- |
| `Hostname:$ evs list` | Lists the EVSs (enterprise virtual servers) on the cluster and their status. |
| `Hostname:$ span-mirror-health` | Output has five fields: span instance name, span permanent ID, primary health & licensing state (HL, FL, MU, ..), secondary health & licensing state (XX), and pegging (P, U, or M). |
| `Hostname:$ cluster-show` | Displays the status and health of the node. |
| `Hostname:$ cluster-show --all` | Displays overall cluster health information. |
| `Hostname:$ sd-list` | Lists system drives with their health status, capacity, role, access, and device ID. |
| `Hostname:$ filesystem-list-stored` | Lists the health status of each (loaded) file system. |
| `Hostname:$ filesystem-notifications` | Lists the last notification activity for each file system in the array. |
| `Hostname:$ chassis-drive-status` | Displays the chassis drive status. |
| `Hostname:$ dailystatusreport [email id to send]` | Sends the daily status report to the email address given. |
| `Hostname:$ uptime` | Displays how long the system has been running. |
| `Hostname:$ multi-tenancy-show` | Shows the status of the multi-tenancy environment. |
| `Hostname:$ ndmp-status` | Displays the current status of NDMP. |
| `Hostname:$ overallstatus` | Displays the health status of all cluster nodes and the quorum device. |
| `Hostname:$ papi-status` | Displays PAPI client and server status, with server build and API version details. |
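Because these checks are typically repeated several times a day, it helps to batch them. Below is a minimal sketch, assuming SSH key authentication to the admin EVS is set up and that single commands can be executed over SSH (verify this in your environment); host and user are placeholders:

```bash
#!/usr/bin/env bash
# Minimal sketch: collect the HNAS health check output into one
# time-stamped report file. Host and user are placeholders.
NAS_HOST="hnas-admin.example.com"
NAS_USER="supervisor"
REPORT="hnas-health-$(date +%Y%m%d-%H%M).log"

for cmd in "evs list" "cluster-show" "sd-list" "chassis-drive-status" "overallstatus"; do
    printf '===== %s =====\n' "$cmd" >> "$REPORT"
    ssh "${NAS_USER}@${NAS_HOST}" "$cmd" >> "$REPORT" 2>&1
done

echo "Report written to $REPORT"
```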
VPLEX Health check commands:
CLI commands to be run on VPLEXCLI:
| Command | Summary |
| --- | --- |
| `VPlexcli:/> vpn status` | Displays diagnostics of the VPN connection. |
| `VPlexcli:/> cluster status` | Cluster health check. |
| `VPlexcli:/> cluster summary` | Cluster summary. |
| `VPlexcli:/> storage-volume summary` | Storage-volume summary. |
| `VPlexcli:/> virtual-volume summary` | Virtual-volume summary. |
| `VPlexcli:/> ndu pre-check` | Can be run as a high-level health check. |
| `VPlexcli:/> validate-system-configuration` | Runs a high-level health check on cache replication, the logging volume, back-end connectivity, and the meta-volume. |
| `VPlexcli:/> ll clusters/**/virtual-volumes/*` | Displays status, health, and stats for virtual volumes. |
| `VPlexcli:/> ll /clusters/*` | High-level cluster information. |
| `VPlexcli:/> version -a` | Displays full (verbose) version information. |
| `VPlexcli:/> sessions` | Displays all active VPLEX management console sessions. |
| `VPlexcli:/> rebuild status` | Displays DM rebuild progress/stats. |
| `VPlexcli:/> local-device summary` | Summarizes local device information. |
| `VPlexcli:/> extent summary` | Summarizes extent information. |
| `VPlexcli:/> export port summary` | Summarizes the export ports at the given clusters. |
| `VPlexcli:/> ds summary` | Summarizes distributed devices and WOF groups. |
| `VPlexcli:/> director uptime` | Displays director uptime since the last reboot. |
| `VPlexcli:/> director app status` | Displays the status of applications running on each director. |
| `VPlexcli:/> ll engines/**/fans/*` | Displays fan health. |
| `VPlexcli:/> ll engines/**/power-supplies/*` | Displays power supply health. |
| `VPlexcli:/> ll engines/**/stand-by-power-supplies/*` | Displays standby power supply health. |
| `VPlexcli:/> ll engines/**/io-modules/*` | Displays I/O module health. |
| `VPlexcli:/> ll engines/**/internal-disk-*` | Displays the health of internal disks. |
| `VPlexcli:/> ll engines/**/dimm-*` | Displays the health of DIMMs. |
| `VPlexcli:/> ll /engines/**/ports` | Displays port status. |
| `VPlexcli:/> ll engines/**/sfps/*` | Displays specific SFP information. |
| `VPlexcli:/> ll engines/**/ports/*` | Displays specific port information. |
| `VPlexcli:/> health-check` | General health check. |
| `VPlexcli:/> export storage-view summary` | Lists each view and the number of virtual volumes and initiators it contains. |
| `VPlexcli:/> connectivity validate-be` | Checks for back-end HA and connectivity issues. |
| `VPlexcli:/> connectivity director director-1-1-A` | Lists all devices logged into the given director (wildcards do not work for this command). |
| `VPlexcli:/> connectivity validate-wan-com` | Compares expected to actual wan-com visibility. |
| `VPlexcli:/> connectivity show` | Shows all (actual) wan-com connectivity. |
| `VPlexcli:/> fc-port-stats` | Displays port stats; needs to be run from a director context. |
*** Use `ll --full` to expand the abridged listings of virtual volumes/initiators within certain contexts ***
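For routine checks it can be convenient to drive VPlexcli from a workstation. The sketch below assumes SSH access to the management server as the service account and that vplexcli accepts commands piped on stdin; confirm both on your system before relying on it:

```bash
#!/usr/bin/env bash
# Hedged sketch: run a quick VPLEX health pass in one SSH session.
# Management server address and account are placeholders.
VPLEX_MGMT="service@vplex-mgmt.example.com"

ssh "$VPLEX_MGMT" vplexcli <<'EOF'
health-check
cluster summary
connectivity validate-be
exit
EOF
```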
Centera Health check commands:
Log in to Centera Viewer, connect to the respective Centera IP address, and open the CLI (Commands -> CLI).
Config# show health
Config# show capacity avail
Config# show replication detail
Config# show config notif
Config# show report health
VMAX (Symmetrix Health check):
| Command | Summary |
| --- | --- |
| `symcfg -sid XXXX list -env_data` | Displays the status of hardware components in the array (`-v` gives detailed information). |
| `symcfg -sid XXXX list -dir all` | Displays the online status of all directors (front-end and back-end). |
| `symevent -sid XXXX list -error -fatal` | Displays the critical alerts generated. |
| `symdisk -sid XXXX list -failed` | Displays the list of failed disks. |
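These four checks are easy to wrap in a loop when several arrays are managed from one Solutions Enabler host. A minimal sketch (the SIDs are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: run the four VMAX health checks above against several arrays.
# Requires Solutions Enabler (SYMCLI) on this host; SIDs are placeholders.
for sid in 1234 5678; do
    echo "===== VMAX ${sid} ====="
    symcfg   -sid "$sid" list -env_data
    symcfg   -sid "$sid" list -dir all
    symevent -sid "$sid" list -error -fatal
    symdisk  -sid "$sid" list -failed
done
```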
NAS Health Check:
Products: CELERRA, VNX, VNXe
| Command | Summary |
| --- | --- |
| `nas_server -list` | Lists the Data Movers and their status. |
| `/nas/bin/nas_checkup` | Runs the built-in system health check and reports any problems it finds. |
| `nas_inventory -list` | Lists the hardware components of the system. |
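Since nas_checkup runs on the Control Station, scheduling it with cron is a simple way to get the several-times-a-day cadence mentioned earlier. An illustrative crontab entry (the schedule and log path are assumptions for your environment):

```bash
# Run nas_checkup at 06:00 every day and keep a dated copy of the report
# (the % signs are escaped because cron treats bare % specially).
0 6 * * * /nas/bin/nas_checkup > /home/nasadmin/checkup-$(date +\%Y\%m\%d).log 2>&1
```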
XtremIO Health Check CLI commands:
xmcli (admin)> show-bricks
xmcli (admin)> show-storage-controllers
xmcli (admin)> show-storage-controllers-infiniband-counters
xmcli (admin)> show-bbus
xmcli (admin)> show-clusters
xmcli (admin)> show-daes
xmcli (admin)> show-storage-controllers-psus
xmcli (admin)> show-alerts
xmcli (admin)> show-initiators-connectivity
xmcli (admin)> show-discovered-initiators-connectivity
xmcli (admin)> show-targets-fc-error-counters
xmcli (admin)> show-ssds
xmcli (admin)> show-initiators-performance duration=4000 frequency=5
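These xmcli commands can also be collected non-interactively. The sketch below assumes that logging in to the XMS over SSH drops the account into xmcli and that a command can be passed as an SSH argument; confirm this behavior on your XMS before relying on it:

```bash
#!/usr/bin/env bash
# Hedged sketch: capture key XtremIO health output from the XMS.
# Address and account are placeholders.
XMS="admin@xms.example.com"

for cmd in show-clusters show-alerts show-ssds show-bbus; do
    echo "===== $cmd ====="
    ssh "$XMS" "$cmd"
done
```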
Avamar Health check commands:
Run these from the Avamar utility node:

| Command | Summary |
| --- | --- |
| `uptime` | Check the uptime of the grid to ensure no unexpected reboots have occurred. |
| `status.dpn` | Lists the status of all the nodes in the Avamar grid, along with their capacity utilization percentage. |
| `dpnctl status` | Checks the status of the Avamar services, such as gsan, mcs, ems, the backup scheduler, dtlt, axionfs, and the maintenance windows scheduler. |
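A quick way to act on the dpnctl output is to flag any service line that does not report "up". The grep pattern below is an assumption, since the exact wording varies by Avamar release; adjust it to what your grid prints:

```bash
#!/usr/bin/env bash
# Sketch: run on the Avamar utility node as the admin user.
# Flags any dpnctl service line that does not report "up"; the exact
# output wording varies by release, so treat the pattern as an assumption.
if dpnctl status 2>&1 | grep -i "status:" | grep -qiv "up"; then
    echo "WARNING: one or more Avamar services may be down"
fi
```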
Hitachi HDS Arrays:
Log in to the Device Manager CLI.

| Command | Summary |
| --- | --- |
| `HiCommandCLI GetStorageArray subtarget=FreeSpace model=HDS9980V serialnum=10001` | Check the capacity utilization of array groups. |
| `HiCommandCLI GetAlerts` | Check system-generated alerts for any hardware failures or warnings. |
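HiCommandCLI normally also takes the Device Manager server URL and credentials on the command line. A hedged sketch (URL, credentials, model, and serial number are placeholders, and option names can vary by Device Manager release, so check `HiCommandCLI -h` first):

```bash
#!/usr/bin/env bash
# Sketch: query Device Manager for array-group free space and alerts.
# Server URL, credentials, model, and serial number are placeholders.
HDVM_URL="http://hdvm.example.com:2001/service"

HiCommandCLI "$HDVM_URL" GetStorageArray -u admin -p password \
    subtarget=FreeSpace model=HDS9980V serialnum=10001
HiCommandCLI "$HDVM_URL" GetAlerts -u admin -p password
```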
IBM XIV Arrays:
The IBM XIV Storage System command-line interface (XCLI) provides a mechanism for issuing commands to manage and maintain XIV systems. XCLI commands are entered on an XCLI client system (or XCLI client) supplied by the customer.

| Command | Summary |
| --- | --- |
| `ats_list ats` | Check the status of the ATS (Automatic Transfer Switch) configuration. The ATS switches between line cords to provide redundant external power. |
| `cf_list -f all` | Check the status of the Compact Flash (CF) cards in the array. |
| `component_list filter=FAILED\|NOTOK` | List the failed system components. |
| `mm_list -f all` | Check the status of the maintenance module. |
| `module_temperature_list -f all` | Check the internal temperature of the modules. |
| `fs_check` | Check the health state of the file systems. |
| `fan_list` | Check the status of the fans in the system. |
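From the XCLI client, each check can be pointed at a system with the usual user/password/management-IP options. A sketch (address and credentials are placeholders; the FAILED-only filter avoids the shell interpreting the `|` in `FAILED|NOTOK`):

```bash
#!/usr/bin/env bash
# Sketch: run a few of the XIV checks above from an XCLI client machine.
# Management IP and credentials are placeholders.
XIV_IP="192.0.2.50"

for cmd in "component_list filter=FAILED" "fan_list" "mm_list -f all"; do
    echo "===== $cmd ====="
    # word-splitting of $cmd into command + parameters is intentional
    xcli -u admin -p password -m "$XIV_IP" $cmd
done
```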
HP XP Arrays:
The Command View XP Command Line Interface (CLI) is a text-based interface used to manage and retrieve information about XP disk arrays.
| Command | Summary |
| --- | --- |
| `showbattery` | Show battery status information. |
| `showclienv` | Show CLI environment parameters. |
| `showflashcache` | Show the status of the flash cache per node or VV. |
| `shownet` | Show network configuration and status. |
| `showsched` | Show scheduled tasks in the system. |
| `showportlesb` | Show Link Error Status Block information for devices on a Fibre Channel port. |
| `showport -i` | Show Fibre Channel and iSCSI ports in the system. |
| `showpdata` | Show preserved data status. |
| `showpd -failed -degraded` | Show failed or degraded physical disks (PDs) in the system, with capacity, status, RPM, type, and cage position. |
| `checkhealth -svc -detail` | Check the overall health of all components and summarize any issues. |
| `list array_status` | Check the array locked status. |
| `list acp_status` | Check the Array Control Processor (ACP) status. |
| `list dka_status` | Check the Disk Adapter (DKA) status. |
| `list chip_status` | Check the Channel Host Interface Processor (CHIP) status. |
| `list cha_status` | Check the Channel Adapter (CHA) status. |
| `list chp_status` | Check the Channel Processor (CHP) status. |
| `list cm_status` | Check the Cache Memory (CM) status. |
| `list csw_status` | Check the Cache Switch (CSW) status. |
| `list dkc_status` | Check the Disk Controller (DKC) status. |
| `list dkp_status` | Check the Disk Processor (DKP) status. |
| `list dku_status` | Check the Disk Unit (DKU) status. |
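Where the CLI can read commands from stdin, the whole status sweep can be run in one pass. In the sketch below, `cvcli` is only a placeholder for however the Command View XP CLI is launched in your installation; check your documentation for the actual command name and login options:

```bash
#!/usr/bin/env bash
# Hedged sketch: feed the status checks above into the Command View XP
# CLI in one pass. "cvcli" is a placeholder launcher name.
cvcli <<'EOF'
list array_status
list acp_status
list cha_status
list cm_status
list dkc_status
EOF
```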
Final Thoughts:
We hope this information helps you learn how to use the health check commands on different storage arrays. We will do our best to add more value and make this easier for you. Leave your comments, feedback, or queries here so we can get you more information.