Monitoring AFM
Active File Management (AFM) enables sharing of data across clusters, even if the networks are unreliable or have high latency.
AFM can be enabled on a GPFS independent fileset, and each AFM fileset has a distinct set of AFM attributes. An IBM Storage Scale cluster that contains AFM filesets is called a cache cluster. A cache cluster has a relationship with a remote site called the home, where either the cache or the home can be the data source or destination.
AFM constantly maintains an active relationship between the cache and the home. Changes are managed per fileset, which results in a modular, scalable architecture capable of supporting billions of files and petabytes of data. Each AFM-enabled fileset is associated with a single home path.
AFM uses either the NFSv3 protocol or the NSD (GPFS multi-cluster) protocol for communication between the home and cache sites. A home export path is either an NFSv3-exported path or a multi-cluster (remote) file system that is mounted on the IBM Storage Scale cache cluster. This path is used by an AFM or AFM-DR fileset as the target path for data synchronization between the sites. For AFM RO-mode filesets, the target path at the home NFS server can be exported as 'Read-Only' or 'Read/Write'. However, for AFM LU/SW/IW and AFM-DR mode filesets, the target NFS export path must be 'Read/Write'.
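For illustration, a single-writer (SW) cache fileset with an NFS home target could be created as in the sketch below. This is a minimal example, not a complete procedure; the device, fileset, and target names are placeholders that must match your own home export.

```
# On the cache cluster: create an independent fileset with AFM enabled,
# pointing at an NFSv3 export on the home cluster (names are placeholders).
mmcrfileset myCacheAFMfs myCacheAFMfset_writer \
  -p afmMode=sw -p afmTarget=nfs://scale-13/gpfs/myAFMfs1/myAFMfset2 \
  --inode-space=new

# Link the new fileset into the file system namespace so it can be used.
mmlinkfileset myCacheAFMfs myCacheAFMfset_writer \
  -J /gpfs/myCacheAFMfs/myCacheAFMfset_writer
```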
Please follow the AFM configuration instructions for setting up a home and a cache cluster.
As a result of the setup activities, you should have the NFS exports configured on the home cluster:
```
[root@scale-11 ~]# mmnfs export list
Path                      Delegations Clients
------------------------- ----------- -------
/gpfs/myAFMfs/myAFMfset   NONE        *
/gpfs/myAFMfs1/myAFMfset1 NONE        *
/gpfs/myAFMfs1/myAFMfset2 NONE        *
```
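If the home cluster uses CES NFS, exports like the ones listed above can be created with mmnfs, and the export path should also be prepared for AFM with mmafmconfig. The following is a sketch; adjust the Access_Type and squash client options to your environment.

```
# On the home cluster: enable AFM extended attribute support on the export path.
mmafmconfig enable /gpfs/myAFMfs/myAFMfset

# Export the path over CES NFS; SW/IW/LU and AFM-DR caches need Read/Write access.
mmnfs export add /gpfs/myAFMfs/myAFMfset \
  --client "*(Access_Type=RW,Squash=no_root_squash)"
```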
Also make sure that the NFS performance sensors are enabled. For more information, read the Activate NFS performance monitoring sensors article in the IBM Storage Scale Knowledge Center.
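On a CES-based home cluster this typically comes down to a perfmon configuration update similar to the following; the period value is only an example.

```
# Enable the NFS I/O sensor on the CES nodes and let it report every second.
mmperfmon config update NFSIO.restrict=cesNodes NFSIO.period=1

# Confirm the sensor is now part of the active configuration.
mmperfmon config show | grep -A 2 NFSIO
```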
On the cache cluster side, you should have AFM filesets with an NFS server export path (afmTarget) specified and a primary gateway node assigned:
```
[root@scale-21 ~]# mmafmctl myCacheAFMfs getstate
Fileset Name                Fileset Target                          Cache State Gateway Node Queue Length Queue numExec
------------                --------------                          ----------- ------------ ------------ -------------
myCacheAFMfset_independent2 nfs://scale-15/gpfs/myAFMfs1/myAFMfset1 Active      scale-21     2            43615084
myCacheAFMfset_readOnly     nfs://scale-14/gpfs/myAFMfs/myAFMfset   Dirty       scale-21     2            33961826
myCacheAFMfset_writer       nfs://scale-13/gpfs/myAFMfs1/myAFMfset2 Active      scale-21     0            91699762
```

```
[root@scale-21 ~]# mmafmctl myCacheAFMfs1 getstate
Fileset Name                Fileset Target                          Cache State Gateway Node Queue Length Queue numExec
------------                --------------                          ----------- ------------ ------------ -------------
myCacheAFMfset_independent1 nfs://scale-15/gpfs/myAFMfs1/myAFMfset1 Dirty       scale-25     10787        2749147
```
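If getstate reports no gateway node, the gateway role can be assigned per node on the cache cluster; the node names below are the ones from the example output.

```
# Designate the gateway nodes that will run the AFM queues.
mmchnode --gateway -N scale-21,scale-25

# Re-check fileset states and queue lengths afterwards.
mmafmctl myCacheAFMfs getstate
```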
Finally, follow the Enabling AFM sensors instructions to configure AFM performance data collection on the cache cluster.
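The AFM sensors are named GPFSAFM, GPFSAFMFS, and GPFSAFMFSET in the perfmon configuration. Enabling them could look like the sketch below, assuming the gateway nodes from the earlier example; the periods are illustrative.

```
# Collect AFM metrics on the gateway nodes of the cache cluster.
mmperfmon config update GPFSAFM.restrict=scale-21,scale-25 GPFSAFM.period=10
mmperfmon config update GPFSAFMFS.restrict=scale-21,scale-25 GPFSAFMFS.period=10
mmperfmon config update GPFSAFMFSET.restrict=scale-21,scale-25 GPFSAFMFSET.period=10
```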
Set up the IBM Storage Scale bridge for Grafana on both the home and the cache cluster. Having the bridge installed on both clusters is useful for getting an overall picture of the NFS throughput/IO rate on the home cluster side and the AFM data transfers on the cache cluster side at the same time.

Import the AFM over NFS dashboard. It gives you a good starting point for observing AFM traffic.

![AFM overview dashboard](/images/AFM/AFM_oveview.PNG)

To check the NFS throughput/IO rate, expand the HOME CLUSTER section.

![AFM home cluster panels](/images/AFM/AFM_home.PNG)

To check the total number of bytes written to the remote system as a result of cache updates, the number of messages that are currently enqueued, or the memory in bytes used by the enqueued messages, expand the CACHE CLUSTER section.

![AFM cache cluster panels](/images/AFM/AFM_cache.PNG)
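Once the bridge is up, a quick way to verify that AFM metrics are reachable is to query its OpenTSDB-style API. This sketch assumes the bridge runs on scale-21 and listens on the default OpenTSDB port 4242.

```
# List the AFM-related metrics the bridge can serve to Grafana.
curl "http://scale-21:4242/api/suggest?type=metrics&q=gpfs_afm&max=25"
```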
For more detailed reports, try out the AFM FS view and AFM FSET view dashboards.
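The same counters can also be sampled directly with mmperfmon on the cache cluster. The metric names below are assumptions derived from the GPFSAFM sensor and may differ on your release; verify them on your system first.

```
# Last 10 minutes of AFM queue metrics in 60-second buckets
# (metric names are assumptions; verify them on your system).
mmperfmon query gpfs_afm_used_q_memory,gpfs_afm_num_queued_msgs -b 60 -n 10
```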
Helpful links: