manila gateway writeup - csabahenk/manila GitHub Wiki
# MANILA NETWORKING
## Hypervisor-mediated storage access
Hypervisor-mediated storage access is one approach to achieving multi-tenancy, in which access control for a share lives in the hypervisor. Each hypervisor, hosting multiple instances that belong to different tenants, runs an active NFS-Ganesha server. The NFS server supports different plugins (FSALs) that can talk to filesystems such as GlusterFS, CephFS, etc., and it enforces the access rules for a share. So when an instance requests access to a share, the NFS server checks whether the instance is on the authorized list for that tenant (currently the IPs of the instances) before passing the request down to the storage backend.
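As a rough illustration, an NFS-Ganesha export for a GlusterFS-backed share might look like the following sketch. The hostnames, IPs, paths, and volume name are made up for the example, and the exact option set depends on the Ganesha version:

```
EXPORT {
    Export_Id = 1;
    Path = "/share-01";
    Pseudo = "/share-01";
    Access_Type = None;    # deny by default; CLIENT blocks grant access

    FSAL {
        Name = GLUSTER;
        Hostname = "gluster.example.net";   # illustrative storage host
        Volume = "share-01";
    }

    CLIENT {
        # IPs of the guest instances authorized for this share
        Clients = 10.0.0.11, 10.0.0.12;
        Access_Type = RW;
    }
}
```

The export-level `Access_Type = None` with per-`CLIENT` overrides is what makes the IP list act as the access rule: any instance not listed is refused before its requests ever reach the backend.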
### Advantages

- The NFS server on the hypervisor performs caching, so not every request from the instances has to be served by the storage layer. Data access is therefore sometimes faster, and the storage layer is less burdened.
- Since the NFS server (on the hypervisor) and the NFS client (the guest instance) reside in the same subnet, the route to the share is more easily established than in a flat networking model, where the NFS server runs in the storage layer and the guests belong to a different subnet. (More easily established because of the lower frequency of inter-subnet data traffic?)
- Running an NFS server on every hypervisor spreads the data-traffic load better than a single NFS server in the storage layer, making the NFS server less of a bottleneck for file sharing. (Is this approach better than sharing load across multiple NFS servers running at the storage layer?)
### Disadvantages

- NFS-Ganesha running on the hypervisor can have non-trivial resource consumption. This may affect the performance of the hypervisor, which already has to perform memory-intensive operations such as hosting instances, among others.
- High availability has to be configured separately for the Ganesha server and its configuration state.
- When a hypervisor's NFS server goes down, that hypervisor's instances can no longer access the share. As a fallback, the instances would need to quickly start communicating with an NFS server running on a different machine, one that remembers the access rules and has the storage access privileges of the failed server. (Or should I say, remembers the state of the failed server?)
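On the client side, one partial mitigation is to mount the share with options that let the guest ride out a short server outage, assuming the replacement Ganesha server answers at the same address (e.g. via a floating IP). A sketch of such an `/etc/fstab` entry inside the guest, with an illustrative server address and path:

```
# /etc/fstab in the guest instance (server address and path are illustrative).
# 'hard' makes the client keep retrying instead of returning I/O errors;
# timeo/retrans tune how long each retry cycle takes before the next attempt.
10.0.0.1:/share-01  /mnt/share-01  nfs  hard,timeo=50,retrans=3,vers=4  0  0
```

This only covers the transport-level retry; the harder part noted above, handing the failed server's access rules and state to a standby, still has to be solved on the server side.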