hypervisor mediated vsock qemu - csabahenk/manila GitHub Wiki
# Hypervisor mediated multitenancy with GlusterFS and QEMU
## Meta-introduction

This originally was to be a blueprint. However, there are reasons to keep it as an internal draft instead:

- I don't find good pointers to the core technology, vsock (@Vijay, can you help out here?)
- *what* to do is too vague at this point (what protocol to use over the vsock channel?)
- *when* to do it is too vague at this point -- current project focus is on generic driver spinoffs; in particular, team focus is on such a spinoff that leverages NFS-Ganesha. This idea is completely unschedulable at the moment; it might seem to be (and might end up being) hand-wavy brainstorming, which is not the kind of stuff we want to fill public space with.
## Synopsis
One of the possible approaches to achieving multitenancy in Manila is the hypervisor mediated one: guests are assumed to be virtualized, and shares are connected to the guests through a virtualization-specific transport.
- Advantages:
- conceptually and structurally simple
- low overhead
- Disadvantages:
- specific to a certain virtualization technology
What we add here to the general idea is that the virtualized transport would be the one called vsock, and the share backend would be GlusterFS. Basically this implies two components to be implemented:
- a vsock agent on the guest side that talks, through the vsock channel, to the GlusterFS cluster serving as the backend;
- a layer (xlator?) or operation mode (volume option?) for GlusterFS that allows us to restrict a client's access to a volume to a subdirectory (enforcing tenant isolation, as a share is a subdirectory of a volume). Alternatively to the xlator/volume option approaches, this feature could also be implemented within the aforementioned client.
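Whichever layer ends up enforcing the restriction, its core job is path confinement: every client-supplied path must resolve to somewhere inside the share's subdirectory. A minimal sketch of that check follows; the function name and path layout are hypothetical, and a real xlator would do this on GlusterFS's internal path representation rather than on strings:

```python
import posixpath

def confine_to_share(share_root, requested):
    """Map a client-requested path into its tenant's share subdirectory.

    share_root is the share's subdirectory within the volume
    (e.g. "/shares/tenant42", a hypothetical layout); requested is the
    path the client asked for, relative to its share. Raises
    PermissionError if the request would escape the share (e.g. via "..").
    """
    # Normalize to collapse "." and ".." components before checking.
    joined = posixpath.normpath(
        posixpath.join(share_root, requested.lstrip("/")))
    if joined != share_root and not joined.startswith(share_root + "/"):
        raise PermissionError("path escapes share: %r" % requested)
    return joined
```

For example, `confine_to_share("/shares/tenant42", "a/b.txt")` yields `/shares/tenant42/a/b.txt`, while `../other` is rejected.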
Thus in this case we can extend the above list of disadvantages:
- Disadvantages:
- specific to guests running on vsock-capable virtualization technology
- specific to GlusterFS share backend
## Questions
- Which virtualization technologies support vsock? Googling suggests it is originally a VMware feature, so VMware should be covered, and QEMU is also supposed to support it (eventually?).
- Why do we need a specialized client? Is it not the case that the kernel presents vsock as a socket to userspace, in which case no special tool would be needed?
- Assuming and accepting that we need a special client, how portable would it be? Could it run unaltered on different virtualization technologies' vsock implementations?
- Which gluster protocol would we use? Would we use standard gluster RPC, thus allowing a native client to run in the guest? Or gfapi and some gfapi-based data access technology (like NFS-Ganesha)?
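On the second question: on Linux, vsock is indeed exposed through the ordinary socket API as the `AF_VSOCK` address family, addressed by a (CID, port) pair instead of (IP, port), so the transport itself needs no special tooling. A minimal guest-side sketch, assuming a virtio-vsock-capable kernel (the port number is an arbitrary choice for the hypothetical agent, and the helper degrades gracefully where `AF_VSOCK` is unavailable):

```python
import socket

# Hypothetical vsock port for the agent; any free 32-bit port works.
AGENT_PORT = 9999

def open_vsock_to_host(port=AGENT_PORT):
    """Open a stream connection from the guest to the host over vsock.

    Returns the connected socket, or None when the kernel lacks
    AF_VSOCK support (no virtio-vsock device) or nothing is listening
    on the host side.
    """
    if not hasattr(socket, "AF_VSOCK"):
        return None  # e.g. non-Linux, or Python < 3.7
    try:
        s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
        s.settimeout(1.0)
        # VMADDR_CID_HOST (2) is the well-known CID of the hypervisor.
        s.connect((socket.VMADDR_CID_HOST, port))
        return s
    except OSError:
        return None
```

That a plain `socket()`/`connect()` suffices suggests the "specialized client" question really hinges on the protocol spoken over the channel (the last question above), not on the channel itself.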