Networking
Plugin networking
Plugin and Volume Data
The JovianDSS Proxmox plugin integrates JovianDSS with Proxmox VE, giving you unrestricted Proxmox functionality while natively leveraging JovianDSS storage as part of the Proxmox VE ecosystem.
The plugin manages volumes on the JovianDSS side—allocating, deleting, snapshotting, and reverting them—and exposes those volumes to Proxmox VE over iSCSI.
Volume data travels only over the VIP addresses specified in data_addresses within your storage.cfg file.
Since version 0.10, the plugin routes all iSCSI data transfers exclusively through these VIP addresses.
How it works:
- VIP addresses (e.g. 192.168.29.100, 192.168.30.100) are allocated to the JovianDSS Pool; see the JovianDSS VIPs section for details.
- The data_addresses property in storage.cfg lists the VIP addresses allocated to the Pool as a comma-separated list (e.g., 192.168.29.100,192.168.30.100):
...
data_addresses 192.168.29.100,192.168.30.100
...
- When a virtual machine is created and started (for example, VM 100 with a single disk vm-100-disk-0):
  - The plugin creates an iSCSI target iqn.2025-04.proxmox.joviandss.iscsi:vm-100-0 on JovianDSS and assigns the VIP addresses 192.168.29.100 and 192.168.30.100 to that target.
  - The plugin creates the corresponding iSCSI node records on the Proxmox VE server that is hosting the virtual machine:
    iscsiadm --mode node -p 192.168.29.100 --targetname iqn.2025-04.proxmox.joviandss.iscsi:vm-100-0 -o new
    iscsiadm --mode node -p 192.168.30.100 --targetname iqn.2025-04.proxmox.joviandss.iscsi:vm-100-0 -o new
  - The plugin logs in to the target:
    iscsiadm --mode node -p 192.168.29.100 --targetname iqn.2025-04.proxmox.joviandss.iscsi:vm-100-0 --login
    iscsiadm --mode node -p 192.168.30.100 --targetname iqn.2025-04.proxmox.joviandss.iscsi:vm-100-0 --login
The JovianDSS Proxmox plugin does not allocate or assign VIP addresses to the JovianDSS Pool. It only assigns VIP addresses already added to the pool to newly created iSCSI targets.
Specifying VIP addresses in the data_addresses property that have not previously been assigned to the JovianDSS pool does not create additional iSCSI targets or data transfer paths; such addresses are ignored.
If no VIP addresses are assigned to the JovianDSS pool, volume provisioning to the Proxmox VE server over iSCSI fails:
TASK ERROR: Unable to identify VIP name for ip's: 192.168.40.100,192.168.40.100. Please make sure that VIP are assigned to the Pool
Virtual IP addresses assigned to the pool must be accessible from the Proxmox VE server; see the Routing example section for details.
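To verify from a Proxmox VE node that targets are actually exported on a given data VIP, a sendtargets discovery against that address is a quick, read-only test (the address below is one of the data VIPs used in the example above):
iscsiadm --mode discovery --type sendtargets --portal 192.168.29.100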
JovianDSS VIPs
Adding a VIP to JovianDSS
Adding a virtual IP address to the JovianDSS pool is straightforward.
Navigate to Storage->Pool->Virtual IPs->Add Virtual IP
Specify the VIP properties - such as its address and netmask - and select the network interface to which it is assigned.
For detailed information on JovianDSS network configurations, consult the following resources:
- JovianDSS Advanced Metro High Availability Cluster Step by Step
- JovianDSS Advanced Metro High Availability Cluster Step by Step 2
- Open-E Knowledgebase
- iSCSI Targets Available Through Specific VIPs
- JovianDSS failover mechanism technologies explained
Example
Consider a scenario in which the Proxmox VE cluster and the JovianDSS storage are both attached to three physical networks:
- Net1 172.28.0.0/16 Management/Internet connection
- Net2 172.29.0.0/16 Data network
- Net3 172.30.0.0/16 Data network
There are two JovianDSS storage nodes with Failover enabled, and Pool-2
has three virtual IP addresses assigned:
- VIP0 192.168.28.102 associated with physical interfaces connected to Net1 172.28.0.0/16
- VIP1 192.168.29.102 associated with physical interfaces connected to Net2 172.29.0.0/16
- VIP2 192.168.30.102 associated with physical interfaces connected to Net3 172.30.0.0/16
A three-node Proxmox VE cluster in which each node has three network interfaces connected to the physical networks (an interface configuration sketch follows the list):
- vmbr0: virtual bridge connected to Net1, with IP 172.28.143.11/16
- ens224: interface connected to Net2, with IP 172.29.143.11/16
- ens256: interface connected to Net3, with IP 172.30.143.11/16
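For reference, the relevant part of /etc/network/interfaces on such a node could look roughly like the sketch below; the bridge port name ens192 is a placeholder, so adjust names and addresses to your installation:
auto vmbr0
iface vmbr0 inet static
    address 172.28.143.11/16
    bridge-ports ens192
    bridge-stp off
    bridge-fd 0

auto ens224
iface ens224 inet static
    address 172.29.143.11/16

auto ens256
iface ens256 inet static
    address 172.30.143.11/16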
Data transfers are restricted to the VIPs 192.168.29.102 and 192.168.30.102, while REST commands use only 192.168.28.102.
Example excerpt from the storage pool section for jdss-Pool-2 in the storage.cfg file:
joviandss: jdss-Pool-2
pool_name Pool-2
shared 1
...
control_addresses 192.168.28.102
data_addresses 192.168.29.102,192.168.30.102
...
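Once the entry is in place, one way to confirm that Proxmox VE can reach the pool is to query the storage status with the standard pvesm tool:
pvesm status --storage jdss-Pool-2
If the plugin can talk to JovianDSS over the control address, the storage should be reported as active.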
Such a configuration is fairly complex, so it is recommended to verify connectivity between each Proxmox VE server in the cluster and the JovianDSS VIPs.
root@node1:~# ping -c 5 192.168.28.102
If connectivity is good, you’ll see output similar to:
PING 192.168.28.102 (192.168.28.102) 56(84) bytes of data.
64 bytes from 192.168.28.102: icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from 192.168.28.102: icmp_seq=2 ttl=64 time=0.214 ms
64 bytes from 192.168.28.102: icmp_seq=3 ttl=64 time=0.186 ms
64 bytes from 192.168.28.102: icmp_seq=4 ttl=64 time=0.167 ms
64 bytes from 192.168.28.102: icmp_seq=5 ttl=64 time=0.178 ms
--- 192.168.28.102 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4075ms
rtt min/avg/max/mdev = 0.167/0.194/0.228/0.022 ms
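Instead of checking each address by hand, a short shell loop run on every Proxmox VE node can test all control and data VIPs in one pass (the address list matches the example configuration above):
for vip in 192.168.28.102 192.168.29.102 192.168.30.102; do
    # two probes per VIP, 1 second timeout each
    ping -c 2 -W 1 "$vip" > /dev/null && echo "$vip reachable" || echo "$vip UNREACHABLE"
done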
Missing route configuration is a potential cause of connectivity issues.
Static routes in Proxmox VE are defined by creating the file /etc/network/interfaces.d/joviandss_pool_2_vip_routes with the following content:
iface vmbr0 inet static
    up /sbin/ip route add 192.168.28.102 dev vmbr0
    down /sbin/ip route del 192.168.28.102 dev vmbr0

iface ens224 inet static
    up /sbin/ip route add 192.168.29.102 dev ens224
    down /sbin/ip route del 192.168.29.102 dev ens224

iface ens256 inet static
    up /sbin/ip route add 192.168.30.102 dev ens256
    down /sbin/ip route del 192.168.30.102 dev ens256
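On a default Proxmox VE installation, which uses ifupdown2, the new route definitions can be applied without a reboot and then verified; the commands below assume the file shown above has just been created:
ifreload -a                    # re-apply the interface configuration, including the new up/down rules
ip route get 192.168.29.102    # should report the route via dev ens224
ip route get 192.168.30.102    # should report the route via dev ens256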