# Notes on Contagion Migration Calculations

*laser-base/laser-core GitHub Wiki*

One approach we have used with LASER models is to migrate contagion rather than migrating individuals. In this approach we accumulate the contagion being shed in each patch into a vector (one-dimensional array), `contagion`, and use a connectivity matrix (two-dimensional array), `network`, representing the fraction of the population that moves from patch i to patch j each timestep. From these we compute how much contagion should leave patch i and show up in the force-of-infection calculation for patch j at each timestep.
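As a minimal sketch of the data structures involved (the patch count and initial values here are illustrative, not from any particular model):

```python
import numpy as np

N = 3  # number of patches (illustrative)

# contagion[i]: contagion being shed in patch i this timestep
contagion = np.zeros(N)

# network[i, j]: fraction of patch i's population moving to patch j per timestep
network = np.zeros((N, N))

assert contagion.shape == (N,)
assert network.shape == (N, N)
```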

## Setup

Let's organize the network matrix, as mentioned above, so that `network[i, j]` holds the fraction of population/contagion moving from patch i to patch j.

|       | j=0 | j=1 | j=2 | ... | j=N-1 | |
|-------|-----|-----|-----|-----|-------|--|
| i=0   | $m_{0,0}$ | $m_{0,1}$ | $m_{0,2}$ | ... | $m_{0,N-1}$ | $\sum_{row} =$ out of patch 0 |
| i=1   | $m_{1,0}$ | $m_{1,1}$ | $m_{1,2}$ | ... | $m_{1,N-1}$ | $\sum_{row} =$ out of patch 1 |
| i=2   | $m_{2,0}$ | $m_{2,1}$ | $m_{2,2}$ | ... | $m_{2,N-1}$ | $\sum_{row} =$ out of patch 2 |
| ...   | ... | ... | ... | ... | ... | ... |
| i=N-1 | $m_{N-1,0}$ | $m_{N-1,1}$ | $m_{N-1,2}$ | ... | $m_{N-1,N-1}$ | $\sum_{row} =$ out of patch N-1 |
|       | $\sum_{col} =$ into patch 0 | $\sum_{col} =$ into patch 1 | $\sum_{col} =$ into patch 2 | ... | $\sum_{col} =$ into patch N-1 | |

Generally the diagonal is all zeros: the network represents the movement of contagion between patches, and "movement" of contagion from patch p to patch p is zero.
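Two sanity checks fall out of this layout: the diagonal should be zero, and each row sum (the total fraction leaving a patch) should stay below 1 so no patch exports more than its whole population. A short sketch, using the same illustrative weights as the examples at the end of this page:

```python
import numpy as np

# illustrative movement fractions between 3 patches
network = np.array([[0.00, 0.01, 0.02],
                    [0.03, 0.00, 0.05],
                    [0.07, 0.11, 0.00]])

np.fill_diagonal(network, 0.0)   # "movement" from p to p is zero
row_out = network.sum(axis=1)    # total fraction leaving each patch

assert np.all(np.diag(network) == 0.0)
assert np.all(row_out < 1.0)     # each patch keeps some of its contagion
```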

## Wrong First Cut

The naïve Python+NumPy code for this is

```python
result = contagion * network  # Don't use this!
```

but this gives the wrong result.

NumPy's `*` operator is elementwise multiplication with broadcasting, not matrix multiplication. For `vector * matrix` (and `matrix * vector` - order doesn't matter), the vector is broadcast across the rows, i.e. aligned with the columns (sample with 3 patches):

result =

|     | j=0 | j=1 | j=2 |
|-----|-----|-----|-----|
| i=0 | $v_0 m_{0,0}$ | $v_1 m_{0,1}$ | $v_2 m_{0,2}$ |
| i=1 | $v_0 m_{1,0}$ | $v_1 m_{1,1}$ | $v_2 m_{1,2}$ |
| i=2 | $v_0 m_{2,0}$ | $v_1 m_{2,1}$ | $v_2 m_{2,2}$ |

How is this wrong? `result[0,1]` should be the product of `contagion[0]` (contagion in source patch 0) and the transfer fraction `network[0,1]` (the fraction going from source patch 0 to destination patch 1). Checking carefully, however, the result under NumPy's broadcasting is the product of `contagion[1]` (contagion in patch 1) and the transfer fraction `network[0,1]`.
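The broadcasting behavior is easy to confirm with a matrix of ones (values here are purely illustrative):

```python
import numpy as np

v = np.array([10.0, 20.0, 30.0])
m = np.ones((3, 3))

r = v * m
# Each ROW of r is a copy of v: r[i, j] == v[j] * m[i, j],
# so the vector indexes the COLUMN (j), not the row (i).
assert np.array_equal(r[0], v)
assert np.array_equal(r[2], v)
```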

## Not Recommended Fix

One option, not recommended, is to use the transpose of the network matrix, `network.T`. This works in a sense, but it also transposes the sense of the result: `result[i,j]` now represents contagion from patch j going to patch i, the opposite of the meaning of `[i,j]` in the network.

Example:

```python
result = vector * matrix.T  # Not recommended.
```

result =

|     | j=0 | j=1 | j=2 |
|-----|-----|-----|-----|
| i=0 | $v_0 m_{0,0}$ | $v_1 m_{1,0}$ | $v_2 m_{2,0}$ |
| i=1 | $v_0 m_{0,1}$ | $v_1 m_{1,1}$ | $v_2 m_{2,1}$ |
| i=2 | $v_0 m_{0,2}$ | $v_1 m_{1,2}$ | $v_2 m_{2,2}$ |

Now at least we are multiplying the correct values, e.g. `result[2,1]` is the product of contagion from patch 1 and the transfer fraction from patch 1 to patch 2. However, we find that value in the "wrong" place: `network[2,1]` represents the transfer fraction from patch 2 to patch 1, but `result[2,1]` represents the contagion transferred from patch 1 to patch 2.

## Proposed Solution

```python
result = (vector * matrix.T).T  # Use this formulation.
```

result =

|     | j=0 | j=1 | j=2 |
|-----|-----|-----|-----|
| i=0 | $v_0 m_{0,0}$ | $v_0 m_{0,1}$ | $v_0 m_{0,2}$ |
| i=1 | $v_1 m_{1,0}$ | $v_1 m_{1,1}$ | $v_1 m_{1,2}$ |
| i=2 | $v_2 m_{2,0}$ | $v_2 m_{2,1}$ | $v_2 m_{2,2}$ |

Now we are multiplying the correct values, e.g. $v_0 \times m_{0,1}$ and $v_1 \times m_{1,2}$, and they are in the expected place in the result, i.e. `result[0,1]` contains $v_0 \times m_{0,1}$ (contagion from patch 0 transferred to patch 1) as expected.
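Two equivalent formulations may be clearer: broadcasting the vector down the column axis with `vector[:, None] * matrix` (used in the examples later on this page), or the matrix product `np.diag(vector) @ matrix`. A quick equivalence check with illustrative values:

```python
import numpy as np

net = np.array([[0.00, 0.01, 0.02],
                [0.03, 0.00, 0.05],
                [0.07, 0.11, 0.00]])
inf = np.array([13.0, 17.0, 19.0])

a = (inf * net.T).T        # double-transpose formulation
b = inf[:, None] * net     # column-vector broadcast, no transposes
c = np.diag(inf) @ net     # matrix-product formulation (clear, but builds an N x N diagonal)

assert np.allclose(a, b) and np.allclose(b, c)
```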

## Transfer

To determine incoming contagion, we sum down columns (same destination j):

```python
incoming = result.sum(axis=0)  # loop over i (sources) for fixed j (destination)
```

To determine outgoing contagion, we sum across rows (same source i):

```python
outgoing = result.sum(axis=1)  # loop over j (destinations) for fixed i (source)
```
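Putting the two sums together, one migration step can be sketched as a small function. Note the explicit update `contagion - outgoing + incoming` is an illustrative choice; in a LASER model the incoming contagion may instead feed directly into the force-of-infection calculation.

```python
import numpy as np

def migrate_contagion(contagion: np.ndarray, network: np.ndarray) -> np.ndarray:
    """Return contagion after one timestep of migration (illustrative update form)."""
    transfer = contagion[:, None] * network  # transfer[i, j]: patch i -> patch j
    outgoing = transfer.sum(axis=1)          # leaves each source patch i
    incoming = transfer.sum(axis=0)          # arrives at each destination patch j
    return contagion - outgoing + incoming

net = np.array([[0.00, 0.01, 0.02],
                [0.03, 0.00, 0.05],
                [0.07, 0.11, 0.00]])
inf = np.array([13.0, 17.0, 19.0])
updated = migrate_contagion(inf, net)
```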

## Python+NumPy Examples

We use prime values, scaled, so one can reverse engineer the calculations.

```python
>>> import numpy as np
>>> net = np.array([[0.00, 0.01, 0.02], [0.03, 0.00, 0.05], [0.07, 0.11, 0.00]])
>>> net
array([[0.  , 0.01, 0.02],
       [0.03, 0.  , 0.05],
       [0.07, 0.11, 0.  ]])
>>> inf = np.array([13, 17, 19])
>>> inf * net  # This result is wrong because it computes inf_j * net_ij
array([[0.  , 0.17, 0.38],
       [0.39, 0.  , 0.95],
       [0.91, 1.87, 0.  ]])
>>> inf * net.T  # This result is wrong because it computes inf_j * net_ji
array([[0.  , 0.51, 1.33],
       [0.13, 0.  , 2.09],
       [0.26, 0.85, 0.  ]])
>>> (inf * net.T).T  # This result is correct because it computes inf_i * net_ij
array([[0.  , 0.13, 0.26],
       [0.51, 0.  , 0.85],
       [1.33, 2.09, 0.  ]])
>>> transfer = inf[:,None] * net
>>> incoming = transfer.sum(axis=0)
>>> # (17 * 0.03) + (19 * 0.07) = 0.51 + 1.33 = 1.84
>>> # (13 * 0.01) + (19 * 0.11) = 0.13 + 2.09 = 2.22
>>> # (13 * 0.02) + (17 * 0.05) = 0.26 + 0.85 = 1.11
>>> incoming
array([1.84, 2.22, 1.11])
>>> outgoing = transfer.sum(axis=1)
>>> # (13 * 0.01) + (13 * 0.02) = 0.13 + 0.26 = 0.39
>>> # (17 * 0.03) + (17 * 0.05) = 0.51 + 0.85 = 1.36
>>> # (19 * 0.07) + (19 * 0.11) = 1.33 + 2.09 = 3.42
>>> outgoing
array([0.39, 1.36, 3.42])
```
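A useful sanity check on any formulation: migration only moves contagion, so the total incoming must equal the total outgoing (both are the sum of every entry of the transfer matrix), and, assuming the simple update `contagion - outgoing + incoming` (an illustrative form), the total contagion is conserved.

```python
import numpy as np

net = np.array([[0.00, 0.01, 0.02],
                [0.03, 0.00, 0.05],
                [0.07, 0.11, 0.00]])
inf = np.array([13.0, 17.0, 19.0])

transfer = inf[:, None] * net
incoming = transfer.sum(axis=0)
outgoing = transfer.sum(axis=1)

# both totals equal transfer.sum(): contagion is only moved, never created
assert np.isclose(incoming.sum(), outgoing.sum())

updated = inf - outgoing + incoming
assert np.isclose(updated.sum(), inf.sum())  # conserved by migration
```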