Questions RGCN - ufal/NPFL095 GitHub Wiki

  1. Let V = {A, B, C, D} and E = {(B, r, A), (D, r, C), (C, r, A), (C, q, A)},
    where (B, r, A) means there is an edge from node B to node A with relation type r. Let G = (V, E) be our relational graph.
    Let A be the currently processed node and let
W⁰_r = ((2,2,2), (2,2,2))
W⁰_q = ((1,2,3), (1,1,1))
W⁰_0 = ((2,1,2), (1,0,1))

h_A = (1,2,1)^T
h_B = (1,1,1)^T
h_C = (2,2,2)^T
h_D = (2,0,0)^T

where h_A ∈ R³ is the feature vector of node A, i.e. its "hidden" state at the 0-th layer (h_A = h⁰_A);
W⁰_0 ∈ R²ˣ³ is the learnable weight matrix projecting the features of the currently processed node (the self-connection);
W⁰_r ∈ R²ˣ³ is the learnable weight matrix projecting the features of a node connected to the current node by relation r.

Following Equation 2, compute h¹_A, i.e. the hidden representation of node A at the 1st layer. Ignore the regularization from Section 2.2. As in the paper, assume c_{i,r} = |N_i^r|, i.e. the number of incoming edges of node i with relation type r.

Hints

  • Draw the multi-graph G on a paper.
  • h¹_A ∈ R²
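
The per-node update you need (Equation 2 of the paper, with c_{i,r} = |N_i^r| and no regularization) can be sketched in NumPy as below. The function and variable names are illustrative, not from the paper, and ReLU is used as an example activation σ; the graph and weights from the question are not plugged in, so the answer is not given away.

```python
import numpy as np

def rgcn_layer(h, edges, W_rel, W_self, activation=lambda x: np.maximum(x, 0.0)):
    """One R-GCN propagation step (a sketch of Equation 2):
    h_i^(l+1) = σ( Σ_r Σ_{j ∈ N_i^r} (1/c_{i,r}) W_r h_j  +  W_0 h_i ),
    with c_{i,r} = |N_i^r| and no basis/block-diagonal regularization.

    h      : dict node -> feature vector (layer-l hidden state)
    edges  : iterable of (src, rel, dst) triples; messages flow src -> dst
    W_rel  : dict rel -> weight matrix for that relation
    W_self : weight matrix W_0 for the node's own features
    """
    out = {}
    for i in h:
        # group the incoming neighbours of i by relation type (N_i^r)
        by_rel = {}
        for (j, r, dst) in edges:
            if dst == i:
                by_rel.setdefault(r, []).append(j)
        msg = W_self @ h[i]              # self-connection term W_0 h_i
        for r, neigh in by_rel.items():
            c = len(neigh)               # normalization constant c_{i,r} = |N_i^r|
            for j in neigh:
                msg = msg + (W_rel[r] @ h[j]) / c
        out[i] = activation(msg)
    return out
```

To check the question by hand, it is enough to call this once with the four node vectors, the four edges, and the three weight matrices above, and read off `out['A']`.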
  2. In practice, Graph Neural Networks are usually only 2-5 layers deep. Why?

  3. Bonus question: How would you train this model on large data (which is multiple times larger than the available memory)?
