TY - JOUR
T1 - Weisfeiler and Leman Go Relational
AU - Barcelo, Pablo
AU - Galkin, Mikhail
AU - Morris, Christopher
AU - Romero, Miguel
N1 - Funding Information:
Pablo Barceló is funded by Fondecyt grant 1200967, the ANID - Millennium Science Initiative Program - Code ICN17002, and the National Center for Artificial Intelligence CENIA FB210017, Basal ANID. Mikhail Galkin is funded by the Samsung AI grant held at Mila. Christopher Morris is partially funded by a DFG Emmy Noether grant (468502433) and an RWTH Junior Principal Investigator Fellowship under the Excellence Strategy of the Federal Government and the Länder. Miguel Romero is funded by Fondecyt grant 11200956, the Data Observatory Foundation, and the National Center for Artificial Intelligence CENIA FB210017, Basal ANID.
Publisher Copyright:
© 2022 Proceedings of Machine Learning Research. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Knowledge graphs, modeling multi-relational data, improve numerous applications such as question answering or graph logical reasoning. Many graph neural networks for such data have emerged recently, often outperforming shallow architectures. However, the design of such multi-relational graph neural networks is ad hoc, driven mainly by intuition and empirical insights. Up to now, their expressivity, their relation to each other, and their (practical) learning performance are poorly understood. Here, we initiate the study of deriving a more principled understanding of multi-relational graph neural networks. Namely, we investigate the limitations in the expressive power of the well-known Relational GCN and Compositional GCN architectures and shed some light on their practical learning performance. By aligning both architectures with a suitable version of the Weisfeiler-Leman test, we establish under which conditions both models have the same expressive power in distinguishing non-isomorphic (multi-relational) graphs or vertices with different structural roles. Further, by leveraging recent progress in designing expressive graph neural networks, we introduce the k-RN architecture, which provably overcomes the expressiveness limitations of the above two architectures. Empirically, we confirm our theoretical findings in a vertex classification setting over small and large multi-relational graphs.
AB - Knowledge graphs, modeling multi-relational data, improve numerous applications such as question answering or graph logical reasoning. Many graph neural networks for such data have emerged recently, often outperforming shallow architectures. However, the design of such multi-relational graph neural networks is ad hoc, driven mainly by intuition and empirical insights. Up to now, their expressivity, their relation to each other, and their (practical) learning performance are poorly understood. Here, we initiate the study of deriving a more principled understanding of multi-relational graph neural networks. Namely, we investigate the limitations in the expressive power of the well-known Relational GCN and Compositional GCN architectures and shed some light on their practical learning performance. By aligning both architectures with a suitable version of the Weisfeiler-Leman test, we establish under which conditions both models have the same expressive power in distinguishing non-isomorphic (multi-relational) graphs or vertices with different structural roles. Further, by leveraging recent progress in designing expressive graph neural networks, we introduce the k-RN architecture, which provably overcomes the expressiveness limitations of the above two architectures. Empirically, we confirm our theoretical findings in a vertex classification setting over small and large multi-relational graphs.
UR - http://www.scopus.com/inward/record.url?scp=85164539883&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85164539883
SN - 2640-3498
VL - 198
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 1st Learning on Graphs Conference, LOG 2022
Y2 - 9 December 2022 through 12 December 2022
ER -