TY - GEN
T1 - Analyzing the Transferability of Collective Inference Models Across Networks
AU - Niu, Ransen
AU - Moreno, Sebastian
AU - Neville, Jennifer
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2016/1/29
Y1 - 2016/1/29
N2 - Collective inference models have recently been used to significantly improve the predictive accuracy of node classification in network domains. However, these methods have generally assumed that a fully labeled network is available for learning. There has been relatively little work on transfer learning methods for collective classification, i.e., methods that exploit labeled data in one network domain to learn a collective classification model to apply in another network. While there has been some work on transfer learning for link prediction and node classification, the proposed methods focus on developing algorithms to adapt the models without a deep understanding of how network structure impacts transferability. Here we make the key observation that collective classification models are generally composed of local model templates that are rolled out across a heterogeneous network to construct a larger model for inference. Thus, the transferability of a model could depend on the similarity of the local model templates and/or the global structure of the data networks. In this work, we study the performance of basic relational models when learned on one network and transferred to another network for collective inference. We show, using both synthetic and real-data experiments, that the transferability of models depends on both the graph structure and the local model parameters. Moreover, we show that a probability calibration process (which removes bias due to propagation errors in collective inference) improves transferability.
UR - http://www.scopus.com/inward/record.url?scp=84964754257&partnerID=8YFLogxK
U2 - 10.1109/ICDMW.2015.192
DO - 10.1109/ICDMW.2015.192
M3 - Conference contribution
AN - SCOPUS:84964754257
T3 - Proceedings - 15th IEEE International Conference on Data Mining Workshop, ICDMW 2015
SP - 908
EP - 916
BT - Proceedings - 15th IEEE International Conference on Data Mining Workshop, ICDMW 2015
A2 - Wu, Xindong
A2 - Tuzhilin, Alexander
A2 - Xiong, Hui
A2 - Dy, Jennifer G.
A2 - Aggarwal, Charu
A2 - Zhou, Zhi-Hua
A2 - Cui, Peng
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 15th IEEE International Conference on Data Mining Workshop, ICDMW 2015
Y2 - 14 November 2015 through 17 November 2015
ER -