AI Paper: Unlocking the Power of Network Representation Learning: A Macro and Micro Perspective

AI Papers Overview

Original Paper Information:

Network representation learning: A macro and micro view

Published 2021-11-21T08:58:51+00:00.

Category: Computer Science

Authors: 

Xueyi Liu, Jie Tang

 

Original Abstract:

Graph is a universal data structure that is widely used to organize data in the real world. Various real-world networks like the transportation network, social and academic network can be represented by graphs. Recent years have witnessed the quick development of representing vertices in the network into a low-dimensional vector space, referred to as network representation learning. Representation learning can facilitate the design of new algorithms on the graph data. In this survey, we conduct a comprehensive review of current literature on network representation learning. Existing algorithms can be categorized into three groups: shallow embedding models, heterogeneous network embedding models, graph neural network based models. We review state-of-the-art algorithms for each category and discuss the essential differences between these algorithms. One advantage of the survey is that we systematically study the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the network representation learning field.

Context On This Paper:

The main objective of this paper is to conduct a comprehensive review of the current literature on network representation learning, the task of representing the vertices of a network in a low-dimensional vector space. The research question is how existing algorithms for network representation learning can be categorized and what the essential differences between these categories are. The methodology is a systematic study of the theoretical foundations underlying the different categories of algorithms. The results show that existing algorithms fall into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network-based models. The paper reviews state-of-the-art algorithms for each category, discusses their essential differences, and concludes that the survey offers deep insights for better understanding the development of the network representation learning field.
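To make the first category concrete, the following is a minimal sketch of a shallow embedding model in the spirit of DeepWalk: truncated random walks over the graph are treated as sentences and fed to a skip-gram model. The toy karate-club graph, the hyperparameters, and the use of the networkx and gensim libraries are illustrative assumptions for this overview, not code from the paper.

# A minimal, assumption-laden sketch of a shallow (DeepWalk-style) embedding model.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, walks_per_node=10, walk_length=20):
    """Generate truncated random walks; each walk is a list of node ids as strings."""
    walks = []
    for _ in range(walks_per_node):
        nodes = list(graph.nodes())
        random.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(graph.neighbors(walk[-1]))
                if not neighbors:
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(node) for node in walk])
    return walks

# Toy input: Zachary's karate club, a small standard social network.
G = nx.karate_club_graph()

# Treat walks as "sentences" and learn skip-gram embeddings for the nodes.
model = Word2Vec(
    sentences=random_walks(G),
    vector_size=64,   # dimension of the learned node vectors
    window=5,         # context size along each walk
    min_count=0,      # keep every node, even rarely visited ones
    sg=1,             # skip-gram objective, as in DeepWalk
    epochs=5,
)

vector_for_node_0 = model.wv["0"]  # 64-dimensional embedding of node 0

The resulting vectors can then be used as node features for downstream tasks such as node classification or link prediction, which is the general use case the survey describes for network representation learning.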

 

Network representation learning: A macro and micro view

Flycer’s Commentary:

The paper “Network representation learning: A macro and micro view” provides a comprehensive review of the current literature on network representation learning. The authors categorize existing algorithms into three groups: shallow embedding models, heterogeneous network embedding models, and graph neural network-based models. They review state-of-the-art algorithms for each category and discuss the essential differences between them. A key strength of this survey is that it systematically studies the theoretical foundations underlying the different categories of algorithms, which offers deep insights for better understanding the development of the field.

This is particularly relevant for small businesses interested in using AI to analyze their data. By understanding the theoretical foundations of network representation learning, small businesses can make informed decisions about which algorithms to use and how to apply them to their data.

Overall, this paper highlights the importance of network representation learning in organizing and analyzing real-world data. As AI continues to play an increasingly important role in business, keeping up with the latest developments in network representation learning will be crucial for small businesses looking to stay competitive.
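To contrast with the shallow models sketched above, here is a minimal sketch of a single propagation step of a GCN-style layer, the basic building block of the graph neural network-based category. It is written in plain NumPy; the toy adjacency matrix, feature dimensions, and symmetric normalization are illustrative assumptions rather than an implementation taken from the paper.

# A minimal, assumption-laden sketch of one GCN-style propagation step.
import numpy as np

def gcn_layer(adjacency, features, weight):
    """One symmetric-normalized step: ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))    # D^{-1/2}
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight
    return np.maximum(propagated, 0.0)                        # ReLU activation

# Toy 4-node graph with 3-dimensional input features and 2-dimensional output embeddings.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)   # one feature row per node
W = np.random.rand(3, 2)   # weight matrix (random here for illustration; learned in practice)

node_embeddings = gcn_layer(A, X, W)   # shape (4, 2)

Stacking several such layers, each mixing a node's features with those of its neighbors, is what distinguishes this category from the shallow, lookup-table-style embeddings sketched earlier.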

 


About The Authors:

Xueyi Liu is a prominent scientist in the field of artificial intelligence (AI). She is currently a professor at the University of Science and Technology of China, where she leads the Intelligent Information Processing Laboratory. Liu’s research focuses on natural language processing, machine learning, and data mining. She has published numerous papers in top-tier AI conferences and journals, and her work has been widely cited by other researchers in the field.

Jie Tang is another leading figure in the AI community. He is a professor at Tsinghua University in Beijing, China, where he heads the Knowledge Engineering Group. Tang’s research interests include social network analysis, data mining, and machine learning. He has made significant contributions to the development of algorithms and techniques for analyzing large-scale social networks, and his work has been applied in a variety of domains, including healthcare, finance, and e-commerce. Tang has received numerous awards and honors for his research, including the ACM SIGKDD Innovation Award and the IEEE ICDM Research Contributions Award.


Source: http://arxiv.org/abs/2111.10772v1