Book contents
- Frontmatter
- Contents
- List of Algorithms
- List of Symbols and Notation
- Preface
- 1 Preliminaries and Notation
- 2 Similarity/Proximity Measures between Nodes
- 3 Families of Dissimilarity between Nodes
- 4 Centrality Measures on Nodes and Edges
- 5 Identifying Prestigious Nodes
- 6 Labeling Nodes: Within-Network Classification
- 7 Clustering Nodes
- 8 Finding Dense Regions
- 9 Bipartite Graph Analysis
- 10 Graph Embedding
- Bibliography
- Index
6 - Labeling Nodes: Within-Network Classification
Published online by Cambridge University Press: 05 July 2016
Summary
Introduction
This chapter introduces techniques for assigning a class label to an unlabeled node, based on the known classes of some labeled nodes as well as the graph structure. This is a form of the task known as supervised classification in the machine learning and pattern recognition communities. Consider, for example, a patents network [554] in which each patent is a node and there is a directed link from patent i to patent j if i cites j. In addition to the resulting graph structure, some information related to the nodes could be available, for instance, the industrial area of the patent (chemicals, information and communication technologies, drugs and medicals, electrical and electronics, etc.). Assume that the industrial area is known for some patents (labeled nodes) but not yet known for others (unlabeled nodes). The within-network classification or node classification task [89] aims to infer the labels of the unlabeled nodes from the labeled ones and the graph structure.
As discussed in [553], within-network classification falls into the semisupervised classification paradigm [2, 10, 152, 844, 847]. The goal of semisupervised classification is to learn a predictive function from a small number of labeled samples together with a (usually much larger) number of unlabeled samples, whose labels are missing or unobserved. Semisupervised learning combines these two sources of information (labeled + unlabeled data) to build a better predictive model than one trained on the labeled samples alone, which would simply ignore the unlabeled samples. Indeed, labeled data are generally expensive to obtain (think, for example, of an expert who has to label the cases manually), whereas unlabeled data are ubiquitous, for example, web pages. Hence, exploiting the distribution of the unlabeled data during the estimation process can prove helpful. Popular semisupervised algorithms include co-training, expectation-maximization algorithms, transductive inference, and so on; for a comprehensive survey of the topic see, for example, [844, 847].
However, to be effective, semisupervised learning algorithms on a graph rely on some strong assumptions about how the labels are distributed over the graph. The main assumption is that neighboring nodes are likely to belong to the same class and thus to share the same class label.
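This neighborhood assumption can be illustrated with a minimal sketch of iterative majority-vote label propagation, one simple instance of within-network classification (not the specific algorithms developed later in the chapter). The toy graph, the labels "A"/"B", and the fixed-point iteration scheme are all illustrative assumptions, not taken from the book.

```python
from collections import Counter

# Toy undirected graph as adjacency lists (e.g., a small citation network).
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}

# Known class labels for some nodes; nodes 2 and 3 are unlabeled.
seed_labels = {0: "A", 1: "A", 4: "B", 5: "B"}

def propagate_labels(graph, seed_labels, max_iter=10):
    """Assign each unlabeled node the majority label of its labeled
    neighbors, sweeping over the nodes until no label changes."""
    labels = dict(seed_labels)
    for _ in range(max_iter):
        changed = False
        for node in graph:
            if node in seed_labels:
                continue  # labeled nodes keep their given class
            neighbor_labels = [labels[n] for n in graph[node] if n in labels]
            if not neighbor_labels:
                continue  # no labeled neighbor yet; try again next sweep
            majority = Counter(neighbor_labels).most_common(1)[0][0]
            if labels.get(node) != majority:
                labels[node] = majority
                changed = True
        if not changed:
            break  # fixed point reached
    return labels

predicted = propagate_labels(graph, seed_labels)
# Node 2 inherits "A" from its neighbors 0 and 1;
# node 3 inherits "B" from its neighbors 4 and 5.
```

The sketch makes the homophily assumption explicit: the only signal used to classify an unlabeled node is the class membership of its neighbors, so the method succeeds exactly when same-class nodes tend to be connected.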
Type: Chapter
Information: Algorithms and Models for Network Data and Link Analysis, pp. 235–275. Publisher: Cambridge University Press. Print publication year: 2016.