Understanding the processes that give rise to networks gives us a better grasp of why we see the networks we do, where we might expect to find them, and how we might expect them to change over time. One way to achieve this is to create simulated networks, which allow us to build networks from explicit generative principles. We can then ask how networks derived from these principles behave and, correspondingly, understand how our observed networks may be generated by similar principles. This chapter explores many generative algorithms, including random graphs, small-world networks, preferential attachment and acquisition, fitness networks, and configuration models, among others.
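The flavor of two of these generative algorithms can be sketched in a few lines of pure Python. This is an illustrative sketch, not the chapter's own code: the function names and the degree-weighted `stubs` list are shorthand of my own, and real analyses would typically use a library such as networkx.

```python
import random

def erdos_renyi(n, p, seed=0):
    """G(n, p) random graph: include each possible edge with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def preferential_attachment(n, m, seed=0):
    """Growth model: each arriving node links to m existing nodes chosen
    in proportion to their current degree (the rich get richer)."""
    rng = random.Random(seed)
    edges = set()
    stubs = list(range(m))  # each appearance of a node counts toward its degree
    for new in range(m, n):
        targets = set()
        while len(targets) < m:       # sample m distinct degree-weighted targets
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.add((t, new))
            stubs += [t, new]         # both endpoints gain a unit of degree
    return edges
```

The contrast is the point: the random graph fixes edges independently at a stroke, while the attachment model grows the network one node at a time, which is what produces its heavy-tailed degree distribution.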
For any form of communication to make it beyond the category of talking to oneself, at least two individuals must share a common lexicon. Before languages can evolve into more complex forms, there must first be a pragmatic sense in which one individual can communicate a basic idea to another. How might shared lexicons have originated? Standard explorations of language often look to well-connected social groups, such as chimpanzees, frequently numbering in the tens of individuals. But we might ask whether language began in a humbler arrangement, involving social groups of just two or a few individuals, such as those found among orangutans. Agent-based models combined with network science offer a way to study this problem. By treating nodes as agents with strict rule-based behavior and edges as opportunities for interaction, agent-based models provide frameworks for studying how behavior and connectivity interact to create emergent phenomena, such as the evolution of cooperation and cultural change. Here we will explore an agent-based model of the naming game to address how structure influences the emergence of shared lexicons.
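A minimal version of the naming game can be written directly from its rules. The sketch below is my own simplification (names are integers, direction of speech is randomized), assuming the standard success/failure update: on success both parties collapse to the spoken name, on failure the hearer simply adds it.

```python
import random

def naming_game(edges, n_agents, steps=20000, seed=0):
    """Minimal naming game on a network given as a set of (u, v) edges.
    Returns each agent's final name inventory."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]
    edge_list = list(edges)
    next_name = 0
    for _ in range(steps):
        speaker, hearer = edge_list[rng.randrange(len(edge_list))]
        if rng.random() < 0.5:                 # randomize who speaks
            speaker, hearer = hearer, speaker
        if not vocab[speaker]:
            vocab[speaker].add(next_name)      # invent a brand-new name
            next_name += 1
        name = rng.choice(sorted(vocab[speaker]))
        if name in vocab[hearer]:              # success: both converge
            vocab[speaker] = {name}
            vocab[hearer] = {name}
        else:                                  # failure: hearer learns it
            vocab[hearer].add(name)
    return vocab
```

Running this on different edge sets (a dyad, a chain, a densely connected clique) is exactly how one asks the chapter's question: how does structure change the time to, and likelihood of, a shared lexicon?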
What is memory? Scientists have proposed a wide variety of spatial metaphors to understand it. These range from the 2D wax tablets proposed by Plato and Aristotle and, subsequently, by Freud with his mystic writing pad, to the 3D physical spaces that one can walk around inside, such as the subway of Collins and Quillian. If memory has such a spatial structure, then it suggests a simple rule: items in memory can be near or far from one another. Anything with a near-and-far structure lends itself to a network representation. Such spatial structure also lends itself to being in the wrong place at the wrong time: remembering things that never happened and forgetting things that did. This chapter explores how structure facilitates memory and also looks at a specific case of false memory to highlight how modeling the process of spreading activation on networks can enrich our understanding of structure beyond degree.
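The core of spreading activation is simple enough to sketch. The toy below is one common variant, not the chapter's specific model: at each step a node's activation is divided evenly among its neighbors and attenuated by a decay factor, so activation fades with network distance.

```python
def spread_activation(adj, start, decay=0.5, steps=3):
    """Spread activation over an adjacency dict {node: [neighbors]}.
    Each step, a node's activation is split among its neighbors,
    attenuated by `decay`. Returns the final activation per node."""
    activation = {node: 0.0 for node in adj}
    activation[start] = 1.0
    for _ in range(steps):
        new = {node: 0.0 for node in adj}
        for node, act in activation.items():
            if act and adj[node]:
                share = decay * act / len(adj[node])  # divide and attenuate
                for nb in adj[node]:
                    new[nb] += share
        activation = new
    return activation
```

Because activation reaching a node depends on how many paths lead to it, two nodes with identical degree can end up with very different activation, which is precisely the sense in which this process sees structure beyond degree.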
The false consensus effect is the observation that people tend to overestimate the number of people who share their views. In modern environments we also see growing evidence of greater polarization. For example, according to the Pew Research Center, the ideologies of congressional US Democrats and Republicans have increasingly diverged over the past five decades, with an ever-shrinking middle ground. This also appears to be reflected among US citizens, with a “disappearing center” hastened by growing “anarchist” and “anti-establishment” ideologies. Many have speculated that this polarization is a global phenomenon. The question we pose here is how beliefs and network structure might interact to facilitate both false consensus effects and rising polarization.
Is searching memory like searching space? William James once wrote that “We make search in our memory for a forgotten idea, just as we rummage our house for a lost object.” Both space and memory have structure, and we can use that structure to zero in on what we are looking for. In searching space, this is easy to see. A person hunting for their lost keys is not unlike a starling scouring the garden for wayward insects. But in searching memory, what is the map? And by what means does a person move from one memory to the next? In this chapter I will lay out the similarities between foraging in space and mind and then describe a research approach inspired by an ecological model of animal foraging. Using this approach, we will combine data from a memory production task with a cognitive map – a network representation – of memory derived from natural language. We will then use this to compare a suite of models aimed at teasing apart how memory search resembles that of our garden starling.
Conspiracy theories explain anomalous events as the outcome of secret plots by small groups of people with malevolent aims. Is every conspiracy unique, or do they all share a common thread? That is, might conspiracy explanations stem from a higher-order belief that binds together a wide variety of ostensibly independent phenomena under a common umbrella? We can call this belief the conspiracy frame. Network science allows us to examine this frame at two levels: the structural coherence of individual conspiracy theories, and the higher-level interconnectivity of conspiracy beliefs as a whole.
When nodes share features we can combine those features in many possible ways. One standard way is to base relationships on shared features. But there are other possibilities. Here we will apply a number of approaches to investigate the concept of distinctiveness. Distinctiveness is how easy it is to discriminate one thing from another. In an important sense distinctiveness is therefore a hypothesis about how the mind works. We say two things are distinctive because a mind can distinguish them. But what makes something distinctive? In this chapter, I will introduce some of the theory behind distinctiveness and then demonstrate how we can use network science to investigate distinctiveness in children’s abilities to learn words. This takes a multilayer network approach, in which we will examine many different edge types constructed of various combinations of shared and unshared features. By examining these edge types, we will discover how best to combine features and which feature combinations best predict early word learning.
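The "standard way" mentioned above, basing edges on shared features, takes only a few lines. The sketch below is a generic illustration (the `min_shared` threshold is my own parameter, not the chapter's): each layer of a multilayer network could be built this way from a different feature combination.

```python
def feature_edges(features, min_shared=1):
    """Build an undirected edge set over items in `features`
    (a dict {item: set of features}), connecting two items when
    they share at least `min_shared` features."""
    items = sorted(features)
    return {(a, b)
            for i, a in enumerate(items)
            for b in items[i + 1:]
            if len(features[a] & features[b]) >= min_shared}
```

Swapping set intersection for set difference (unshared features), or combining the two, yields the alternative edge types that the chapter compares as predictors of early word learning.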
Structure matters for understanding behavior. This chapter introduces the main theme of the book, provides a number of stories about the importance of structure, and outlines the book’s organization.
Degree is the simplest of the node-level measures, but its simplicity often hides its power. Here we will apply degree to the problem of mental structure. Specifically, what is the structure of the relationships between information in the mind? George Kingsley Zipf observed that word frequencies in natural language tend to follow a scale-free distribution: the most frequent words are few, while the less frequent words are many, following a linear relationship on a log-log plot. It has also been suggested that this power-law distribution applies to the relationships between words as well as to their meanings. Some words share meanings with many other words while others share few. This is a hypothesis based on the structural distribution of shared meanings, or polysemy (words with multiple meanings). This chapter will explain the theory underlying Zipf’s law of meaning and power laws. It will also show how we can combine these ideas with the most basic node-level network measure: degree.
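The "linear relationship on a log-log plot" can be made concrete with a small helper. This is a generic least-squares sketch of my own, not the chapter's analysis: for data following a pure power law y = C·xᵃ, the slope of log y against log x recovers the exponent a, and an ideal Zipf rank-frequency curve gives a slope of -1.

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) on log(x); for a pure power law
    y = C * x**a this recovers the exponent a."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den
```

Applied to a network, the same fit over the distribution of node degrees (rather than word frequencies) is one quick way to ask whether shared meanings are themselves scale-free.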
A universal basic income is widely endorsed as a critical feature of effective governance. It is also growing in popularity in an era of substantial collective wealth alongside growing inequality. But how could it work? Current economic policies necessarily influence wealth distributions, but they are often sufficiently complicated that they hide their inefficiencies. Simplifications based on network science can offer plausible solutions and even offer ways to base universal basic income on merit. Here we will examine a case study based on a universal basic income for researchers. This is an important case because numerous funding agencies currently rely on proposal processes with high administrative costs, borne by the proposal writers, their evaluators, and the progress of science itself. Moreover, the outcomes are known to be biased and inefficiently managed. Network science can help us redesign funding allocations in a less costly and potentially more equitable way.
The way networks grow and change over time is called network evolution. Numerous off-the-shelf algorithms have been developed to study network evolution. These can give us insight into the way systems grow and change over time. However, what off-the-shelf algorithms often lack is knowledge of the behavioral details surrounding a specific problem. Here we will develop a simple case that we will revisit over the next few chapters: How do children learn words from exposure to a sea of language? One possibility is that the words children learn first influence the words they learn next. Another possibility is that the structure of language itself facilitates the learning of some words over others. Indeed, we know that adults speak differently to children in ways that facilitate language learning, with semantically informative words tending to appear more often around words that children learn earliest. This invites the question: To what extent does the semantic structure of language predict word learning? This chapter will provide a general framework for building models and competing them against one another, with a specific application to the network evolution of child vocabularies.
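The two possibilities above correspond to two well-known growth rules, which can be scored side by side. The sketch below is a toy of my own making, assuming a common operationalization: preferential attachment scores an unlearned word by its links to already-learned words, while preferential acquisition scores it by its connectivity in the full language graph.

```python
def growth_scores(adj, learned):
    """Score each unlearned word under two candidate growth rules.
    `adj` is {word: [semantic neighbors]}; `learned` is a set of words.
    Returns (attachment_scores, acquisition_scores)."""
    unlearned = [w for w in adj if w not in learned]
    # preferential attachment: links into the known vocabulary
    attachment = {w: sum(nb in learned for nb in adj[w]) for w in unlearned}
    # preferential acquisition: links in the language as a whole
    acquisition = {w: len(adj[w]) for w in unlearned}
    return attachment, acquisition
```

Competing the models then amounts to asking which score better predicts, month by month, the words a child actually learns next.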
A valuable feature of networks is their ability to quantify structural relationships. Much like a chemist who, having purified a sample, can bring a wide range of tools to bear on understanding it, a network scientist who has transformed data into a network can leverage a range of tools that provide precise measurements of the data’s structure. This includes node-level measures like degree and centrality, meso-scale measures like community formation, and macro-scale measures like density and modularity. This chapter introduces these measures and gives a brief guide to their use.
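Two of the simplest measures mentioned, degree and density, illustrate the precision on offer. The helpers below are a minimal pure-Python sketch (libraries such as networkx provide these and the meso-scale measures out of the box).

```python
def degrees(n_nodes, edges):
    """Degree of each node 0..n_nodes-1 in an undirected edge list."""
    deg = [0] * n_nodes
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def density(n_nodes, edges):
    """Fraction of the n*(n-1)/2 possible undirected edges present."""
    return len(edges) / (n_nodes * (n_nodes - 1) / 2)
```

Degree is a node-level number, density a single macro-scale number for the whole graph; the chapter's other measures fill in the scales between.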
Words, like biological species, are born and then, someday, they die. The half-life of a word is roughly 2,000 years, meaning that in that interval about half of all words are replaced with an unrelated (noncognate) word. Where do the new words come from? There are numerous dimensions along which new words could vary from old words, so it may not be easy to see how to enter this problem. However, extending our small-world metaphor and the observation of clusters in language, we can tell a simple story that mirrors biological theories about the origin of species. Language has urban centers with well-populated and well-connected meanings (like *food* and *red*). It also has rural fringes, where words live more isolated lives as hermits with limited connections to other words (like *twang* and *ohm*). Are new words more likely to be born in urban centers or in the rural fringes?