Intuitively, crest speeds of water waves are assumed to match their phase speeds. However, this is generally not the case for natural waves within unsteady wave groups. This motivates our study, which presents new insights into the generic behaviour of crest speeds of linear to highly nonlinear unsteady waves. While our major focus is on gravity waves, where a generic crest slowdown occurs cyclically, results for capillary-dominated waves are also discussed, for which crests cyclically speed up. This curious phenomenon arises when the theoretical constraint of steadiness is relaxed, allowing waves to change their form, or shape. In particular, a kinematic analysis of both simulated and observed open-ocean gravity waves reveals a forward-to-backward leaning cycle for each individual crest within a wave group. This is clearly manifest during the focusing of dominant wave groups, essentially due to the dispersive nature of waves. It occurs routinely for focusing linear (vanishingly small steepness) wave groups, and it is enhanced as the wave spectrum broadens. It is found to be relatively insensitive to the degree of phase coherence and focusing of wave groups. The nonlinear nature of waves limits the crest slowdown; the slowdown is reduced when gravity waves become less dispersive, either as they steepen or as they propagate over finite water depths. This is demonstrated by numerical simulations of the unsteady evolution of two- and three-dimensional dispersive gravity wave packets in both deep and intermediate water depths, and by open-ocean space–time measurements.
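The depth dependence of dispersion invoked above can be made concrete with the linear dispersion relation. The following minimal sketch (not taken from the paper; the wavelength and depths are illustrative choices) computes phase and group speeds for a gravity wave in deep and intermediate depth, showing how the gap between them, and hence the dispersive focusing that drives the crest slowdown, narrows as the water shallows.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def phase_and_group_speed(k, h):
    """Linear phase and group speeds for a gravity wave of wavenumber k in water of depth h."""
    omega = np.sqrt(g * k * np.tanh(k * h))
    c = omega / k
    cg = 0.5 * c * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))
    return c, cg

# Example: a 100 m wavelength in deep (h = 100 m) versus intermediate (h = 20 m) depth
k = 2 * np.pi / 100.0
for h in (100.0, 20.0):
    c, cg = phase_and_group_speed(k, h)
    print(f"depth {h:5.1f} m: c = {c:5.2f} m/s, cg = {cg:5.2f} m/s, cg/c = {cg/c:4.2f}")
```

In deep water the ratio cg/c is close to 0.5 (strongly dispersive), while in intermediate depth it approaches unity, consistent with the reduced crest slowdown reported above.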
The study of autobiographical memory in autism has mainly been carried out in adults, and more rarely in children, chiefly revealing deficits in episodic memory. Only one study in adolescents has sought to characterise episodic memories, including their sensory properties. The aim of our study is to assess episodic productions of both past and future events in adolescents with autism spectrum disorder, using a visual support and exploring phenomenological and emotional details as well as recollection abilities.
Materials and methods
A sample of 16 adolescents diagnosed with autism spectrum disorder without intellectual disability (ASD-WID) and a group of 16 typically developing children completed an original, playful and controlled autobiographical memory task, comprising both a rapid, cued assessment of the semantic component and a more detailed assessment of the episodic component.
Participants with ASD-WID show difficulties in retrieving episodic memories compared with controls and benefit significantly from cueing. Regarding perceptual properties, adolescents with ASD provide fewer colours than controls, whereas the total number of perceptual details does not differ between the two groups. Finally, re-experiencing differs according to the period evoked: the re-experiencing of past events is less precise than projection into the future.
These results confirm the existence of episodic memory impairments in adolescents with ASD-WID, which improve in the presence of a visual support. Sensory properties, particularly colours, appear to be involved differently in the organisation of memories, probably in relation to atypical perception in people with ASD-WID. The impact of colour perception on memory is a research avenue worth exploring further.
The Chanson de Roland, although contemporary with the development of chivalry, shows the value system of a highly aristocratic society of revenge, preserved in holy war. It exudes disdain for serfs, used as a foil for the “baron (ber)” and the “vassal,” and is not interested in the bulk of the army, largely consisting of “soudoyers.” It glorifies the heroic death of great warriors wishing to live up to their lineage and their ancestors, all Franks, and for this reason they are not afraid of dying or killing.
The Chanson d’Aspremont was composed shortly before 1190, either in southern Italy or in the French domains of the Plantagenets, or in both. Widely circulated during the Middle Ages, it lacks the poetic force of the Chanson de Roland. Perhaps closer to a romance than to an epic poem, it is two and a half times longer. The atmosphere changes and there is no unity of action. Martí de Riquer even went so far as to write that “it had more than a little literary success, but the enormous mass of the chanson makes it tough to read.” The tensions and social diversity it reveals are also interesting for the historian: it may be that we can learn more from it than from the Chanson de Roland, which the Aspremont sets out to complement by recounting the “enfances” of Roland during one of Charlemagne's great battles in Calabria, on the slopes of Aspromonte. That battle was fought to defend Rome and the empire against the offensive of King Agoulant, preceded by his son Eaumont.
The Chanson d’Aspremont appears to borrow all its great values from the Song of Roland: its elegy of the great but terrible fight to the death in fair combat; its aristocratic system; and a number of its formulas and narrative motifs. However, all this is transposed, shifted, and inserted into a more prosaic form, even diluted in a composite ensemble of interesting debates and combats, which are piquant rather than moving. Its composition is more like a game, and it juxtaposes various value systems. Most important, I think, is that more truly chivalric elements can be read into it. In fact, the succession of Frankish victories pushes sacrificial death somewhat into the background; instead, the stress placed on the actual ritual of knighting brings in the theme of the promotion of the young, of vavasours, and even of serfs.
Donald J. Trump won the 2016 US presidential election with fewer popular votes than Hillary R. Clinton. This is the fourth time this has happened, the others being 1876, 1888, and 2000. In earlier work, we analyzed these elections (and others) and showed how the electoral winner can often depend on the size of the US House of Representatives. This work was inspired by Neubauer and Zeitlin (2003, 721–5) in their paper, “Outcomes of Presidential Elections and the House Size.” A sufficiently larger House would have given electoral victories to the popular vote winner in both 1876 and 2000. An exception is the election of 1888. We show that Trump’s victory in 2016 is like Harrison’s in 1888 and unlike Hayes’s in 1876 and Bush’s in 2000. This article updates our previous work to include the 2016 election. It also draws attention to some of the anomalous behavior that can arise under the Electoral College.
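The dependence of the electoral outcome on House size comes down to apportionment arithmetic: each state's electoral votes equal its apportioned House seats plus two. The sketch below is a hedged illustration using the Huntington-Hill (equal proportions) method currently used for House apportionment; the state names and populations are hypothetical placeholders, not data from the article.

```python
import heapq

def apportion_huntington_hill(populations, house_size):
    """Apportion House seats with the Huntington-Hill (equal proportions) method."""
    seats = {state: 1 for state in populations}                 # every state gets one seat
    # priority value for a state's next seat, given n current seats: P / sqrt(n * (n + 1))
    heap = [(-pop / (1 * 2) ** 0.5, state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(house_size - len(populations)):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        n = seats[state]
        heapq.heappush(heap, (-populations[state] / (n * (n + 1)) ** 0.5, state))
    return seats

def electoral_votes(populations, house_size):
    """Electoral votes per state: apportioned House seats plus two senators."""
    return {s: n + 2 for s, n in apportion_huntington_hill(populations, house_size).items()}

# Toy populations (hypothetical numbers, for illustration only)
pops = {"A": 9_000_000, "B": 6_000_000, "C": 2_000_000, "D": 500_000}
for size in (10, 20, 40):
    print(size, electoral_votes(pops, size))
```

Because the two "senatorial" electors per state weigh less as the House grows, the relative advantage of small states shrinks with House size, which is the mechanism behind the article's counterfactuals.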
The Battle of Bouvines is known in detail from the narratives of William le Breton, but his reports are less objective than modern historians, let alone those in military studies, have generally taken them to be. William's accounts are infused with Capetian propaganda in the way they put King Philip Augustus at the centre of the battle. Here, constructive criticism is offered of that line of argument, relying in particular on the account by the so-called Anonymous of Béthune. It is suggested that the scale of the battle should be revised downwards, and what was at stake and what the battle's impact was are discussed along the same lines as has been done for other battles in the context of warfare and feudal relations.
Bouvines appears at the juncture of two types of operations. A true chivalric battle, with its codes and scenes similar to those found in the Anglo-Norman world of Orderic Vitalis, engaged the French right flank and Count Ferran's knights: horses were killed with a knife and not with arrows, as in the games of the ‘poignées’. Further to the left, it was essential for the French to disperse the footsoldiers who protected the mounted knights and whose flight left the cavalry exposed. It is uncertain whether organised mercenaries took part in the battle of Bouvines. The defection of the duke of Louvain was an important reason for the defeat of Emperor Otto and of the ‘men from Brabant’, who offered resistance to the last man – for which they were praised by many French commentators – and who were probably men who had come with the emperor.
Orderic Vitalis did not relate the Battle of Bouvines. Such an omission is most likely explained by the fact that he had died nearly seventy years earlier. But it is regrettable, for as a result we do not know whether he would have been struck, as at Brémule, by the small number of dead, and whether he would have glimpsed in it one of those partial battles with which the history of feudal conflict is studded, and which are only exceptional fractures, quickly mended, of chivalric and Christian society.
We revisit the classical but as yet unresolved problem of predicting the breaking onset of 2D and 3D irrotational gravity water waves. Based on a fully nonlinear 3D boundary element model, our numerical simulations investigate geometric, kinematic and energetic differences between maximally tall non-breaking waves and marginally breaking waves in focusing wave groups. Our study focuses initially on unidirectional domains with flat bottom topography and conditions ranging from deep to intermediate depth (depth to wavelength ratio from 1 to 0.2). Maximally tall non-breaking (maximally recurrent) waves are clearly separated from marginally breaking waves by their normalised energy fluxes localised near the crest tip region. The initial breaking instability occurs within a very compact region centred on the wave crest. On the surface, this reduces to the local ratio of the energy flux velocity (here the fluid velocity) to the crest point velocity for the tallest wave in the evolving group. This provides a robust threshold parameter for breaking onset for 2D wave packets propagating in uniform water depths from deep to intermediate. Further targeted study of representative cases of the most severe laterally focused 3D wave packets in deep and intermediate depth water shows that the threshold remains robust. These numerical findings for 2D and 3D cases are closely supported by our companion observational results. Warning of imminent breaking onset is detectable up to a fifth of a carrier wave period prior to a breaking event.
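To make the diagnostic concrete, the sketch below illustrates, in schematic form and under our own assumptions about the input arrays, how the crest-tracking ratio described above could be evaluated from surface snapshots: locate the crest, differentiate its position to obtain the crest-point speed, and compare it with the surface fluid speed at the crest. The threshold value itself is left as a free parameter rather than hard-coded.

```python
import numpy as np

def crest_speed(x, eta_snapshots, dt):
    """Crest-point speed from successive free-surface snapshots eta(x, t) sampled every dt seconds."""
    crest_x = np.array([x[np.argmax(eta)] for eta in eta_snapshots])  # crest location per snapshot
    return np.gradient(crest_x, dt), crest_x

def breaking_onset_parameter(u_at_crest, crest_speeds):
    """Ratio of the surface fluid speed at the crest to the crest-point speed."""
    return u_at_crest / crest_speeds

# Usage sketch (the arrays would come from a wave model or from measurements):
# C, xc = crest_speed(x, eta_snapshots, dt)
# B = breaking_onset_parameter(u_at_crest, C)
# breaking_imminent = B > B_threshold   # threshold value as reported in the paper
```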
We estimate a medium-scale dynamic stochastic general equilibrium (DSGE) model for the euro area in an open-economy framework. The model includes structural trends on all variables, allowing us to estimate it on raw (non-detrended) data. First, we provide a theoretical balanced growth path consistent with permanent productivity shocks, inflation target changes, and permanent shocks to the openness of the economies. We then define the cycle as the gap between this sustainable trajectory and the raw data. Hence, we can properly deal with persistent deviations of the trade balance. Finally, we find persistent and strong effects of the asymmetric increase of euro-area imports during the last 10 years on domestic inflation. From 2000Q1 to 2008Q4, we estimate the contribution of the imbalanced development of international trade to euro-area inflation at an average of −0.7%, and to the nominal interest rate at an average of −1.4%.
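As a toy illustration of the trend/cycle decomposition invoked here (not the estimated DSGE model itself; all parameter values are arbitrary), the sketch below generates log output as a stochastic trend driven by permanent shocks plus a stationary AR(1) component, and recovers the cycle as the gap between the observed series and the trend.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
drift, sigma_trend, rho, sigma_cycle = 0.005, 0.01, 0.8, 0.01

# stochastic trend: random walk with drift, standing in for permanent productivity shocks
trend = np.cumsum(drift + sigma_trend * rng.standard_normal(T))

# stationary cycle: AR(1) deviations around the trend
cycle = np.zeros(T)
for t in range(1, T):
    cycle[t] = rho * cycle[t - 1] + sigma_cycle * rng.standard_normal()

log_y = trend + cycle   # observed (raw) log data
gap = log_y - trend     # the "cycle": gap between the raw data and the sustainable trajectory
```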
We report on new experiments designed to investigate bed destabilization processes in a two-dimensional wave flume physical model of a beach. The mobile bed consists of non-cohesive granular material of low density. The wave conditions are provided by repeating a cycle of waves made of two bichromatic groups of different period. The horizontal and vertical velocities are acoustically profiled vertically, from the free-stream elevation down to the still bed level, in the surf zone. Additional measurements of the fluid pressure are undertaken at positions closely aligned horizontally and vertically in, and slightly above, the sediment bed. The mobile bed interfaces (the still bed and the top interface) are detected via acoustic and optical methods; the two methods are cross-compared and give similar results. Flow turbulence over the bed is analysed: the Reynolds turbulent shear stress is found to be negligible compared with the momentum diffusion induced by the orbital flow. The shear stress and the horizontal pressure gradient are computed at near-bed elevation and used in the bed incipient plug flow model of Sleath (Cont. Shelf Res., vol. 19 (13), 1999, pp. 1643–1664). Both the model and the measurements confirm that destabilization occurs when the non-dimensional pressure gradient (or Sleath number) exceeds the threshold value of 0.3, which coincides with strong flow acceleration. The near-bottom fluid shear stress detected during these flow accelerations at steep wave fronts is found experimentally to be negative, a result reproduced by an unsteady plug flow model. This suggests that the fluid above the bed resists the motion of the sediment layer at these particular phases.
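A minimal sketch of the destabilization criterion follows, assuming the commonly used definition of the Sleath number as the near-bed free-stream acceleration scaled by (s − 1)g, where s is the sediment-to-fluid density ratio; the velocity series, time step and density ratio in the usage lines are hypothetical.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def sleath_number(u_freestream, dt, s_rel):
    """Sleath number S = (du/dt) / ((s - 1) g) from a near-bed free-stream velocity time series."""
    dudt = np.gradient(u_freestream, dt)  # horizontal flow acceleration
    return dudt / ((s_rel - 1.0) * g)

# Usage sketch: low-density granular bed, e.g. s = rho_s / rho ~ 1.2 (hypothetical value)
# S = sleath_number(u, dt, s_rel=1.2)
# plug_flow = S > 0.3   # destabilization threshold quoted in the text
```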
The threshold for the onset of breaking proposed by Barthelemy et al. (arXiv:1508.06002v1, 2015) has been investigated in the laboratory for unidirectional wave groups in deep water, and extended to include different classes of wave groups and moderate wind forcing. Thermal image velocimetry was used to compare measurements of the surface water particle velocity at the wave crest point (the point of maximum elevation) with the speed of the crest point itself, determined by an array of closely spaced wave gauges. The crest point surface energy flux ratio (the ratio of these two speeds) that distinguishes maximum recurrence from marginal breaking was found to be approximately 0.84. Increasing the wind forcing from zero to moderate levels systematically increased this threshold by 2 %. Increasing the spectral bandwidth (decreasing the Benjamin–Feir index from 0.39 to 0.31) systematically reduced the threshold by 1.5 %.
We discuss in this chapter the socio-economic aspects of cities, starting by revisiting the classical models of urban economics such as the Alonso-Muth-Mills and the Beckmann models. We include all the details of the derivation of these models, allowing non-economists to follow and understand the basic assumptions and results used in urban economics. We then discuss spatial income segregation in cities, from both an empirical and a modeling point of view. We discuss the Schelling model and its relation to statistical physics. We end this chapter by presenting scaling ideas and theoretical approaches for computing the exponents.
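As a pointer to the kind of model discussed here, the following is a minimal sketch of Schelling-type segregation dynamics on a grid (our own toy implementation with illustrative parameter values, not the exact variant analysed in the chapter): agents of two groups move to vacant cells whenever the fraction of like neighbours falls below a tolerance threshold, and mild individual preferences produce strong segregation at the aggregate level.

```python
import numpy as np

rng = np.random.default_rng(1)

def schelling(L=50, empty_frac=0.1, tolerance=0.4, sweeps=50):
    """Minimal Schelling model on an L x L grid: two groups (+1/-1) and vacancies (0)."""
    grid = rng.choice([1, -1, 0], size=(L, L),
                      p=[(1 - empty_frac) / 2, (1 - empty_frac) / 2, empty_frac])
    for _ in range(sweeps):
        occupied = np.argwhere(grid != 0)
        rng.shuffle(occupied)
        for i, j in occupied:
            me = grid[i, j]
            neigh = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            same = np.sum(neigh == me) - 1          # exclude the agent itself
            other = np.sum(neigh == -me)
            if same + other > 0 and same / (same + other) < tolerance:
                # unhappy agent: relocate to a randomly chosen vacant cell
                empties = np.argwhere(grid == 0)
                k, l = empties[rng.integers(len(empties))]
                grid[k, l], grid[i, j] = me, 0
    return grid

segregated = schelling()
```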
Most of the world's people are now living in cities and urbanization is expected to keep increasing in the near future. The resulting challenges are complex, difficult to handle, and range from increasing dependence on energy, to the emergence of socio-spatial inequalities, to serious environmental and sustainability issues. Understanding and modeling the structure and evolution of cities is then more important than ever as policy makers are actively looking for new paradigms in urban and transport planning.
Recent advances in the understanding of cities have drawn increased attention to the potential implications of new theoretical models that agree with data. Urban sprawl, the effects of congestion, the dominant mechanisms governing the spatial distribution of activities and residences, and the effect of new transportation infrastructures are all fundamental questions that we need to understand if we want a harmonious development of cities in the future, from both a social and an economic point of view.
Cities have long been the subject of numerous studies in a large number of fields. Discussion of the ideal city can be traced back at least to the Renaissance, and more recently scientists have tried to describe quantitatively the formation and evolution of cities. Regional science, and then quantitative geography, addressed various problems such as the spatial organization of cities and the impact of infrastructures and transport. It is remarkable to note that as early as the 1970s quantitative geographers realized the crucial importance of networks in these systems, and produced visionary studies about networks, their evolution, and the complexity of cities (Haggett et al. 1977).
These studies were further developed mathematically by economists who discussed the interplay between space and economic aspects of cities. Many important models find their origin in the seminal paper of von Thünen and describe isolated, monocentric cities in terms of utility maximization subject to budget constraints. These models allowed spatial economics to get a grasp of the relations between space, income, and transportation. For example, the Japanese economists Fujita and Ogawa discussed the impact of agglomeration effects between firms in a general model that deals with the location choices of individuals and companies.
In the previous chapter, we discussed the analysis and modeling of mobility patterns in cities. However, as cities expand, their transportation networks are also growing, with increasing interconnections between different transportation modes. In large cities, we can now choose the transportation mode used to travel from one point to another, and a single trip can even involve several different modes. This multimodality is a new aspect of large cities and brings new questions and problems. From the users' point of view, it becomes difficult to deal with the huge amount of information needed to describe the different transportation networks and their interconnections. From the transport agencies' point of view, management becomes harder because the different modes are usually run by separate agencies; this renders optimization difficult owing to the large number of aspects that have to be taken into account (Guo and Wilson 2011).
In particular, an important problem concerning multimodality is the synchronization between different modes. For example, on average in the UK, 23% of travel time is lost in connections for trips with more than one mode (Gallotti and Barthelemy, 2014). This lack of synchronization between modes induces differences between the theoretical quickest trip and the “time-respecting” path, which takes into account waiting times at interconnection nodes.
To address these problems, and more generally to understand the impact of the coupling between modes, we need new tools to identify the main factors that govern their efficiency. The multilayer network approach seems to be the most convenient framework for studying these systems (Kivelä et al. 2014; Boccaletti et al. 2014). In this framework, each layer represents a mode and intermodal connections are represented by inter-layer links, as sketched below. In this chapter we discuss some aspects of multimodality and present tools for measuring and characterizing these coupled networks and their efficiency as a whole.
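As a small illustration of this representation (a sketch under our own toy assumptions: the stops, modes and travel times are made up, and networkx is used only for convenience), each node is a (stop, mode) pair, edges within a layer carry in-vehicle times, and inter-layer edges at the same stop carry transfer and waiting times; a weighted shortest path then approximates the quickest multimodal trip.

```python
import networkx as nx

# Each node is a (stop, mode) pair; edge weights are minutes.
G = nx.Graph()

# intra-layer edges: travel within one mode (hypothetical stops and times)
G.add_edge(("A", "bus"), ("B", "bus"), weight=12)
G.add_edge(("B", "rail"), ("C", "rail"), weight=8)

# inter-layer edge: transfer between modes at the same stop (waiting + walking time)
G.add_edge(("B", "bus"), ("B", "rail"), weight=6)

# quickest multimodal trip from A (bus layer) to C (rail layer)
path = nx.shortest_path(G, ("A", "bus"), ("C", "rail"), weight="weight")
minutes = nx.shortest_path_length(G, ("A", "bus"), ("C", "rail"), weight="weight")
print(path, minutes)
```

The inter-layer weight is where the synchronization between modes enters: increasing it captures the time lost in connections, which is what separates the theoretical quickest trip from the time-respecting path discussed above.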
A multilayer network view of urban navigation
Empirical observations of multimodality
We first describe empirical results obtained by Gallotti and Barthelemy (2014) from timetables for the whole UK and for all transport modes. We note that these results were not obtained from traffic data (which are usually difficult to get); instead, the timetable data were used together with the assumption of uniform demand.
As we discussed in Chapter 1, "understanding" has many definitions, with a variable amount of quantitative input. For a physicist, understanding does not only mean having a story consistent with reality, but also having mathematical tools and models able to describe real phenomena and to predict the outcome of experiments. Even if a qualitative description of processes is somewhat satisfying, it is not enough for constructing a science of cities. Indeed, we would like to identify the most important parameters, not only to understand the past, but also to be able to construct a model that predicts, with reasonable confidence, the future evolution of a city and allows us to test the impact of various policies.
At this point, we certainly have a number of pieces of the puzzle, and we have discussed some of them in this book. This does not, however, mean that we have solved the full puzzle. New data sources and large datasets allow us to get a precise idea of what is happening in cities. We are currently experiencing an exciting time during which we can challenge the purely theoretical developments made over the last decades. In many empirical studies, the identification of relevant factors was essentially done statistically, and we can now hope to go beyond this and to have a more mechanistic approach, in which a model based on simple processes is able to reproduce empirical observations.
Concerning the spatial structure of cities, new data sources give us a real-time, high-resolution picture of mobility. The structure of the mobility flows that emerge from these datasets departs from the usual image of a monocentric city in which flows converge towards the central business district. Instead, for large cities, the main flows are far from the simple pattern linking centers of residence to centers of activity that we could have naïvely expected. This massive amount of data also allows us to quantitatively assess the degree of polycentricity of an urban system. A simple model showed that congestion is a crucial factor in understanding how polycentricity and mobility patterns evolve with population size.
The beginning of statistical physics can be traced back to thermodynamics in the nineteenth century. The field is still very active today, with modern problems occurring in out-of-equilibrium systems. The first problems (up to c. 1850) were to describe the exchange of heat and work and to define concepts such as temperature and entropy. A little later, many studies were devoted to understanding the link between a microscopic description of a system (in terms of atoms and molecules) and a macroscopic observation (e.g., the pressure or the volume of a system). The concepts of energy and entropy could then be made more precise, leading to an important formalization of the dynamics of systems and their equilibrium properties.
More recently, during the twentieth century, statistical physicists invested much time in understanding phase transitions. The typical example is a liquid that undergoes a liquid-to-solid transition when the temperature is lowered. This very common phenomenon turned out, however, to be quite complex to understand and to describe theoretically. Indeed, this type of "emergent behavior" is not easily predictable from the properties of the elementary constituents and, as Anderson (1972) put it, "… the whole becomes not only more than but very different from the sum of its parts." In these studies, physicists understood that interactions play a critical role: without interactions there is usually no emergent behavior, since the new properties that appear at large scales result from the interactions between constituents. Even if the interaction is "simple," the emergent behavior might be hard to predict or describe. In addition, the emergent behavior depends not on all the details describing the system, but rather on a small number of parameters that are actually relevant at large scales (see for example Goldenfeld 1992).
Statistical physics thus primarily deals with the link between microscopic rules and macroscopic emergent behavior and many techniques and concepts have been developed in order to understand this translation – among them the notion of relevant parameters, but also the idea that at each level of description of a system there is a specifically adapted set of tools and concepts.
We discuss here modeling approaches for explaining the population distribution characterized by the famous Zipf's law. We start with the classical models of Gibrat and Gabaix, and discuss their derivation, results, and limits. We then propose a discussion of a new approach based on stochastic diffusion. We also revisit central place theory from a quantitative point of view and show that most of Christaller's results can be understood in terms of spatial fluctuations.
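As a minimal sketch of the mechanism behind these classical models (our own toy simulation with arbitrary parameter values), the code below grows city populations multiplicatively with mean-one random shocks and a lower reflecting barrier, the ingredient emphasised by Gabaix, and then estimates the rank-size slope, which should come out close to the Zipf value of −1 once the process has converged.

```python
import numpy as np

rng = np.random.default_rng(2)

def gibrat_with_barrier(n_cities=5000, steps=3000, sigma=0.1, floor=1.0):
    """Multiplicative (Gibrat) growth with mean-one shocks and a lower reflecting barrier."""
    pop = np.full(n_cities, floor)
    for _ in range(steps):
        shocks = np.exp(sigma * rng.standard_normal(n_cities) - 0.5 * sigma**2)  # E[shock] = 1
        pop = np.maximum(pop * shocks, floor)   # reflecting barrier keeps the distribution stationary
    return np.sort(pop)[::-1]

pop = gibrat_with_barrier()
rank = np.arange(1, len(pop) + 1)
# rank-size regression on the upper tail: Zipf's law corresponds to a slope close to -1
slope = np.polyfit(np.log(rank[:1000]), np.log(pop[:1000]), 1)[0]
print(f"rank-size slope: {slope:.2f}")
```

Without the barrier, pure Gibrat growth drifts towards a lognormal distribution; the barrier is what stabilises the power-law tail.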
Mobility is obviously a crucial phenomenon in cities. In fact, it is probably one of the most important mechanisms governing the structure and dynamics of cities. Indeed, individuals go to cities to buy, sell or exchange goods, to work, or to meet other individuals, and for this they need various means of transportation. This is where technology enters the problem, through the (average) velocity of transportation modes. This average velocity has increased with the evolution of technology and has modified the structure and organization of cities. For example, we see in Fig. 5.1 that the “horizon” of an individual depends strongly on her transportation mode. For a walker, the horizon is essentially isotropic and small, while the car allows for a wider exploration, but one that is anisotropic and follows transportation infrastructures. This correlation between the spatial structure of the city and the technology available at the moment of its creation is clearly illustrated by Anas et al. (1998) for US cities. Many major cities, such as Denver or Oklahoma City, developed around rail terminals that triggered the formation of central business districts. In contrast, automobile-era cities that developed later, such as Dallas or Houston, have a spatial organization that is essentially determined by the highway system.
In terms of mobility, the city center is also the location that minimizes the average distance to all other locations in the city. Very naturally, it is then the main attraction for businesses and residences, which leads to competition for space between individuals or firms, giving rise to the real-estate market. There is also a well-known relation between land use and accessibility, as was discussed some time ago by Hansen (1959), and new, extensive datasets will certainly enable us in the future to characterize precisely the relation between these important factors.
It is of course very difficult to give an exhaustive review of all studies on mobility, and in this chapter we will focus on several specific points. We will mostly describe the general features of mobility and will leave the discussion of multimodal aspects for Chapter 6.