Item 9 of the Patient Health Questionnaire-9 (PHQ-9) asks about thoughts of death and self-harm, but not about suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is therefore increasingly used in research. We assessed the equivalency of total score correlations and the diagnostic accuracy of the PHQ-8 and PHQ-9 for detecting major depression.
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
Point-prevalence surveys for infection or colonization with methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus (VRE), extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae, carbapenem-resistant Enterobacteriaceae (CREs), and for Clostridium difficile infection (CDI) were conducted in Canadian hospitals in 2010 and 2012 to better understand changes in the epidemiology of antimicrobial-resistant organisms (AROs), knowledge that is crucial for public health and care management.
A third survey of the same AROs in adult inpatients in Canadian hospitals with ≥50 beds was performed in February 2016. Data on participating hospitals and patient cases were obtained using standard criteria and case definitions. Associations between ARO prevalence and institutional characteristics were assessed using logistic regression models.
In total, 160 hospitals from 9 of the 10 provinces with 35,018 adult inpatients participated in the survey. Median prevalence per 100 inpatients was 4.1 for MRSA, 0.8 for VRE, 1.1 for CDI, 0.8 for ESBLs, and 0 for CREs. No significant change occurred compared to 2012. CREs were reported from 24 hospitals (15%) in 2016 compared to 10 hospitals (7%) in 2012. Routine universal or targeted admission screening for VRE decreased from 94% in 2010 to 74% in 2016. Targeted screening for MRSA on admission was associated with a lower prevalence of MRSA infection. Large hospitals (>500 beds) had higher prevalences of CDI.
This survey provides national prevalence rates for AROs in Canadian hospitals. Changes in infection control and prevention policies might lead to changes in the epidemiology of AROs and our capacity to detect them.
Automated storage and retrieval systems are principal components of modern production and warehouse facilities. In particular, automated guided vehicles nowadays substitute for human-operated pallet trucks in transporting production materials between storage locations and assembly stations. While low-level control systems take care of navigating such driverless vehicles along programmed routes and of avoiding collisions even under unforeseen circumstances, in the common case of multiple vehicles sharing the same operation area, the problem remains of how to set up routes such that a collection of transport tasks is accomplished most effectively. We address this prevalent problem in the context of car assembly at Mercedes-Benz Ludwigsfelde GmbH, a large-scale producer of commercial vehicles, where routes for automated guided vehicles used in the production process have traditionally been hand-coded by human engineers. Such ad-hoc methods may suffice as long as a running production process remains in place, but any change in the factory layout or production targets necessitates tedious manual reconfiguration, not to mention the missing portability between different production plants. Instead, we propose a declarative approach based on Answer Set Programming to optimize the routes taken by automated guided vehicles for accomplishing transport tasks. The advantages include a transparent and executable problem formalization, provable optimality of routes relative to objective criteria, as well as elaboration tolerance towards particular factory layouts and production targets. Moreover, we demonstrate that our approach is efficient enough to deal with the transport tasks arising in realistic production processes at the car factory of Mercedes-Benz Ludwigsfelde GmbH.
We introduce the asprilo framework to facilitate experimental studies of approaches addressing complex dynamic applications. For this purpose, we have chosen the domain of robotic intra-logistics. This domain is not only highly relevant in the context of today's fourth industrial revolution, but it moreover combines a multitude of challenging issues within a single uniform framework. These include multi-agent planning and reasoning about action, change, resources, strategies, etc. In return, asprilo allows users to study alternative solutions as regards effectiveness and scalability. Although asprilo relies on Answer Set Programming and Python, it is readily usable by any system complying with its fact-oriented interface format. This makes it attractive for benchmarking and teaching well beyond logic programming. More precisely, asprilo consists of a versatile benchmark generator, solution checker and visualizer, as well as a collection of reference encodings featuring various ASP techniques. Importantly, the visualizer's animation capabilities are indispensable for complex scenarios like intra-logistics, in order to inspect valid as well as invalid solution candidates. Also, it allows for graphically editing benchmark layouts that can be used as a basis for generating benchmark suites.
In this contribution, the impact of extreme environmental conditions, in the form of high-energy proton radiation, on silicon–germanium (SiGe) integrated circuits is experimentally studied. Canonical representative structures, including linear (passive interconnects/antennas) and non-linear (low-noise amplifier) circuits, are used as carriers for assessing the impact of aggressive stress conditions on their performance. Perspectives for holistic modeling and characterization approaches, accounting for the various interaction mechanisms (substrate resistivity variations, couplings/interferences, drift in DC and radio frequency (RF) characteristics) in active samples, are laid out to allow for optimal solutions in pushing SiGe technologies toward applications in harsh and radiation-intense environments (e.g. space, nuclear, military). Specific design prototypes are built for assessing mission-critical profiles for emerging RF and mm-wave applications.
Most, probably, of our decisions to do something positive, the full consequences of which will be drawn out over many days to come, can only be taken as the result of animal spirits – a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.
(John Maynard Keynes)
After our deep-dive into the microstructural foundations of price dynamics, the time is ripe to return to one of the most important (and one of the most contentious!) questions in financial economics: what information is contained in prices and price moves? This question has surfaced in various shapes and forms throughout the book, and we feel that it is important to devote a full chapter to summarise and clarify the issues at stake. We briefly touched on some of these points in Section 2.3. Now that we have a better handle on how markets really work at the micro-scale, we return to address this topic in detail.
The Efficient-Market View
Traditionally, market prices are regarded as reflecting the fundamental value (of a stock, currency, commodity, etc.), up to small and short-lived mispricings. In this view, a financial market is a measurement apparatus that aggregates all private estimates of an asset's true (but hidden) value and, after a quick and efficient digestion process, outputs a price. Private estimates should then only evolve because of the release of a new piece of information that objectively changes the value of the asset. Prices are martingales because (by definition) new information cannot be anticipated or predicted. In this context, neither microstructural effects nor the process of trading itself can affect prices, except perhaps on very short time scales, due to discretisation effects like the tick size.
This Platonic view of markets is fraught with a wide range of difficulties that have been the subject of thousands of academic papers over the last 30 years (including many with renewed insights from the perspective of market microstructure). The best known of these puzzles are:
• The excess-trading puzzle: If prices really reflect value and are unpredictable, why are there still so many people obstinately trying to eke out profits from trading? […]
Seller: It's the law of supply and demand, buddy. You want it or not?
(Translated from “6”, by Alexandre Laumonier)
A market is a place where buyers meet sellers to perform trades, and where prices adapt to supply and demand. This time-worn idea is certainly broadly correct, but reality is rather more intricate. At the heart of all markets lies a fundamental tension: buyers want to buy low and sellers want to sell high. Given these opposing objectives, how do market participants ever agree on a price at which to trade?
As the above dialogue illustrates, if a seller were allowed to increase the price whenever a buyer declared an interest in buying, then the price could reach a level so high that the buyer was no longer interested – and vice-versa. If traders always behaved in this way, then conducting even a single transaction would require a long and hard negotiation. Although this might be feasible if trades occurred only very infrequently, modern financial markets involve many thousands of transactions every single day. Therefore, finding a mechanism to conduct this process at scale, such that huge numbers of buyers and sellers can coordinate in real time, is an extremely complex problem.
Centuries of market activity have produced many possible solutions, each with their own benefits and drawbacks. Today, most markets implement an electronic, continuous-time double-auction mechanism based on the essential idea of a limit order book (LOB), which we introduce in Chapter 3. However, as a brief glance at the financial press will confirm, ensuring market stability and “fair and orderly trading” is still elusive, and it remains unclear whether modern electronic markets are any less prone to serious problems than old-fashioned trading pits.
In this part, we will start our deep-dive into the mechanisms of price formation at the most microscopic scale. In the coming chapters, we will zoom in – in both space and time – to consider how the interactions between single orders contribute to the price-formation process in an LOB.
In line with our overall effort to start with elementary models before adding any layers of complexity, we will initially focus on purely stochastic, rather than strategic, behaviours. In short, we will assume that agents’ actions are governed by simple rules that can be summarised by stochastic processes with rate parameters that depend only on the current state of the world – or, more precisely, on the current state of the LOB.
We will start with the study of the volume dynamics of a single queue of orders at a given price. Such a queue grows due to the arrival of new limit orders, or shrinks due to limit orders being cancelled or executed against incoming market orders. We will investigate the behaviour of the average volume of a queue, its stationary distribution and its time to depletion when starting from a given size. We will repeat each of these analyses with different assumptions regarding the behaviour of the order flows, to illustrate how these quantities depend on the specific details of the modelling framework.
From there, extending the model to account for the joint behaviour of the bid and ask queues will be a natural next step. We will introduce the “race to the bottom” between the best bid and ask queues. This race dictates whether the next price move will be upwards (if the ask depletes first) or downwards (if the bid depletes first). We will then fit these order flows directly to empirical data, in a model-free attempt to analyse the interactions between order flow and LOB state.
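To make this concrete, the sketch below (our own illustration, with made-up rate values) simulates a single queue as a birth–death process – limit orders arrive at a constant rate, market orders consume the queue at a constant rate, and each resting order is cancelled independently – and then stages a naive race between a bid and an ask queue, under the additional assumption that the two queues evolve independently, precisely the kind of assumption that the empirical analysis mentioned above is designed to test.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative, uncalibrated rates (events per unit time).
LAM = 1.2   # limit-order arrivals at this price
MU  = 1.0   # market orders consuming the queue
NU  = 0.1   # cancellation rate per resting order

def first_depletion_time(q0, rng):
    """Gillespie simulation of the queue volume until it first hits zero."""
    q, t = q0, 0.0
    while q > 0:
        up, down = LAM, MU + NU * q
        total = up + down
        t += rng.exponential(1.0 / total)
        q += 1 if rng.random() < up / total else -1
    return t

times = [first_depletion_time(10, rng) for _ in range(2000)]
print(f"mean time to depletion from q0 = 10: {np.mean(times):.1f}")

# A naive "race to the bottom": if the bid and ask queues evolve
# independently, the next mid-price move is up when the ask empties first.
def next_move(q_bid, q_ask, rng):
    return +1 if first_depletion_time(q_ask, rng) < first_depletion_time(q_bid, rng) else -1

moves = [next_move(15, 5, rng) for _ in range(2000)]
print(f"P(up | q_bid = 15, q_ask = 5) ≈ {(np.array(moves) == +1).mean():.2f}")
```

As expected, the thinner queue usually empties first, so the imbalance between the best-queue sizes is predictive of the direction of the next price move.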
We will end this modelling effort by introducing a stochastic model for the whole LOB. This approach, which is often called the Santa Fe model, will allow us to make some predictions concerning the LOB behaviour within and beyond the bid–ask spread. Comparison with real-world data will show that such a "zero-intelligence" approach, based on purely stochastic order flow, succeeds at explaining some market variables but fails to capture others.
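For readers who want to experiment, here is a bare-bones zero-intelligence simulation in the spirit of the Santa Fe model – a sketch under our own simplifying assumptions (unit-size orders, a finite price grid, uniform limit-order placement, uncalibrated rates), not the calibrated model discussed in the chapter:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative, uncalibrated Santa Fe rates.
ALPHA, MU, DELTA = 1.0, 4.0, 0.2
N = 200                               # price grid, in ticks

book = np.zeros(N, dtype=int)         # signed depth: > 0 bids, < 0 asks
book[N // 2 - 20: N // 2] = 5         # seed some bids ...
book[N // 2: N // 2 + 20] = -5        # ... and some asks

def best(b):
    """Best bid and ask indices (-1 / N when one side is empty)."""
    bids, asks = np.nonzero(b > 0)[0], np.nonzero(b < 0)[0]
    return (bids[-1] if bids.size else -1), (asks[0] if asks.size else N)

spreads = []
for _ in range(100_000):
    bb, ba = best(book)
    n = int(np.abs(book).sum())
    # Rates: limit buys (below the ask), limit sells (above the bid),
    # buy and sell market orders, cancellation of one resting unit.
    rates = np.array([ALPHA * ba, ALPHA * (N - 1 - bb), MU, MU, DELTA * n],
                     dtype=float)
    ev = rng.choice(5, p=rates / rates.sum())
    if ev == 0:                                   # limit buy
        book[rng.integers(0, min(ba, N))] += 1
    elif ev == 1:                                 # limit sell
        book[rng.integers(bb + 1, N)] -= 1
    elif ev == 2 and ba < N:                      # market buy hits best ask
        book[ba] += 1
    elif ev == 3 and bb >= 0:                     # market sell hits best bid
        book[bb] -= 1
    elif ev == 4 and n > 0:                       # cancel a random resting unit
        k = rng.choice(N, p=np.abs(book) / n)
        book[k] -= np.sign(book[k])
    bb, ba = best(book)
    if 0 <= bb < ba < N:
        spreads.append(ba - bb)

print(f"average spread: {np.mean(spreads):.2f} ticks")
```

Despite containing no strategic behaviour whatsoever, a model of this kind already produces a well-defined spread and mid-price dynamics, which is exactly the sense in which zero-intelligence order flow succeeds at explaining some market variables.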
Throughout the book, we have pieced together several clues that suggest that the unpredictable nature of prices emerges from a fine balance between a strongly auto-correlated order flow and a dynamic price impact. At the same time, however, we have seen that so-called surprise models can also reproduce this balance with very few assumptions. It therefore seems that there might be another side to the coin, which statistical models fail to capture. Are we missing something?
So far, we have regarded order flows as simple stochastic processes with specified statistical properties. In doing so, we have excluded the strategic behaviours implemented by real market participants. In reality, however, investors’ actions are clearly influenced by their desire to make profits. Therefore, it seems logical that strategic behaviour should play an important role in real markets.
In this part, we will take the next logical step in our journey by exploring how including strategic considerations can shed light on some otherwise surprising features of financial markets. In doing so, we will examine how markets can remain in a delicate balance, maintained by ongoing competition between rational, profit-maximising agents. We will discuss the seminal Kyle model, which provides a beautiful explanation of how impact arises from liquidity providers' fears of adverse selection from informed traders. The model illustrates how impact allows information to be reflected in the price, and makes clear the important role played by noise traders – as we alluded to in the very first part of this book. More generally, the Kyle model, albeit not very realistic, is a concrete example of how competition and arbitrage can produce diffusive prices, even in the presence of private information.
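To fix ideas, here is the standard one-period version of the equilibrium, stated in our own notation (the chapter develops it in full). The insider observes the value v ~ N(p_0, σ_v²), noise traders submit a random demand u ~ N(0, σ_u²), and the market-maker sets the price equal to the expected value conditional on the total order flow x + u. The unique linear equilibrium reads

$$ x = \beta\,(v - p_0), \qquad p = p_0 + \lambda\,(x + u), \qquad \beta = \frac{\sigma_u}{\sigma_v}, \qquad \lambda = \frac{\sigma_v}{2\sigma_u}. $$

Kyle's lambda – the price impact per unit of net order flow – thus grows with the uncertainty σ_v about the fundamental value and shrinks with the volume σ_u of noise trading: impact is the price the market charges for possibly trading against better-informed counterparties.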
Economic models also provide important insights on the challenges faced by market-makers. We will discuss the work of Glosten and Milgrom, which shows how the bid–ask spread must compensate market-makers for adverse selection in a competitive market. The relationship between a metaorder's slippage and its permanent impact will appear as another manifestation of the same idea. This framework paves the way for richer models of liquidity provision, where inventory risk, P&L skewness and finite tick-size effects all contribute to the challenge. We will also discuss how the Glosten–Milgrom model teaches us an important lesson: that the apparent profit opportunities from “buying low, selling high” around the spread are completely misleading.
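A stripped-down version of the argument (our own illustration, not the model's general form) already captures the mechanism. Suppose the asset value is v ∈ {0, 1} with equal probability, that a fraction π of traders is informed (buying precisely when v = 1 and selling when v = 0), and that the remaining traders buy or sell at random. A competitive market-maker who earns zero expected profit must quote

$$ a = \mathbb{E}[v \mid \text{buy}] = \frac{1+\pi}{2}, \qquad b = \mathbb{E}[v \mid \text{sell}] = \frac{1-\pi}{2}, \qquad a - b = \pi. $$

The bid–ask spread equals the informed fraction of the order flow: adverse selection alone, without any inventory or order-processing costs, suffices to open a positive spread.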
We are now approaching the end of a long empirical, theoretical and conceptual journey. In this last part, we will take the important step of accumulating all this knowledge about markets to illuminate some important decisions about how to best act in them. In the following chapters, we will address two practical topics of utmost importance for market participants and regulators: optimal trade execution and market fairness and stability.
For both academics and practitioners, the question of how to trade and execute optimally is central to the study of financial markets. Given an investment decision with a given direction, volume and time horizon, how should a market participant execute it in practice? Addressing this question is extremely difficult, and requires a detailed understanding of market dynamics that ranges from the microscopic scale of order flow in the LOB (e.g. for optimising the positioning of orders within a queue or for choosing between a limit order and a market order) to the mesoscopic scale of slow liquidity (e.g. for scheduling the execution of large investment decisions over several hours, days or months). Many important questions on these topics have been active areas of research in the last two decades.
The final chapter of this book will address the critical question of market fairness and stability. These topics have been at the core of many debates in recent years – especially with the rise of competition between exchanges and high-frequency trading. In this last chapter, we will discuss how some market instabilities can arise from the interplay between liquidity takers and liquidity providers, and can even be created by the very mechanisms upon which markets are built.
In this jungle, regulators bear the important responsibility of designing the rules that define the ecology in financial markets. As we will discuss, because the resulting system is so complex, sometimes these rules can have unintended consequences on both the way that individual traders behave and the resulting price-formation process. Therefore, sometimes the most obvious solutions are not necessarily the best ones!
We conclude with a discussion of an interesting hypothesis: that markets intrinsically contain – and have always contained – some element of instability. From long-range correlations to price impact, this book has evidenced a collection of phenomena that are likely inherent to financial markets, and that we argue market observers, actors and regulators need to understand and accept.
In recent years, the availability of high-quality, high-frequency data has revolutionised the study of financial markets. By providing a detailed description not only of asset prices, but also of market participants' actions and interactions, this wealth of information offers a new window into the inner workings of the financial ecosystem. As access to such data has become increasingly widespread, study of this field – which is widely known as market microstructure – has blossomed, resulting in a growing research community and an expanding literature on the topic.
Accompanying these research efforts has been an explosion of interest from market practitioners, for whom understanding market microstructure offers many practical benefits, including managing the execution costs of trades, monitoring market impact and deriving optimal trading strategies. Similar questions are of vital importance for regulators, who seek to ensure that financial markets fulfil their core purpose of facilitating fair and orderly trade. The work of regulators has come under increasing scrutiny since the rapid uptake of high-frequency trading, which popular media outlets seem to fear and revere in ever-changing proportions. Only with a detailed knowledge of the intricate workings of financial markets can regulators tackle these challenges in a scientifically rigorous manner.
Compared to economics and mathematical finance, the study of market microstructure is still in its infancy. Indeed, during the early stages of this project, all four authors shared concerns that the field might be too young for us to be able to produce a self-contained manuscript on the topic. To assess the lay of the land, we decided to sketch out some ideas and draft a preliminary outline. In doing so, it quickly became apparent that our concerns were ill-founded, and that although far from complete, the story of market microstructure is already extremely compelling. We hope that the present book does justice both to the main developments in the field and to our strong belief that they represent a truly new era in the understanding of financial markets.
What is Market Microstructure?
Before we embark on our journey, we pause for a moment to ask the question: What is market microstructure? As the name suggests, market microstructure is certainly concerned with the details of trading at the micro-scale. However, this definition does little justice to the breadth of issues and themes that this field seeks to address.
When men are in close touch with each other, they no longer decide randomly and independently of each other; they each react to the others. Multiple causes come into play which trouble them and pull them from side to side, but there is one thing that these influences cannot destroy, and that is their tendency to behave like Panurge's sheep.
(Henri Poincaré, Comments on Bachelier's thesis)
As is stated in any book on market microstructure (including the present one!), markets must be organised such that trading is fair and orderly. This means that markets should strive to be a level playing field for all market participants, and should operate such that prices are as stable as possible. As we emphasised in Chapter 1, market stability relies heavily on the existence of liquidity providers, who efficiently buffer instantaneous fluctuations in supply and demand, and smooth out the trading process. It seems reasonable that these liquidity providers should receive some reward for stabilising markets, since by doing so they expose themselves to the risk of extreme adverse price moves. However, rewarding liquidity providers too heavily is in direct conflict with the requirement that markets be fair. In summary, if bid–ask spreads are too small, then liquidity providers are not sufficiently incentivised, so liquidity becomes fragile; if bid–ask spreads are too wide, then the costs of trading become unacceptable for other investors.
The rise of electronic markets with competing venues and platforms is an elegant way to solve this dual problem, through the usual argument of competition. In a situation where all market participants can act either as liquidity providers or as liquidity takers (depending on their preferences and on market conditions), the burden of providing liquidity is shared, and bid–ask spreads should settle around fair values, as dictated by the market. As we saw in Chapter 17, this indeed seems to be the case in most modern liquid markets, such as major stock, futures and FX markets, in which the average bid–ask spread and the costs associated with adverse selection offset each other almost exactly.
I will remember that I didn't make the world, and it doesn't satisfy my equations.
(Emanuel Derman and Paul Wilmott “The Modeler's Oath”)
Optimal execution is a major issue for financial institutions: asset managers, hedge funds, derivative desks and many others seek to minimise the costs of trading. Such costs can consume a substantial fraction of the expected profit of a trading idea, which perhaps explains why most active managers fail to beat passive index investing in the long run. Market friction is especially relevant for high-turnover strategies. Given the importance of these considerations, it is unsurprising that a whole new branch of the financial industry has emerged to address the notion of best execution. Nowadays, many brokerage firms offer trading cost analysis (TCA) and optimised execution solutions to their buy-side clients.
Trading costs are usually classified into two groups:
• Direct trading costs are fees that must be paid to access a given market. These include brokerage fees, direct-access fees, transaction taxes, regulatory fees and liquidity fees. These fees are all relatively straightforward to understand and to measure. As a general rule, direct trading costs are of the order of 0.1–1 basis points and SEC regulatory fees are 0.01 basis points.
• Indirect trading costs arise due to market microstructure effects and due to the dynamics of the supply and demand. In contrast to direct trading costs, indirect trading costs are quite subtle. These costs include the bid–ask spread and impact costs. As we discussed in Chapter 16, the bid–ask spread is a consequence of the information asymmetry between different market participants and is determined endogenously by market dynamics. Spread costs are typically a few basis points in liquid markets, but can vary substantially over time, according to market conditions. Note that the tick size can have a considerable effect on the value of the spread, especially for large-tick stocks.
Impact costs are also a consequence of the bounded availability of liquidity. These costs also arise endogenously, but are deceptive because they are of a statistical nature. Impact is essentially invisible to the naked eye (after a buy trade, the price actually goes down about half the time, and vice-versa), and only appears clearly after careful averaging (see Chapter 12).
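As a back-of-the-envelope illustration of how these components add up – all numbers below are hypothetical placeholders in line with the orders of magnitude quoted above, not measurements – consider:

```python
# Hypothetical cost breakdown for a single buy order in a liquid stock.
notional_usd = 10_000_000

fees_bps   = 0.5        # direct costs: brokerage, access and regulatory fees
spread_bps = 3.0 / 2    # an aggressive order pays roughly half the spread
impact_bps = 8.0        # statistical impact cost; grows with order size

total_bps = fees_bps + spread_bps + impact_bps
total_usd = notional_usd * total_bps / 1e4
print(f"estimated cost: {total_bps:.1f} bp, i.e. ${total_usd:,.0f}")
```

Even in this crude accounting, the indirect components dominate the direct fees by an order of magnitude.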
Throughout the book, our empirical calculations are based on historical data that describes the LOB dynamics during the whole year 2015 for 120 liquid stocks traded on NASDAQ. On the NASDAQ platform, each stock is traded in a separate LOB with a tick size of ϑ = $0.01. The platform enables traders to submit both visible and hidden limit orders. Visible limit orders obey standard price–time priority, while hidden limit orders have lower priority than all visible limit orders at the same price. The platform also allows traders to submit mid-price-pegged limit orders. These orders are hidden, but when executed, they appear with a price equal to the national mid-price (i.e. the mid-price calculated from the national best bid and offer) at their time of execution. Therefore, although the tick size is $0.01, some orders are executed at a price ending with $0.005.
The data that we study originates from the LOBSTER database (see below), which lists every market order arrival, limit order arrival and cancellation that occurs on the NASDAQ platform during 09:30–16:00 each trading day. Trading does not occur on weekends or public holidays, so we exclude these days from our analysis. When we calculate daily traded volumes or intra-day patterns, we include all activity during the full trading day. In all other cases, we exclude market activity during the first and last hour of each trading day, to remove any abnormal trading behaviour that can occur shortly after the opening auction or shortly before the closing auction.
For each stock and each trading day, the LOBSTER data consists of two different files:
• The message file lists every market order arrival, limit order arrival and cancellation that occurs. The LOBSTER data does not describe hidden limit order arrivals, but it does provide some details whenever a market order matches to a hidden limit order (see discussion below).
• The orderbook file lists the state of the LOB (i.e. the total volume of buy or sell orders at each price). The file contains a new entry for each line in the message file, to describe the market state immediately after the corresponding event occurs.
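To illustrate how such files can be processed, the sketch below assumes the standard documented LOBSTER message-file layout (time in seconds after midnight, event type, order ID, size, price in units of 10^-4 dollars, direction) and a hypothetical file name; in this format, event type 5 marks an execution against a hidden order, and a half-penny execution price reveals a mid-price-pegged order.

```python
import pandas as pd

# Standard LOBSTER message-file columns (the raw files have no header row).
COLS = ["time", "type", "order_id", "size", "price", "direction"]

# Hypothetical file name, following the LOBSTER naming convention.
msg = pd.read_csv("AAPL_2015-01-02_34200000_57600000_message_10.csv",
                  names=COLS, header=None)

# Prices are integers in units of $1e-4; keep a copy in dollars.
msg["price_usd"] = msg["price"] / 1e4

# Drop the first and last hour of the 09:30-16:00 session, as done for
# most statistics in the book (times are seconds after midnight).
core = msg[(msg["time"] >= 10.5 * 3600) & (msg["time"] <= 15.0 * 3600)]

# Event type 5 = execution against a hidden limit order; a raw price that
# is not a multiple of 100 (i.e. not a whole cent) indicates execution
# against a mid-price-pegged order at a half-penny price.
hidden = core[core["type"] == 5]
pegged = hidden[hidden["price"] % 100 != 0]
print(len(core), "events,", len(hidden), "hidden executions,",
      len(pegged), "at half-penny prices")
```

Working with the raw integer prices, as above, avoids floating-point pitfalls when testing whether an execution occurred on or off the penny grid.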