Measurement of subjective animal welfare poses a special problem for validating the measurement indicators used. Validation is required to ensure that indicators are measuring the intended target state, and not some other state. While indicators can usually be validated by looking for correlations between target and indicator under controlled manipulations, this is not possible when the target state is not directly accessible. In this paper, I outline a four-step approach, based on the concept of robustness, that can help with validating indicators of subjective animal welfare.
The question of whether global norms are experiencing a crisis allows for two concurrent answers. From a facticity perspective, certain global norms are in crisis, given their worldwide lack of implementation and effectiveness. From a validity perspective, however, a crisis is not obvious, as these norms are not openly contested discursively and institutionally. In order to explain this double diagnosis (crisis/no crisis), this article draws on international relations research on norm contestation and norm robustness. It proposes the concept of hidden discursive contestation and distinguishes it from three other key types of norm contestation: open discursive, open non-discursive and hidden non-discursive contestation. We identify four manifestations of hidden discursive contestation: (1) deflecting responsibility; (2) forestalling norm strengthening; (3) displaying norms as functional means to an end; and (4) downgrading or upgrading single norm elements. Our empirical focus is on the decent work norm, which demonstrates the double diagnosis: while it lacks facticity, it enjoys far-reaching verbal acceptance and high validity. Our qualitative analysis of hidden discursive contestation draws on two case studies: the International Labour Organization’s compliance procedures, which monitor international labour standards, and the United Nations Treaty Process on a binding instrument for business and human rights. Although both fora have different contexts and policy cycles, they exhibit similar strategies of hidden discursive contestation.
The Markov True and Error (MARTER) model (Birnbaum & Wan, 2020) has three components: a risky decision-making model with one or more parameters, a Markov model that describes stochastic variation of parameters over time, and a true and error (TE) model that describes probabilistic relations between true preferences and overt responses. In this study, we simulated data according to 57 generating models that either did or did not satisfy the assumptions of the TE fitting model, that either did or did not satisfy the error independence assumptions, that either did or did not satisfy transitivity, and that had various patterns of error rates. A key assumption in the TE fitting model is that a person’s true preferences do not change in the short time within a session; that is, preference reversals between two responses by the same person to two presentations of the same choice problem in the same brief session are due to random error. In a set of 48 simulations, data-generating models either satisfied this assumption or implemented a systematic violation in which true preferences could change within sessions. We used the TE fitting model to analyze the simulated data, and we found that it did a good job of distinguishing transitive from intransitive models and of estimating parameters, not only when the generating model satisfied the model assumptions but also when model assumptions were violated in this way. When the generating model violated the assumptions, statistical tests of the TE fitting models correctly detected the violations. Even when the data contained violations of the TE model, the parameter estimates representing probabilities of true preference patterns were surprisingly accurate, except for error rates, which were inflated by model violations. In a second set of simulations, the generating model either had error rates that were or were not independent of true preferences, and transitivity either was or was not satisfied. It was found again that the TE analysis was able to detect the violations of the fitting model, and the analysis correctly identified whether the data had been generated by a transitive or intransitive process; however, in this case, the estimated incidence of a preference pattern was reduced if that preference pattern had a higher error rate. Overall, the violations could be detected and did not affect the ability of the TE analysis to discriminate between transitive and intransitive processes.
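As a rough illustration of the within-session TE assumption described above, the following minimal Python sketch simulates two replications of each choice problem in a session, holding the true preference pattern fixed and flipping each overt response with a problem-specific error rate. The function name, the binary response coding, and the simplified single-session structure are illustrative assumptions, not Birnbaum and Wan's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_te_session(true_pattern, error_rates, n_reps=2):
    # True preferences stay fixed within the session; each overt response
    # independently flips the true preference with that problem's error rate.
    true_pattern = np.asarray(true_pattern)
    error_rates = np.asarray(error_rates)
    responses = np.empty((n_reps, true_pattern.size), dtype=int)
    for rep in range(n_reps):
        flips = rng.random(true_pattern.size) < error_rates
        responses[rep] = np.where(flips, 1 - true_pattern, true_pattern)
    return responses

# Example: three choice problems, true preference pattern (1, 0, 1),
# problem-specific error rates, two presentations within one session.
print(simulate_te_session([1, 0, 1], [0.1, 0.2, 0.15]))
```

Under this assumption, any disagreement between the two rows of the output is attributable to error alone, which is exactly what the TE fitting model exploits.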
This chapter defines robustness and fragility, and argues that they can be determined confidently only in retrospect, but that assessments made by political actors, whilst subjective, have important political implications. We suggest some of the considerations that may shape these assessments. They include ideology, historical lessons, and the Zeitgeist. We go on to describe the following chapters, providing an outline of the book.
I make two related claims: (1) assessments of stability made by political actors and analysts are largely hit or miss; and (2) leader responses to fear of fragility or confidence in robustness are unpredictable in their consequences. Leader assessments are often made with reference to historical lessons derived from dramatic past events that appear relevant to the present. These lessons may or may not be based on good history and may or may not be relevant to the case at hand. Leaders and elites who believe their orders to be robust can help make their beliefs self-fulfilling. However, overconfidence can help make these orders fragile. I argue that leader and elite assessments of robustness and fragility are influenced by cognitive biases and are also often highly motivated. Leaders and their advisors use information selectively and can tautologically confirm the lessons they apply.
This volume focuses on the assessments political actors make of the relative fragility and robustness of political orders. The core argument developed and explored throughout its different chapters is that such assessments are subjective and informed by contextually specific historical experiences that have important implications for how leaders respond. Their responses, in turn, feed into processes by which political orders change. The volume's contributions span analyses of political orders at the state, regional and global levels. They demonstrate that assessments of fragility and robustness have important policy implications but that the accuracy of assessments can only be known with certainty ex post facto. The volume will appeal to scholars and advanced students of international relations and comparative politics working on national and international orders.
We review our theoretical claims in light of the empirical chapters and their evidence that leader assessments matter, are highly subjective, and are very much influenced by ideology and role models. They are also influenced by leaders' estimates of what needs to be done and of their political freedom to act. This, in turn, varies across leaders. The most common response to fragility is denial, although some leaders convince themselves – usually unrealistically – that they can enact far-reaching reforms to address it.
The study of threshold functions has a long history in random graph theory. It is known that the thresholds for minimum degree k, k-connectivity, and k-robustness coincide for the binomial random graph. In this paper we consider an inhomogeneous random graph model, obtained by including each possible edge independently with an individual probability. Based on an intuitive concept of neighborhood density, we give two sufficient conditions guaranteeing k-connectivity and k-robustness, respectively, which are asymptotically equivalent. Our framework sheds some light on extending uniform threshold values in homogeneous random graphs to threshold landscapes in inhomogeneous random graphs.
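For context, the classical sharp-threshold result for the binomial random graph G(n, p) alluded to above can be stated as follows (a standard result from random graph theory, not a result of this paper):

```latex
% Sharp threshold for minimum degree k and k-connectivity in G(n,p):
% with p(n) = (\log n + (k-1)\log\log n + c)/n,
\lim_{n\to\infty}\Pr\bigl[G(n,p)\ \text{is } k\text{-connected}\bigr]
  = \lim_{n\to\infty}\Pr\bigl[\delta(G(n,p)) \ge k\bigr]
  = e^{-e^{-c}/(k-1)!},
```

so that minimum degree k and k-connectivity appear at the same threshold, which is the coincidence the abstract refers to.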
Even within well-studied organisms, many genes lack useful functional annotations. One way to generate such functional information is to infer biological relationships between genes or proteins, using a network of gene coexpression data that includes functional annotations. Signed distance correlation has proved useful for the construction of unweighted gene coexpression networks. However, transforming correlation values into unweighted networks may lead to a loss of important biological information related to the strength of the correlation. Here, we introduce a principled method to construct weighted gene coexpression networks using signed distance correlation. These networks contain weighted edges only between those pairs of genes whose correlation value is higher than a given threshold. We analyze data from different organisms and find that networks generated with our method based on signed distance correlation are more stable and capture more biological information than networks obtained from Pearson correlation. Moreover, we show that signed distance correlation networks capture more biological information than unweighted networks based on the same metric. While we use biological data sets to illustrate the method, the approach is general and can be used to construct networks in other domains. Code and data are available at https://github.com/javier-pardodiaz/sdcorGCN.
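The thresholding idea can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' released code (which lives at the repository above); the helper names, the genes-by-samples input layout, and the definition of signed distance correlation as distance correlation carrying the sign of the Pearson correlation are assumptions.

```python
import numpy as np
import networkx as nx
import dcor  # pip install dcor

def signed_distance_correlation(x, y):
    # Distance correlation carrying the sign of the Pearson correlation
    # (one common way to define a "signed" distance correlation).
    sign = np.sign(np.corrcoef(x, y)[0, 1])
    return sign * dcor.distance_correlation(x, y)

def weighted_coexpression_network(expr, gene_names, threshold):
    # expr: genes x samples matrix. An edge is kept only when the
    # correlation exceeds the threshold; its value becomes the edge weight.
    g = nx.Graph()
    g.add_nodes_from(gene_names)
    n_genes = expr.shape[0]
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            c = signed_distance_correlation(expr[i], expr[j])
            if c > threshold:
                g.add_edge(gene_names[i], gene_names[j], weight=c)
    return g
```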
The Bourbonnais region is characterized by extensive grassland-based beef systems operating in balance with natural resources. Interventions from policy, inter-professional, and producers’ organizations are needed for improving the value chain structure, maintaining the landscape, and fostering the relationship with the public. As farmers approach retirement, the lack of people willing to replace them threatens the future of this traditional natural region.
Airports are frequently affected by internal and external disruptive events, which generally degrade their planned/regular performance. Their resilience is defined as the ability to withstand disruption, maintain a certain level of functionality relative to the reference regular/planned level during the impact of disruptive events, and recover reasonably rapidly afterwards. Robustness is defined as the level of saved functionality relative to the planned/regular level, enabling continuous operations during the impact of disruptive events. The level of functionality lost relative to the planned/regular level represents vulnerability. This paper develops a methodology for assessing the resilience, robustness and vulnerability of airports affected by given disruptive event(s). The methodology consists of analytical models of indicators of the operational, economic, social and environmental performance of airports and the other main actors/stakeholders involved. These indicators are used as figures-of-merit in analytical models for assessing cumulative and time-dependent resilience, robustness, and vulnerability. The methodology is applied to assess the resilience, robustness and vulnerability of two large airports – LHR (London Heathrow, UK) and NYC JFK (John F. Kennedy, US) – affected by a global and lasting external disruptive event: the COVID-19 pandemic. Based on the indicators of operational and economic performance, the results indicate very low resilience and robustness and very high vulnerability of both airports and their other main actors/stakeholders. Their resilience and robustness based on the indicators of social and environmental performance were not substantively different from the corresponding vulnerability. In absolute terms, LHR airport was affected more strongly than its NYC JFK counterpart. Savings in costs/externalities during the observed period under the given conditions only modestly compensated for the total losses of both airports and their main actors/stakeholders.
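One plausible formalization consistent with the definitions above (an illustration; the paper's analytical models may differ) expresses robustness, vulnerability, and cumulative resilience in terms of an actual performance indicator P(t) and its planned/regular counterpart P*(t):

```latex
R(t) = \frac{P(t)}{P^{*}(t)}, \qquad
V(t) = 1 - R(t), \qquad
\mathrm{Res}(T) = \frac{\int_{t_0}^{t_0+T} P(t)\,\mathrm{d}t}
                       {\int_{t_0}^{t_0+T} P^{*}(t)\,\mathrm{d}t},
```

where t_0 is the onset of the disruptive event and T the length of the observation period.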
This research presents an upper-limb cable-driven rehabilitation robot with one degree of redundancy to improve the movements of injured patients. A spatial trajectory is planned through a joint limit avoidance approach that enforces the limits of the joint angles, a new method for trajectory planning of joints constrained to an allowed interval. First, a Lyapunov-based controller is applied to the robot, taking uncertainty and disturbances into consideration. To obtain the best system response under uncertainty and disturbances, a novel robust tracking controller, namely a computed-torque-like controller with independent-joint compensation, is introduced. This robust controller has not previously been applied to any cable robot, which constitutes the novelty of this paper; it yields superior output and demonstrates the robustness of the proposed approach. Stability analysis of both controllers is presented, and their outputs are compared for an exact three-dimensional motion plan and desirable cable forces. The proposed controller performs better in the presence of uncertainties and disturbances, with about 28.21% improvement in tracking errors and 69.22% improvement in the required cable forces as control inputs, which is considerable.
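For reference, the computed-torque structure that the "computed-torque-like" controller builds on typically takes the textbook form below; the paper's controller adds independent-joint compensation and robustifying terms that are not reproduced here:

```latex
\tau = M(q)\bigl(\ddot{q}_d + K_d\,\dot{e} + K_p\,e\bigr)
       + C(q,\dot{q})\,\dot{q} + g(q),
\qquad e = q_d - q,
```

with M the inertia matrix, C the Coriolis/centrifugal matrix, g the gravity vector, and K_p, K_d positive-definite gain matrices.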
Robust designs protect system utility in the presence of uncertainty in technical and operational outcomes. Systems-of-systems, which lack centralized managerial control, are vulnerable to strategic uncertainty arising from coordination failures between partially or completely independent system actors. This work assesses the suitability of a game-theoretic equilibrium selection criterion for measuring system robustness to strategic uncertainty and investigates the effect of strategically robust designs on collaborative behavior. The work models interactions between agents in a thematic representation of a mobile computing technology transition using an evolutionary game theory framework. Strategic robustness and collaborative solutions are assessed over a range of conditions by varying agent payoffs. Models are constructed on small-world, preferential-attachment and random graph topologies and executed in batch simulations. Results demonstrate that systems designed to reduce the impact of coordination failure stemming from strategic uncertainty also increase the stability of the collaborative strategy by increasing the probability that partners collaborate – a form of robustness through environment shaping that has not previously been investigated in the design literature. The work also demonstrates that strategy selection follows the risk dominance equilibrium selection criterion and that changes in robustness to coordination failure can be measured with this criterion.
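For reference, the risk dominance criterion (Harsanyi and Selten) mentioned above can be stated for a symmetric 2x2 coordination game with payoffs a (both collaborate), b (collaborate against defect), c (defect against collaborate), and d (both defect), where a > c and d > b:

```latex
(C,C)\ \text{risk-dominates}\ (D,D)
\iff (a - c)^2 \ge (d - b)^2
\iff a - c \ge d - b .
```

Intuitively, the equilibrium with the larger product of deviation losses is the one agents gravitate toward when they are uncertain about what their partners will do.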
Although robustness is an important consideration for guaranteeing the performance of designs under deviation, systems are often engineered by evaluating their performance exclusively at nominal conditions. Robustness is sometimes evaluated a posteriori through a sensitivity analysis, which does not guarantee optimality in terms of robustness. This article introduces an automated design framework based on multiobjective optimisation that treats robustness as an additional competing objective. Robustness is computed as a sampled hypervolume of imposed geometrical and operational deviations from the nominal point. To address the large number of additional evaluations needed to compute robustness, artificial neural networks are used to generate fast and accurate surrogates of high-fidelity models; the identification of their hyperparameters is formulated as an optimisation problem. In a case study, the developed methodology was applied to the design of a small-scale turbocompressor. Robustness was included as an objective to be maximised alongside nominal efficiency and mass-flow range between surge and choke. An experimentally validated 1D radial turbocompressor meanline model was used to generate the training data. The optimisation results suggest a clear competition between efficiency, range and robustness, while the use of neural networks led to a speed-up of four orders of magnitude compared to the 1D code.
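A minimal Python sketch of the sampling step is given below. It is illustrative only: the function names are invented, and the mean retained performance used here is a simpler stand-in for the sampled-hypervolume robustness measure described in the abstract.

```python
import numpy as np

def sampled_robustness(surrogate, x_nominal, deviations, n_samples=256, seed=0):
    # Sample imposed geometrical/operational deviations around the nominal
    # design and aggregate the surrogate-predicted performance.
    # (Mean retained performance is used as a placeholder aggregate.)
    rng = np.random.default_rng(seed)
    x_nominal = np.asarray(x_nominal, dtype=float)
    deviations = np.asarray(deviations, dtype=float)
    samples = x_nominal + rng.uniform(-deviations, deviations,
                                      size=(n_samples, x_nominal.size))
    perf = np.array([surrogate(x) for x in samples])
    return perf.mean() / surrogate(x_nominal)
```

In a multiobjective setting, this quantity would be maximised alongside nominal efficiency and mass-flow range, with the surrogate standing in for the expensive high-fidelity evaluations.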
Self-organizing systems (SOS) are developed to perform complex tasks in unforeseen situations with adaptability. Predefining rules for self-organizing agents can be challenging, especially for tasks with high complexity and changing environments. Our previous work introduced a multiagent reinforcement learning (RL) model as a design approach to solving the rule generation problem of SOS. A deep multiagent RL algorithm was devised to train agents to acquire the task and self-organizing knowledge. However, the simulation was based on one specific task environment. The sensitivity of SOS to reward functions and the systematic evaluation of SOS designed with multiagent RL remain open issues. In this paper, we introduced a rotation reward function to regulate agent behaviors during training and tested different weights of this reward on SOS performance in two case studies: box-pushing and T-shape assembly. Additionally, we proposed three metrics to evaluate the SOS: learning stability, quality of learned knowledge, and scalability. Results show that, depending on the type of task, designers may choose appropriate weights of the rotation reward to obtain the full potential of the agents’ learning capability. Good learning stability and quality of knowledge can be achieved with an optimal range of team sizes. Scaling up to larger team sizes performs better than scaling down.
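A minimal sketch of the kind of weighted reward shaping described above is given below; the names and the linear additive form are illustrative assumptions, not the paper's exact reward function.

```python
def shaped_reward(task_reward, rotation_term, w_rotation):
    # Total reward as a weighted combination of the task reward and a
    # rotation-regulating term; the weight w_rotation is the design
    # variable varied across the case studies.
    return task_reward + w_rotation * rotation_term
```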
We investigate spatial random graphs defined on the points of a Poisson process in d-dimensional space, which combine scale-free degree distributions and long-range effects. Every Poisson point is assigned an independent weight. Given the weight and position of the points, we form an edge between any pair of points independently with a probability depending on the two weights of the points and their distance. Preference is given to short edges and connections to vertices with large weights. We characterize the parameter regime where there is a non-trivial percolation phase transition and show that it depends not only on the power-law exponent of the degree distribution but also on a geometric model parameter. We apply this result to characterize robustness of age-based spatial preferential attachment networks.
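A representative connection kernel of this kind, taken from the closely related scale-free percolation literature (an illustration; the paper's exact kernel may differ), connects points x and y with weights W_x and W_y with probability

```latex
\Pr\bigl[x \sim y \mid W_x, W_y\bigr]
  = 1 - \exp\!\left(-\lambda\,\frac{W_x W_y}{|x-y|^{\alpha}}\right),
\qquad \lambda, \alpha > 0,
```

so that short edges and pairs of heavy-weight vertices are favoured.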
A solid QCA does not end with the analytic moment. Researchers must make several analytic decisions at various stages in the analysis, some with more confidence than others. Researchers might also be confronted with data that are structured in analytically relevant ways. For example, cases might group into different geographic, substantive, or temporal clusters, or there might be relevant causal dependencies or sequences among conditions.
This chapter introduces the different robustness and diagnostic tools available in R for assessing QCA results. It enables the reader to investigate to what extent their QCA results are robust to equally plausible analytic decisions regarding the selection of calibration anchors or consistency and frequency cut-offs, and presents ways to assess this robustness in R. Moreover, we introduce tools for cluster diagnostics and discuss strategies for dealing with timing and temporality, including ‘coincidence analysis’ (CNA).
Learning goals:
- Basic understanding of different approaches to diagnosing and assessing QCA results.
- Familiarity with how the robustness of QCA results to different analytical decisions can be assessed.
- Familiarity with proposals on how to assess QCA results in the presence of clustered data.
- Familiarity with how to model sequences and causal chains in R.
The earth system is being transformed by human activities. The complex societal, technological, and environmental changes underway require governance systems that can anticipate, manage, and help steer these changes along more sustainable trajectories. A decade ago, the Earth System Governance (ESG) Project proposed that adaptiveness is one of the key attributes and goals of governance in these situations. Adaptiveness is an umbrella concept encompassing and related to adaptive management and governance, adaptive capacity, vulnerability, resilience, robustness, and social learning. As part of the ESG Project Harvesting Initiative, this book aims to take stock of and review a decade of research progress on these themes. Its key research question is: How has adaptiveness, as an umbrella concept, been developed and applied in the context of earth system governance in the first decade after its inception, and what insights and practical solutions has it yielded? Here, this ambitious question is divided into four specific questions based on the ESG Project 2009 Science Plan: (1) What are the politics of adaptiveness? (2) Which governance processes foster adaptiveness? (3) What attributes of governance systems enhance capacities to adapt? (4) How, when, and why does adaptiveness influence earth system governance?