Natural mortality (M) is a key parameter for understanding population dynamics, especially in relation to harvested populations. Direct observations of M in crustaceans are scarce, due to the moulting process. Indirect methods that estimate M from easier-to-obtain life history attributes are therefore used routinely. Given their theoretical background, we reviewed the applicability of these methods for crustaceans. We applied the selected methods to two crustacean species harvested in Chilean waters: the yellow squat lobster (Cervimunida johni) and red squat lobster (Pleuroncodes monodon). Uncertainty in each M estimate was incorporated through the life history parameters that feed into the indirect method (trait-error) and through the parameters defining the indirect method itself (coefficient-trait-error). Methods based on the relationship between total mortality and maximum age, or on life history theory applied at different ages, were the most appropriate for crustaceans since they apply across taxa. M estimates showed high variability between species, sexes and areas. Estimates of M for C. johni varied from 0.13 to 0.28 year−1 for males and 0.17 to 0.51 year−1 for females. For P. monodon, values in the north varied from 0.26 to 0.37 year−1 for males and 0.24 to 0.45 year−1 for females. In the south, values of M were higher for both males (0.43–0.68 year−1) and females (0.41–1.06 year−1). High variability in the M estimates was associated with the method and its number of parameters, their uncertainty, the theoretical background and the probability distribution used. The M estimates are therefore not directly comparable, raising the need to propagate the uncertainty of M into the stock assessment of Chilean squat lobsters.
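The abstract does not list the specific estimators used, but the trait-error versus coefficient-trait-error distinction can be illustrated with a minimal Monte Carlo sketch built on the widely used maximum-age estimator of Then et al. (2015), M = 4.899·tmax^−0.916. All input values and uncertainty magnitudes below are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50_000  # Monte Carlo draws

# Trait-error: sample maximum age (tmax) with observational uncertainty.
# Illustrative values only, not from the study.
tmax = rng.normal(12.0, 1.5, N)           # years
tmax = tmax[tmax > 0]

# Then et al. (2015) maximum-age estimator: M = 4.899 * tmax**-0.916.
M_trait = 4.899 * tmax**-0.916

# Coefficient-trait-error: additionally sample the estimator's
# coefficients around their published values (SDs assumed here).
a = rng.normal(4.899, 0.4, tmax.size)
b = rng.normal(-0.916, 0.05, tmax.size)
M_coef = a * tmax**b

for name, M in [("trait-error", M_trait), ("coefficient-trait-error", M_coef)]:
    lo, hi = np.percentile(M, [2.5, 97.5])
    print(f"{name}: median M = {np.median(M):.3f} /yr, 95% CI [{lo:.3f}, {hi:.3f}]")
```

As the sketch makes visible, coefficient-trait-error intervals are typically wider than trait-error intervals, since uncertainty in the estimator itself is added on top of uncertainty in the input trait.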
Errors in data are a part of life for experimenters in science and engineering. This chapter considers the types of error, including random and systematic errors, that can occur during an experiment, and methods by which uncertainties arising from such errors can be combined. Many worked examples are included in this chapter, as well as exercises for the student to complete.
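As a pointer to the kind of combination rule such chapters develop, the standard textbook result for a quantity Q = f(x, y, …) with independent uncertainties is combination in quadrature (stated here for reference, not as this chapter's exact formulation):

```latex
% Propagation of independent uncertainties in quadrature
\sigma_Q^2 = \left(\frac{\partial f}{\partial x}\right)^{2}\sigma_x^2
           + \left(\frac{\partial f}{\partial y}\right)^{2}\sigma_y^2
           + \cdots
```

For example, for a simple sum Q = x + y this reduces to \(\sigma_Q = \sqrt{\sigma_x^2 + \sigma_y^2}\).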
No matter how much care is taken during an experiment, or how sophisticated the equipment used, values obtained through measurement are influenced by errors. Errors can be thought of as acting to conceal the true value of the quantity sought through experiment. Random errors cause values obtained through measurement to occur above and below the true value. This chapter considers statistically based methods for dealing with variability in experimental data, such as that caused by random errors. As statistics can be described as the science of assembling, organising and interpreting numerical data, it is an ideal tool for assisting in the analysis of experimental data.
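A minimal sketch of this statistical treatment of random error, using invented repeated measurements (the values below are illustrative, not from the chapter): the best estimate is the sample mean, and the standard error of the mean quantifies the uncertainty due to random error.

```python
import numpy as np

# Repeated measurements of the same quantity (illustrative values).
x = np.array([9.81, 9.79, 9.84, 9.80, 9.78, 9.83, 9.82, 9.80])  # m/s^2

mean = x.mean()
s = x.std(ddof=1)              # sample standard deviation
sem = s / np.sqrt(x.size)      # standard error of the mean

print(f"best estimate = {mean:.3f} +/- {sem:.3f} m/s^2  (s = {s:.3f})")
```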
With the creation of the Brady Plan – a program developed by US Treasury Secretary Nicholas Brady in 1989 to convert national debt into bonds following the Latin American debt crisis – emerging market countries joined their wealthier cousins as important participants in global bond and equity markets. The subsequent profitability and popularity of emerging market bonds, combined with the securitization of other debts (most notably the packaging of mortgages into tradable securities), the creation of a wide range of increasingly sophisticated financial instruments and the normalization of free capital mobility, led to a dramatic surge in the growth of private-sector liquidity and sparked the development of the extraordinarily interconnected global financial marketplace that we live in today. The resulting globalization of finance has transformed the international financial system from one dominated by official, public sources of capital to one in which private-sector components of liquidity now permeate virtually all facets of the financial system.
Speaking before the Senate Banking Committee on July 15, 2008, US Treasury Secretary Henry Paulson petitioned Congress for the authority to use taxpayer funds to prevent America’s mortgage giants Fannie Mae and Freddie Mac from collapsing. The hearing addressed widespread homeowner mortgage defaults that had sent stock prices plummeting and investors fleeing. Paulson argued that if investors came to understand that the government would not allow Fannie and Freddie to go under, stock prices would stabilize and a larger crisis could be averted. In a statement that would be repeated in news stories for years to come, Paulson speculated, “If you have a bazooka in your pocket and people know it, you probably won’t have to use it.” In this instance, the “bazooka theory” failed: Paulson not only had to fire his bazooka shortly after acquiring it, but its blast proved grossly inadequate to calm market uncertainty and forestall what became the worst financial crisis to hit the United States since the Great Depression. In spite of Paulson’s newly acquired money and authority, investors dumped Fannie and Freddie shares, both organizations fell into government conservatorship, political criticisms of bailouts grew louder and the whirlpool of uncertainty swirled ever faster.
In this chapter, the authors apply the approach of basing a definition on the pattern of characteristic features of a term to that of ‘trust’. Starting from an analysis of ‘a patient trusts his doctor’, they identify seven characteristic features of ‘trust’. (1) Trust refers to an expectation regarding the trustworthiness of the trustee. (2) Trust presupposes a situation of uncertainty and risk. (3) Trust is responsible if this expectation is justified. (4) This expectation must be realistic. (5) Trust is a free choice and implies the (at least unconscious) acceptance of trust’s inherent risk. (6) A breach of trust causes a feeling of betrayal on the part of the truster. (7) Trust refers to the relationship between agents, who are competent and autonomous with regard to the topic of trust. Of the seven features, (1), (2), and (3) are essential, i.e., if one of them is absent, it cannot be a case of trust. From this follows their basic definition of trust: ‘trust refers to a justified expectation regarding the trustworthiness of the trustee under conditions of uncertainty and risk’.
The development and conceptual relationship of the constructs of threat appraisal (TA) and intolerance of uncertainty (IU) are explored in the context of anxiety disorders. A narrative review tracking the development of these constructs and their relationship is undertaken. There is some evidence to suggest that the interaction between the components of threat appraisal (probability × cost) may partially account for, or provide a theoretical framework which explains, presenting levels of anxiety. Furthermore, research suggests that IU is a construct which contributes to a broad range of anxiety disorders. It was concluded that distinctive cognitive biases linked with IU – such as interpreting ambiguous and uncertain (both positive and negative) information as highly concerning – suggest that uncertainty is interpreted negatively independent of threat appraisal. A number of issues nevertheless remain unclear, including whether IU in anxiety-provoking situations is sufficient in itself – independent of threat appraisal – to elicit high levels of anxiety. Additionally, it is unclear whether threat appraisal and IU act as independent constructs, or more in an interactive manner, in anxiety. To achieve further clarity on these issues, methodological recommendations for future research are made.
Key learning aims
(1) To understand the conceptual foundations of TA and IU in the cognitive model of anxiety.
(2) To understand the empirical evidence supporting the role of both TA and IU in anxiety.
(3) To appreciate the potential relationship between these concepts in anxiety.
Interpersonally presented emotions help to calibrate people’s orientations to things happening in the shared environment. For example, social referencing involves one person seeking clarification of the appropriate appraisal of an object, event, or person, and another person responding with an emotional orientation that disambiguates things. However, this paradigmatic case represents only one of the possible ways in which emotions affect other people’s physical or mental attitudes. In other cases, emotion-related responses affect other people’s orientations independent of their explicit informational content. Further, emotional knowledge may be co-constructed dynamically rather than transmitted unidirectionally from one person to another. In these cases, affective social learning need not involve changes in the perceived meaning of emotional objects, but rather adjustments in interactants’ orientations to what is happening. This chapter suggests ways of extending and going beyond existing methodological and theoretical approaches to emotional influence and identifies some of the blind spots of previous research.
This chapter provides a theoretical discussion of the boundaries of affective social learning (ASL), focusing on the conditions under which ASL takes place and the processes underlying ASL. In other words, the question is not so much what, but rather how we learn from others’ emotions. I propose three conditions that form a minimal requirement for others’ emotional reactions to have an initial impact on an individual: (1) the source (who displays the emotion and from whom the target learns) needs to show some emotional expression; (2) the target needs to pay attention to the source, and be aware of the source’s focus of attention and the relevance of the emotion expression for him or her; (3) the target needs to put some trust in the source’s judgement or evaluation of the world. I next discuss the conditions under which we are most motivated to learn from others – when we are in an uncertain or ambiguous situation, for example, but also when the target experiences or expresses emotions, which can then be regulated by the source. This form of ASL may form the basis of emotional psychopathology in families, where children’s emotions are often met with depressed, anxious or indifferent reactions by their parents, through which children unintentionally learn inappropriate emotional responses. Different types of learning are then discussed that may be involved in ASL, ranging from fear conditioning to cognitive learning.
It has long been established in theory that uncertainty impacts on firm behaviour. However, the empirical basis for quantifying the uncertainty-reducing effects of trade agreements has not been firmly established. In this paper, we develop estimates of the effect of reducing uncertainty regarding regulation of foreign services markets by making commitments that are bound under a trade agreement. Specifically, we identify the effect on services trade of services trade restrictions, as measured by the OECD's Services Trade Restrictiveness Index (STRI), and the separate effect of ‘water’ in binding commitments, as assessed by the difference between countries’ commitments under the General Agreement on Trade in Services (GATS) or free trade agreements (FTAs) and applied levels of market access, as captured by STRI scores. Using a gravity model, we find that services trade responds positively but inelastically to reductions in services trade barriers, as measured by the STRI and, in our preferred regression, the response to actual restrictions is more than twice – specifically 2.4 times – as strong as the response to comparable reductions in uncertainty, as measured by water. Moving from GATS commitments to FTA commitments leads to a 4.7% increase in services trade because of the reduction in uncertainty.
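The abstract does not reproduce the regression specification. As a rough sketch of how such a gravity estimation is commonly set up, the following uses Poisson pseudo-maximum likelihood (PPML) on synthetic data; the column names (trade, gdp_o, gdp_d, dist, stri, water), the data-generating process and the coefficients are all invented for illustration, and the paper's actual controls will differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic bilateral services-trade data (illustrative only).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gdp_o": rng.lognormal(6, 1, n),    # origin GDP
    "gdp_d": rng.lognormal(6, 1, n),    # destination GDP
    "dist":  rng.lognormal(8, 0.5, n),  # bilateral distance
    "stri":  rng.uniform(0, 0.6, n),    # applied restrictiveness (STRI)
    "water": rng.uniform(0, 0.4, n),    # gap between bound and applied levels
})
mu = (np.log(df.gdp_o) + np.log(df.gdp_d) - np.log(df.dist)
      - 2.0 * df.stri - 0.8 * df.water)
df["trade"] = rng.poisson(np.exp(mu - mu.mean() + 2))

# PPML gravity regression: the standard workhorse for gravity models
# with zero trade flows.
model = smf.glm(
    "trade ~ np.log(gdp_o) + np.log(gdp_d) + np.log(dist) + stri + water",
    data=df, family=sm.families.Poisson(),
).fit(cov_type="HC1")
print(model.summary())
```

A full specification would typically add exporter, importer and year fixed effects; the separate coefficients on `stri` and `water` are what allow the effect of applied restrictions to be distinguished from the effect of policy uncertainty.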
This paper draws on recent advances in our knowledge (much of it owed to the proliferation of military diplomas) and a new analytical method to quantify the number of soldiers and their children who received Roman citizenship between 14 and 212 c.e. Although significant uncertainties remain, these can be quantified and turn out to be small relative to the overall scale of enfranchisement. The paper begins by reviewing what is known about grants of citizenship to soldiers, with particular attention to the remaining uncertainties, before presenting a quantitative model of the phenomenon. The total number of beneficiaries was somewhere in the region of 0.9–1.6 million — significantly lower than previous estimates have suggested. It also emerges that the rate of enfranchisement varied substantially over time, in line with significant changes in manpower, length of service (and hence the number of recruits and discharged veterans) and the rate of family formation among soldiers. The Supplementary Material available online (https://doi.org/10.1017/S0075435819000662) contains a database of military diplomas (Supplementary Appendix 1), a mathematical model of enfranchisement implemented in MS Excel (Supplementary Appendix 2), a description of the model (Supplementary Appendix 3A) and a derivation of the model of attrition across service cohorts in Fig. 6 (Supplementary Appendix 3B).
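The model itself is in the Supplementary Material; purely to illustrate the cohort-attrition logic it describes (recruits enter, attrition thins each service cohort, survivors are enfranchised at discharge), here is a minimal sketch using invented round numbers, not the paper's figures.

```python
import numpy as np

# Minimal cohort-attrition sketch of enfranchisement at discharge.
# All numbers are illustrative assumptions, not the paper's estimates.
years = np.arange(14, 213)            # 14-212 CE
recruits_per_year = 8_000             # auxiliary recruits (assumed constant)
service = 25                          # years of service before discharge
annual_mortality = 0.015              # in-service attrition rate (assumed)

survival_to_discharge = (1 - annual_mortality) ** service
discharged_per_year = recruits_per_year * survival_to_discharge

total_enfranchised = discharged_per_year * len(years)
print(f"survival to discharge: {survival_to_discharge:.2f}")
print(f"discharged veterans enfranchised: {total_enfranchised/1e6:.2f} million")
```

The paper's model additionally varies recruitment, service length and family formation over time, which is what drives its finding that the rate of enfranchisement fluctuated substantially.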
The objective of this study is to propose a methodology for assessing waterway traffic capacity in the Shanghai estuary of the Yangtze River. To achieve this objective, we first put forward an estimation method that utilises the minimum collision distance, taking the dynamic ship domain into consideration. Considering possible effects caused by unknown external factors, the waterway traffic capacity is then represented by a probability distribution. Finally, we quantify the equivalent units of ships of various sizes as well as the effects of large-sized ships on the waterway traffic capacity. Results show that a large-sized ship is equivalent to more small-sized ships during the daytime period than at night. In addition, the deployment of large-sized ships could increase the waterway traffic capacity, and such an increment depends strongly on the increased proportion of large-sized ships in the waterway traffic.
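As a toy illustration of the flow logic behind such a capacity estimate (ships per hour limited by speed and minimum collision distance, with a distribution arising from variable inputs), the following sketch uses assumed distributions and a simple fixed multiple of ship length for the minimum collision distance; the paper's dynamic ship domain model is considerably richer.

```python
import numpy as np

# Minimal flow-based capacity estimate for a single traffic lane.
# Distributions and the 3L headway rule are illustrative assumptions.
rng = np.random.default_rng(7)
N = 10_000

speed = rng.normal(6.0, 1.0, N)          # ship speed over ground [m/s]
ship_len = rng.normal(150.0, 40.0, N)    # ship length [m]
d_min = 3.0 * ship_len                   # assumed minimum collision distance

# Ships per hour passing a cross-section: speed / headway distance.
capacity = 3600 * speed / (ship_len + d_min)

lo, med, hi = np.percentile(capacity, [5, 50, 95])
print(f"capacity [ships/h]: median {med:.1f}, 90% interval [{lo:.1f}, {hi:.1f}]")
```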
Product Service Systems (PSS) are increasingly complex and collaborative. For instance, manufacturing companies, service providers, and other companies collaborate to jointly develop and operate a PSS (e.g. a smart grid) whose constituent elements are managed and operated independently. Managerial independence and operational independence are commonly considered key characteristics of a System of Systems (SoS). Hence, a collaborative PSS exhibits SoS characteristics. Such systems have previously been introduced as Product Service Systems of Systems (PSSoS). In this paper, we propose to identify relevant uncertainties in the PSSoS design process. For this purpose, we go beyond the PSSoS concept definition and propose a comprehensive framework for PSS and PSSoS characterization. Moreover, based on both a literature review and an industrial diagnosis, we identify PSSoS-specific design uncertainties.
The complexity of products and systems is increasing through digitalization and interdisciplinarity, as well as through high technology maturity and new business models. In consequence, new product development (NPD) projects need to manage and satisfy a large number of requirements from a broad range of stakeholders. Yet NPD projects are often delayed due to requirement changes. In this paper, a new method for analyzing requirement change propagation is presented. The method is based on the assessment of requirement interrelations, structured in a requirements structure matrix, by a modified PageRank algorithm. With this method, a large number of strongly interrelated requirements can be analyzed efficiently. Higher-level interrelations as well as the relative weights of requirements are also incorporated in the analysis. The result is an efficient, holistic approach to the analysis of requirement change propagation.
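The abstract does not detail the paper's specific modification, but the underlying mechanics can be sketched with a generic weighted PageRank over a requirements structure matrix; the matrix values and damping factor below are illustrative assumptions.

```python
import numpy as np

def requirement_rank(R, d=0.85, tol=1e-9, max_iter=200):
    """Rank requirements by change-propagation influence.

    R[i, j] > 0 means a change in requirement j propagates to
    requirement i with the given strength (a requirements structure
    matrix). Generic weighted PageRank via power iteration; the
    paper's modified algorithm may differ.
    """
    n = R.shape[0]
    # Column-normalise so each requirement distributes its influence.
    col_sums = R.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # handle dangling requirements
    P = R / col_sums
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - d) / n + d * P @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

# Toy example: 4 requirements; requirement 0 propagates to 1 and 2, etc.
R = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0]], dtype=float)
print(requirement_rank(R))
```

Transitive (higher-level) interrelations are captured automatically, since the iteration propagates influence along chains of links rather than only direct ones.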
The initial planning of the development of complicated products usually requires time and effort. However, even the most accurate plans cannot cope with all the uncertainties that might arise during an ongoing project. If a severe uncertainty affects the development project, a critical situation arises and a disruption might happen. The present literature does not offer a comprehensive approach to investigating these types of events after they have occurred. This contribution presents a model that aims to analyse and better understand disruptions that affect ongoing product development projects.
In the design phase of the product development (PD) process, most new products face significant uncertainties and risks. Uncertainty is typically associated with a lack of information, while learning is a process that acquires information. Therefore, learning fast and at low cost decreases uncertainty and increases the efficiency of the product design phase. This paper investigates the concept of the cost of learning in the design phase of PD. Reviewing the literature, we conceptualize the cost of learning and examine learning methods, considering three aspects of the design phase of the PD process: (1) costs associated with learning from mistakes and failures, (2) learning methods and (3) categories of learners. This paper thus provides the conceptual foundations for future work to increase the efficiency of the PD process by reducing the cost of learning from mistakes and failures.
To be successful in innovation, organisations need to be dynamically adaptable to novel situations to avoid getting ‘left behind’. Yet they face vast uncertainties stemming from unforeseeable technological shifts or future user and market behaviour, making strategic decision-making on innovation an extremely difficult task. Decision-makers thus increasingly try to control or shape the future, rather than foresee it. This includes thinking ahead and generating potential pathways that will make an innovation viable, which captures the essence of designerly ways of thinking in reasoning toward ‘what might be’. We review extant literature discussing alternative strategies for applying this future-oriented thinking to become better at selecting novel ideas for development. We observe parallels between the divergent thinking, abductive reasoning, analogising and lateral thinking suggested by different authors in this process. The paper goes on to propose how these key mechanisms can be embedded within an existing framework for decision-making under uncertainty, the ‘OODA Loop’, which has seen increasing uptake in such decision-making scenarios.
Geometrical workpiece deviations are unavoidable and directly affect the function and quality of technological products. Tolerance management is regarded as a crucial subtask of the development of technological products, because it ensures the function as well as a sufficient product quality while maintaining reasonable production costs. This means that geometric tolerances, as an essential part of the product description, greatly affect the functional capability, manufacturability, mountability, verifiability and costs of the final product. The research group FOR 2271 was founded to enable the computer-aided specification of tolerances that meet the requirements of production, assembly, verification and function, through close cooperation between the departments responsible for product design, assembly and metrology. The aim of this contribution is to determine the manufacturing process scatter as well as the measurement uncertainty, and to establish ways and means to include that information in efficient meta-models, ultimately enabling improved and accurate tolerance analyses.
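To make concrete what feeding measured process scatter into a tolerance analysis looks like, here is a minimal Monte Carlo stack-up sketch; the dimensions, scatter values and functional limits are invented for illustration and are not from the project.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Hypothetical 1D tolerance stack: gap = A - B - C.
# Each dimension drawn from its measured process scatter
# (normal, mean = nominal, SD from process capability data).
A = rng.normal(50.00, 0.020, N)   # mm
B = rng.normal(30.00, 0.015, N)   # mm
C = rng.normal(19.90, 0.010, N)   # mm

gap = A - B - C
lower, upper = 0.02, 0.18         # assumed functional limits on the gap

out_of_spec = np.mean((gap < lower) | (gap > upper))
print(f"mean gap = {gap.mean():.4f} mm, sd = {gap.std():.4f} mm")
print(f"estimated out-of-spec fraction = {out_of_spec:.2%}")
```

Measurement uncertainty can be folded in the same way, by adding a noise term per dimension, which is precisely why accurate scatter and uncertainty data improve the resulting meta-models.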
System design at the early stage of the design process plays an important role. Model-based systems engineering is seen as a prominent approach to this challenge, and system design can be explored by means of system simulation. However, because the system under consideration is complex, the system model tends to have a high level of abstraction. The models therefore cannot depict every detail of the system, which makes optimization impractical.
Furthermore, at the early stage of design there are many uncertainties, such as the success of technological developments. By properly incorporating uncertain factors into system design, the system can be made tolerant to them. Currently, system design is conducted by experienced experts; for more complex systems, however, it would be difficult to continue this practice. A method to support the design team in making decisions during system design is therefore needed.
This paper proposes a computational support for system design. Design constraints, which appear to be the core information the design team wants at the system design stage, are modeled. By visualizing constraints quantitatively and intuitively, the proposed method can support the design team in conducting system design and design studies.
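The abstract does not specify the paper's constraint model; as a minimal sketch of quantitative constraint visualization, the following evaluates three hypothetical constraints over a two-dimensional design space and plots the feasible region. The design variables, constraint expressions and limits are all assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2-D design space: mass vs. power of a subsystem.
mass = np.linspace(1.0, 10.0, 200)        # kg
power = np.linspace(10.0, 100.0, 200)     # W
M, P = np.meshgrid(mass, power)

# Illustrative constraints (not from the paper):
#   budget:      5*M + 0.8*P <= 80   (cost-like limit)
#   performance: P / M >= 8          (power-density floor)
#   thermal:     P <= 90             (dissipation ceiling)
feasible = (5*M + 0.8*P <= 80) & (P / M >= 8) & (P <= 90)

plt.contourf(M, P, feasible.astype(float), levels=[-0.5, 0.5, 1.5],
             colors=["lightgray", "seagreen"])
plt.xlabel("mass [kg]")
plt.ylabel("power [W]")
plt.title("Feasible region under three design constraints")
plt.savefig("feasible_region.png", dpi=150)
```

Plots of this kind let a design team see at a glance how much margin a candidate design point has before any constraint is violated.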
Technological advances as well as novel manufacturing and design paradigms, such as Industry 4.0 and digitalization, offer new opportunities for innovative products. However, they also increase product complexity and cause new challenges in the production process. Therefore, agile production approaches are crucial. Tolerance compensation provides more flexibility in the production process, as demands on the dimensional accuracy of components are reduced. As a result, tolerance compensation also offers the possibility of reducing production costs without compromising product quality. Nevertheless, tolerance compensation is often treated as a reactive intervention to reduce the number of out-of-spec parts a posteriori, instead of being included in the early stages of Geometrical Variations Management. This contribution tackles the issue by characterizing and categorizing different methods of tolerance compensation, as well as providing design guidelines for their application. This enables design engineers to select a suitable tolerance compensation method for different applications.