Research on complex behavior change interventions has largely focused on developing interventions and testing their effects in feasibility trials, pilot studies, and randomized controlled trials. However, a significant gap exists in translating theory-informed behavior change interventions into real-world practice. This chapter describes how engaging stakeholders can improve the likelihood that effective behavior change interventions are put into practice. The chapter begins with an overview of implementation science and normalization process theory, which outlines how effective interventions become routinely implemented. The roles of stakeholders as research partners and research participants are differentiated using research in health contexts. The process of stakeholder involvement is then illustrated using digital health interventions for people with long-term physical health conditions, with reference to UK Medical Research Council guidelines on complex interventions. The examples illustrate (1) how stakeholder support in the co-design of complex interventions can improve their utility, usability, accessibility, and acceptability and (2) how stakeholder perspectives elicited using mixed methods during the feasibility and pilot phases of intervention development can help inform subsequent stages of intervention development. Finally, the evaluation and implementation phase is explored, using a case study to illustrate the need to engage with additional stakeholders to translate effective interventions into routine practice.
This chapter describes the reflective-impulsive model (RIM) and elaborates those features that are functionally important for behavioral interventions. The RIM explains behavior as being controlled by two interacting systems, each of which follows a distinct set of operating principles. The reflective system operates based on propositional representations and syllogistic reasoning and affects behavior via goal-driven decisions mediated by a process of intending, which activates goal-congruent behavioral schemata until the goal associated with the decision is reached. The impulsive system operates based on associative representations, with behavioral schemata serving as a pathway to behavior that is also modulated by the reflective system. Within the impulsive system, motivational orientations of approach-avoidance as well as homeostatic dysregulation modulate the accessibility of representations and, thereby, the system's reactivity to stimuli. The impulsive system operates at a higher degree of automaticity than the reflective system but is, at the same time, constrained in its processing capabilities, for example in being unable to process negations. Interventions based on the RIM typically aim to change evaluative associations, to prevent deprivation-driven hyper-reactivity to stimuli, and to change approach/avoidance tendencies via computer-based training. Although there are several demonstrations of their effectiveness, there is still ongoing debate about the mediators and boundary conditions of these interventions.
Rigorous evaluation of interventions is vital to advance the science of behavior change and identify effective interventions. Although randomized controlled trials (RCTs) are often considered the “gold standard”, other designs are also useful. Considerations when choosing an evaluation design include the research questions, the stage of evaluation, and different evaluation perspectives. Approaches to exploring the utility of an intervention include a focus on (1) efficacy; (2) “real-world” effectiveness; (3) how an intervention works to produce change; or (4) how the intervention interacts with context. Many evaluation designs are available: experimental, quasi-experimental, and nonexperimental. Each has strengths and limitations, and the choice of design should be driven by the research question. Choosing relevant outcomes is an important step in planning an evaluation. A typical approach is to identify one primary outcome and a narrow range of secondary outcomes. However, a focus on one primary outcome means other important changes may be missed. A well-developed program theory helps identify relevant outcomes. High-quality evaluation requires (1) involvement of relevant stakeholders; (2) evaluating and updating program theory; (3) consideration of the wider context; (4) addressing implementation issues; and (5) appropriate economics input. Addressing these can increase the quality, usefulness, and impact of behavior change interventions.
Behavior change interventions based on self-determination theory focus on promoting autonomous motivation, using autonomy support to do so. This chapter outlines the autonomy-supportive intervention program (ASIP), which helps supervisors “upgrade” the quality of their motivating style toward those they supervise, as occurs in the classroom, workplace, home, sport arenas, and health care settings. This is an important approach to behavior change because, when supervisors become more autonomy-supportive and less controlling, those they supervise tend to increase their adaptive behaviors (e.g., learning, prosocial behavior) and well-being as well as to decrease their maladaptive behaviors (e.g., disengagement, antisocial behavior) and ill-being. This chapter defines the key constructs and practices featured in the ASIP (i.e., supervisor’s motivating styles, supervisee’s psychological needs); identifies the theoretical basis and the specific mechanisms by which this intervention enables behavior change; provides an overview of what occurs during an ASIP; outlines the evidence base supporting the efficacy and benefits of the intervention; and offers step-by-step guidelines for how practitioners might carry out an ASIP in different contexts and populations.
Recently, the asymptotic mean value of the height of a birth-and-death process was given in Videla, L.A. (2020). We consider the asymptotic variance of the height as the number of states tends to infinity. Further, we prove that the heights exhibit a cutoff phenomenon and that the normalized height converges to a degenerate distribution.
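The cutoff statement can be expressed in a standard form; the following is an illustrative formulation under common conventions, not necessarily the exact statement proved here. Writing $H_n$ for the height of the process with $n$ states:

```latex
\frac{H_n}{\mathbb{E}[H_n]} \;\xrightarrow{\;\mathbb{P}\;}\; 1 \qquad (n \to \infty)
```

That is, the normalized height concentrates at a single value, which is one way of phrasing convergence to a degenerate distribution.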
On October 2, 2016, Colombian voters rejected a peace agreement between the government and the FARC, a Marxist guerrilla group, intended to put an end to the oldest armed conflict in the western hemisphere. Ex-combatants who fully confessed to their crimes, asked for forgiveness, and made reparations to their victims would have been granted more lenient sentences. Those who criticized the agreement vehemently opposed this kind of justice, claiming that it was an antidemocratic expression of impunity and that it violated international human rights standards. Through the study of the Colombian case, this chapter explains how the conceptions of justice, impunity, and punishment are constructs that rely on ‘international standards’ – on human rights, criminal justice, the rule of law, and democracy. Such standards have been devised in global north countries and do not necessarily respond to the realities and necessities of global south societies but, nevertheless, have become predominant and almost uncontested. The chapter also aims to show the ways in which the relationship between political and economic regimes, on the one hand, and the scope and forms that punishment takes, on the other, is constructed and legitimizes particular forms of government, at both the national and international levels.
Evidence of work clearly connected to the composition of the Eroica is traceable from 1802 onwards. This evidence consists of letters, sketches, and other materials in the composer’s hand, as well as documents by copyists and collaborators who worked with him. Although some fundamental documents (such as the autograph score) are now lost, these materials make it possible to reconstruct in detail many aspects of the genesis of the symphony. This chapter seeks to reconstruct the different stages in the genesis of the Eroica on the basis of a well-established research tradition (represented by scholars such as Gustav Nottebohm, Alan Tyson, Otto Biba, Michael C. Tusa, Bathia Churgin and Lewis Lockwood). It focuses on general aspects of Beethoven’s creative process and draws attention to the variety of possible methodological approaches developed by musicologists during nearly two centuries of research on the subject.
While the maximum likelihood estimation approach of Perron and Wada (2009) suggests that postwar US real GDP follows a trend stationary process (TSP), our Bayesian approach based on the same model and the same sample suggests that it follows a difference stationary process (DSP). We first show that the results based on the maximum likelihood approach should be interpreted with caution, as they are more subject to the ‘pile-up problem’ than those based on the Bayesian approach. We then directly estimate and compare the two competing TSP and DSP models of real GDP within the Bayesian framework. Our empirical results suggest that a DSP model is preferred to a TSP model in terms of both in-sample fits and out-of-sample forecasts.
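The two competing specifications can be summarized in standard textbook notation; this is an illustrative formulation, not necessarily the exact parameterization estimated in the paper:

```latex
% Trend stationary process (TSP): stationary deviations around a deterministic trend
y_t = \mu + \delta t + u_t, \qquad u_t \ \text{stationary}

% Difference stationary process (DSP): unit root with drift
y_t = \delta + y_{t-1} + v_t, \qquad v_t \ \text{stationary}
```

Under the TSP, shocks have only transitory effects on the level of real GDP; under the DSP, shocks have permanent effects, which is why distinguishing the two matters for forecasting.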
Hypothetical thinking involves imagining possibilities and mentally exploring their consequences. This chapter overviews a contemporary, integrative account of such thinking in the form of Jonathan Evans’s hypothetical thinking theory. This default-interventionist, dual-process theory operates according to three principles: relevance, singularity, and satisficing. To illustrate the explanatory strength of the theory, a range of empirical evidence is considered that has arisen from extensive research on hypothesis testing, which involves individuals generating and evaluating hypotheses as they attempt to derive a more general understanding of information. The chapter shows how key findings from hypothesis-testing research undertaken in both laboratory and real-world studies (e.g. in domains such as scientific reasoning) are readily explained by the principles embedded in hypothetical thinking theory. The chapter additionally points to important new directions for future research on hypothetical thinking, including the need for: (1) further studies of real-world hypothesis testing in collaborative contexts, including ones outside of the domain of scientific reasoning; (2) increased neuroscientific analysis of the brain systems underpinning hypothetical thinking so as to inform theoretical developments; and (3) systematic individual-differences investigations to explore the likely association between people’s capacity to think creatively and their ability to engage in effective hypothetical thinking.
Through imagining possible actions and considering their consequences, we are able to reason about the morality of behavior – judging whether an action is morally right or wrong. Neuroscience research indicates that moral reasoning depends on a complex, broadly distributed network of brain regions that interact in both cooperative and competitive ways. Understanding the underlying neurobiology that governs how these regions dynamically interact to produce patterns of behavior is therefore of interest to the field. Currently, prominent theories suggest that moral judgments (consequentialist or deontological) are the product of two distinct cognitive systems (i.e. a dual-process framework). Network neuroscience, an emerging field that measures and interprets brain activity through the framework of modern network science, is positioned to expand our understanding of this dual-process framework by examining how topological properties of networks influence consequentialist and deontological reasoning, and how these two processing systems interact in order to imagine hypothetical scenarios during complex deontological reasoning tasks. In this chapter, we review evidence from neuroscience that bears on our understanding of the dual-process moral reasoning framework and advance a network neuroscience perspective on the neurobiological substrates that underlie it.
Mutual responsiveness is necessary to sustain a close relationship and, to achieve it, people must protect their overall motivation to act in a caring way against the costs naturally arising from the challenges of maintaining interdependence. These challenges are universal and require solutions that constitute relatively automatic habit structures. The solutions allow people to “keep their eyes on the prize” and sustain their overall rewards without being distracted by the localized costs that occur along the way. For instance, one important challenge involves partners’ behavior that will on occasion interfere with one’s personal goals, by either pursuing their own interests first or failing to coordinate dyadic goals. In a case of motivated cognition, the automatic response to such experiences is to rationalize the negative, costly behavior by exaggerating the partners’ positive features and compensating cognitively for it. However, consistent with the MODE model, if people have the cognitive resources for deliberation, those whose broader goals are more self-protective rather than connective will overturn the pro-relationship impulses, to their ultimate detriment. Research exploring three different automatic procedural rules that illustrate this process of motivated cognition will be described.
In this paper, we propose a multivariate Hawkes framework for modelling and predicting cyber attack frequency. The inference is based on a public data set containing features of data breaches targeting US industry. As a main output of this paper, we demonstrate the ability of Hawkes models to capture self-excitation and interactions of data breaches depending on their type and targets. In this setting, we detail prediction results providing the full joint distribution of the occurrence times of future cyber attacks. In addition, we show that a non-instantaneous excitation in the multivariate Hawkes model, which departs from the classical framework of the exponential kernel, fits our data better. In an insurance framework, this study makes it possible to determine quantiles for the number of attacks, useful for an internal model, as well as the frequency component for a data breach guarantee.
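As a minimal sketch of the self-excitation mechanism underlying such models, the following simulates a univariate Hawkes process with the classical exponential kernel via Ogata's thinning algorithm. This is an illustration only: the paper's model is multivariate with a non-instantaneous (non-exponential) kernel, and all parameter values below are invented for the example.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    on [0, horizon] using Ogata's thinning algorithm."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while True:
        # Intensity just after the last accepted event is an upper bound,
        # since the exponential kernel only decays between events.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        # Candidate arrival from the dominating Poisson process.
        t += rng.expovariate(lam_bar)
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        # Accept the candidate with probability lambda(t) / lam_bar.
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # an "attack" occurs at time t
    return events

# Illustrative parameters: baseline rate 0.5, jump 0.4, decay 1.0.
attacks = simulate_hawkes(mu=0.5, alpha=0.4, beta=1.0, horizon=50.0, seed=1)
```

Raising `alpha` toward `beta` strengthens the clustering of simulated event times, which is the qualitative behaviour (bursts of related breaches) that the richer multivariate, non-instantaneous kernel in the paper is designed to capture.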
A central part of the design process is collaboration, harnessing specialist expertise, often in meetings. We understand relatively little about how meetings serve teams of designers and their work. This study uses soft systems methodology to create structures that describe and explain meetings. The results suggest extending the boundary of interest and propose a conceptual framework that reveals some under-addressed stages and activities, which may help designers improve their meetings.
Systems engineering (SE) is a general methodological approach that includes all relevant activities to design, develop and verify a system. This work was motivated by the need to enhance the integration of customer needs into the design phases of SE. A joint methodology was proposed, integrating the SE approach with Design Thinking (DT). An analysis was conducted as part of a case study proposed by IBM Corporation for the development of a security system for a building. The results confirm that the integration of DT into SE has a significant impact on the generation of concept solutions.
Increasing complexity of products and design processes leads to intensive collaboration of different stakeholders in technical product development. This creates a demand for suitable methods of collaboration across departmental interfaces, such as between design and simulation. The paper investigates typical barriers to collaboration at this interface and measures to overcome them. Methods of complexity management are used to link these barriers and measures, based on literature and empirical data from online surveys and interview studies. The framework uses a set of structural metrics to analyse collaboration networks systematically.
More than 15 years after the publication of the Agile Manifesto for software development, agile development approaches have also reached the processes of physical product development. Because the boundary conditions and requirements here differ strongly from those of pure software development, these approaches often reach their limits. Research and practice have quickly recognized, however, that hybrid approaches can integrate the strengths of agile and plan-driven development. This paper presents 25 hybrid development approaches that have been identified in a systematic literature review.
Engineer-To-Order (ETO) companies develop complex one-of-a-kind products based on specific customer demands. Given the product uniqueness, commissioning plays an important role in the product development process. However, the project variety and low data availability hinder the analysis of commissioning processes. This paper proposes a framework for the structured analysis of commissioning processes in ETO companies by analysing the impact of product requirements and design on commissioning performance. A case study presents the practical application of the developed framework.
Sustainable design of equipment for process intensification requires a comprehensive and correct identification of relevant stakeholder requirements, design problems, and tasks crucial for innovation success. Combining the principles of Quality Function Deployment with Importance-Satisfaction Analysis and Contradiction Analysis of requirements makes it possible to define a proper process innovation strategy more reliably and to develop an optimal process intensification technology with fewer secondary engineering and ecological problems.
The paper describes the rigorous application of a validated methodological experimental protocol to divergent and convergent thinking tasks occurring in design, using neurophysiological means (EEG and eye-tracking). The EEG evidence confirms findings consistent with the literature. Notably, these results are corroborated by the eye-tracking data, and further evidence emerged: in particular, neurophysiological results in idea generation differ between designers and engineers. This study was supported by a multidisciplinary team covering both the neuropsychological and the data analysis aspects.
The design of maintenance scheduling in large asset-intensive industries is challenged by complex user requirements and suffers from a lack of academic and empirical study. Therefore, using a representative case study, this paper aims to: (1) identify the current practices and complex scheduling requirements; (2) propose a design support tool to optimize the maintenance scheduling process; and (3) report the gained benefits. The results reveal that the proposed tool can decrease resource requirements, increase capacity utilization, and reduce cost while addressing the complex user requirements.