This chapter reviews the methods that psychologists have devised for measuring wisdom. There are two classical types of measures: self-report scales, where people rate themselves with respect to characteristics of wisdom, and performance measures, where people respond to descriptions of problems that require wisdom. Both types of measures have their problems. Self-report wisdom scales are susceptible to both unintentional distortions, if participants have inaccurate views of themselves, and intentional distortions, if participants want to present themselves as wiser than they are. Performance measures require considerable effort to administer and score, and they measure what participants theoretically think about a problem, which is not necessarily what they would do if faced with that problem in real life. New approaches have tried to move the measurement of wisdom closer to real life. Some researchers ask people about difficult events from their own lives. Other researchers use videos of real-life conflicts instead of written problem descriptions. There is still much room for improving our wisdom measures.
Cloud storage faces many problems in the storage process that can seriously degrade a system's efficiency. One of the most significant is insufficient buffer space: packets of data must wait for storage service, which can weaken the performance of the system. The storage process can be treated as a stochastic process, allowing us to determine the probability distribution of the buffer occupancy and the buffer content and to predict the performance behavior of the system at any time. This paper models a cloud storage facility as a fluid queue modulated by a Markovian queue. The fluid buffer has infinite capacity, and its input is governed by an M/M/1/N queue with constant arrival and service rates. We obtain the analytical solution for the distribution of the buffer occupancy. Moreover, several performance measures and numerical results are given that illustrate the effectiveness of the proposed model.
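To make the kind of model described above concrete, here is a minimal Monte Carlo sketch of a fluid buffer modulated by an M/M/1/N background queue. All rates and parameter values are illustrative assumptions, not taken from the paper, and the simulation only estimates the mean buffer content rather than reproducing the paper's analytical solution:

```python
import random

def mean_buffer_content(lam=0.5, mu=1.0, N=10, r=1.0, c=0.7,
                        horizon=50_000.0, seed=1):
    """Estimate the long-run mean content of a fluid buffer.

    Background process: an M/M/1/N queue with arrival rate lam and
    service rate mu. Fluid fills at net rate (r - c) while the
    background server is busy (state >= 1) and drains at rate c while
    it is idle; the buffer is reflected at zero. Parameters are chosen
    so that the mean drift is negative (stable buffer).
    """
    rng = random.Random(seed)
    t, n, x = 0.0, 0, 0.0      # time, background state, buffer content
    area = 0.0                 # integral of buffer content over time
    while t < horizon:
        # total transition rate of the background CTMC in state n
        rate = (lam if n < N else 0.0) + (mu if n > 0 else 0.0)
        dt = rng.expovariate(rate)
        drift = (r - c) if n > 0 else -c
        if drift >= 0.0:
            area += x * dt + 0.5 * drift * dt * dt
            x += drift * dt
        else:
            t_hit = x / -drift          # time until the buffer empties
            if dt <= t_hit:
                area += x * dt + 0.5 * drift * dt * dt
                x = max(0.0, x + drift * dt)
            else:
                area += 0.5 * x * t_hit  # drains to zero, then stays there
                x = 0.0
        t += dt
        # pick the next background transition (arrival vs. departure)
        if n == 0 or (n < N and rng.random() < lam / rate):
            n += 1
        else:
            n -= 1
    return area / t
```

The reflection at zero is what distinguishes a fluid queue from a simple random walk: the buffer content can never go negative, so idle periods of the background queue are only partially "credited" against later busy periods.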
The increasing complexity and variation in ART laboratory and clinical practice, together with a recognition of the importance of patient-centered outcomes, make ART registries a central resource for informing patients, regulators and governments about the performance of ART treatment. The sequential nature of ART treatment gives rise to a multitude of possible numerators and denominators for measuring treatment outcomes. Which combination of these is the most appropriate and important depends on the stakeholder perspective and the purpose of the measure. ART registries are used in a number of countries to measure ART performance, particularly for reporting at a fertility clinic level. While health data transparency generally leads to better decision making, the process of measurement itself has the potential to both positively and negatively alter the behavior of clinics and clinicians. Finally, personalized patient predictor tools developed using large registry datasets are becoming common and are likely to become an important tool to assist clinicians in counselling patients about their individual chances of ART success.
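The numerator/denominator issue can be illustrated with a toy calculation. The function name, the denominator choices shown, and the counts used below are hypothetical; real registries define many more numerator/denominator combinations:

```python
def art_success_rates(cycles_started, egg_retrievals, embryo_transfers,
                      live_births):
    """Live-birth rate under three common alternative denominators.

    The same numerator (live births) yields very different headline
    figures depending on which stage of the sequential treatment is
    used as the denominator.
    """
    return {
        "per_cycle_started": live_births / cycles_started,
        "per_egg_retrieval": live_births / egg_retrievals,
        "per_embryo_transfer": live_births / embryo_transfers,
    }

# With 100 cycles started, 90 retrievals, 80 transfers and 24 live
# births, the reported "success rate" ranges from 24% to 30%.
rates = art_success_rates(100, 90, 80, 24)
```

Because attrition occurs at each stage, later-stage denominators always produce higher rates, which is one reason the appropriate measure depends on the stakeholder's perspective.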
Efforts to respond to performance-based accountability mandates for public health emergency preparedness have been hindered by a weak evidence base linking preparedness activities with response outcomes. We describe an approach to measure development that was successfully implemented in the Centers for Disease Control and Prevention Public Health Emergency Preparedness Cooperative Agreement. The approach leverages insights from process mapping and experts to guide measure selection, and provides mechanisms for reducing performance-irrelevant variation in measurement data. We also identify issues that need to be addressed to advance the science of measurement in public health emergency preparedness.
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations.
Methods
We adapted an engineering analytic technique used to assess the reliability of technological systems—failure mode and effects analysis—to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate likely future performance of the system overall.
Results
Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response system reliability was demonstrated with a simple simulation example.
Conclusions
Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing, providing a view of likely future response performance. (Disaster Med Public Health Preparedness. 2013;7:96-104)
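The reliability logic sketched in the abstract can be made concrete with a toy series-system model: if each stage of a response chain must succeed, overall reliability is the product of stage reliabilities, and a Monte Carlo run should agree with the analytic product. The stage names and failure probabilities below are invented for illustration, not drawn from SNS assessment data:

```python
import random

# Hypothetical failure probabilities for stages of a response chain.
STAGE_FAILURE_P = {
    "request_and_approval": 0.02,
    "stockpile_delivery": 0.05,
    "local_distribution": 0.08,
    "dispensing": 0.10,
}

def analytic_reliability(stages=STAGE_FAILURE_P):
    """Series-system reliability: every stage must succeed."""
    r = 1.0
    for p_fail in stages.values():
        r *= (1.0 - p_fail)
    return r

def simulated_reliability(stages=STAGE_FAILURE_P, trials=100_000, seed=0):
    """Monte Carlo check of the same series model: a trial succeeds
    only if every stage independently avoids failure."""
    rng = random.Random(seed)
    ok = sum(
        all(rng.random() >= p for p in stages.values())
        for _ in range(trials)
    )
    return ok / trials
```

In a failure mode and effects analysis, the value of such a model is less the overall number than the sensitivity it exposes: here the dispensing stage dominates, so assessment effort aimed at tightening that estimate buys the most predictive power.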
The lack of frequent real-world opportunities to study preparedness for large-scale public health emergencies has hindered the development of an evidence base to support best practices, performance measures, standards, and other tools needed to assess and improve the nation’s multibillion dollar investment in public health preparedness. In this article, we argue that initial funding priorities for public health systems research on preparedness should focus on using engineering-style methods to identify core preparedness processes, developing novel data sources and measures based on smaller-scale proxy events, and developing performance improvement approaches to support the translation of research into practice within the wide variety of public health systems found in the nation. (Disaster Med Public Health Preparedness. 2008;2:247–250)
A recent trend throughout Australia has been to develop multi-purpose indoor public aquatic centres rather than outdoor pools. Such major policy and planning decisions often rely on consultants' feasibility studies, yet there is limited comprehensive industry-wide data available on which to base such decisions. The industry-wide performance measures discussed in this paper help fill this void by providing objective data to support the contention that multi-purpose indoor aquatic centres tend to outperform centres with solely outdoor pools. The key indicators of performance are based on financial viability and community participation data for a sample of Australian public aquatic centres.
We consider the simulation of transient performance measures of highly reliable fault-tolerant computer systems. The most widely used mathematical tools for modeling the behavior of these systems are Markov processes. Here, we deal chiefly with the simulation of the mean time to failure (MTTF) and the reliability, R(t), of the system at time t. Variance reduction techniques are used to reduce the simulation time. We combine two of these techniques: importance sampling and conditioning. The resulting hybrid algorithm achieves a significant reduction in simulation time and yields stable estimates.
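The importance-sampling idea for highly reliable systems can be sketched on the smallest possible example: a two-component repairable system, where a regenerative cycle starts when the first component fails and the system fails if the second component fails (probability p = lam/(lam + mu)) before the first is repaired. Naive simulation almost never observes this rare event; failure biasing samples it with an inflated probability and reweights by the likelihood ratio. The model and all parameter values below are illustrative, not the authors' algorithm:

```python
import random

def cycle_failure_prob_is(lam=1e-3, mu=1.0, bias=0.5,
                          trials=10_000, seed=7):
    """Importance-sampling estimate of the per-cycle failure probability.

    True value is p = lam / (lam + mu). Instead of sampling the rare
    failure step with probability p, we sample it with probability
    `bias` and weight each observed failure by the likelihood ratio
    p / bias; survival paths contribute zero to the estimator.
    """
    rng = random.Random(seed)
    p = lam / (lam + mu)
    total = 0.0
    for _ in range(trials):
        if rng.random() < bias:       # biased failure step
            total += p / bias         # likelihood-ratio weight
    return total / trials
```

The estimator is unbiased (its mean is bias * p/bias = p), but its relative error per trial is of order 1 rather than of order 1/sqrt(p) as with naive simulation; in regenerative MTTF estimation this quantity appears in the denominator, since MTTF is roughly the expected cycle length divided by the per-cycle failure probability.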
There are some generalised semi-Markov processes (GSMPs) that are insensitive; that is, the values of some performance measures for the system depend only on the mean values of the lifetimes and not on their actual distributions. In most cases this is not true, and a performance measure can take on a range of values depending on the lifetime distributions. In this paper we present a method for finding tight bounds on the sensitivity of performance measures for the class of GSMPs with a single generally distributed lifetime. Using this method we can find upper and lower bounds for the value of a function of the stationary distribution as the distribution of the general lifetime ranges over a set of distributions with fixed mean. The method is applied to find bounds on the average queue length of the Engset queue and the time congestion in the GI/M/n/n queueing system.
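The Engset example mentioned above can be made concrete. Its stationary distribution in the exponential case has a simple product form, and the "average queue length" of such a loss system is the mean number of busy servers. The parameter values below are illustrative assumptions:

```python
from math import comb

def engset_distribution(S=10, n=4, alpha=0.2):
    """Stationary distribution of an Engset loss system.

    S sources, n servers with no waiting room, and per-idle-source
    offered load ratio alpha = lambda/mu. The stationary probability
    of j busy servers is proportional to C(S, j) * alpha**j,
    for j = 0, ..., n.
    """
    weights = [comb(S, j) * alpha**j for j in range(n + 1)]
    z = sum(weights)
    return [w / z for w in weights]

def mean_busy_servers(S=10, n=4, alpha=0.2):
    """Average number of busy servers (the average 'queue length'
    of the loss system)."""
    probs = engset_distribution(S, n, alpha)
    return sum(j * p for j, p in enumerate(probs))
```

With a general (non-exponential) lifetime, this single closed-form value is replaced by the paper's interval: upper and lower bounds on the measure as the lifetime distribution ranges over all distributions with the given mean.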