The single pulses of PSR J1921+1419 were examined in detail using high-sensitivity observations from the Five-hundred-meter Aperture Spherical radio Telescope (FAST) at a central frequency of 1250 MHz. These observations indicate that the pulsar exhibits two distinct emission modes, classified as strong and weak on the basis of single-pulse intensity. In our observations, the times spent in the two modes are nearly equal, each accounting for about half of the total observation time. The minimum duration of both modes is $1\,P$ and the maximum duration is $13\,P$, where $P$ is the pulsar spin period. Additionally, the mean intensity of the weak mode is less than half that of the strong mode. Notably, the switching between these modes demonstrates a clear quasi-periodicity, with a modulation period of approximately $10 \pm 2\,P$. An analysis of the polarisation properties of both modes indicates that they originate from the same region within the pulsar's magnetosphere. Finally, the viewing geometry was analysed on the basis of kinematical effects.
We report data on the experimental articles published from 2000 to 2021 in seven leading general-interest economics journals. We also examine time trends in the characteristics of the published experimental articles, including citations and the nationality of the authors. We find an overall increasing trend in the publication of non-lab experiments in all journals. By contrast, the share of lab experiments has more than halved in the AER and has remained low in the other Top Five journals. The diverging trends for non-lab and lab experiments are not universal, as the shares of both have increased in two other high-ranking economics journals (JEEA and EJ). We also observe some heterogeneity in publication, citations, rankings, and locations of authors' affiliations across journals and types of experiments.
Observations of the intracluster medium (ICM) in the outskirts of galaxy clusters reveal shocks associated with gas accretion from the cosmic web. Previous work based on non-radiative cosmological hydrodynamical simulations has defined the shock radius, $r_{\text{shock}}$, using the ICM entropy, $K \propto T/{n_\mathrm{e}}^{2/3}$, where $T$ and $n_{\text{e}}$ are the ICM temperature and electron density, respectively; $r_{\text{shock}}$ is identified either with the radius at which $K$ is a maximum or with that at which its logarithmic slope is a minimum. We investigate the relationship between $r_{\text{shock}}$, which is driven by gravitational hydrodynamics and shocks, and the splashback radius, $r_{\text{splash}}$, which is driven by the gravitational dynamics of cluster stars and dark matter and is measured from their mass profile. Using 324 clusters from The Three Hundred project of cosmological galaxy formation simulations, we quantify statistically how $r_{\text{shock}}$ relates to $r_{\text{splash}}$. Depending on the definition, we find that the median $r_{\text{shock}} \simeq 1.38 r_{\text{splash}} (2.58 R_{200})$ when $K$ reaches its maximum and $r_{\text{shock}} \simeq 1.91 r_{\text{splash}} (3.54 R_{200})$ when its logarithmic slope is a minimum; the best-fit linear relation scales as $r_{\text{shock}} \propto 0.65 r_{\text{splash}}$. We find that $r_{\text{shock}}/R_{200}$ and $r_{\text{splash}}/R_{200}$ anti-correlate with virial mass, $M_{200}$, and recent mass accretion history, and $r_{\text{shock}}/r_{\text{splash}}$ tends to be larger for clusters with higher recent accretion rates. We discuss prospects for measuring $r_{\text{shock}}$ observationally and how the relationship between $r_{\text{shock}}$ and $r_{\text{splash}}$ can be used to improve constraints from radio, X-ray, and thermal Sunyaev-Zeldovich surveys that target the interface between the cosmic web and clusters.
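As a concrete illustration of the two definitions above, the following sketch locates both candidate shock radii in a tabulated radial entropy profile. This is a minimal reconstruction from the definitions in the abstract, not the paper's actual analysis pipeline; the array names and the assumption of smooth, tabulated profiles are ours.

```python
import numpy as np

def shock_radius(r, T, n_e):
    """Locate the shock radius from radial ICM profiles.

    r   : radii (monotonically increasing, e.g. in units of R200)
    T   : ICM temperature profile
    n_e : electron density profile
    Returns the two definitions used above: the radius where the entropy
    K = T / n_e^(2/3) peaks, and the radius where the logarithmic slope
    d ln K / d ln r is most negative.
    """
    K = T / n_e**(2.0 / 3.0)                   # entropy proxy K ~ T / n_e^(2/3)
    r_peak = r[np.argmax(K)]                   # definition 1: K maximum
    slope = np.gradient(np.log(K), np.log(r))  # logarithmic slope of K
    r_slope = r[np.argmin(slope)]              # definition 2: slope minimum
    return r_peak, r_slope
```

Dividing the returned radii by $R_{200}$ gives the normalisation used in the quoted medians.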
This study explored the intimacy-power patterns in Chinese direct criticism and how these may reflect native Chinese speakers' consideration of rapport management. With data retrieved from BCC, a representative corpus of modern Chinese, the analyses identified the degree of intimacy and the relative power of the interlocutors where direct criticism was used. Results revealed that native Chinese speakers use direct criticism mostly in close and equal relationships, followed by distant and equal ones. Moreover, criticism with different criticizing markers shows distinct patterns: close and equal relationships appeared more frequently in criticism with “你太(nitai) + adj.”, “我看你(wokanni)” and “你真是(nizhenshi)”, while distant and equal relationships appeared more frequently in criticism with “你这(nizhe) + n./adj.”. These results suggest that native Chinese speakers adopt rapport-maintaining/rapport-enhancing orientations by using criticism more often in close and equal relationships, together with a tendency to ignore rapport, especially in distant and equal relationships. To conclude, this study reveals the patterns of intimacy-power relationships in Chinese speakers' usage of direct criticism, reflecting their awareness of rapport management. Overall, it provides insights into the nature of the speech act of criticism.
Diagnostic classification models (DCMs) have seen wide applications in educational and psychological measurement, especially in formative assessment. DCMs in the presence of testlets have been studied in recent literature. A key ingredient in the statistical modeling and analysis of testlet-based DCMs is the superposition of two latent structures, the attribute profile and the testlet effect. This paper extends the standard testlet DINA (T-DINA) model to accommodate the potential correlation between the two latent structures. Model identifiability is studied and a set of sufficient conditions are proposed. As a byproduct, the identifiability of the standard T-DINA is also established. The proposed model is applied to a dataset from the 2015 Programme for International Student Assessment. Comparisons are made with DINA and T-DINA, showing that there is substantial improvement in terms of the goodness of fit. Simulations are conducted to assess the performance of the new method under various settings.
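For orientation, here is one plausible way to write down the superposition described above, using standard DINA notation; the paper's exact parameterisation of the T-DINA model, and of the correlation between the two latent structures, may differ.

```latex
% DINA ideal response of person i to item j, with Q-matrix entries q_{jk}
% and attribute profile \alpha_i:
\eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}},
% with a testlet response model that superposes a person-specific effect
% \gamma_{i,d(j)} for the testlet d(j) containing item j:
\operatorname{logit} P\bigl(X_{ij}=1 \mid \boldsymbol{\alpha}_i, \gamma_{i,d(j)}\bigr)
  = \lambda_{j0} + \lambda_{j1}\,\eta_{ij} + \gamma_{i,d(j)}.
```

The extension studied here allows the attribute profile $\boldsymbol{\alpha}_i$ and the testlet effects $\gamma_{i,d}$ to be correlated rather than independent, which is the source of the identifiability questions the paper addresses.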
Time limits are imposed on many computer-based assessments, and it is common to observe examinees who run out of time, resulting in missingness due to not-reached items. The present study proposes an approach that accounts for the missingness mechanism of not-reached items via response time censoring. The censoring mechanism is directly incorporated into the observed likelihood of item responses and response times. A marginal maximum likelihood estimator is proposed, and its asymptotic properties are established. The proposed method was evaluated through simulation studies and compared to several alternative approaches that ignore the censoring. An empirical study based on the PISA 2018 Science Test was further conducted.
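Schematically, and under notation of our own choosing, censoring enters the observed likelihood as follows: if examinee $i$ answers items $1,\dots,J_i$ before the time limit $\tau$, the first not-reached item contributes only the information that its response time exceeds the remaining time.

```latex
% Observed-data likelihood contribution of examinee i (sketch):
% f is the response-time density, S(c) = P(T > c) the survival function,
% and c_i = \tau - \sum_{j \le J_i} t_{ij} the remaining time at which
% the next response time is right-censored.
L_i(\theta_i) = \Bigl[\prod_{j=1}^{J_i}
  P\bigl(x_{ij}\mid\theta_i\bigr)\, f\bigl(t_{ij}\mid\theta_i\bigr)\Bigr]
  \times S\bigl(c_i \mid \theta_i\bigr).
```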
We develop a latent variable selection method for multidimensional item response theory models. The proposed method identifies the latent traits probed by the items of a multidimensional test. Its basic strategy is to impose an $L_1$ penalty term on the log-likelihood. The computation is carried out by the expectation-maximization algorithm combined with the coordinate descent algorithm. Simulation studies show that the resulting estimator provides an effective way of correctly identifying the latent structures. The method is applied to a real dataset involving the Eysenck Personality Questionnaire.
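To make the strategy concrete, here is a minimal sketch of the coordinate-wise proximal update that an EM-plus-coordinate-descent scheme of this kind typically performs in its M-step. The gradient and curvature inputs are placeholders for the E-step quantities; a full implementation would refresh them between coordinate updates.

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the L1 penalty: shrink towards zero,
    # setting small values exactly to zero.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def coordinate_descent_cycle(a, grad, curv, lam):
    """One sweep of coordinate updates on the loading matrix `a`,
    minimising a quadratic approximation of the penalised negative
    expected complete-data log-likelihood (the M-step objective).
    `grad` and `curv` hold the first and (positive) second derivatives
    with respect to each entry, computed from the E-step."""
    a = a.copy()
    for idx in np.ndindex(a.shape):
        z = a[idx] - grad[idx] / curv[idx]        # unpenalised Newton step
        a[idx] = soft_threshold(z, lam / curv[idx])
    return a
```

The soft-thresholding step is what sets small loadings exactly to zero, which is how the $L_1$ penalty performs latent variable selection.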
Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate new items continuously; however, no published work has yet addressed online calibration for CD-CAT. Thus, this study investigates the possibility of extending several current CAT strategies to CD-CAT. Three representative online calibration methods were investigated: Method A (Stocking, Scale drift in on-line calibration, Research Rep. 88-28, 1988), marginal maximum likelihood estimation with one EM cycle (OEM; Wainer & Mislevy, in H. Wainer (Ed.), Computerized adaptive testing: A primer, pp. 65–102, 1990), and marginal maximum likelihood estimation with multiple EM cycles (MEM; Ban, Hanson, Wang, Yi, & Harris, J. Educ. Meas. 38:191–212, 2001). The objective of the current paper is to generalize these methods to the CD-CAT context under certain theoretical justifications; the new methods are denoted CD-Method A, CD-OEM, and CD-MEM, respectively. Simulation studies are conducted to compare the performance of the three methods in terms of item-parameter recovery. The results show that all three methods recover item parameters accurately, and that CD-Method A performs best when the items have smaller slipping and guessing parameters. This research is a starting point for introducing online calibration into CD-CAT, and further studies are proposed to investigate factors such as different sample sizes, cognitive diagnostic models, and attribute-hierarchical structures.
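As a sketch of how this translates to the DINA model (whose slipping and guessing parameters are the ones mentioned above): CD-Method A fixes the examinees' attribute estimates from the operational test and then maximises the conditional likelihood of each new item's parameters. The notation here is ours, not the paper's.

```latex
% CD-Method A (sketch): with \hat{\boldsymbol\alpha}_i the attribute
% estimates of the examinees I_j who saw new item j, estimate
(\hat{s}_j, \hat{g}_j) = \arg\max_{s_j,\, g_j} \sum_{i \in I_j}
  \log P\bigl(x_{ij} \mid \hat{\boldsymbol\alpha}_i, s_j, g_j\bigr),
% where the DINA response probability is
P\bigl(x_{ij}=1 \mid \hat{\boldsymbol\alpha}_i, s_j, g_j\bigr)
  = (1-s_j)^{\eta_{ij}}\, g_j^{\,1-\eta_{ij}},
\qquad \eta_{ij} = \prod_k \hat{\alpha}_{ik}^{\,q_{jk}}.
```

OEM and MEM instead marginalise over the posterior of the attribute profiles, with one or several EM cycles respectively.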
Recently, it has been recognized that the commonly used linear structural equation model is inadequate for dealing with some complicated substantive theories. A new nonlinear structural equation model with fixed covariates is proposed in this article. A procedure that utilizes the powerful path sampling technique for computing the Bayes factor is developed for model comparison. In the implementation, the required random observations are simulated via a hybrid algorithm that combines the Gibbs sampler and the Metropolis-Hastings algorithm. It is shown that the proposed procedure is efficient and flexible, and that it produces Bayesian estimates of the parameters, latent variables, and their highest posterior density intervals as by-products. Empirical performance of the proposed procedure, such as sensitivity to prior inputs, is illustrated by a simulation study and a real example.
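For reference, the path sampling identity (Gelman & Meng, 1998) underlying the Bayes factor computation links the two competing models by a continuous path $t \in [0,1]$:

```latex
\log B_{10} \;=\; \int_0^1
  E_{\theta \mid Y,\, t}\bigl[\, U(Y,\theta,t) \,\bigr]\, dt,
\qquad
U(Y,\theta,t) \;=\; \frac{\partial}{\partial t}\,
  \log p\bigl(Y \mid \theta, t\bigr),
```

with the integral approximated by the trapezoidal rule over a grid of $t$ values and the expectations estimated from draws of the hybrid Gibbs/Metropolis-Hastings sampler.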
Process data refer to data recorded in computer-based assessments (CBAs) that reflect respondents' problem-solving processes and provide greater insight into how respondents solve problems, in addition to how well they solve them. Using the rich information contained in process data, this study proposed an item expansion method to analyze action-level process data from the perspective of diagnostic classification, in order to comprehensively understand respondents' problem-solving competence. The proposed method can not only estimate respondents' problem-solving ability along a continuum, but also classify respondents according to their problem-solving skills. To illustrate the application and advantages of the proposed method, a Programme for International Student Assessment (PISA) problem-solving item was used. The results indicated that (a) the estimated latent classes provided more detailed diagnoses of respondents' problem-solving skills than the observed score categories; (b) although only one item was used, the estimated higher-order latent ability reflected the respondents' problem-solving ability more accurately than the unidimensional latent ability estimated from the outcome data; and (c) interactions among problem-solving skills followed the conjunctive condensation rule, indicating that a specific action sequence appeared only when a respondent mastered all of the required problem-solving skills. In conclusion, the proposed diagnostic classification approach is feasible and promising for analyzing process data.
The main purpose of this article is to develop a Bayesian approach for structural equation models with ignorable missing continuous and polytomous data. Joint Bayesian estimates of thresholds, structural parameters and latent factor scores are obtained simultaneously. The idea of data augmentation is used to solve the computational difficulties involved. In the posterior analysis, in addition to the real missing data, latent variables and latent continuous measurements underlying the polytomous data are treated as hypothetical missing data. An algorithm that embeds the Metropolis-Hastings algorithm within the Gibbs sampler is implemented to produce the Bayesian estimates. A goodness-of-fit statistic for testing the posited model is presented. It is shown that the proposed approach is not sensitive to prior distributions and can handle situations with a large number of missing patterns whose underlying sample sizes may be small. Computational efficiency of the proposed procedure is illustrated by simulation studies and a real example.
In behavioral, biomedical, and psychological studies, structural equation models (SEMs) have been widely used for assessing relationships between latent variables. Regression-type structural models based on parametric functions are often used for such purposes. In many applications, however, parametric SEMs are not adequate to capture subtle patterns in the functions over the entire range of the predictor variable. A different but equally important limitation of traditional parametric SEMs is that they are not designed to handle mixed data types—continuous, count, ordered, and unordered categorical. This paper develops a generalized semiparametric SEM that is able to handle mixed data types and to simultaneously model different functional relationships among latent variables. A structural equation of the proposed SEM is formulated using a series of unspecified smooth functions. The Bayesian P-splines approach and Markov chain Monte Carlo methods are developed to estimate the smooth functions and the unknown parameters. Moreover, we examine the relative benefits of semiparametric modeling over parametric modeling using a Bayesian model-comparison statistic, called the complete deviance information criterion (DIC). The performance of the developed methodology is evaluated using a simulation study. To illustrate the method, we used a data set derived from the National Longitudinal Survey of Youth.
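In outline (with our own notation), the Bayesian P-splines approach expands each unspecified smooth function in a B-spline basis and replaces the frequentist difference penalty with a random-walk prior on the coefficients:

```latex
f(\xi) \;\approx\; \sum_{k=1}^{K} \beta_k B_k(\xi),
\qquad
\beta_k \mid \beta_{k-1}, \tau \;\sim\; N\bigl(\beta_{k-1},\, \tau^{-1}\bigr),
% the Bayesian analogue of the penalty \lambda \sum_k (\Delta \beta_k)^2
```

so that the smoothing parameter $\tau$ is estimated alongside the other unknowns within the MCMC scheme.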
The estimation of workspace for parallel kinematic machines (PKMs) typically relies on geometric considerations, an approach suitable for PKMs operating under light loads. Under heavy loads, however, PKMs may experience significant deformation in certain postures, potentially compromising their stiffness. Heavy loads can also impair motor loading performance, leading to inadequate motor loading in specific postures. Consequently, in addition to geometric constraints, the workspace of PKMs under heavy load is also constrained by mechanism deformation and motor loading performance.
This paper develops a new heavy-load 6-PSS PKM for multi-degree-of-freedom forming processes. Additionally, it proposes a new method for estimating the workspace that takes into account both mechanism deformation and motor loading performance. Initially, the geometric workspace of the machine is predicted based on its geometric configuration. Subsequently, the workspace is predicted while considering the effects of mechanism deformation and motor loading performance separately. Finally, the workspace is synthesized by simultaneously accounting for both mechanism deformation and motor loading performance, and a new index, the workspace utilization rate, is proposed. The results indicate that the synthesized workspace of the machine diminishes as the load magnitude and load arm increase. Specifically, under a heavy load of 6000 kN with a load arm of 200 mm, the utilization rate of the synthesized workspace is only 9.9%; that is, the synthesized workspace covers only 9.9% of the geometric workspace.
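The workspace utilization rate is presumably the volume ratio implied by the 9.9% figure; in our notation,

```latex
\eta_W \;=\; \frac{V_{\text{synthesized}}}{V_{\text{geometric}}},
```

where $V_{\text{synthesized}}$ is the volume of the workspace satisfying the geometric, deformation, and motor loading constraints simultaneously, and $V_{\text{geometric}}$ is the volume of the purely geometric workspace.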
Developing large-eddy simulation (LES) wall models for separated flows is challenging. We propose to leverage separated-flow data, for which existing theories are not applicable, together with established knowledge of wall-bounded flows (such as the law of the wall) and embedded learning to address this issue. The proposed features-embedded-learning (FEL) wall model comprises two submodels: one for predicting the wall shear stress and another for calculating the eddy viscosity at the first off-wall grid nodes. We train the former using wall-resolved LES (WRLES) data of the periodic hill flow and the law of the wall. For the latter, we propose a modified mixing length model, with the model coefficient trained using the ensemble Kalman method. The proposed FEL model is assessed using separated flows with different flow configurations, grid resolutions, and Reynolds numbers. Overall good a posteriori performance is observed in predicting the statistics of the recirculation bubble, wall stresses, and turbulence characteristics. The statistics of the modelled subgrid-scale (SGS) stresses at the first off-wall grid nodes are compared with those calculated using the WRLES data. The comparison shows that the amplitude and distribution of the SGS stresses and energy transfer obtained using the proposed model agree better with the reference data than those from the conventional SGS model.
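For context, here is a classic mixing-length eddy viscosity with van Driest damping, the kind of baseline a modified model of this type builds on. The trainable coefficient `c` stands in for the one calibrated with the ensemble Kalman method; the paper's exact modification may differ.

```python
import numpy as np

def eddy_viscosity_mixing_length(y, dudy, u_tau, nu,
                                 kappa=0.41, a_plus=17.0, c=1.0):
    """Illustrative mixing-length eddy viscosity at an off-wall node.

    y     : wall distance of the node
    dudy  : wall-normal velocity gradient at the node
    u_tau : friction velocity (e.g. from the wall shear stress submodel)
    nu    : molecular kinematic viscosity
    c     : stand-in for the trainable model coefficient
    """
    y_plus = y * u_tau / nu                          # wall units
    damping = (1.0 - np.exp(-y_plus / a_plus))**2    # van Driest damping
    l_m = c * kappa * y                              # mixing length
    return (l_m**2) * damping * np.abs(dudy)         # nu_t = l_m^2 |dU/dy|
```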
Background
Despite growing awareness of the mental health damage caused by air pollution, the epidemiologic evidence on the impact of air pollutants on major mental disorders (MDs) remains limited. We aim to explore the impact of various air pollutants on the risk of major MDs.
Methods
This prospective study analyzed data from 170 369 participants without depression, anxiety, bipolar disorder, or schizophrenia at baseline. The concentrations of particulate matter with aerodynamic diameter ≤ 2.5 μm (PM2.5), particulate matter with aerodynamic diameter between 2.5 μm and 10 μm (PM2.5–10), nitrogen dioxide (NO2), and nitric oxide (NO) were estimated using land-use regression models. The associations between air pollutants and incident MD were investigated using Cox proportional hazards models.
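A minimal sketch of a Cox analysis of this kind, using the lifelines library and entirely synthetic stand-in data; the variable names, follow-up length, and effect size are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in data (hypothetical variable names).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "pm25_q4": rng.integers(0, 2, n),   # highest- vs lowest-quartile PM2.5
    "age": rng.normal(55, 8, n),        # example covariate
})
# Simulate event times with a higher hazard in the exposed group.
hazard = 0.05 * np.exp(0.15 * df["pm25_q4"])
time = rng.exponential(1.0 / hazard)
df["followup_years"] = np.minimum(time, 10.6)   # administrative censoring
df["md_event"] = (time <= 10.6).astype(int)     # 1 = incident MD, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="md_event")
cph.print_summary()   # hazard ratios with 95% CIs, as reported below
```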
Results
During a median follow-up of 10.6 years, 9 004 participants developed MD. Exposure to air pollution in the highest quartile significantly increased the risk of MD compared with the lowest quartile: PM2.5 (hazard ratio [HR]: 1.16, 95% CI: 1.09–1.23), NO2 (HR: 1.12, 95% CI: 1.05–1.19), and NO (HR: 1.10, 95% CI: 1.03–1.17). Subgroup analysis showed that participants with lower income were more likely to experience MD when exposed to air pollution. We also observed joint effects of socioeconomic status or genetic risk with air pollution on the MD risk. For instance, the HR of individuals with the highest genetic risk and highest quartiles of PM2.5 was 1.63 (95% CI: 1.46–1.81) compared to those with the lowest genetic risk and lowest quartiles of PM2.5.
Conclusions
Our findings highlight the importance of air pollution control in alleviating the burden of MD.
Optical coherence tomography (OCT) and confocal microscopy are pivotal in retinal imaging, offering distinct advantages and limitations. In vivo OCT offers rapid, noninvasive imaging but can suffer from clarity issues and motion artifacts, while ex vivo confocal microscopy provides high-resolution color images with cellular detail but is invasive and raises ethical concerns. To bridge the benefits of both modalities, we propose a novel framework based on unsupervised 3D CycleGAN for translating unpaired in vivo OCT images to ex vivo confocal microscopy images. This marks the first attempt to exploit the inherent 3D information of OCT and translate it into the rich, detailed color domain of confocal microscopy. We also introduce a unique dataset, OCT2Confocal, comprising mouse OCT and confocal retinal images, facilitating the development of, and establishing a benchmark for, cross-modal image translation research. Our model has been evaluated both quantitatively and qualitatively, achieving Fréchet inception distance (FID) scores of 0.766, kernel inception distance (KID) scores as low as 0.153, and leading subjective mean opinion scores (MOS). Compared with existing methods, our model demonstrated superior image fidelity and quality despite limited data. Our approach effectively synthesizes color information from 3D confocal images, closely approximating target outcomes and suggesting enhanced potential for diagnostic and monitoring applications in ophthalmology.
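For reference, the FID quoted above is the standard Fréchet distance between Gaussian fits to the Inception-feature distributions of real ($r$) and generated ($g$) images:

```latex
\mathrm{FID} \;=\; \lVert \mu_r - \mu_g \rVert_2^2
  \;+\; \operatorname{Tr}\!\bigl(\Sigma_r + \Sigma_g
  - 2\,(\Sigma_r \Sigma_g)^{1/2}\bigr).
```

KID is the analogous squared maximum mean discrepancy estimated with a polynomial kernel; for both metrics, lower is better.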
This article studies how ‘cybercrime’ is framed under the pre-existing regional prohibition regimes and how it would be reshaped under the auspices of the UN. The article adopts a sociolegal approach, integrating transnational criminal law (TCL) and the conceptual framework of recursivity. Observations and analyses show that (i) only the Budapest Convention has the institutional capacity to shape ‘cybercrime’, while state behaviour in framing ‘cybercrime’ is in practice subject to human rights instruments; (ii) states reached an exceptional compromise in transforming ‘cybercrime’ at the global level during the negotiations under the UN; and (iii) protection from cybercrime is emerging as a common interest. The author proposes that the normative changes in framing ‘cybercrime’ reflect states' competition for normative power on the international plane; therefore, the pursuit of a universalist formula for countering cybercrime will not succeed, owing to the lack of a global commitment to the basic norms and rules that govern state behaviour in cyberspace. Lastly, the author proposes that the transnational criminalization of cybercrime should seek a minimum public order in the first place, because genuinely global regulation is premature at this moment.
In this work, the shape of a bluff body is optimized to mitigate velocity fluctuations of turbulent wake flows based on large-eddy simulations (LES). The Reynolds-averaged Navier–Stokes method fails to capture velocity fluctuations, while direct numerical simulations are computationally prohibitive. This necessitates using the LES method for shape optimization, given its scale-resolving capability and relatively affordable computational cost. However, using LES for optimization faces challenges in sensitivity estimation, as the chaotic nature of turbulent flows can lead to the blowup of the conventional adjoint-based gradient. Here, we propose using the regularized ensemble Kalman method for LES-based optimization. The method is a statistical optimization approach that uses the sample covariance between geometric parameters and LES predictions to estimate the model gradient, circumventing the blowup issue of the adjoint method for chaotic systems. Moreover, the method allows for the imposition of smoothness constraints with one additional regularization step. The ensemble-based gradient is first evaluated for the Lorenz system, demonstrating its accuracy in the gradient calculation of a chaotic problem. Further, with the proposed method, the cylinder is optimized to an asymmetric oval, which significantly reduces turbulent kinetic energy and meander amplitudes in the wake flows. Spectral analysis methods are used to characterize the flow field around the optimized shape, identifying the large-scale flow structures responsible for the reduction in velocity fluctuations. Furthermore, it is found that the velocity difference in the shear layer is decreased with the shape change, which alleviates the Kelvin–Helmholtz instability and the wake meandering.
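The covariance-based gradient at the heart of the method can be illustrated in a few lines. This bare-bones sketch is our own construction (the paper's regularised update and smoothness constraint are omitted): it estimates the gradient of an objective from an ensemble of perturbed parameter vectors, which is exactly the mechanism that sidesteps the adjoint blowup for chaotic systems.

```python
import numpy as np

def ensemble_gradient(params, objective, n_samples=20, sigma=0.05, seed=0):
    """Estimate d(objective)/d(params) from ensemble sample covariances.

    No adjoint solve is needed, so the estimate does not blow up for
    chaotic dynamics. `objective` maps a parameter vector to a scalar
    (e.g. an LES-predicted wake statistic)."""
    rng = np.random.default_rng(seed)
    theta = params + sigma * rng.standard_normal((n_samples, params.size))
    j = np.array([objective(t) for t in theta])    # ensemble of outputs
    dtheta = theta - theta.mean(axis=0)
    dj = j - j.mean()
    cov_tj = dtheta.T @ dj / (n_samples - 1)       # cov(theta, J)
    cov_tt = dtheta.T @ dtheta / (n_samples - 1)   # cov(theta, theta)
    # Gradient estimate: solve cov_tt @ grad = cov_tj (small jitter for safety).
    return np.linalg.solve(cov_tt + 1e-8 * np.eye(params.size), cov_tj)
```

For a Lorenz-system test like the one described above, `objective` would be a long-time-averaged statistic of the trajectory as a function of the system parameters.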
Objective:
To characterise the nature of digital food and beverage advertising in Singapore.
Setting:
Food and beverage advertisements within twenty clicks on the top twelve non-food websites and all posts on the Facebook and Instagram pages of fifteen major food companies in Singapore were sampled from 1 January to 30 June 2018.
Design:
Advertised foods were classified as being core (healthier), non-core or mixed dishes (e.g. burger) using the WHO nutrient profile model and national guidelines. Marketing techniques were assessed using published coding frameworks.
Participants:
NA
Results:
Advertisements (n 117) on the twelve non-food websites were largely presented as editorial content. Food companies posted twice weekly on average on social media sites (n 1261), with eatery chains posting most frequently and generating the largest numbers of likes and shares. Key marketing techniques emphasised non-health attributes, for example hedonic or convenience attributes (85 % of advertisements). Only a minority of advertised foods and beverages were core foods (non-food websites: 16·2 %; social media: 13·5 %).
Conclusions:
Top food and beverage companies in Singapore actively use social media as a platform for promotion, employing a complex array of marketing techniques. The vast majority of these posts promoted non-core (less healthy) foods, highlighting an urgent need to consider regulating digital food and beverage advertising in Singapore.