A deep convolutional neural network has been developed to denoise atomic-resolution transmission electron microscope image datasets of nanoparticles acquired using direct electron counting detectors, for applications where the image signal is severely limited by shot noise. The network was applied to a model system of CeO2-supported Pt nanoparticles. We leverage multislice image simulations to generate a large and flexible dataset for training the network. The proposed network outperforms state-of-the-art denoising methods on both simulated and experimental test data. Factors contributing to the performance are identified, including (a) the geometry of the images used during training and (b) the size of the network's receptive field. Through a gradient-based analysis, we investigate the mechanisms learned by the network to denoise experimental images. This shows that the network exploits both extended and local information in the noisy measurements, for example, by adapting its filtering approach when it encounters atomic-level defects at the nanoparticle surface. Extensive analysis has been done to characterize the network's ability to correctly predict the exact atomic structure at the nanoparticle surface. Finally, we develop an approach based on the log-likelihood ratio test that provides a quantitative measure of the agreement between the noisy observation and the atomic-level structure in the network-denoised image.
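The abstract does not include code; as a rough illustration of the kind of architecture it describes, the following is a minimal DnCNN-style residual denoiser in PyTorch. The class name, depth, and width are our assumptions rather than the authors' design, and the random input merely stands in for simulated TEM patches.

```python
# Minimal DnCNN-style residual denoiser (illustrative sketch only; the
# authors' actual architecture, depth, and training setup may differ).
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    def __init__(self, channels=1, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        # Predict the noise residual and subtract it from the input.
        return noisy - self.net(noisy)

model = DenoisingCNN()
noisy = torch.rand(4, 1, 128, 128)   # stand-in for simulated TEM patches
denoised = model(noisy)
```

Stacking more 3 × 3 convolutions enlarges the network's receptive field, one of the performance factors the abstract identifies.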
This research employs machine learning (Mask Region-Based Convolutional Neural Networks [Mask R-CNN]) and cluster analysis (Density-based spatial clustering of applications with noise [DBSCAN]) to identify more than 20,000 relict charcoal hearths (RCHs) organized in large “fields” within and around State Game Lands (SGLs) in Pennsylvania. This research has two important threads that we hope will advance the archaeological study of landscapes. The first is the significant historical impact of charcoal production, a poorly understood industry of the late eighteenth to early twentieth century, on the historic and present landscape of the United States. Although this research focuses on charcoal production in Pennsylvania, it has broad application both for identifying and contextualizing historical charcoal production throughout the world and for better understanding modern charcoal production. The second thread is the use of open data, open source, and open access tools to conduct this analysis, as well as the open publication of the resultant data. Not only does this research demonstrate the significance of open access tools and data, but the open publication of our code and data allows others to replicate our work, to tweak our code and protocols for their own purposes, and to reuse our results.
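As a concrete illustration of the clustering step, the following is a minimal DBSCAN sketch using scikit-learn; the coordinates and the eps/min_samples values are placeholders, not the study's parameters.

```python
# Illustrative only: grouping detected hearth centroids into "fields"
# with DBSCAN. Parameter values here are placeholders, not the study's.
import numpy as np
from sklearn.cluster import DBSCAN

centroids = np.random.rand(500, 2) * 10_000   # hypothetical hearth x/y coords (m)
labels = DBSCAN(eps=300, min_samples=5).fit_predict(centroids)
# Label -1 marks noise points; the other labels index the hearth fields.
n_fields = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_fields} fields found")
```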
The quality of life and lifespan are greatly reduced among individuals with mental illness. To improve prognosis, the nascent field of precision psychiatry aims to provide personalised predictions for the course of illness and response to treatment. Unfortunately, the results of precision psychiatry studies are rarely externally validated, almost never implemented in clinical practice, and tend to focus on a few selected outcomes. To overcome these challenges, we have established the PSYchiatric Clinical Outcome Prediction (PSYCOP) cohort, which will form the basis for extensive studies in the upcoming years.
PSYCOP is a retrospective cohort study that includes all patients with at least one contact with the psychiatric services of the Central Denmark Region between January 1, 2011, and October 28, 2020 (n = 119 291). All data from the electronic health records (EHRs) are included, spanning diagnoses, treatment information, clinical notes, discharge summaries, laboratory tests, etc. Based on these data, machine learning methods will be used to build prediction models for a range of clinical outcomes, such as diagnostic shifts, treatment response, medical comorbidity, and premature mortality, with an explicit focus on clinical feasibility and implementation.
We expect that studies based on the PSYCOP cohort will advance the field of precision psychiatry through the use of state-of-the-art machine learning methods on a large and representative data set. Implementation of prediction models in clinical psychiatry will likely improve treatment and, hopefully, increase the quality of life and lifespan of those with mental illness.
This paper presents a method to solve the kinematics of a rigid-flexible, variable-diameter continuum manipulator. The multi-segment underwater manipulator is driven by McKibben water hydraulic artificial muscles (WHAMs). Considering the effects of elasticity and friction, we optimize the static mathematical model of the WHAM. The kinematic model of the manipulator under load is established based on the piecewise constant curvature (PCC) hypothesis. We develop an optimization algorithm that calculates the WHAM lengths according to the principle of minimum strain energy and obtains the configuration-space parameters of the kinematic model. Based on the infinitesimal method, the homogeneous transformation matrices of the variable-diameter bending sections are computed, and the terminal position and attitude are obtained. We study the manipulator's workspace through quantitative analysis of the influencing factors, including pressure and load. A deep neural network (DNN) with six hidden layers is designed to solve the inverse kinematics. The forward kinematic results are used to train and test the DNN, and the correlation coefficient between the output and target samples reaches 0.945. An underwater experiment verifies the effectiveness of the kinematic modeling and solution method.
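For illustration, a six-hidden-layer fully connected network of the kind described could be set up as follows in PyTorch; the layer widths, the six-dimensional pose input, and the nine actuator lengths are assumptions, not the paper's specification.

```python
# Sketch of a six-hidden-layer network mapping end-effector pose to WHAM
# lengths; dimensions and widths are assumptions.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128, n_hidden=6):
    layers, d = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

ik_net = mlp(in_dim=6, out_dim=9)   # pose (x, y, z, roll, pitch, yaw) -> 9 muscle lengths
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(ik_net.parameters(), lr=1e-3)

# Training pairs would come from sampling the forward kinematic model:
pose = torch.randn(256, 6)      # placeholder forward-kinematics outputs
lengths = torch.randn(256, 9)   # corresponding actuator lengths
opt.zero_grad()
loss = loss_fn(ik_net(pose), lengths)
loss.backward()
opt.step()
```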
By recognizing the motion of the healthy side, a lower limb exoskeleton robot can provide therapy to the affected side of stroke patients. To improve the accuracy of motion intention recognition based on sensor data, a study based on deep learning was carried out. Eighty healthy subjects, simulating the gait of stroke patients, performed gait experiments in five different environments (flat ground, 10° upslope and downslope, and upstairs and downstairs). To facilitate the training and classification of the neural network, this paper presents template processing schemes to adapt to different data formats. A novel hybrid network model based on a convolutional neural network (CNN) and long short-term memory (LSTM) is constructed. To mitigate the data-sparsity problem, a spatial-temporal-embedded LSTM model (SQLSTM), which combines spatial-temporal influence with the LSTM model, is proposed. The proposed CNN-SQLSTM model is evaluated on a real trajectory dataset, and the results demonstrate its effectiveness. The proposed method will be used to guide the control strategy design of the robot system for active rehabilitation training.
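A rough sketch of a plain CNN + LSTM hybrid for this kind of gait-environment classification follows; the sensor count, window length, and layer sizes are assumptions, and the spatial-temporal embedding of the SQLSTM variant is not reproduced here.

```python
# Hedged sketch of a CNN + LSTM hybrid for gait-intention classification.
# The 5 classes mirror the abstract's five gait environments; everything
# else (channels, window length, widths) is an assumption.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_sensors=12, n_classes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                                   # x: (batch, time, sensors)
        feats = self.cnn(x.transpose(1, 2)).transpose(1, 2) # convolve over time
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                             # logits over the 5 environments

logits = CNNLSTM()(torch.randn(8, 100, 12))
```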
Accurate, robust and fast image reconstruction is a critical task in many scientific, industrial and medical applications. Over the last decade, image reconstruction has been revolutionized by the rise of compressive imaging. It has fundamentally changed the way modern image reconstruction is performed. This in-depth treatment of the subject commences with a practical introduction to compressive imaging, supplemented with examples and downloadable code, intended for readers without extensive background in the subject. Next, it introduces core topics in compressive imaging – including compressed sensing, wavelets and optimization – in a concise yet rigorous way, before providing a detailed treatment of the mathematics of compressive imaging. The final part is devoted to recent trends in compressive imaging: deep learning and neural networks. With an eye to the next decade of imaging research, and using both empirical and mathematical insights, it examines the potential benefits and the pitfalls of these latest approaches.
Achieving robust and fast two-dimensional adaptive beamforming of phased array antennas is a challenging problem due to its high computational complexity. To address this problem, a deep-learning-based beamforming method is presented in this paper. In particular, the optimum weight vector is computed by modeling the problem as a convolutional neural network (CNN), which is trained with I/O pairs obtained from the optimum Wiener solution. To exhibit the robustness of the new technique, it is applied to an 8 × 8 phased array antenna and compared with a shallow (non-deep) neural network, namely a radial basis function neural network. The results reveal that the CNN leads to nearly optimal Wiener weights even in the presence of array imperfections.
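One common form of the optimum solution referred to here is the MVDR-type weight vector $w = R^{-1}a / (a^{H}R^{-1}a)$. The numpy sketch below generates one hypothetical (input, output) training pair from it; the steering vector and covariance are toy data, and the real/imaginary channel split is one plausible encoding, not necessarily the paper's.

```python
# Generating a training pair from the optimum (MVDR-type Wiener) solution
# for an 8 x 8 = 64-element array; all data here is synthetic.
import numpy as np

n = 64
rng = np.random.default_rng(0)
a = np.exp(1j * rng.uniform(0, 2 * np.pi, n))          # toy steering vector
snaps = rng.standard_normal((n, 200)) + 1j * rng.standard_normal((n, 200))
R = snaps @ snaps.conj().T / 200                       # sample covariance

w = np.linalg.solve(R, a)
w /= a.conj() @ w                                      # w = R^{-1} a / (a^H R^{-1} a)
# (input, output) pair for the CNN: covariance and weights split into
# real/imaginary channels, since CNNs operate on real tensors.
x = np.stack([R.real, R.imag])
y = np.concatenate([w.real, w.imag])
```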
Identification of human individuals within a group of 39 persons using micro-Doppler (μ-D) features has been investigated. Deep convolutional neural networks with two different training procedures have been used to perform classification. Visualization of the inner network layers revealed the sections of the input image most relevant when determining the class label of the target. A convolutional block attention module is added to provide a weighted feature vector in the channel and feature dimension, highlighting the relevant μ-D feature-filled areas in the image and improving classification performance.
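A minimal convolutional block attention module in the spirit of Woo et al. (2018) is sketched below; the reduction ratio and 7 × 7 spatial kernel are the usual defaults, not necessarily the settings used in this work.

```python
# Minimal CBAM: channel attention (shared MLP over avg/max descriptors)
# followed by spatial attention (7x7 conv over stacked mean/max maps).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention weights the feature maps...
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # ...then spatial attention highlights feature-filled image regions.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

out = CBAM(32)(torch.randn(2, 32, 64, 64))
```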
The number of unmanned aerial vehicles (UAVs, also known as drones) in the airspace worldwide has increased dramatically for tasks such as surveillance, reconnaissance, shipping and delivery. However, a small number of them, acting maliciously, can raise many security risks. Recent Artificial Intelligence (AI) capabilities for object detection can be very useful for the identification and classification of drones flying in the airspace and, in particular, are a good solution against malicious drones. A number of counter-drone solutions are being developed, but the cost of ground-based drone detection systems can be very high, depending on the number of sensors deployed and the complexity of the fusion algorithms. We propose a low-cost counter-drone solution composed solely of a guard-drone that should be able to detect, locate and eliminate any malicious drone. In this paper, a state-of-the-art object detection algorithm is used to train the system to detect drones. Three existing object detection models are improved by transfer learning and tested for real-time drone detection. Training is done with a new dataset of drone images, constructed automatically from a very realistic flight simulator. While flying, the guard-drone captures random images of the area while a malicious drone is also flying. The drone images are auto-labelled using the location and attitude information available in the simulator for both drones; the world coordinates of the malicious drone's position are then projected into image pixel coordinates. The training and test results show a minimum accuracy improvement of 22% with respect to state-of-the-art object detection models, a promising result that enables a step towards the construction of a fully autonomous counter-drone system.
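The auto-labelling projection described above is, in essence, a pinhole-camera projection. A hedged numpy sketch follows; the intrinsics and camera pose are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of the auto-labelling step: the malicious drone's world
# position is mapped to pixel coordinates in the guard-drone camera with a
# standard pinhole model. Intrinsics and pose below are made-up values.
import numpy as np

K = np.array([[800, 0, 320],    # fx, 0, cx  (toy intrinsics)
              [0, 800, 240],    # 0, fy, cy
              [0,   0,   1]], dtype=float)
R = np.eye(3)                   # camera rotation (world -> camera frame)
t = np.array([0.0, 0.0, 5.0])   # camera translation

p_world = np.array([1.0, -0.5, 20.0])   # malicious drone position
p_cam = R @ p_world + t                 # into the camera frame
u, v, w = K @ p_cam
px, py = u / w, v / w                   # pixel coordinates for the bounding-box label
```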
Over the past few years, deep learning has risen to the foreground as a topic of massive interest, mainly as a result of successes obtained in solving large-scale image processing tasks. There are multiple challenging mathematical problems involved in applying deep learning: most deep learning methods require the solution of hard optimisation problems, and a good understanding of the trade-off between computational effort, amount of data and model complexity is required to successfully design a deep learning approach for a given problem. Much of the progress in deep learning has been based on heuristic exploration, but there is a growing effort to mathematically understand the structure in existing deep learning methods and to systematically design new methods that preserve certain types of structure. In this article, we review a number of these directions: some deep neural networks can be understood as discretisations of dynamical systems, neural networks can be designed to have desirable properties such as invertibility or group equivariance, and new algorithmic frameworks based on conformal Hamiltonian systems and Riemannian manifolds have been proposed to solve the optimisation problems. We conclude our review of each of these topics by discussing some open problems that we consider interesting directions for future research.
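The first of these directions can be made concrete with a standard observation (not specific to this article's examples): a residual block is a forward-Euler step of an ordinary differential equation,

```latex
x_{k+1} = x_k + h\, f(x_k, \theta_k)
\qquad \text{discretises} \qquad
\dot{x}(t) = f\bigl(x(t), \theta(t)\bigr), \quad x(0) = x_0,
```

with step size $h$; properties of the continuous flow, such as stability or invertibility, can then be built into the network by construction.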
We propose a new neighbouring prediction model for mortality forecasting. For each mortality rate at age $x$ in year $t$, $m_{x,t}$, we construct an image of neighbourhood mortality data around $m_{x,t}$, that is, $\mathcal{E}_{m_{x,t}}(x_1, x_2, s)$, which includes mortality information for ages in $[x - x_1, x + x_2]$, lagging $k$ years ($1 \le k \le s$). Combined with a convolutional neural network, this framework is able to capture the intricate nonlinear structure in the mortality data: the neighbourhood effect, which can go beyond the period, age, and cohort directions of classic mortality models. Through an extensive empirical analysis of all 41 countries and regions in the Human Mortality Database, we find that the proposed models achieve superior forecasting performance. This framework can be further enhanced to capture the patterns and interactions between multiple populations.
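For illustration, one neighbourhood image $\mathcal{E}_{m_{x,t}}(x_1, x_2, s)$ can be sliced out of a mortality surface as follows; the array layout and variable names are ours, not the paper's.

```python
# Illustrative construction of one neighbourhood "image" around m_{x,t}:
# ages in [x - x1, x + x2] and lags 1..s years, taken from a mortality
# matrix M indexed as M[age, year].
import numpy as np

M = np.random.rand(111, 70)   # toy mortality surface: ages 0-110, 70 years

def neighbourhood(M, x, t, x1, x2, s):
    # rows: ages x - x1 .. x + x2; columns: years t - s .. t - 1
    return M[x - x1:x + x2 + 1, t - s:t]

patch = neighbourhood(M, x=60, t=50, x1=3, x2=3, s=5)   # (7, 5) input to the CNN
```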
In the field of transmission electron microscopy, data interpretation often lags behind acquisition methods, as image processing methods frequently have to be manually tailored to individual datasets. Machine learning offers a promising approach for fast, accurate analysis of electron microscopy data. Here, we demonstrate a flexible two-step pipeline for the analysis of high-resolution transmission electron microscopy data, which uses a U-Net for segmentation followed by a random forest for the detection of stacking faults. Our trained U-Net is able to segment nanoparticle regions from the amorphous background with a Dice coefficient of 0.8 and significantly outperforms traditional image segmentation methods. Using these segmented regions, we are then able to classify whether nanoparticles contain a visible stacking fault with 86% accuracy. We provide this adaptable pipeline as an open-source tool for the community. The combined output of the segmentation network and classifier offers a way to determine statistical distributions of features of interest, such as size, shape, and defect presence, enabling the detection of correlations between these features.
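The Dice coefficient used to score the segmentation has a standard definition; a small reference implementation for binary masks:

```python
# Dice coefficient for two binary masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

a = np.zeros((64, 64), int); a[10:40, 10:40] = 1
b = np.zeros((64, 64), int); b[15:45, 15:45] = 1
print(dice(a, b))   # overlap score in [0, 1]
```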
Nowadays many financial derivatives, such as American or Bermudan options, are of early exercise type. Often the pricing of early exercise options gives rise to high-dimensional optimal stopping problems, since the dimension corresponds to the number of underlying assets. High-dimensional optimal stopping problems are, however, notoriously difficult to solve due to the well-known curse of dimensionality. In this work, we propose an algorithm for solving such problems, which is based on deep learning and computes, in the context of early exercise option pricing, both approximations of an optimal exercise strategy and the price of the considered option. The proposed algorithm can also be applied to optimal stopping problems that arise in other areas where the underlying stochastic process can be efficiently simulated. We present numerical results for a large number of example problems, which include the pricing of many high-dimensional American and Bermudan options, such as Bermudan max-call options in up to 5000 dimensions. Most of the obtained results are compared to reference values computed by exploiting the specific problem design or, where available, to reference values from the literature. These numerical results suggest that the proposed algorithm is highly effective in the case of many underlyings, in terms of both accuracy and speed.
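The following toy sketch conveys only the general idea of learning an exercise rule from simulated paths, here for a one-dimensional Bermudan put; the published algorithm is more refined (e.g., separate per-date networks trained backwards in time), so this should not be read as the authors' method.

```python
# Toy sketch: a network maps (time, price) to a stopping probability, and
# we maximise the expected discounted payoff over simulated GBM paths.
import math
import torch
import torch.nn as nn

T, n_steps, n_paths, K, r, sigma = 1.0, 10, 4096, 1.0, 0.05, 0.2
dt = T / n_steps
S = torch.ones(n_paths, 1)
paths = [S]
for _ in range(n_steps):
    z = torch.randn(n_paths, 1)
    S = S * torch.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
    paths.append(S)

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):
    alive = torch.ones(n_paths, 1)    # prob. the option is still unexercised
    value = torch.zeros(n_paths, 1)
    for k, S_k in enumerate(paths):
        t_k = torch.full_like(S_k, k * dt)
        # Force exercise at maturity if still alive.
        p_stop = net(torch.cat([t_k, S_k], dim=1)) if k < n_steps else torch.ones_like(S_k)
        payoff = torch.clamp(K - S_k, min=0) * math.exp(-r * k * dt)
        value = value + alive * p_stop * payoff
        alive = alive * (1 - p_stop)
    loss = -value.mean()              # maximise expected discounted payoff
    opt.zero_grad(); loss.backward(); opt.step()
```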
Trajectory prediction is an important support for analysing vessel motion behaviour, judging vessel traffic risk, and planning collision-avoidance routes for intelligent ships. To improve the accuracy of trajectory prediction in complex situations, a Generative Adversarial Network with Attention Module and Interaction Module (GAN-AI) is proposed to predict the trajectories of multiple vessels. Firstly, GAN-AI can infer the future trajectories of all vessels in the same local area simultaneously. Secondly, GAN-AI is based on an adversarial architecture and trained by competition for better convergence. Thirdly, an interaction module is designed to extract the group motion features of multiple vessels, achieving better performance in ship-encounter situations. GAN-AI has been tested on historical trajectory data from Zhoushan port in China; the experimental results show that the GAN-AI model improves prediction accuracy by 20%, 24% and 72% compared with sequence-to-sequence (seq2seq), plain GAN, and Kalman models, respectively. These results are significant for improving the safety management of vessel traffic service systems and for judging the degree of ship traffic risk.
We provide an introduction to the functioning, implementation, and challenges of convolutional neural networks (CNNs) for classifying visual information in the social sciences. This tool can help scholars make the tedious task of classifying images and extracting information from them more efficient. We illustrate the implementation and impact of this methodology by coding handwritten information from vote tallies. Our paper not only demonstrates the contributions of CNNs to both scholars and policy practitioners, but also presents the practical challenges and limitations of the method, providing advice on how to deal with these issues.
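For a sense of scale, a digit classifier of the kind used for handwritten tally coding can be very small; the architecture below is an illustrative assumption, not the paper's model.

```python
# Small CNN for handwritten digit classification (illustrative only):
# two conv/pool stages on 28x28 crops, then a 10-class linear head.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),   # e.g. ten digit classes
)
logits = model(torch.randn(1, 1, 28, 28))
```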
Deep learning (DL) has seen tremendous recent successes in many areas of artificial intelligence and has since sparked great interest in its potential use in power systems. However, success in using DL in power systems has not been straightforward. Even with the continuing proliferation of data collected in power systems from, e.g., synchrophasors and smart meters, how to effectively use these data, especially with DL techniques, remains a widely open problem. This chapter shows that the great power of DL can be unleashed in solving many fundamentally hard, high-dimensional, real-time inference problems in power systems. In particular, DL, if used appropriately, can effectively exploit both the intricate knowledge from nonlinear power system models and the expressive power of DL predictor models. This chapter also shows the great promise of DL in significantly improving the stability, resilience, and security of power systems.