Relationships between reproductive strategies and population spatial structure have often been suggested for lichens, but supporting aerobiological data are lacking. For the first time, this study couples aerobiological investigations of meiospore dispersal by Caloplaca crenulatella (Nyl.) H. Olivier and Rhizocarpon geographicum (L.) DC. with an analysis of the local spatial patterns of thalli of both species. During a two-year monitoring period carried out on the walls of a medieval castle in NW Italy, a total of 169 polar diblastic spores, 20% of which were morphologically attributable to C. crenulatella, was detected in the mycoaerosol, while muriform spores of R. geographicum were never found. Laboratory experiments confirmed that different dispersal patterns characterize the two species: the meiospores of R. geographicum were poorly discharged and only recovered at a short distance from thalli, whereas those of C. crenulatella were more abundantly discharged, suspended and better dispersed by a moderate air flow. This difference was reflected on the castle walls in the random spatial pattern of C. crenulatella, while R. geographicum showed a clustered distribution. Different discharge rates and take-off limitations, possibly related to size differences between the spores, are not sufficient to explain the different colonization patterns and dynamics of the two species; additional intrinsic and extrinsic factors are likely to drive their dispersal and establishment success. Nevertheless, information on the relationships between the different dispersal patterns of the species and the local spatial structure of their populations might help to predict the recovery potential of lichen species exposed to habitat loss or disturbance, or encrusting monument surfaces.
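One common way to quantify such random-versus-clustered patterns (the abstract does not name the statistic used, so this is only an illustrative sketch) is the Clark–Evans nearest-neighbour index, which compares the observed mean nearest-neighbour distance with the expectation under complete spatial randomness: values near 1 indicate a random pattern, values below 1 a clustered one.

```python
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_index(points, area):
    """Clark-Evans aggregation index R for a 2-D point pattern.

    R ~ 1: random (Poisson) pattern; R < 1: clustered; R > 1: regular.
    `points` is an (n, 2) array of thallus coordinates and `area` the
    surveyed wall area, in consistent units.
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    tree = cKDTree(points)
    # k=2 because the nearest hit of each point is the point itself
    d, _ = tree.query(points, k=2)
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(n / area)  # expectation under CSR
    return observed / expected

# Hypothetical example: 200 thalli mapped on a 4 m^2 wall section
rng = np.random.default_rng(0)
pts = rng.uniform(0, 2, size=(200, 2))
print(clark_evans_index(pts, area=4.0))  # ~1 for a random pattern
```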
New analytical models are introduced to describe the motion of a Herschel–Bulkley fluid slumping under gravity in a narrow fracture and in a porous medium. A useful self-similar solution can be derived for a fluid injection rate scaling as a specific power of time; an expansion technique is adopted for a generic power-law injection rate. Experiments in a Hele-Shaw cell and in a narrow channel filled with glass ballotini confirm the theoretical model within the experimental uncertainty.
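For reference, the Herschel–Bulkley constitutive law underlying such models combines a yield stress with power-law behaviour (a standard form, not specific to this paper's notation):

```latex
\[
\tau =
\begin{cases}
\tau_0 + K\,\dot{\gamma}^{\,n}, & \tau > \tau_0,\\
\dot{\gamma} = 0, & \tau \le \tau_0,
\end{cases}
\]
```

where τ is the shear stress, τ₀ the yield stress, K the consistency index and n the flow index; τ₀ = 0 recovers a power-law fluid and n = 1 a Bingham plastic.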
Nowadays there is hardly a field of research that is not flooded with data. Among the sciences, astrophysics has always been driven by the analysis of massive amounts of data. The development of new and more sophisticated observing facilities, both ground-based and space-borne, has made the data more and more complex (Variety) and has brought an exponential growth in data Volume (of the order of petabytes) and in the Velocity of data production and transmission. New and advanced processing solutions are therefore needed to handle this huge amount of data. We investigate some of these solutions, based on machine learning models, as well as tools and architectures for Big Data analysis that can be exploited in the astrophysical context.
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualisation environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered by a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction and parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform – allowing the research process to continue wherever you are.
Machine learning techniques have proven to be increasingly useful in astronomical applications over the last few years, for example in object classification, redshift estimation and data mining. One example of object classification is classifying galaxy morphology. This is a tedious task to do manually, especially as the datasets become larger with surveys that have a broader and deeper search-space. The Kaggle Galaxy Zoo competition presented the challenge of writing an algorithm to find the probability that a galaxy belongs to a particular class, based on SDSS optical imaging data. Convolutional neural networks (convnets) proved to be a popular solution to the problem, as they have also produced unprecedented classification accuracies on other image databases such as MNIST (handwritten digits) and CIFAR (natural images). We experiment with the convnets that comprised the winning solution, but using broad classifications. The effect of changing the number of layers is explored, as well as that of using a different activation function, to help develop an intuition for how the networks function and to see how they can be applied to radio galaxy images.
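As a hedged illustration of the kind of architecture involved (the layer sizes, input shape and three broad output classes below are assumptions for this sketch, not the winning Kaggle solution), a small convnet for broad galaxy-morphology classification might look like this in Keras:

```python
# A minimal convnet sketch for broad galaxy-morphology classes;
# all hyperparameters here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_convnet(input_shape=(64, 64, 3), n_classes=3):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),  # activation can be swapped
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_convnet()
model.summary()
```

Varying the number of Conv2D blocks or replacing "relu" with another activation is exactly the kind of experiment the abstract describes.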
The Digitised First Byurakan Survey (DFBS) provides low-dispersion optical spectra for about 24 million sources. A two-step machine learning algorithm based on similarities to predefined templates is applied to automatically select different classes of rare objects in the dataset, for example late-type stars, quasars and white dwarfs. Identifying outliers from the groups of common astrophysical objects may lead to the discovery of rare objects, such as gamma-ray burst afterglows.
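A hedged sketch of the two-step idea (the DFBS pipeline's actual similarity measure and thresholds are not specified here): first assign each spectrum to its most similar class template, then flag objects whose best similarity remains low as outlier candidates.

```python
import numpy as np

def classify_with_templates(spectra, templates, outlier_threshold=0.8):
    """Step 1: nearest-template classification by cosine similarity.
    Step 2: spectra whose best similarity falls below the threshold
    are flagged as outlier candidates (possible rare objects).

    spectra:   (n, m) array of low-dispersion spectra
    templates: dict mapping class name -> (m,) template spectrum
    The threshold value is an illustrative assumption.
    """
    names = list(templates)
    T = np.stack([templates[k] for k in names])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    S = spectra / np.linalg.norm(spectra, axis=1, keepdims=True)
    sims = S @ T.T                      # (n, classes) cosine similarities
    best = sims.argmax(axis=1)
    labels = [names[i] for i in best]
    is_outlier = sims.max(axis=1) < outlier_threshold
    return labels, is_outlier
```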
The Wide Field Infrared Survey Telescope (WFIRST) is a 2.4 m telescope with a large field of view (~0.3 deg²) and fine angular resolution (0.11″). WFIRST's Wide Field Instrument (WFI) will obtain images in the Z, Y, J, H, F184 and W149 (wide) filter bands, and grism spectra of the same large field of view. The data volume of the WFIRST Science Archive is expected to reach a few petabytes. We describe plans to enable users to find the data of interest and, if needed, to analyze the data in situ using sophisticated software tools provided by the archive. In preparation, we are building a mini-archive that will help us to define realistic science requirements and to design the full WFIRST Science Archive.
This paper discusses an autoregressive model for the analysis of irregularly observed time series. The properties of this model are studied and a maximum likelihood estimation procedure is proposed. The finite sample performance of this estimator is assessed by Monte Carlo simulations, showing that the estimator is accurate. We apply this model to the residuals after fitting a harmonic model to light curves of periodic variable stars from the Optical Gravitational Lensing Experiment (OGLE) and Hipparcos surveys, showing that the model can identify time-dependency structure that remains in the residuals when, for example, the period of the light curve was not properly estimated.
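One standard way to write such a model (a plausible reading of the abstract; the paper's exact parameterization may differ) is a continuous-time AR(1) process whose autocorrelation decays with the actual time gap between observations, fitted by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def iar_neg_loglik(phi, t, y):
    """Negative Gaussian log-likelihood of an irregular AR(1) model with
    unit marginal variance: y[j] = phi**dt[j] * y[j-1] + noise.
    An illustrative formulation, not necessarily the paper's exact one."""
    dt = np.diff(t)
    rho = phi ** dt                   # gap-dependent autocorrelation
    resid = y[1:] - rho * y[:-1]
    var = 1.0 - rho ** 2              # conditional variance per gap
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

def fit_iar(t, y):
    """Maximum likelihood estimate of phi on (0, 1)."""
    y = (y - y.mean()) / y.std()      # standardize the residual series
    res = minimize_scalar(iar_neg_loglik, bounds=(1e-6, 1 - 1e-6),
                          args=(t, y), method="bounded")
    return res.x

# Hypothetical use: t, y = observation times and harmonic-fit residuals
```

A phi estimate well above zero would indicate remaining time dependency in the residuals, e.g. from a mis-estimated period.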
The Hubble Catalog of Variables (HCV) is a three-year, ESA-funded project that aims to develop a set of algorithms to identify variables among the sources included in the Hubble Source Catalog (HSC) and so produce the HCV. We will process all HSC sources with more than a predefined number of measurements in a single filter/instrument combination and compute a range of lightcurve features to determine the variability status of each source. At the end of the project, the first release of the Hubble Catalog of Variables will be made available at the Mikulski Archive for Space Telescopes (MAST) and the ESA Science Archives. The variability detection pipeline will be implemented at the Space Telescope Science Institute (STScI) so that updated versions of the HCV may be created following future releases of the HSC.
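As a hedged sketch of the kind of per-source test such a pipeline might apply (the HCV's actual feature set is not described here), one can compare the scatter of a light curve against its photometric errors:

```python
import numpy as np

def variability_features(mag, mag_err):
    """A few simple light-curve features of the sort a variability
    pipeline might compute per source and filter (illustrative only)."""
    mean = np.average(mag, weights=1.0 / mag_err**2)
    chi2_red = np.sum(((mag - mean) / mag_err) ** 2) / (len(mag) - 1)
    mad = np.median(np.abs(mag - np.median(mag)))
    return {"chi2_red": chi2_red, "mad": mad,
            "amplitude": mag.max() - mag.min()}

def is_variable_candidate(mag, mag_err, chi2_cut=3.0):
    """Flag a source whose scatter clearly exceeds what the photometric
    errors alone would produce; the cut value is an assumption."""
    return variability_features(mag, mag_err)["chi2_red"] > chi2_cut
```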
The FP7 (Seventh Framework Programme of the European Union) PERICLES project addresses the life-cycle of large and complex data sets, catering for the evolving context of data sets and user communities, including groups unanticipated when the data were created. The semantics of data sets are thus also expected to evolve, and the project includes elements that could address the reuse of data sets in periods when the data providers, and even their institutions, are no longer available. This paper presents the PERICLES science case using the example of the SOLAR (solar monitoring observatory) payload on the Columbus module of the International Space Station.
After its first implementation in 2003, the Astro-WISE technology has been rolled out in several European countries and is used for the production of the KiDS survey data. In the multi-disciplinary Target initiative this technology, nicknamed WISE technology, has been further applied to a large number of projects. Here, we highlight the data handling of other astronomical applications, such as VLT-MUSE and LOFAR, together with some non-astronomical applications, such as the medical projects Lifelines and GLIMPS, the MONK handwritten-text recognition system, and business applications by, amongst others, the Target Holding.
We describe some of the most important lessons learned and the application of the data-centric WISE approach to the Science Ground Segment of the Euclid satellite.
In order to understand how galaxies form and evolve, measuring the parameters related to their morphologies and to the way they interact is one of the most relevant requirements. Owing to the huge amount of data generated by surveys, the morphological and interaction analysis of galaxies can no longer rely on visual inspection. To deal with this issue, new approaches based on machine learning techniques have been proposed in recent years with the aim of automating the classification process. We tested Deep Learning on images of galaxies from CANDELS to study the accuracy achieved by this tool in two different frameworks. In the first, galaxies were classified by shape into five morphological categories, while in the second, the way in which galaxies interact defined another five categories. The results achieved in both cases are compared and discussed.
The Large Synoptic Survey Telescope (LSST) will make great contributions to many scientific fields; one of these will be time-domain astronomy and the detection of transient events. In this paper, some considerations about transient events and alerts are presented.
We describe here the parallels between astronomy and earth science datasets, their analyses, and the opportunities for methodology transfer from astroinformatics to geoinformatics. Using the example of hydrology, we emphasize how metadata and ontologies are crucial in such an undertaking. Using the infrastructure being designed for EarthCube, the virtual observatory for the earth sciences, we discuss essential steps for better transfer of tools and techniques in the future, e.g. domain adaptation. Finally, we point out that this is never a one-way process, and there is plenty for astroinformatics to learn from geoinformatics as well.
An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, the standard Euclidean distance is employed as the distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
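A minimal sketch of LVQ1 with Euclidean distance (the initialization scheme and learning-rate value are illustrative choices): the nearest prototype is attracted to a training sample of the same class and repelled otherwise.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Basic LVQ1: move the winning (nearest, Euclidean) prototype
    toward the sample if the labels agree, away from it otherwise."""
    rng = np.random.default_rng(seed)
    W = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(W - X[i], axis=1)
            w = d.argmin()                          # winner takes all
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            W[w] += sign * lr * (X[i] - W[w])
    return W

def predict(X, W, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return np.asarray(proto_labels)[d.argmin(axis=1)]
```

Relevance learning, as introduced in the text, would replace the fixed Euclidean distance with a parameterized one whose weights are adapted alongside the prototypes.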
Currently, large sky-area spectral surveys such as SDSS, 2dF, and LAMOST, using a new generation of telescopes and observatories, have provided massive spectral data sets for astronomical research. Most of the data can be handled automatically with pipelines, but visual inspection by human eyes is still necessary in several situations, such as low-SNR spectra, QSO recognition and peculiar-spectra mining. Using ASERA, A Spectrum Eye Recognition Assistant, we can set up a team spectral-inspection platform. On a preselected spectral data set, members of a team can individually view spectra one by one, find the best-matching template and estimate the redshift. Results from different members are then gathered and merged to raise the team's efficiency. ASERA mainly targets spectra in the SDSS and LAMOST FITS data formats; other formats can be supported with some conversion. Spectral templates from the SDSS and LAMOST pipelines are embedded, and users can easily add their own templates. Convenient cross-identification interfaces with SDSS, SIMBAD, VizieR, NED and DSS are also provided. An application example, finding strong emission-line spectra in LAMOST DR2, is presented.
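A hedged sketch of the template-matching step a user performs in such a tool (ASERA's internal method is not described here): shift a rest-frame template over trial redshifts and keep the redshift that best matches the observed spectrum.

```python
import numpy as np

def estimate_redshift(obs_wave, obs_flux, tmpl_wave, tmpl_flux,
                      z_grid=np.linspace(0.0, 1.0, 2001)):
    """Grid-search redshift by chi-square template matching; a
    simplified stand-in for interactive template comparison."""
    best_z, best_chi2 = None, np.inf
    for z in z_grid:
        # resample the redshifted template onto the observed wavelengths
        shifted = np.interp(obs_wave, tmpl_wave * (1.0 + z),
                            tmpl_flux, left=np.nan, right=np.nan)
        ok = ~np.isnan(shifted)
        if ok.sum() < 10:
            continue
        # free amplitude scaling between template and observation
        scale = shifted[ok] @ obs_flux[ok] / (shifted[ok] @ shifted[ok])
        chi2 = np.sum((obs_flux[ok] - scale * shifted[ok]) ** 2)
        if chi2 < best_chi2:
            best_z, best_chi2 = z, chi2
    return best_z
```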
The astronomy cloud computing environment is a cyber-infrastructure for astronomy research initiated by the Chinese Virtual Observatory (China-VO), with funding support from the NDRC (National Development and Reform Commission) and CAS (Chinese Academy of Sciences). Based on virtualization technology, the environment was designed and implemented by the China-VO team. It consists of five distributed nodes across mainland China. Astronomers can obtain computing and storage resources in this cloud computing environment. Through it, astronomers can easily search and analyze astronomical data collected by different telescopes and data centers, avoiding large-scale dataset transportation.
In this article we describe the CoLiTec software for fully automated frame processing. CoLiTec allows processing of the Big Data of observation results, as well as of data formed continuously during observation. The tasks solved include frame brightness equalization, moving-object detection, astrometry, photometry, etc. Along with high efficiency in Big Data processing, CoLiTec also ensures high accuracy of data measurements. A comparative analysis of the functional characteristics and positional accuracy was performed between the CoLiTec and Astrometrica software; the benefits of CoLiTec for wide-field and low-quality frames were observed. The efficiency of the CoLiTec software has been proved by about 700,000 observations and over 1,500 preliminary discoveries.
Strong gravitational microlensing (GM) events offer the possibility to determine the parameters of both the microlensed source and the microlens. GM can be an important clue to understanding the nature of dark matter on comparatively small spatial and mass scales (i.e. substructure), especially when combining astrometric and photometric data on high-amplification microlensing events (HAME). At the same time, fitting HAME light curves of microlensed sources is a quite time-consuming process. We therefore test the possibility of applying statistical machine learning techniques to determine the source and microlens parameters for a set of HAME light curves, using a simulated set of amplification curves of sources microlensed by point masses and by clumps of dark matter with various density profiles.
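For orientation, a hedged sketch under textbook assumptions: for a point lens the Paczyński magnification depends only on the source-lens separation u in Einstein radii, so a regressor can be trained on simulated amplification curves to invert the mapping from light curve to parameters (the parameter ranges and regressor choice below are illustrative, not the study's setup).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def point_lens_magnification(t, t0, tE, u0):
    """Paczynski magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))
    for a point-source, point-lens event."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Simulate a training set of amplification curves (illustrative ranges)
rng = np.random.default_rng(1)
t = np.linspace(-50, 50, 200)                          # days
params = np.column_stack([rng.uniform(5, 40, 5000),       # tE (days)
                          rng.uniform(0.01, 1.0, 5000)])  # u0
curves = np.stack([point_lens_magnification(t, 0.0, tE, u0)
                   for tE, u0 in params])

# Learn the inverse mapping: light curve -> (tE, u0)
model = RandomForestRegressor(n_estimators=100).fit(curves, params)
tE_hat, u0_hat = model.predict(curves[:1])[0]
```

Extending the training set to amplification curves from dark matter clumps with various density profiles is the harder step the abstract targets.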