Adding Au to Pd nanoparticles (NPs) can impart high catalytic activity for the hydrogenation of a wide range of substances. These materials are often synthesized by reducing metallic precursors; hence, sonochemical and solvothermal processes are commonly used to anchor these bimetals onto thin supports, including graphene. Although similar NPs have been studied reasonably well, a clear understanding of their structural characteristics relative to the synthesis parameters is lacking, due to limitations in characterization techniques, which may prevent optimization of this very promising catalyst. In this report, a strategic approach has been used to identify this correlation between structure and synthesis, starting with controlled sample preparation and followed by detailed characterization. This includes advanced scanning transmission electron microscopy and electron energy loss spectroscopy, the latter using state-of-the-art instrumentation to map the distribution of Pd and Au and to identify the chemical state of the Pd NPs, which has not been previously reported. Results show that the catalytic bimetal NP clusters were made of small zero-valent Pd NPs aggregating to form a shell around an Au core. Not only can the described characterization approach be applied to similar material systems, but the results can also guide the optimization of the synthesis procedures.
In this paper, anhydrous calcium sulphate CaSO4 (anhydrite) is considered as a carrier material for the delivery of organic matter from space to Earth. Its capability to incorporate significant fractions of water, leading to different species such as bassanite and gypsum, as well as organic molecules; its discovery on the Martian surface and in meteorites; and its capability to dissipate much energy through chemical decomposition into a solid (CaO) and a gaseous (SO3) oxide make anhydrite a very promising material from an astrobiological perspective. Since chemical cooling has recently been considered by some of the present authors for the case of Ca/Mg carbonates, CaSO4 can be placed into a class of ‘white soft minerals’ (WSM) of astrobiological interest. In this context, CaSO4 is evaluated here using the atmospheric entry model previously developed for carbonates. The model includes grain dynamics, thermochemistry, stoichiometry, and radiation and evaporation heat losses. Results are discussed in comparison with MgCO3 and CaCO3 and show that sub-mm anhydrite grains are potentially effective organic matter carriers. A Monte Carlo simulation is used to provide distributions of the sulphate fraction as a function of altitude. Two-zone model results are presented to support the isothermal grain hypothesis.
In this paper, a first study of the atmospheric entry of carbonate micrometeoroids is performed from an astrobiological perspective. To this end, an entry model, which includes two-dimensional dynamics, a non-isothermal atmosphere, ablation and radiation losses, is built and benchmarked against literature data for silicate micrometeoroids. A thermal decomposition model of initially pure magnesium carbonate is proposed, which includes thermal energy, mass loss and the effect of changing composition as the carbonate grain is gradually converted into oxide. Several scenarios are obtained by varying the initial speed, entry angle and grain diameter, producing a systematic comparison of silicate and carbonate grains. The results of the composite model show that the thermal behaviour of magnesium carbonate is markedly different from that of the corresponding silicate, with much lower equilibration temperatures reached in the first stages of the entry. At the same time, the model shows that the limit of a thermal protection scenario based on magnesium carbonate is its very high decomposition speed even at moderate temperatures, which results in the total loss of carbon already at about 100 km altitude. The present results show that, although decomposition and the associated cooling are important effects in the entry process of carbonate grains, the specific scenario of a pure MgCO3 micrograin does not allow complex organic matter delivery to the lower atmosphere. This suggests considering less volatile carbonates in further studies.
We describe here the parallels between astronomy and earth science datasets, their analyses, and the opportunities for methodology transfer from astroinformatics to geoinformatics. Using the example of hydrology, we emphasize how metadata and ontologies are crucial in such an undertaking. Using the infrastructure being designed for EarthCube - the Virtual Observatory for the earth sciences - we discuss essential steps for better transfer of tools and techniques in the future, e.g. domain adaptation. Finally, we point out that such transfer is never a one-way process, and that there is plenty for astroinformatics to learn from geoinformatics as well.
We review recent advances and ongoing work in evolving the NASA/IPAC Extragalactic Database (NED) beyond an object reference database into a data mining discovery engine. Updates to the infrastructure and data integration techniques are enabling more than a 10-fold expansion; NED will soon contain over a billion objects with their fundamental attributes fused across the spectrum via cross-identifications among the largest sky surveys (e.g., GALEX, SDSS, 2MASS, AllWISE, EMU), and over 100,000 smaller but scientifically important catalogs and journal articles. The recent discovery of super-luminous spiral galaxies exemplifies the opportunities for data mining and science discovery directly from NED’s rich data synthesis. Enhancements to the user interface, including new APIs, VO protocols, and queries involving derived physical quantities, are opening new pathways for panchromatic studies of large galaxy samples. Examples are shown of graphics characterizing the content of NED, as well as initial steps in exploring the database via interactive statistical visualizations.
The Wide Field Infrared Survey Telescope (WFIRST) is a 2.4 m telescope with a large field of view ( ~ 0.3 deg2) and fine angular resolution (0.11”). WFIRST’s Wide Field Instrument (WFI) will obtain images in the Z, Y, J, H, F184, W149 (wide) filter bands, and grism spectra of the same large field of view. The data volume of the WFIRST Science Archive is expected to reach a few Petabytes. We describe plans to enable users to find the data of interest and, if needed, to analyze the data in situ using sophisticated software tools provided by the archive. As preparation, we are building a mini-archive that will help us to define realistic science requirements and to design the full WFIRST Science Archive.
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but allowing MLPQNA to be easily replaced by any other method to predict photo-z’s and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, also demonstrating the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
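The abstract does not detail the workflow's internals, but the general idea — an empirical (here KNN) photo-z estimator whose PDF is built by re-estimating the redshift under perturbations of the input photometry — can be sketched as a toy on synthetic five-band magnitudes. The training set, band relation and perturbation scheme below are illustrative assumptions, not METAPHOR's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 5-band magnitudes with a simple linear dependence on redshift.
n_train = 2000
z_train = rng.uniform(0.0, 1.0, n_train)
mags_train = z_train[:, None] * np.linspace(1.0, 2.0, 5) + rng.normal(0, 0.05, (n_train, 5))

def knn_photoz(mags, mags_train, z_train, k=15):
    """Plain KNN regression: mean redshift of the k nearest training galaxies."""
    d = np.linalg.norm(mags_train - mags, axis=1)
    return z_train[np.argsort(d)[:k]].mean()

def photoz_pdf(mags, mag_err=0.05, n_realizations=200, bins=20):
    """Crude PDF: re-estimate z under photometric perturbations, then histogram."""
    zs = np.array([
        knn_photoz(mags + rng.normal(0, mag_err, mags.size), mags_train, z_train)
        for _ in range(n_realizations)
    ])
    hist, edges = np.histogram(zs, bins=bins, range=(0.0, 1.0), density=True)
    return zs.mean(), hist, edges

# Query object with true redshift 0.5 under the same synthetic relation.
test_mags = 0.5 * np.linspace(1.0, 2.0, 5)
z_est, pdf, edges = photoz_pdf(test_mags)
```

The modularity described in the abstract corresponds to swapping `knn_photoz` for any other regressor while keeping the perturbation/PDF machinery unchanged.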
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered by a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units handling compute-intensive processing, advanced visualisation, dynamic interaction and parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform – allowing the research process to continue wherever you are.
In the present work we investigate the possible relationship of long-period comets with five large and distant trans-Neptunian bodies (Sedna, Eris, 2007 OR10, 2012 VP113 and 2008 ST291) in order to determine the probability that some of these comets are transferred to the inner Solar System. To identify such relationships, we studied the relative positions of the comet orbits and of the listed TNOs. Using numerical integration methods, we examined the dynamical evolution of the comets and found one encounter, between comet C/1861 J1 and Eris.
Compressive Sensing is an emerging technology for data compression and simultaneous data acquisition. It enables significant reductions in data bandwidth and transmission power and can therefore greatly benefit space-flight instruments. We apply this process to the detection of exoplanets via gravitational microlensing. We experiment with various impact parameters that describe microlensing curves to determine the effectiveness of, and the uncertainty introduced by, Compressive Sensing. Finally, we describe implications for space-flight missions.
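The paper's pipeline is not described in detail here, but the core compressive-sensing idea — recovering a sparse signal exactly from far fewer random measurements than signal samples — can be illustrated with a generic sketch using Orthogonal Matching Pursuit. The matrix sizes, sparsity level and solver below are illustrative assumptions, not the instrument's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(3)

# A 200-sample signal with only 3 nonzero entries, measured with 100 random projections.
m, n, k = 100, 200, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)       # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = np.array([5.0, -5.0, 5.0])
y = A @ x_true                                  # compressed (noise-free) measurements

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily grow the support, refit by least squares."""
    resid = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ resid)))  # column most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_rec = omp(A, y, k)
```

Here 100 measurements suffice to reconstruct the 200-sample sparse signal, which is the bandwidth saving the abstract alludes to.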
We compare the results of using a Random Forest Classifier with the results of using Nonparametric Discriminant Analysis to classify whether a filament channel (in the case of a filament eruption) or an active region (in the case of a flare) is about to produce an event. A large number of descriptors are considered in each case, but it is found that only a small number are needed in order to get most of the improvement in performance over always predicting the majority class. There is little difference in performance between the two classifiers, and neither results in substantial improvements over simply predicting the majority class.
To examine the possibilities and challenges of doing science with multi-band and non-simultaneous data from upcoming surveys like LSST, the Pan-STARRS1 (PS1) 3π survey can be used as a pilot survey. This is especially important for exploring the detection and classification of variable sources within the first years of LSST’s 10-year baseline. We explored the capabilities of PS1 3π for carrying out time-domain science in a variety of applications. We used structure function fitting as well as period fitting to search for and classify high-latitude as well as low-latitude variable sources, in particular RR Lyrae, Cepheids and QSOs.
In this article we describe the CoLiTec software for fully automated frame processing. CoLiTec can process the Big Data of observational results as well as data that is formed continuously during observation. The tasks it addresses include frame brightness equalization, moving-object detection, astrometry, photometry, etc. Along with high efficiency in Big Data processing, CoLiTec also ensures high accuracy of data measurements. A comparative analysis of the functional characteristics and positional accuracy was performed between the CoLiTec and Astrometrica software. The benefits of CoLiTec for wide-field and low-quality frames were demonstrated. The efficiency of the CoLiTec software is attested by about 700,000 observations and over 1,500 preliminary discoveries.
An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable, discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, standard Euclidean distance is employed as the distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
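The basic LVQ1 rule described above — the winning (nearest) prototype is attracted towards a correctly labelled sample and repelled from a mislabelled one — can be sketched on synthetic two-class data. The data, learning rate and epoch count are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian classes in 2-D.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# One prototype per class, initialised at the class means.
protos = np.array([X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)])
proto_labels = np.array([0, 1])

def lvq1_train(X, y, protos, proto_labels, lr=0.05, epochs=20):
    protos = protos.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(np.linalg.norm(protos - xi, axis=1))  # winner prototype
            sign = 1.0 if proto_labels[j] == yi else -1.0       # attract or repel
            protos[j] += sign * lr * (xi - protos[j])
    return protos

protos = lvq1_train(X, y, protos, proto_labels)

def predict(X, protos, proto_labels):
    # Assign each sample the label of its nearest prototype (Euclidean distance).
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[d.argmin(axis=1)]

accuracy = (predict(X, protos, proto_labels) == y).mean()
```

The more general dissimilarities and relevance learning mentioned in the abstract amount to replacing the Euclidean `norm` calls with a parameterized distance whose parameters are also adapted during training.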
This paper discusses an autoregressive model for the analysis of irregularly observed time series. The properties of this model are studied and a maximum likelihood estimation procedure is proposed. The finite sample performance of this estimator is assessed by Monte Carlo simulations, which show that the estimator is accurate. We apply this model to the residuals obtained after fitting a harmonic model to light curves of periodic variable stars from the Optical Gravitational Lensing Experiment (OGLE) and Hipparcos surveys, showing that the model can identify time dependency structure that remains in the residuals when, for example, the period of the light curve was not properly estimated.
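The authors' exact model and estimator are not reproduced here; a minimal sketch of the underlying idea — a first-order autoregressive model whose correlation decays as phi raised to the (irregular) time gap, fitted by maximizing a Gaussian likelihood — might look like the following. The synthetic data and the unit-variance parameterization are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Simulate an irregularly sampled AR(1)-type series: corr(y_j, y_{j-1}) = phi**dt.
phi_true = 0.9
t = np.sort(rng.uniform(0, 100, 300))   # irregular observation times
y = np.empty_like(t)
y[0] = rng.normal()
for j in range(1, len(t)):
    rho = phi_true ** (t[j] - t[j - 1])
    y[j] = rho * y[j - 1] + np.sqrt(1 - rho**2) * rng.normal()

def neg_log_lik(phi, t, y):
    """Gaussian negative log-likelihood of the irregular AR(1) model (unit variance)."""
    dt = np.diff(t)
    rho = phi ** dt                      # gap-dependent correlation
    resid = y[1:] - rho * y[:-1]         # one-step prediction errors
    var = 1 - rho**2                     # gap-dependent innovation variance
    return 0.5 * np.sum(np.log(var) + resid**2 / var)

res = minimize_scalar(neg_log_lik, bounds=(1e-4, 1 - 1e-4),
                      args=(t, y), method="bounded")
phi_hat = res.x
```

In the spirit of the abstract, `y` would be the residuals of a harmonic fit; a `phi_hat` well above zero signals remaining time-dependency structure.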
The Digitised First Byurakan Survey (DFBS) provides low-dispersion optical spectra for about 24 million sources. A two-step machine learning algorithm based on similarities to predefined templates is applied to automatically select different classes of rare objects in the dataset, for example late-type stars, quasars and white dwarfs. Identifying outliers from the groups of common astrophysical objects may lead to the discovery of rare objects, such as gamma-ray burst afterglows.
Currently, large sky-area spectral surveys like SDSS, 2dF and LAMOST, using the new generation of telescopes and observatories, have provided massive spectral data sets for astronomical research. Most of the data can be handled automatically with pipelines, but visual inspection by human eyes is still necessary in several situations, such as low-SNR spectra, QSO recognition and peculiar-spectra mining. Using ASERA, A Spectrum Eye Recognition Assistant, we can set up a team spectral inspection platform. On a preselected spectral data set, members of a team can individually view spectra one by one, find the best-matching template and estimate the redshift. Results from different members are then gathered and merged to raise the team's work efficiency. ASERA mainly targets spectra in the SDSS and LAMOST FITS data formats; other formats can be supported with some conversion. Spectral templates from the SDSS and LAMOST pipelines are embedded, and users can easily add their own templates. Convenient cross-identification interfaces with SDSS, SIMBAD, VIZIER, NED and DSS are also provided. An example application, finding strong emission-line spectra in LAMOST DR2, is presented.
The FP-7 (Framework Programme 7 of the European Union) PERICLES project addresses the life-cycle of large and complex data sets, catering for the evolution of the context of data sets and of user communities, including groups unanticipated when the data were created. The semantics of data sets are thus also expected to evolve, and the project includes elements that could address the reuse of data sets at times when the data providers, and even their institutions, are no longer available. This paper presents the PERICLES science case with the example of the SOLAR (SOLAR monitoring observatory) payload on International Space Station-Columbus.
We present a simulator of alerts for the Large Synoptic Survey Telescope (LSST) developed by the Belgrade group. This simulator will be used to test the functionality of external event brokers/Complex Event Processing (CEP) engines. It is based on the current LSST Simulation framework and allows different classes of objects to be ‘alerted’. A Web service based on our simulator has been prototyped and can be accessed by developers of brokers/CEP engines.