Inherent in the use of radioisotope sources with secondary fluorescers is the background produced by scattering of the source photons from the exciter system. A Monte Carlo program has been developed that is capable of simulating the backscattered photon spectrum as a function of the system geometry, including shielding and collimation variations. This computer program generates the scattered photon spectrum incident on both the sample and detector. The program is applied to a commercially available exciter system to study the effect of specific geometric design changes on the scattered spectrum.
Composition imaging of industrial samples has been reported using dual energy and multiple energy transmission computed tomography [1,2]. The simplest approach utilizes monoenergetic sources to obtain tomographs of a sample at two different energies. Each tomograph represents the linear attenuation coefficient distribution of the sample at the given source energy.
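As a rough illustration of the two-energy decomposition described above, the hypothetical Python sketch below solves a two-material system for partial densities from reconstructed linear attenuation coefficients at two energies; the mass attenuation values are placeholders, not data from [1,2].

# Two-material decomposition from dual-energy attenuation data (illustrative only).
import numpy as np

# Assumed mass attenuation coefficients mu/rho (cm^2/g) of two constituents
# at the two source energies; these numbers are placeholders.
mass_atten = np.array([[0.20, 0.35],   # energy 1: [material A, material B]
                       [0.15, 0.18]])  # energy 2: [material A, material B]

# Reconstructed linear attenuation coefficients (1/cm) for one voxel
mu_measured = np.array([0.50, 0.33])

# Solve mu(E_i) = sum_j (mu/rho)_ij * rho_j for the partial densities rho_j
partial_densities = np.linalg.solve(mass_atten, mu_measured)
print(dict(zip(["material_A", "material_B"], partial_densities)))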
The Monte Carlo simulation method that has been previously developed and demonstrated for EDXRF analysis with annular radioisotope excitation sources is extended to systems using secondary fluorescer X-ray machines for excitation. Comparisons of the Monte Carlo predictions with experimental results indicate that the modification is valid.
A procedure to obtain analytical models for the elemental X-ray pulse-height distribution libraries necessary in the library least-squares analysis of energy-dispersive X-ray fluorescence spectra is outlined. This is accomplished by first obtaining the response function of Si(Li) detectors for incident photons in the energy range of interest. Subsequently, this response function is used to generate the desired elemental library standards for use in the least-squares analysis of spectra, or it can be used directly within a least-squares computer program, thus eliminating the large amount of computer storage required for the standards.
A Monte Carlo model that predicts the entire photon spectrum for energy-dispersive X-ray fluorescence (EDXRF) analyzers excited by radioisotope sources from multielement homogeneous samples is developed and demonstrated. The components of the photon spectrum include: (1) the Kα and Kβ characteristic primary, secondary and tertiary X rays from both the unscattered and scattered source photons, (2) the characteristic X rays excited by other characteristic X rays that have been scattered, and (3) the scattered source photons from single, double, and multiple scatters in the sample.
The computer code NCSMCXF based on this model has been developed. It is capable of handling up to 20 elements per sample and provides a detailed account of the intensities of the X rays and backscattered source photons per unit source decay as well as a summary of the relative intensities from all elements present in the sample. Cubic splines are used within the code for photoelectric and total scattering cross sections and two-variable cubic splines for angular coherent and incoherent scattering distributions for efficiency in both computation time and storage. The code also provides the pulse-height spectrum of the sample by using the appropriate Si(Li) detector response function. The Monte Carlo predictions for benchmark experimental results on two alloy samples of known composition indicate that the model is very accurate. This approach is capable of replacing most of the experimental work presently required in EDXRF quantitative analysis.
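The deliberately simplified sketch below (not the NCSMCXF code) shows the kind of photon-tracking loop that underlies such a model, using constant placeholder cross sections and a crude one-dimensional slab geometry; the real code interpolates energy-dependent cross sections with cubic splines and generates characteristic X rays at each photoelectric event.

# Toy 1-D photon-tracking loop; cross sections and geometry are assumed placeholders.
import math
import random

MU_PHOTO, MU_COMPTON, MU_RAYLEIGH = 0.4, 0.5, 0.1   # assumed values, 1/cm
MU_TOTAL = MU_PHOTO + MU_COMPTON + MU_RAYLEIGH
SAMPLE_THICKNESS = 1.0                               # cm

def track_photon():
    """Follow one photon until it is absorbed or leaves the slab."""
    depth, direction = 0.0, 1.0
    while True:
        # sample the distance to the next interaction from an exponential law
        depth += direction * (-math.log(1.0 - random.random()) / MU_TOTAL)
        if depth < 0.0 or depth > SAMPLE_THICKNESS:
            return "escaped"
        if random.random() * MU_TOTAL < MU_PHOTO:
            return "photoelectric"   # a full model would emit fluorescence here
        # scattered: crude isotropic 1-D angular model
        direction = random.choice([-1.0, 1.0])

counts = {"escaped": 0, "photoelectric": 0}
for _ in range(10_000):
    counts[track_photon()] += 1
print(counts)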
A previous Monte Carlo model that uses the simple assumption of spherical homogeneous particles to approximate sample heterogeneities has been modified to improve the computer execution time requirements for the heterogeneous sample case. A new technique for photon tracking in this medium is used and reduces the computation time requirement by half.
EDXRF analysis is conveniently split into two parts: (1) the determination of X-ray intensities and (2) the determination of elemental amounts from X-ray intensities. For the first, most EDXRF analysis has been done by some method of integrating the essentially Gaussian distribution of observed full-energy pulse heights. This might be done, for example, by least-squares fitting of Gaussian distributions superimposed on a straight-line or quadratic background. Recently, more elaborate shapes of the energy peaks have also been considered (Kennedy, 1990). After the X-ray intensities have been determined, interelement effects between the analyte element and other elements must be corrected for in order to obtain the elemental amounts. This correction can be done either by an empirical procedure, as in the influence coefficient method, which requires measurements on a number of standard samples to determine the required coefficients, or by theoretical calculation, as in the fundamental parameters method, which does not require standard samples.
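As an illustration of the peak-integration step just described, the short sketch below fits a Gaussian on a linear background to a synthetic pulse-height spectrum; all peak parameters and counts are invented for the example.

# Least-squares fit of a Gaussian peak on a linear background (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def peak_model(x, area, centre, sigma, b0, b1):
    gauss = area / (sigma * np.sqrt(2.0 * np.pi)) * np.exp(-0.5 * ((x - centre) / sigma) ** 2)
    return gauss + b0 + b1 * x

channels = np.arange(200, 300)
true_counts = peak_model(channels, 5000.0, 250.0, 4.0, 20.0, 0.05)
spectrum = np.random.default_rng(0).poisson(true_counts)

popt, _ = curve_fit(peak_model, channels, spectrum, p0=[4000.0, 248.0, 5.0, 10.0, 0.0])
print("fitted net peak area:", popt[0])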
Monte Carlo simulation is used to determine the effects of self-absorption for the low-energy X-rays of light elements in the size range from 1 to 20 μm. Calculations are performed for a wide-angle Fe-55 radioisotope-excited energy-dispersive XRF system. Results are obtained for sulfur attenuation in thin layers, long cylinders, and spheres composed of various matrix materials. The enhancement effect is also treated for the transition region between thin and thick layer samples as well as in spheres of various sizes. Results are also compared to fixed-angle analytical models.
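A much simplified version of the layer self-absorption calculation, assuming uniform emission through the layer and exit along the layer normal (a simplification, not the fixed- or wide-angle geometry used in the study), is sketched below with an assumed attenuation coefficient.

# Escape fraction (1 - exp(-mu*t)) / (mu*t) for a uniformly emitting layer.
import math

def escape_fraction(mu_per_um, thickness_um):
    mt = mu_per_um * thickness_um
    return (1.0 - math.exp(-mt)) / mt if mt > 0.0 else 1.0

for t in (1, 5, 20):                       # layer thicknesses in micrometres
    print(t, "um:", round(escape_fraction(0.1, t), 3))   # mu = 0.1 /um is assumed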
The least-squares method with complete component library spectra is applied to the quantitative analysis of X-ray fluorescence spectral intensities. An approach is outlined for application to the general case of thick homogeneous samples at high counting rates. A simplified approach can be taken for the more specific case represented by atmospheric particulates collected on filters. The details and sample results of this approach for the specific case are given for an energy-dispersive X-ray fluorescence analyzer. The results indicate that the least-squares method as developed and applied here is valid and should prove generally useful to X-ray analysts.
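A minimal sketch of the library least-squares idea follows, expressing a synthetic spectrum as a non-negative combination of placeholder library spectra; the component shapes and amounts are invented for the example.

# Library least-squares with non-negativity: measured ~ A @ amounts.
import numpy as np
from scipy.optimize import nnls

channels = np.arange(512)

def library_spectrum(centre, width=5.0):
    # stand-in for a measured or modelled elemental library standard
    return np.exp(-0.5 * ((channels - centre) / width) ** 2)

A = np.column_stack([library_spectrum(c) for c in (120, 200, 310)])
true_amounts = np.array([300.0, 150.0, 80.0])
measured = np.random.default_rng(1).poisson(A @ true_amounts + 5.0)

amounts, residual = nnls(A, measured.astype(float))
print("fitted component intensities:", amounts)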
The error introduced by sample scattering in EDXRF analysis is evaluated by Monte Carlo simulation. This is accomplished by deriving a Monte Carlo model capable of simulating single Compton and Rayleigh scatters from the exciting photon source and from fluorescent X rays in homogeneous samples. The model also includes primary, secondary, and tertiary fluorescence events. Results are given for Ni-Fe-Cr ternary samples at various exciting energies, with and without scattering, and indicate that errors as large as 2% can be attributed to this effect.
A review is made of the application of the Monte Carlo fundamental parameters method to X-ray fluorescence (XRF) analysis for the reduction of matrix effects. The analytical solutions arising from theoretical equations are given, along with the restrictive assumptions that this approach requires. The extensions of the fundamental parameters method by Monte Carlo simulation to practical situations requiring much less restrictive assumptions are outlined. The average-angle approach to the use of the analytical solutions is investigated by comparison with the Monte Carlo method. Future extensions of the fundamental parameters method by the Monte Carlo approach are discussed.
The Meat Standards Australia (MSA) grading scheme has the ability to predict beef eating quality for each ‘cut×cooking method combination’ from animal and carcass traits such as sex, age, breed, marbling, hot carcass weight and fatness, ageing time, etc. Following MSA testing protocols, a total of 22 different muscles, cooked by four different cooking methods and to three different degrees of doneness, were tasted by over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia. Consumers scored the sensory characteristics (tenderness, flavour liking, juiciness and overall liking) and then allocated samples to one of four quality grades: unsatisfactory, good-every-day, better-than-every-day and premium. We observed that 26% of the beef was unsatisfactory. As previously reported, 68% of samples were allocated to the correct quality grades using the MSA grading scheme. Furthermore, only 7% of the beef unsatisfactory to consumers was misclassified as acceptable. Overall, we concluded that an MSA-like grading scheme could be used to predict beef eating quality and hence underpin commercial brands or labels in a number of European countries, and possibly the whole of Europe. In addition, such an eating quality guarantee system may allow the implementation of an MSA genetic index to improve eating quality through genetics as well as through management. Finally, such an eating quality guarantee system is likely to generate economic benefits to be shared along the beef supply chain from farmers to retailers, as consumers are willing to pay more for a better quality product.
Accurately quantifying a consumer’s willingness to pay (WTP) for beef of different eating qualities is intrinsically linked to the development of eating-quality-based meat grading systems, and therefore the delivery of consistent, quality beef to the consumer. Following Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia were asked to detail their willingness to pay for beef from one of four categories that best described the sample: unsatisfactory, good-every-day, better-than-every-day or premium quality. These figures were subsequently converted to a proportion relative to the good-every-day category (P-WTP) to allow comparison between different currencies and time periods. Consumers also answered a short demographic questionnaire. Consumer P-WTP was found to be remarkably consistent between different demographic groups. After quality grade, by far the greatest influence on P-WTP was country of origin. This difference could not be explained by the other demographic factors examined in this study, such as occupation, gender, frequency of consumption and the importance of beef in the diet. Therefore, we can conclude that the P-WTP for beef is highly transferable between different consumer groups, but not countries.
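The conversion to P-WTP amounts to dividing each quoted willingness-to-pay by the good-every-day figure; a trivial sketch with made-up prices:

# P-WTP: willingness-to-pay expressed relative to the good-every-day category.
wtp = {"unsatisfactory": 5.0, "good-every-day": 10.0,
       "better-than-every-day": 15.0, "premium": 20.0}   # invented example prices
p_wtp = {grade: value / wtp["good-every-day"] for grade, value in wtp.items()}
print(p_wtp)   # e.g. premium -> 2.0 means twice the good-every-day price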
H2CO and OH masers in the H II-region/molecular-cloud complex Sgr B2 have been observed with the VLA and combined with other observations of OH and H2O masers. It is found that groups of the masers and compact continuum components are located along a north-south line extending across the complex. The overall alignment suggests that star formation is being triggered by a single large-scale event such as an interaction between molecular clouds.
This study examined the response of forage crops to composted dairy waste (compost) applied at low rates and investigated effects on soil health. The evenness of spreading compost by commercial machinery was also assessed. An experiment was established on a commercial dairy farm with target rates of compost up to 5 t ha−1 applied to a field containing millet [Echinochloa esculenta (A. Braun) H. Scholz] and Pasja leafy turnip (Brassica hybrid). A pot experiment was also conducted to monitor the response of a legume forage crop (vetch; Vicia sativa L.) on three soils with equivalent rates of compost up to 20 t ha−1 with and without ‘additive blends’ comprising gypsum, lime or other soil treatments. Few significant increases in forage biomass were observed with the application of low rates of compost in either the field or pot experiment. In the field experiment, compost had little impact on crop herbage mineral composition, soil chemical attributes or soil fungal and bacterial biomass. However, small but significant increases were observed in gravimetric water content resulting in up to 22.4 mm of additional plant available water calculated in the surface 0.45 m of soil, 2 years after compost was applied in the field at 6 t ha−1 dried (7.2 t ha−1 undried), compared with the nil control. In the pot experiment, where the soil was homogenized and compost incorporated into the soil prior to sowing, there were significant differences in mineral composition in herbage and in soil. A response in biomass yield to compost was only observed on the sandier and lower fertility soil type, and yields only exceeded that of the conventional fertilizer treatment where rates equivalent to 20 t ha−1 were applied. With few yield responses observed, the justification for applying low rates of compost to forage crops and pastures seems uncertain. Our collective experience from the field and the glasshouse suggests that farmers might increase the response to compost by: (i) increasing compost application rates; (ii) applying it prior to sowing a crop; (iii) incorporating the compost into the soil; (iv) applying only to responsive soil types; (v) growing only responsive crops; and (vi) reducing weed burdens in crops following application. Commercial machinery incorporating a centrifugal twin disc mechanism was shown to deliver double the quantity of compost in the area immediately behind the spreader compared with the edges of the spreading swathe. Spatial variability in the delivery of compost could be reduced but not eliminated by increased overlapping, but this might represent a potential 20% increase in spreading costs.
Araucaria goroensis R.R.Mill & Ruhsam sp. nov., a new monkey puzzle species from New Caledonia, is described and illustrated with photographs from the field and from herbarium specimens. Previously confused with Araucaria muelleri, it is more similar to A. rulei. It is distinguished from the latter species by its larger leaves, microsporophylls without a shouldered base, and shorter female cone bracts. It occurs in a very limited area of south-east New Caledonia, where its existence is threatened by nickel mining. Using the guidelines of the International Union for Conservation of Nature, we propose an assessment of Endangered for the new species and reassess Araucaria muelleri also as Endangered. A key to the seven species in the ‘large-leaved clade’ of New Caledonian species of Araucaria is given. The name Eutassa latifolia de Laub. is synonymised with Araucaria muelleri, and the recent typification of the latter name by Vieillard 1276 is rejected. Detailed reasoning is given for these nomenclatural acts.
The beef industry must become more responsive to the changing market place and consumer demands. An essential part of this is quantifying a consumer’s perception of the eating quality of beef and their willingness to pay for that quality, across a broad range of demographics. Over 19 000 consumers from Northern Ireland, Poland, Ireland and France each tasted seven beef samples and scored them for tenderness, juiciness, flavour liking and overall liking. These scores were weighted and combined to create a fifth score, termed the Meat Quality 4 score (MQ4) (0.3×tenderness, 0.1×juiciness, 0.3×flavour liking and 0.3×overall liking). They also allocated the beef samples into one of four quality grades that best described the sample: unsatisfactory, good-every-day, better-than-every-day or premium. After the completion of the tasting panel, consumers were asked to detail, in their own currency, their willingness to pay for these four categories, which was subsequently converted to a proportion relative to the good-every-day category (P-WTP). Consumers also answered a short demographic questionnaire. The four sensory scores, the MQ4 score and the P-WTP were analysed separately as dependent variables in linear mixed effects models. The answers from the demographic questionnaire were included in the model as fixed effects. Overall, there were only small differences in consumer scores and P-WTP between demographic groups. Consumers who preferred their beef cooked medium or well-done scored beef higher, except in Poland, where the opposite trend was found. This may be because Polish consumers were more likely to prefer their beef cooked well-done, but samples were cooked medium for this group. There was a small positive relationship with the importance of beef in the diet, increasing sensory scores by about 4% in Poland and Northern Ireland. Men also scored beef about 2% higher than women for most sensory scores in most countries. In most countries, consumers were willing to pay between 150 and 200% more for premium beef, and there was a 50% penalty in value for unsatisfactory beef. After quality grade, by far the greatest influence on P-WTP was country of origin. Consumer age also had a small negative relationship with P-WTP. The results indicate that a single quality score could reliably describe the eating quality experienced by all consumers. In addition, if reliable quality information is delivered to consumers they will pay more for better quality beef, which would add value to the beef industry and encourage improvements in quality.
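The MQ4 weighting quoted above can be written out directly; the scores passed in below are invented example values.

# MQ4 = 0.3*tenderness + 0.1*juiciness + 0.3*flavour liking + 0.3*overall liking
def mq4(tenderness, juiciness, flavour_liking, overall_liking):
    return 0.3 * tenderness + 0.1 * juiciness + 0.3 * flavour_liking + 0.3 * overall_liking

print(mq4(70, 65, 72, 71))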
Quantifying consumer responses to beef across a broad range of demographics, nationalities and cooking methods is vitally important for any system evaluating beef eating quality. On the basis of previous work, it was expected that consumer scores would be highly accurate in determining quality grades for beef, thereby providing evidence that such a technique could form the basis of an eating quality grading system for beef. Following the Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia tasted cooked beef samples, then allocated them to a quality grade: unsatisfactory, good-every-day, better-than-every-day and premium. The consumers also scored beef samples for tenderness, juiciness, flavour-liking and overall-liking. The beef was sourced from all countries involved in the study and cooked by four different cooking methods and to three different degrees of doneness, with each experimental group in the study consisting of a single cooking doneness within a cooking method for each country. For each experimental group, and for the data set as a whole, a linear discriminant function was calculated, using the four sensory scores to predict the quality grade. This process was repeated using two conglomerate scores derived from weighting and combining the consumer sensory scores for tenderness, juiciness, flavour-liking and overall-liking: the original meat quality 4 score (oMQ4) (0.4, 0.1, 0.2, 0.3) and the current meat quality 4 score (cMQ4) (0.3, 0.1, 0.3, 0.3). From the results of these analyses, the optimal weightings of the sensory scores to generate an ‘ideal meat quality 4 score (MQ4)’ for each country were calculated, and the MQ4 values that reflected the boundaries between the four quality grades were determined. The oMQ4 weightings were far more accurate in categorising European meat samples than the cMQ4 weightings, highlighting that tenderness is more important than flavour to the consumer when determining quality. The accuracy of the discriminant analysis in predicting the consumer-scored quality grades was similar across all consumer groups, at 68%, and similar to previously reported values. These results demonstrate that this technique, as used in the MSA system, could be used to predict consumer assessment of beef eating quality and therefore to underpin a commercial eating quality guarantee for all European consumers.
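A minimal sketch of the discriminant step follows, fitting a linear discriminant function to the four sensory scores on synthetic stand-in data (not the consumer records); the grade centres and spreads are invented for illustration.

# Linear discriminant analysis: predict quality grade from four sensory scores.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
centres = {"unsatisfactory": 30, "good-every-day": 55,
           "better-than-every-day": 70, "premium": 85}   # assumed, for illustration

X, y = [], []
for grade, centre in centres.items():
    for _ in range(200):
        X.append(rng.normal(centre, 8.0, size=4))   # tenderness, juiciness, flavour, overall
        y.append(grade)

lda = LinearDiscriminantAnalysis().fit(np.array(X), y)
print("training accuracy:", round(lda.score(np.array(X), y), 2))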
In 1985 we began a search for OH/IR objects in the Magellanic Clouds. The first detection was reported by Wood, Bessell & Whiteoak (1986). Subsequent searches have yielded several of these objects and other highly-evolved stars obscured by thick circumstellar shells.
The 1612-MHz OH observations were made using the Parkes 64-m radio telescope. Most of the observations utilized a dual-channel cryogenic receiver providing a system temperature of around 38 K on cold sky. The OH spectra were obtained with the Parkes digital correlator split into 512-channel segments. Bandwidths of 2 MHz provided a resolution of 7.8 kHz (equivalent to 1.5 km s−1 in radial velocity) after Hanning smoothing. The mode of observation has been described by Whiteoak and Gardner (1976). Typically, an integration period of 60 minutes was used; this yielded a detection limit (3σ) of around 50 mJy for an OH feature. Detected emission was reobserved with a 1-MHz bandwidth. A search was also made for 1665-MHz OH emission.
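The quoted resolution figures can be checked with a few lines of arithmetic, taking Hanning smoothing to double the effective channel width:

# 2 MHz over 512 channels, Hanning-smoothed resolution, and the velocity equivalent at 1612 MHz.
C_KM_S = 299792.458
bandwidth_hz, n_channels, line_freq_hz = 2.0e6, 512, 1.612e9

channel_width_hz = bandwidth_hz / n_channels            # ~3.9 kHz per channel
resolution_hz = 2.0 * channel_width_hz                  # ~7.8 kHz after Hanning smoothing
velocity_res_km_s = C_KM_S * resolution_hz / line_freq_hz
print(round(resolution_hz), "Hz ->", round(velocity_res_km_s, 2), "km/s")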
The luminosity function of galaxies is central to many problems in cosmology, including the interpretation of faint number counts. The near-infrared provides several advantages over the optical for statistical studies of galaxies, including smooth and well-understood K-corrections and expected luminosity evolution. The K-band is dominated by near-solar-mass stars, which make up the bulk of the galaxy. The absolute K magnitude is a measure of the visible mass in a galaxy, and thus the K-band luminosity function is an observational counterpart of the mass function of galaxies.
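The passage gives no functional form; the standard Schechter parameterisation, shown below with placeholder K-band parameter values, is the usual choice for such luminosity functions and is included only as an illustration.

# Schechter luminosity function in absolute-magnitude form (placeholder parameters).
import numpy as np

def schechter_mag(M, M_star, phi_star, alpha):
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

M_K = np.arange(-26.0, -20.0, 0.5)
print(schechter_mag(M_K, M_star=-23.4, phi_star=1.1e-2, alpha=-1.0))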