With the emergence of modern techniques of environmental analysis and the widespread availability of accessible tools and quantitative data, the question of environmental determinism is once again on the agenda. This paper is theoretical in character: it attempts to understand and evaluate environmental determinism for the benefit of drawing up research designs. We reach three main conclusions: (1) in a typical research design, studies seek to detect simultaneous shifts in the environmental and archaeological records, variously positing the former to have influenced, triggered or caused the latter; (2) the question of determinism involves uncertainty about the justification for this research design, particularly with regard to biologism and the concept of environmental thresholds on the one hand, and the externality of the drivers of transformation in human groups and societies on the other; (3) adapting the concepts of the social production of vulnerability and the social basis of hazards from anthropology may help to clarify the research design choices at hand.
The common prawn (Palaemon serratus) supports a small-scale but economically important seasonal static-gear fishery in Cardigan Bay, Wales (UK). Owing to a lack of statutory obligation and scientific evidence, the fishery has operated to date without any harvest-control rules that afford protection from overfishing. In response to fluctuations in landings, and in pursuit of increased economic returns for their catch, some members of the fishing industry have adopted a size-selective harvesting regime, which we evaluate here using baseline data. Monthly samples were obtained from fishers operating out of five ports between October 2013 and May 2015 (N = 4233). All prawns were sexed, weighed and measured, and the fecundity of females was estimated for 273 (44%) individuals. Peak spawning occurred during the spring, and females were estimated to undergo a ‘puberty moult’ at a carapace length (CL) of 7.7 mm, whilst functional maturity was estimated at a CL of 9.9 mm. The sampled population exhibited sexual dimorphism, with females attaining a greater size than males. The current harvesting regime results in a sex bias in landings, as even large mature males remain below the size of recruitment to the fishery, unlike the large mature females. The temporal trend in sex ratio indicates a continual decrease in the catchability of female prawns through the fishing season; however, whether this is caused by depletion via fishing mortality or by migratory behaviour is yet to be resolved. Here, we provide a comprehensive baseline evaluation of population biology and discuss the implications of our findings for fisheries management.
Optimal control problems of stochastic switching type appear frequently when making decisions under uncertainty and are notoriously challenging from a computational viewpoint. Although numerous approaches have been suggested in the literature to tackle them, typical real-world applications are inherently high dimensional and usually drive common algorithms to their computational limits. Furthermore, even when numerical approximations of the optimal strategy are obtained, practitioners must apply time-consuming and unreliable Monte Carlo simulations to assess their quality. In this paper, we show how one can overcome both difficulties for a specific class of discrete-time stochastic control problems. A simple and efficient algorithm that yields approximate numerical solutions is presented, and methods to perform diagnostics are provided.
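To make the setting concrete, the following is a minimal backward-induction sketch of a discrete-time stochastic switching problem (a run/idle decision with a switching cost over a discretized price chain). It is purely illustrative: the price states, transition matrix, and costs are hypothetical, and this is not the algorithm proposed in the paper.

```python
# Illustrative backward induction for a two-mode stochastic switching problem.
import numpy as np

T = 50                                          # number of decision stages
prices = np.array([10.0, 20.0, 30.0])           # discretized price states
P = np.array([[0.6, 0.3, 0.1],                  # price transition matrix
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
modes = [0, 1]                                  # 0 = idle, 1 = running
run_cost = 15.0                                 # cost per stage while running
switch_cost = 5.0                               # cost to change mode

# V[t, i, m] = value at stage t, price state i, current mode m
V = np.zeros((T + 1, len(prices), len(modes)))

for t in range(T - 1, -1, -1):
    cont = P @ V[t + 1]                         # expected continuation values
    for i, s in enumerate(prices):
        for m in modes:
            V[t, i, m] = max(
                (s - run_cost) * m_next          # reward only while running
                - switch_cost * (m_next != m)    # pay to change mode
                + cont[i, m_next]
                for m_next in modes
            )

print("value at t=0, mid price, idle:", V[0, 1, 0])
```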
Neuroimaging measures of behavioral and emotional dysregulation can yield biomarkers denoting developmental trajectories of psychiatric pathology in youth. We aimed to identify functional abnormalities in emotion regulation (ER) neural circuitry associated with different behavioral and emotional dysregulation trajectories using latent class growth analysis (LCGA) and neuroimaging.
A total of 61 youth (9–17 years) from the Longitudinal Assessment of Manic Symptoms study, and 24 healthy control youth, completed an emotional face n-back ER task during scanning. LCGA was performed on 12 biannual reports completed over 5 years of the Parent General Behavior Inventory 10-Item Mania Scale (PGBI-10M), a parental report of the child's difficulty regulating positive mood and energy.
There were two latent classes of PGBI-10M trajectories: a high and decreasing (HighD; n = 22) and a low and decreasing (LowD; n = 39) course of behavioral and emotional dysregulation over the 12 time points. Task accuracy was >89% in all youth, but higher in healthy controls and LowD than in HighD (p < 0.001). During ER, LowD had greater activity than HighD and healthy controls in the dorsolateral prefrontal cortex, a key ER region, and greater functional connectivity than HighD between the amygdala and ventrolateral prefrontal cortex (p's < 0.001, corrected).
Patterns of function in lateral prefrontal cortical–amygdala circuitry in youth denote the severity of the developmental trajectory of behavioral and emotional dysregulation over time, and may be biological targets to guide differential treatment and novel treatment development for different levels of behavioral and emotional dysregulation in youth.
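As a rough illustration of the trajectory-clustering idea (not the authors' pipeline, which applied LCGA to the actual PGBI-10M reports), one can summarize each subject's 12 scores by a fitted intercept and slope and cluster those growth parameters with a two-component Gaussian mixture. All data below are simulated.

```python
# Illustrative analogue of latent class growth analysis (LCGA); real LCGA
# (e.g., Mplus or R's lcmm) models the trajectories directly.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_waves = 12
t = np.arange(n_waves)                          # biannual assessments, ~5 years

# Simulate two hypothetical classes: high-decreasing vs low-decreasing
high = 25 - 1.2 * t + rng.normal(0, 2, (22, n_waves))
low = 8 - 0.4 * t + rng.normal(0, 2, (39, n_waves))
scores = np.vstack([high, low])

# Per-subject growth parameters (slope, intercept) via least squares
growth = np.array([np.polyfit(t, y, 1) for y in scores])

gmm = GaussianMixture(n_components=2, random_state=0).fit(growth)
labels = gmm.predict(growth)
print("class sizes:", np.bincount(labels))
```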
We utilized the new high-order (250–378 mode) Magellan Adaptive Optics system (MagAO) to obtain very high-resolution science in the visible with MagAO's VisAO CCD camera. In the good median seeing conditions of Magellan (0.5–0.7″) we find MagAO delivers individual short-exposure images with as good as 19 mas optical resolution. Due to telescope vibrations, long-exposure (60 s) r′ (0.63 μm) images are slightly coarser at FWHM = 23–29 mas (Strehl ~ 28%) with bright (R < 9 mag) guide stars. These are the highest-resolution filled-aperture images published to date. Images of the young (~1 Myr) Orion Trapezium θ1 Ori A, B, and C cluster members were obtained with VisAO. In particular, the 32 mas binary θ1 Ori C1C2 was easily resolved in non-interferometric images for the first time. Relative positions of the bright Trapezium binary stars were measured with ~0.6–5 mas accuracy. In the second commissioning run we were able to correct 378 modes and achieved good contrasts (Strehl > 20%) on young transition disks at Hα. We discuss the contrasts achieved at Hα and the possibility of detecting low-mass (~1–5 MJup) planets (beyond 5 AU) with our new SAPPHIRES survey with MagAO at Hα.
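As a back-of-the-envelope check (not a calculation from the paper), the reported 19 mas short-exposure resolution can be compared with the theoretical diffraction limit of the 6.5 m Magellan Clay aperture at r′ (0.63 μm):

```python
# Diffraction limit of a 6.5 m aperture at 0.63 microns, in milliarcseconds.
import math

wavelength = 0.63e-6                 # r' band, metres
D = 6.5                              # Magellan Clay aperture, metres
rad_to_mas = 180 / math.pi * 3600 * 1000

print(f"lambda/D      = {wavelength / D * rad_to_mas:.1f} mas")        # ~20 mas
print(f"1.22 lambda/D = {1.22 * wavelength / D * rad_to_mas:.1f} mas") # ~24 mas
```

The reported short-exposure images thus sit essentially at the λ/D limit of the telescope.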
We present the first ground-based adaptive optics images of a silhouette disk. This disk, Orion 218-354, is seen in silhouette against the bright nebular background of Orion, and was resolved using the new Magellan Adaptive Secondary AO system and its VisAO camera in Simultaneous Differential Imaging (SDI) mode. PSF subtraction of Orion 218-354 reveals a disk ~1″ (400 AU) in radius, with the degree of absorption increasing steadily towards the center of the disk. Because the central star is unsaturated, these data probe inward to a much smaller radius than previous HST images. Our data present a different picture than previous observers had hypothesized: the disk is likely optically thin at Hα at least as far inward as ~20 AU. In addition to being among the first high-resolution optical AO images taken on a large telescope, these data reveal the power of SDI imaging to illuminate disk structure, and speak to a bright future for visible AO imaging. Analysis of the results described briefly here can be found in full detail in Follette et al. (2013).
We present preliminary results from two parallel programs to search for new substellar companions to nearby, young M stars and to characterize the atmospheres of known substellar companions of planetary mass and temperature. For the M-star survey, we are analyzing high angular resolution archival data on systems within 15 pc, complementing a subset with well-determined young ages based on measurements of several age indicators. The results include stellar and substellar companion candidates, which we are currently pursuing with follow-up second-epoch images. The characterization component of the project involves using LBT LMIRCam and MMT ARIES direct imaging and spectroscopy data to investigate the atmospheres of known young substellar companions with masses overlapping the planetary regime. These atmospheric studies will represent an analogous comparison to the atmospheres of young imaged planets, and provide a means to fundamentally test evolutionary models, enhancing our understanding of the overall substellar population.
MagAO is the newly commissioned adaptive optics (AO) instrument on the Magellan Clay telescope at Las Campanas Observatory, Chile. MagAO has two co-mounted science cameras: VisAO for visible-light direct and spectral-differential imaging, and Clio for near- to thermal-IR direct imaging, non-redundant-mask interferometry, and prism spectroscopy. We demonstrate MagAO's simultaneous visible and infrared AO performance via direct images of the exoplanet Beta Pictoris b. The planet was detected in 5 passbands from 0.9–5 μm. Here we show the infrared images; the visible observations are presented in Males et al. (2013). MagAO is the first AO system to offer good performance with extensive coverage across the O/IR spectrum and thus offers an unprecedented opportunity to study the spectral energy distributions of directly imaged extrasolar planetary atmospheres.
The magnetic method is based on variations in the magnetic field derived from lateral differences in the magnetization of the subsurface. As a result, an understanding of the magnetization of Earth materials, and the physical and geologic factors that control it, is essential in planning surveys as well as interpreting magnetic anomalies.
Magnetization consists of the vectorial addition of induced and remanent components. Induced magnetization depends on the magnetic susceptibility of the material and the magnitude and direction of the ambient magnetic field, while remanent magnetization reflects the past magnetic history of the material. This makes the prediction of the magnetization of Earth materials difficult in many geological situations. The problem is amplified because, unlike rock densities, which vary over a relatively narrow range, magnetizations commonly range over a factor of 10³ or more. The resulting uncertainty in estimating magnetization is made greater by the fact that it is controlled by a few minerals that occur only as accessory constituents in essentially all Earth materials. As a result, material types do not have diagnostic magnetic properties, but useful generalizations can be made based on an understanding of the nature of the constituent magnetic minerals and the thermal and magnetic history of a specific geologic formation. Measurements of magnetic susceptibility are generally made on samples using an induction balance, and remanent magnetization is determined by measuring the total effect on a magnetic sensor of rotating an oriented sample around three perpendicular axes.
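A minimal numerical sketch of the vector addition described above, with hypothetical susceptibility, ambient-field, and remanence values (the Koenigsberger ratio Q, the ratio of remanent to induced intensity, is a standard summary of their relative importance):

```python
# Total magnetization = induced + remanent components (SI units, A/m).
import numpy as np

chi = 0.05                                   # magnetic susceptibility (SI)
H = np.array([0.0, 20.0, 40.0])              # ambient field H (A/m), hypothetical
J_induced = chi * H                          # induced component, parallel to H

J_remanent = np.array([1.5, -0.5, 1.0])      # remanent component (A/m), set by
                                             # the rock's magnetic history
J_total = J_induced + J_remanent             # total magnetization (vector sum)

Q = np.linalg.norm(J_remanent) / np.linalg.norm(J_induced)  # Koenigsberger ratio
print("total magnetization:", J_total, "  Q =", round(Q, 2))
```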
The acquisition of magnetic data is relatively simple, rapid, and less complex than data acquisition in most other geophysical methods. Significant improvements continue to be made in magnetic instrumentation, facilitating accurate observation of the geomagnetic field on the Earth's surface, from a variety of airborne platforms, and from satellites orbiting the Earth, the Moon, and the planets of the solar system. Most observations measure the scalar total intensity of the field with alkali-vapor (resonance) magnetometers, which readily achieve a sensitivity of better than a nanotesla at rates of several observations per second from a moving platform. These measurements are supplemented for special purposes by measurements of gradients, vectors, and tensors. Vector and tensor measurements are made with flux-gate magnetometers and, increasingly, with highly sensitive superconducting quantum interference device (SQUID) magnetometers.
Surface or near-surface surveys are conducted on grids or parallel lines to map, at high resolution, the local near-surface magnetic anomalies associated with a variety of archaeological, engineering, and environmental problems; however, most magnetic surveys are conducted from a wide variety of airborne platforms. Helicopter surveys may place the sensor close to the surface for the highest possible resolution, using an outboard sensor in an aerodynamically stable housing towed at the end of a cable; most airborne surveys, in contrast, use inboard sensors, which require that the extraneous magnetic effects of the aircraft be compensated by passive and active systems.
As in the gravity method, magnetic measurements include effects from a wide variety of sources: terrain, natural and man-made surface features, and instrumental, geological, and planetary sources. These sources have a range of amplitudes and of periods (for temporal variations) or wavelengths (for spatial contributions) that can mask or at least distort the magnetic effects of the subsurface sources of interest in a survey. Accordingly, observed data are processed to eliminate, or at least minimize, these effects; the product of this processing is the magnetic anomaly. Fortunately, in most magnetic survey campaigns these extraneous variations distort the effects of subsurface sources considerably less than their counterparts do in the gravity method, so the requirements for auxiliary data and reduction procedures are much less stringent. The corrections for these effects are largely obtained empirically from measurements of changes in the temporal and spatial characteristics of magnetic fields.
Extraneous spatial variations directly perturb the distribution of subsurface effects over the Earth's surface, while extraneous temporal variations cause errors in the measurements. Processing in magnetic exploration removes the extraneous variations directly from the observations, rather than modeling the field at the observation site as in gravity data processing; the result, however, is the same: it is equivalent to the anomaly obtained by subtracting the theoretical field from the observations. For interpretational purposes, magnetic anomalies are commonly transformed into the equivalent anomaly that would be observed at the geomagnetic pole. This reduction-to-the-pole shifts the anomaly to a position directly over the source and increases the anomaly's symmetry, simplifying magnetic interpretation.
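For illustration, a minimal wavenumber-domain reduction-to-the-pole sketch follows, using the standard FFT filter and assuming the magnetization is induced (collinear with the ambient field); the grid geometry and field direction in the usage line are placeholders, not tied to any survey discussed here.

```python
# Wavenumber-domain reduction to the pole of a gridded total-field anomaly.
import numpy as np

def reduce_to_pole(anomaly, dx, dy, inc_deg, dec_deg):
    """RTP a total-field anomaly grid, assuming induced magnetization."""
    inc, dec = np.radians(inc_deg), np.radians(dec_deg)
    fe = np.cos(inc) * np.sin(dec)               # field unit vector: east,
    fn = np.cos(inc) * np.cos(dec)               # north,
    fd = np.sin(inc)                             # and down components

    ny, nx = anomaly.shape                       # rows = north, cols = east
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)      # east wavenumbers
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)      # north wavenumbers
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    k[0, 0] = 1e-10                              # avoid division by zero at DC

    theta = fd + 1j * (fe * KX + fn * KY) / k    # direction factor
    # magnetization and field collinear, so the filter is 1 / theta^2
    return np.real(np.fft.ifft2(np.fft.fft2(anomaly) / (theta * theta)))

# e.g. rtp = reduce_to_pole(grid, dx=100.0, dy=100.0, inc_deg=60.0, dec_deg=2.0)
```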
The gravity method measures horizontal spatial changes in the gravitational field that result from mass differentials which in turn are controlled by the volume and contrasting densities of anomalous masses. As a result, an understanding of the density of Earth materials is essential in planning surveys as well as in interpreting gravity anomalies.
Density is a function of the mineral composition of Earth materials as well as their void volume and the material filling the voids. As a result, densities of rocks can be estimated by considering their origin and the processes that subsequently have acted upon them. However, it is advisable to measure densities either directly or indirectly wherever possible, preferably in situ, because of the difficulties in obtaining samples that are representative of the actual geological setting. In situ measurements may be obtained from the relationship of gravity anomalies to topography or determined indirectly from correlative measurements such as seismic wave velocity or attenuation of gamma rays.
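A small worked example of the void-volume control on density noted above: bulk density as the volume-weighted average of grain and pore-fluid densities, with illustrative values for a quartz sandstone.

```python
# Bulk density from grain density, pore-fluid density, and porosity.
def bulk_density(grain, fluid, porosity):
    """rho_bulk = (1 - phi) * rho_grain + phi * rho_fluid (g/cm^3)."""
    return (1 - porosity) * grain + porosity * fluid

# Quartz sandstone (grain ~2.65 g/cm^3) at 20% porosity:
print(bulk_density(2.65, 1.00, 0.20))   # water-saturated: 2.32 g/cm^3
print(bulk_density(2.65, 0.00, 0.20))   # dry pores:       2.12 g/cm^3
```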
Knowledge of germane Earth material densities within a study region is required for effective planning and implementation of gravity surveys. Accordingly, as an introduction to the gravity method, the sections below describe the fundamentals of this property and the controls on it, together with methods of determining density. Representative values are presented of the density of a variety of Earth materials including igneous, metamorphic, and sedimentary rocks, sediments, and soils to aid the explorationist in the use of the gravity method.
Like the gravity method described in Chapter 3, the magnetic method is a potential field method. Magnetic potential is defined by the amount of work done in moving a point dipole from one position to another in the presence of a magnetic force field acting upon it. The rate of change of the potential in a given direction is the component of the force in that direction, which is related to the magnetic field strength measured in the magnetic method. As in the gravity method, Laplace's and Poisson's equations, Gauss' law, and Poisson's theorem are useful in understanding the properties of the magnetic field and in its analysis.
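For concreteness, the standard SI expressions for the simplest case, a point dipole of moment m, illustrate these properties: the potential falls off as 1/r², the field is its negative gradient, and the potential satisfies Laplace's equation away from the source.

\[
V(\mathbf{r}) = \frac{\mu_0}{4\pi}\,\frac{\mathbf{m}\cdot\hat{\mathbf{r}}}{r^{2}},
\qquad
\mathbf{B} = -\nabla V = \frac{\mu_0}{4\pi}\,\frac{3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}}-\mathbf{m}}{r^{3}},
\qquad
\nabla^{2}V = 0 \quad (r \neq 0).
\]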
The magnetic effects of an arbitrary distribution of magnetization are evaluated by the summed effects of the point dipoles that fill out the volume. Magnetic effects of idealized bodies with simple symmetric shapes can be modeled with closed-form analytical expressions. For a general source with irregular shape, however, the magnetic effect must be numerically integrated by filling out the volume with idealized sources and summing the idealized source effects at the observation point. The numerical integration can be carried out with least-squares accuracy by distributing the idealized sources throughout the irregular body according to its Gauss-Legendre quadrature decomposition. Thus, the magnetic effects of all conceivable distributions of magnetization can be modeled to investigate the significance of magnetic anomalies.
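The following is a minimal sketch of this numerical integration, assuming a hypothetical rectangular prism with uniform vertical magnetization and Gauss-Legendre nodes supplied by NumPy; it illustrates the summation principle rather than the chapter's formulation.

```python
# Magnetic effect of an irregular body by summing point-dipole effects
# at Gauss-Legendre quadrature nodes filling the source volume.
import numpy as np

def dipole_bz(obs, src, moment):
    """Vertical field of a point dipole with vertical moment (SI units)."""
    r = obs - src
    d = np.linalg.norm(r)
    mu0 = 4e-7 * np.pi
    return mu0 / (4 * np.pi) * (3 * r[2] ** 2 / d ** 5 - 1 / d ** 3) * moment

def prism_bz(obs, xlim, ylim, zlim, magnetization, n=6):
    """Sum dipole effects at Gauss-Legendre nodes filling a prism."""
    nodes, weights = np.polynomial.legendre.leggauss(n)

    def scaled(lim):                   # map nodes/weights from [-1, 1] to lim
        lo, hi = lim
        return 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo), 0.5 * (hi - lo) * weights

    xs, wx = scaled(xlim)
    ys, wy = scaled(ylim)
    zs, wz = scaled(zlim)
    total = 0.0
    for xi, wi in zip(xs, wx):
        for yj, wj in zip(ys, wy):
            for zk, wk in zip(zs, wz):
                # each node carries moment = magnetization x quadrature volume
                src = np.array([xi, yj, zk])
                total += dipole_bz(obs, src, magnetization * wi * wj * wk)
    return total

# Vertical anomaly 100 m above the centre of a 200 m cube magnetized at 1 A/m
obs = np.array([0.0, 0.0, -100.0])     # z positive down, so -100 m is above
print(prism_bz(obs, (-100, 100), (-100, 100), (0, 200), 1.0))
```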
The magnetic method is the oldest and one of the most widely used geophysical techniques for exploring the Earth's subsurface. It is a relatively easy and inexpensive tool to apply to a wide variety of subsurface exploration problems involving horizontal magnetic property variations from near the base of the Earth's crust to within the uppermost meter of soil. These variations cause anomalies in the Earth's normal magnetic field that are mapped by the magnetic method.
The geomagnetic field is caused by electrical currents associated with convective movements in the electrically conducting outer core of the Earth. Additional significant components of the field derive from variations in the magnetic properties of the lithosphere and from electrical currents in the ionosphere as well as within the Earth. The main field roughly coincides with the Earth's axis of rotation and is dipolar in nature. The force of attraction between opposite magnetic poles, or repulsion between like poles, is proportional to the product of the strengths of the poles and inversely proportional to the square of the distance between them, in a manner analogous to the gravitational attraction between masses. For most exploration purposes the magnetic force per unit pole, or magnetic field strength, is measured either in a directional mode, where the sensor is spatially oriented, or in a non-directional mode, where the sensor obtains total-field measurements that are presumed to be collinear with the Earth's magnetic field. Modern magnetic measurements are obtained electronically by measuring the precession frequency of atomic particles, which is proportional to the ambient magnetic field.
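The pole-force law referred to above takes the same inverse-square form as Newtonian gravitation (standard SI form, with pole strengths p₁ and p₂):

\[
F = \frac{\mu_0}{4\pi}\,\frac{p_1 p_2}{r^{2}}
\qquad\text{compared with}\qquad
F = G\,\frac{m_1 m_2}{r^{2}} .
\]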
Magnetic interpretation, like gravity interpretation, operates at several levels of complexity. It can range from simple identification and location of anomalous magnetic bodies in the subsurface to three-dimensional modeling leading to complete characterization of anomaly sources. However, there are numerous differences in the interpretation of these two potential fields. For example, magnetic anomalies in most investigations are primarily caused by contrasting crystalline rocks containing variable amounts of the trace mineral magnetite. This limits the range of possible anomaly sources that need to be considered, although, as in all potential field interpretation, no interpretation is unique.
Magnetic interpretation is complicated by the dipolar nature of the magnetic field, which produces both attractive and repulsive effects from an anomaly source, and by the large number of variables that determine the character of a magnetic anomaly. Anomaly characteristics vary significantly with the location and orientation of the source in the geomagnetic field and may be further complicated by the effects of remanent magnetization and internal demagnetization. Nonetheless, various interpretational techniques have been developed for dealing with these complications and have been shown to be successful in identifying and characterizing magnetic sources. Magnetic anomalies are particularly sensitive to their depth of origin, so special emphasis is placed on depth determination methods. Although these methods generally involve simplifying assumptions in their theoretical formulations, with care errors commonly can be limited to roughly 10%.
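As a simple illustration of depth determination from anomaly shape (one classical rule of thumb, not necessarily one of the methods treated here): for a dipole-like spherical source with vertical magnetization observed at the magnetic pole, the depth to center is roughly twice the half-width at half-maximum of the vertical-field anomaly. The sketch below checks this on a synthetic profile with a hypothetical depth.

```python
# Depth from anomaly half-width for a dipole-like source (synthetic check).
import numpy as np

z_true = 150.0                                   # source depth (m), hypothetical
x = np.linspace(-1000, 1000, 4001)               # profile coordinates (m)
bz = (2 * z_true**2 - x**2) / (x**2 + z_true**2) ** 2.5   # dipole Bz shape

half = bz >= bz.max() / 2                        # points above half-maximum
x_half = 0.5 * (x[half].max() - x[half].min())   # half-width at half-maximum
print(f"estimated depth ~ {2 * x_half:.0f} m (true {z_true:.0f} m)")
```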
Investigations of terrestrial gravity and magnetic fields are among the oldest methods for determining the nature and processes of the Earth. Despite the development of an increasing number of complementary investigative methods, some of which have better subsurface resolution, the gravity and magnetic methods continue to play an important, often decisive, role in a wide variety of terrestrial investigations. In contrast to the several available texts on specific topics in gravity and magnetics, this book provides an overall, modern resource on the principles, practices, and applications of both the gravity and magnetic methods for exploring the Earth. Although these aspects of the gravity and magnetic methods are well grounded in widely described and accepted principles, they are continually undergoing practical improvement and expanded understanding, largely as a result of enhanced technology for acquiring, processing, and interpreting data made possible by increasing computational power and new techniques. Special emphasis in gravity and magnetic exploration is being placed on high-sensitivity mapping of anomalies at the extremities of their spectra, at both shorter and longer wavelengths; on increasing the vertical and horizontal resolution of individual anomaly sources; and on investigating temporal variations in the gravity and magnetic fields of the Earth, which permit new insights into Earth processes. It is hoped that this book, which sets a benchmark in our current knowledge of gravity and magnetic exploration, will encourage further developments in these methods and their applications.
Geophysics, the science of the physics of the Earth from its magnetosphere to the deep interior, is useful in characterizing the subsurface Earth. Solid-Earth geophysics employs techniques involving the measurement of force fields to study subsurface features and the processes that act upon them. Thus, geophysical studies serve a broad variety of geologic, natural resource, engineering, and environmental purposes. Gravity and magnetic methods, which measure very small spatial and temporal changes in the terrestrial gravity and magnetic force fields, have a wide range of uses from submeter to global scales. Although these methods in most cases cannot match the resolution and precision of direct observations, they are rapid, cost-effective, and non-invasive means of studying the inaccessible Earth and of optimally locating drill holes for direct studies and other, higher-resolution remote-sensing investigations.
The application of gravity and magnetic methods generally involves a common approach consisting of planning, data acquisition, data processing, interpretation, and reporting phases. During the planning phase the appropriate method(s) are selected to meet the objective of the study, and procedures for data acquisition, processing, and interpretation are established. These decisions are reached on the basis of experience, model studies, or test surveys. Special care is taken to determine an error or noise budget for the survey and to consider the propagation of errors, both random and systematic, through the data acquisition and processing chain. Selection of the distribution of observations in the survey region takes into account the objective of the study; the geologic, topographic, vegetative, and cultural features of the area; access across the region; and cost.