Donjek Glacier has an unusually short and regular surge cycle, with eight surges identified from aerial photographs and satellite imagery since 1935, at a ~12-year repeat interval with a ~2-year active phase. Recent surges occurred during a period of long-term negative mass balance and cumulative terminus retreat of 2.5 km since 1874. In contrast to previous work, we find that the constriction where the valley narrows and the bedrock lithology changes, 21 km from the terminus, represents the upper limit of surging, with negligible surface speed or elevation change up-glacier from this location. This places the entire surge-type portion of the glacier in the ablation zone. The constriction geometry does not act as the dynamic balance line, which we consistently find at 8 km from the glacier terminus. During the 2012–2014 surge event, the average lowering rate in the lowest 21 km of the glacier was 9.6 m a⁻¹, while during quiescence it was 1.0 m a⁻¹. Due to reservoir zone refilling, the ablation zone has a positive geodetic balance in the years immediately following a surge event. An active surge phase can result in a strongly negative geodetic mass balance over the surge-type portion of the glacier.
The Astrophysics Telescope for Large Area Spectroscopy (ATLAS) Probe is a concept for a National Aeronautics and Space Administration (NASA) probe-class space mission that will achieve ground-breaking science in the fields of galaxy evolution, cosmology, the Milky Way, and the Solar System. It is the follow-up space mission to the Wide Field Infrared Survey Telescope (WFIRST), boosting its scientific return by obtaining deep 1–4 μm slit spectroscopy for ∼70% of all galaxies imaged by the ∼2 000 deg² WFIRST High Latitude Survey at z > 0.5. ATLAS Probe will measure accurate and precise redshifts for ∼200 M galaxies out to z = 7, and deliver spectra that enable a wide range of diagnostic studies of the physical properties of galaxies over most of cosmic history. ATLAS Probe and WFIRST together will produce a 3D map of the Universe over 2 000 deg², the definitive data sets for studying galaxy evolution, probing dark matter, dark energy and modifications of General Relativity, and quantifying the 3D structure and stellar content of the Milky Way. ATLAS Probe science spans four broad categories: (1) revolutionising galaxy evolution studies by tracing the relation between galaxies and dark matter from galaxy groups to cosmic voids and filaments, from the epoch of reionisation through the peak era of galaxy assembly; (2) opening a new window into the dark Universe by weighing the dark matter filaments using 3D weak lensing with spectroscopic redshifts, and obtaining definitive measurements of dark energy and modifications of General Relativity using galaxy clustering; (3) probing the Milky Way's dust-enshrouded regions, reaching the far side of our Galaxy; and (4) exploring the formation history of the outer Solar System by characterising Kuiper Belt Objects.
The Astrophysics Telescope for Large Area Spectroscopy (ATLAS) Probe is a 1.5 m telescope with a field of view of 0.4 deg², using digital micro-mirror devices (DMDs) as slit selectors. It has a spectroscopic resolution of R = 1 000 and a wavelength range of 1–4 μm. The lack of wide-field slit spectroscopy from space is the obvious gap in current and planned space missions; ATLAS Probe fills this gap with an unprecedented spectroscopic capability based on DMDs (with an estimated spectroscopic multiplex factor greater than 5 000). The mission is designed to fit within the NASA probe-class cost envelope: it has a single instrument, a telescope aperture that allows for a lighter launch vehicle, and mature technology (we have identified a path for DMDs to reach Technology Readiness Level 6 within 2 yr). ATLAS Probe will lead to transformative science over the entire range of astrophysics: from galaxy evolution to the dark Universe, from Solar System objects to the dusty regions of the Milky Way.
A new deep level transient spectroscopy (DLTS) technique, called half-width at variable intensity analysis, is described. The method uses the width and normalized intensity of a DLTS signal to determine the activation energy and capture cross section of the trap that generated the signal via a parameter, k0. This parameter relates the carrier emission rates giving rise to the differential capacitance signal associated with a given trap at two different temperatures: the temperature at which the maximum differential capacitance is detected, and an arbitrary temperature at which some nonzero differential capacitance signal is detected. The extracted activation energy of the detected trap center is then used, together with the position of the peak maximum, to extract the capture cross section of the trap center.
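The abstract does not give the working equations, but under the standard assumption that the trap's thermal emission rate follows e(T) ∝ σ T² exp(−Ea/kT), the ratio k0 between the emission rates at the two temperatures determines Ea. A minimal sketch of that inversion (the function names and the prefactor are illustrative, not taken from the paper):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def emission_rate(sigma_cm2, ea_ev, temp_k, prefactor=1.0e21):
    """Thermal emission rate e(T) = prefactor * sigma * T^2 * exp(-Ea/kT).
    The T^2 factor combines the thermal velocity (T^0.5) and the effective
    density of states (T^1.5); `prefactor` is an illustrative constant."""
    return prefactor * sigma_cm2 * temp_k**2 * math.exp(-ea_ev / (K_B * temp_k))

def activation_energy(k0, t_max_k, t_arb_k):
    """Invert k0 = e(t_arb)/e(t_max) for the trap activation energy Ea (eV).
    The capture cross section cancels in the ratio, so k0 and the two
    temperatures suffice."""
    num = math.log(k0) - 2.0 * math.log(t_arb_k / t_max_k)
    den = (1.0 / t_max_k - 1.0 / t_arb_k) / K_B
    return num / den
```

Once Ea is known, inverting emission_rate at the peak temperature (where the rate window fixes e) yields σ, mirroring the two-step extraction described above.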
The realization of an electrically driven organic solid-state laser is an ambitious but highly desirable goal, and many obstacles must be overcome before a working device can be realized. One of the most challenging tasks is the incorporation of intracavity metal contacts which, on the one hand, do not substantially degrade the optical properties of the whole device and, on the other hand, ensure sufficient current density to reach lasing. In this paper, we present different contact compositions aimed at realizing high-quality intracavity metal contacts. We build a top contact consisting of 0.5 nm of aluminum and 4 nm of silver, which has a conductivity of 1.9 × 10⁷ (Ω m)⁻¹ and does not increase the optical lasing threshold of an organic microcavity. To better understand how charge carriers influence device performance, we performed a set of measurements in which a hybrid OLED–microcavity (OLED–MC) device was excited optically and electrically at the same time. These experiments suggest that the charge carriers do not degrade electrical performance, at least for current densities in the range of A/cm². Moreover, our observations suggest that, in some cases, simultaneous optical excitation can contribute to more efficient electrical pumping of the OLED–MC device.
Climate Change Adaptation Strategies and Sustainability of Philippine Agriculture
Majah-Leah V. Ravago, Research Faculty at the Department of Economics, Ateneo de Manila University, and previously Assistant Professor at the School of Economics, University of the Philippines, Diliman;
James A. Roumasset, Professor (emeritus) at the Department of Economics, University of Hawaii, USA;
Karl Robert L. Jandoc, Assistant Professor at the School of Economics, University of the Philippines, Diliman.
The Philippines is inherently vulnerable to adverse natural events of extreme intensity purely because of its geographic location. The warm western Pacific waters, normally around 28°C, contribute to the formation of typhoons, 18–20 of which reach the Philippines each year on average. Cagayan Valley (Region 2), Central Luzon (Region 3), and the Cordillera Administrative Region (CAR) are particularly vulnerable, averaging about seven to nine typhoons per year (Figure 8.1). Flooding occurs in a number of regions, with the Western Visayas registering the highest incidence. The Philippines also rests on the Pacific “Ring of Fire”, where most of the earth's volcanic eruptions and earthquakes occur. Geophysical events, such as earthquakes and tsunamis, occur with regularity, albeit at long intervals. The Bicol Region, home of the active Mayon Volcano, experienced the greatest number of volcanic eruptions during 1991–2006. Earthquakes of moderate and high magnitude occur most frequently in the Central Visayas and Bicol regions (Figure 8.1).
Climate projections for the Philippines are similar to those in many other parts of the world (Chapters 2 and 4, this volume). Using the Intergovernmental Panel on Climate Change (IPCC) “A1B scenario” most relevant to the Philippines, Cinco et al. (2013) projected that mean yearly temperatures will rise between 1.9°C and 2.2°C by 2050, over baseline levels of between 25.5°C and 27.6°C (derived as averages of minimum and maximum temperatures for the 1971–2000 period). Increasing rainfall concentration and mean rainfall levels indicate that the wet seasons of June–August and September–November will become wetter in Luzon and the Visayas towards 2050, yet higher rainfall concentrations combined with higher temperatures are likely to increase moisture stress in the dry season. In particular, the frequency of damaging storms is expected to increase. Although disputed by some (Cruz et al. 2007), evidence also suggests that the frequency of droughts will increase (Miyan 2015). One implication of these changes is that farmers’ experience of the frequency, duration, strength, and timing of rainfall and the frequency of droughts will be less reliable than previously; hence, the accuracy of their subjective decision-making processes will decline, and their level of risk will rise. Past experience will become — and is already becoming — less useful as a predictor of future experience. The bottom line is that the risk and uncertainty facing farmers are increasing.
This chapter deals with promoting the common good through better energy, resource and environmental policies, as well as improved management of natural disaster risks, including climate change. Increasing gross domestic product (GDP) will be insufficient to meet the aspirations of the Philippine people for higher levels of living, inasmuch as GDP does not measure welfare. Largely because GDP omits the degradation of the environment and the depletion of natural resources, we begin with a discussion of green accounting: the method of extending national income accounting to include these elements.
As we discuss in the second part, comprehensive national income accounting can be further extended to include natural disasters and other shocks to the ecological–economic system. Even policy distortions can be accounted for by including them as constraints on the system. Thus, environmental resource conservation, disaster preparedness and policy reform all become potential sources of welfare growth. The final section deals with the mission of sustainable development, particularly how the Sustainable Development Goals (SDGs) relate to the mission of improving the welfare of Filipinos.
Increasing Levels of Living in the Face of Environmental Degradation and Resource Depletion
Stewardship of natural resources and the environment should not be treated as a separate objective from management of the economy (World Commission on Environment and Development 1987). The fundamental premise of sustainable income and green accounting, which have a long history in the Philippines and other countries, is that nature and the economy are part of the same system (the environomy) as shown in Figure 6.1. And one system requires one unifying measure of performance.
To convert the most common indicator of the size of an economy, GDP, into a measure of national well-being, several adjustments must be made. It is well known that GDP overestimates public welfare by failing to deduct depreciation — that portion of investment that simply replaces capital which has worn out or become obsolete. Deducting capital depreciation from GDP yields net domestic product (NDP). And since income is a better measure of welfare than production, we need to subtract the income earned in the Philippines by foreigners, add income earned by Philippine citizens abroad, and add remittances to the Philippines by non-citizens.
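The chain of adjustments above is simple arithmetic; as a worked illustration (the variable names and figures are hypothetical, not Philippine data):

```python
def net_domestic_product(gdp, depreciation):
    """NDP = GDP minus depreciation, i.e. the portion of investment that
    merely replaces worn-out or obsolete capital."""
    return gdp - depreciation

def national_income(gdp, depreciation, foreigners_income_domestic,
                    citizens_income_abroad, remittances_in):
    """Convert NDP into an income measure: subtract income earned
    domestically by foreigners, add income citizens earn abroad, and add
    inbound remittances."""
    return (net_domestic_product(gdp, depreciation)
            - foreigners_income_domestic
            + citizens_income_abroad
            + remittances_in)
```

For example, with GDP of 100, depreciation of 10, foreigners' domestic income of 5, citizens' income earned abroad of 8, and inbound remittances of 3, NDP is 90 and the income measure comes to 96.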
Population-based registries report 95% 5-year survival for children undergoing surgery for congenital heart disease (CHD). This study investigated paediatric cardiac surgical outcomes in the Australian indigenous population.
All children who underwent cardiac surgery between May 2008 and August 2014 were studied. Demographic information including socio-economic status, diagnoses and co-morbidities, and treatment and outcome data were collected at time of surgery and at last follow-up.
A total of 1528 children with a mean age of 3.4±4.6 years were studied. Among them, 123 (8.1%) children were identified as indigenous, and 52.7% (62) of indigenous patients were in the lowest third of the socio-economic index compared with 28.2% (456) of non-indigenous patients (p≤0.001). The indigenous sample had a significantly higher Comprehensive Aristotle Complexity score (indigenous 9.4±4.2 versus non-indigenous 8.7±3.9, p=0.04). The probability of having long-term follow-up did not differ between groups (indigenous 93.8% versus non-indigenous 95.6%, p=0.17). No difference was noted in 30-day mortality (indigenous 3.2% versus non-indigenous 1.4%, p=0.13). The 6-year survival for the entire cohort was 95.9%. Cox survival analysis demonstrated higher 6-year mortality in the indigenous group – indigenous 8.1% versus non-indigenous 5.0%; hazard ratio (HR)=2.1; 95% confidence interval (CI): 1.1, 4.2; p=0.03. Freedom from surgical re-intervention was 79% and was not significantly associated with indigenous status (HR=1.4; 95% CI: 0.9, 1.9; p=0.11). When long-term survival was adjusted for the Comprehensive Aristotle Complexity score, no difference in outcomes between the populations was demonstrated (HR=1.6; 95% CI: 0.8, 3.2; p=0.19).
The indigenous population experienced higher late mortality. This apparent relationship is explained by increased patient complexity, which may reflect negative social and environmental factors.
An adequate theory of partial computable functions should provide a basis for defining computational complexity measures and should justify the principle of computational induction for reasoning about programs on the basis of their recursive calls. There is no practical account of these notions in type theory, and consequently such concepts are not available in applications of type theory where they are greatly needed. It is also not clear how to provide a practical and adequate account in programming logics based on set theory.
This paper provides a practical theory supporting all these concepts in the setting of constructive type theories. We first introduce an extensional theory of partial computable functions in type theory. We then add support for intensional reasoning about programs by explicitly reflecting the essential properties of the underlying computation system. We use the resulting intensional reasoning tools to justify computational induction and to define computational complexity classes. Complexity classes take the form of complexity-constrained function types. These function types are also used in conjunction with the propositions-as-types principle to define a resource-bounded logic in which proofs of existence can guarantee feasibility of construction.
Introduction
Over the past two decades, type theory has become the formalism of choice to support programming, verification and the logical foundations of computer science. The language of types underlies modern programming languages like Java and ML, and the theory of types drives significant efforts in compilation [29, 50, 36, 39, 42, 6, 47] and semantics [7, 23, 16]. Theorem proving systems based on type theory have been used for the verification of both hardware and software, and have also been very widely used for the formalization of mathematics [8, 10, 26, 24, 41, 46].
One of the major reasons type theory has enjoyed such wide successes is that it is a natural high-level language for computational mathematics and programming, a point that Sol Feferman has effectively made over the years [20, 22]. However, this advantage can sometimes pose a problem because the needs of mathematics and programming can diverge. In mathematics, equality is extensional, where only an object's value is significant. That is, if the result of f(a) is b then f(a) = b, and functions f and g are equal (in A → B, the space of functions from A to B) exactly when f(a) = g(a) for every a in A.
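The extensionality principle stated here can be written down directly in a proof assistant. A minimal Lean 4 sketch (the theorem name is ours; `funext` is Lean's built-in function extensionality):

```lean
-- Function extensionality: functions that agree at every argument are equal.
theorem ext_example {A B : Type} (f g : A → B)
    (h : ∀ a, f a = g a) : f = g :=
  funext h
```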
NASA's Program for Arctic Regional Climate Assessment (PARCA) includes measurements of ice velocity and ice thickness along the 2000 m elevation contour line in the western part of the ice sheet. Here we use these measurements, together with published estimates of snow-accumulation rates, to infer the mass balance, or rate of thickening/thinning, of the ice-sheet catchment area inland from the velocity traverse. Within the accuracy to which we know snow-accumulation rates, the entire area is in balance, but localized regions inland from Upernavik Isstrøm and Jakobshavn Isbræ both appear to be thickening by about 10 cm a⁻¹.
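The balance calculation described above amounts to ice-flux continuity: compare snow accumulation over the catchment with the ice flux carried out through the gate defined by the velocity and thickness traverse. A minimal sketch, with hypothetical names and numbers (a single rectangular gate; real traverses integrate many segments):

```python
ICE_DENSITY_RATIO = 0.917  # density of ice / density of water

def thickening_rate(accum_we_m_yr, area_m2, gate_width_m,
                    thickness_m, speed_m_yr):
    """Catchment continuity: rate of thickness change (m ice eq. per yr)
    = (accumulation in - ice flux out through the gate) / catchment area.
    Accumulation is given in metres of water equivalent per year."""
    influx = accum_we_m_yr / ICE_DENSITY_RATIO * area_m2   # m^3 ice per yr
    outflux = gate_width_m * thickness_m * speed_m_yr      # m^3 ice per yr
    return (influx - outflux) / area_m2                    # m ice per yr
```

A positive result means the catchment is thickening (input exceeds discharge); zero, within the accumulation-rate uncertainty, is the "in balance" case reported above.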
The NOSAMS facility at Woods Hole Oceanographic Institution has started to develop and apply techniques for measuring very small samples on a standard Tandetron accelerator mass spectrometry (AMS) system with high-current hemispherical Cs sputter ion sources. Over the past year, results on samples ranging from 7 to 160 μg C showed both the feasibility of such analyses and the present limitations on reducing the size of solid carbon samples. One of the main factors affecting the AMS results is the dependence of a number of the beam optics parameters on the extracted ion beam current. The extracted currents range from 0.5 to 10 μA of 12C− for the sample sizes given above. Here we discuss the setup of the AMS system and methods for reliable small-sample measurements, and give the AMS-related limits to sample size and the measurement uncertainties.
We present a status report of the accelerator mass spectrometry (AMS) facility at the University of California, Irvine, USA. Recent spectrometer upgrades and repairs are discussed. Modifications to preparation laboratory procedures designed to improve sample throughput efficiency while maintaining precision of 2–3‰ for 1-mg samples (Santos et al. 2007c) are presented.
Gas-accepting ion sources for radiocarbon accelerator mass spectrometry (AMS) have permitted the direct analysis of CO2 gas, eliminating the need to graphitize samples. As a result, a variety of analytical instruments can be interfaced to an AMS system, processing time is decreased, and smaller samples can be analyzed (albeit with lower precision). We have coupled a gas chromatograph to a compact 14C AMS system fitted with a microwave ion source for real-time compound-specific 14C analysis. As an initial test of the system, we have analyzed a sample of fatty acid methyl esters and biodiesel. Peak shape and memory were better than those of existing systems fitted with a hybrid ion source, while precision was comparable. 14C/12C ratios of individual components at natural abundance levels were consistent with those determined by conventional methods. Continuing refinements to the ion source are expected to improve the performance and scope of the instrument.
In 2008, a large African baobab (Adansonia digitata L.) from Makulu Makete, South Africa, split vertically into 2 sections, revealing a large enclosed cavity. Several wood samples collected from the cavity were processed and radiocarbon dated by accelerator mass spectrometry (AMS) to determine the age and growth rate dynamics of the tree. The 14C date of the oldest sample was found to be 1016 ± 22 BP, which corresponds to a calibrated age of 1000 ± 15 yr. Thus, the Makulu Makete tree, which eventually collapsed to the ground and died, becomes the second oldest African baobab dated accurately to at least 1000 yr. The conventional growth rate of the trunk, estimated by the radial increase, declined gradually over its life cycle. However, the growth rate expressed more adequately by the cross-sectional area increase and by the volume increase accelerated up to the age of 650 yr and remained almost constant over the past 450 yr.
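The contrast between the radial and the cross-sectional growth measures follows from simple geometry (dA ≈ 2πr·dr): at a large radius, a small radial increment still adds substantial area. A quick sketch with hypothetical dimensions (not the paper's data):

```python
import math

def radial_growth_rate(r1_cm, r2_cm, years):
    """Radial increase of the trunk per year (cm/yr)."""
    return (r2_cm - r1_cm) / years

def area_growth_rate(r1_cm, r2_cm, years):
    """Cross-sectional area increase per year (cm^2/yr),
    assuming a circular trunk."""
    return math.pi * (r2_cm**2 - r1_cm**2) / years
```

By the approximation dA ≈ 2πr·dr, a 0.2 cm/yr radial increment at a 100 cm radius adds area about as fast (≈40π cm²/yr) as a 2 cm/yr increment once did at a 10 cm radius, which is how the radial rate can decline while the area-based rate stays nearly constant.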
For very small samples, it is difficult to prepare graphitic targets that will yield a useful and steady sputtered ion beam. Working with materials separated by preparative capillary gas chromatography, we have succeeded with amounts as small as 20 μg C. This seems to be a practical limit, as it involves 1) multiple chromatographic runs with trapping of effluent fractions, 2) recovery and combustion of the fractions, 3) graphitization and 4) compression of the resultant graphite/cobalt matrix into a good sputter target. Through such slow and intricate work, radiocarbon ages of lignin derivatives and hydrocarbons from coastal sediments have been determined. If this could be accomplished as an “online” measurement by flowing the analytes directly into a microwave gas ion source, with a carrier gas, then the number of processing steps could be minimized. Such a system would be useful not just for chromatographic effluents, but for any gaseous material, such as CO2 produced from carbonates. We describe tests using such an ion source.
The article reports the first radiocarbon dating of a live African baobab (Adansonia digitata L.), by investigating wood samples collected from 2 inner cavities of the very large 2-stemmed Platland tree of South Africa. Some 16 segments extracted from determined positions of the samples, which correspond to a depth of up to 15–20 cm in the wood, were processed and analyzed by accelerator mass spectrometry (AMS). Calibrated ages of segments are not correlated with their positions in the stems of the tree. Dating results indicate that the segments originate from new growth layers, with a thickness of several centimeters, which cover the original old wood. Four new growth layers were dated before the reference year AD 1950 and 2 layers were dated post-AD 1950, in the post-bomb period. Formation of these layers was triggered by major damage inside the cavities. Fire episodes are the only possible explanation for such successive major wounds over large areas or over the entire area of the inner cavities of the Platland tree, able to trigger regrowth.
Techniques for making precise and accurate radiocarbon accelerator mass spectrometry (AMS) measurements on samples containing less than a few hundred micrograms of carbon are being developed at the NOSAMS facility. A detailed examination of all aspects of the sample preparation and data analysis process shows encouraging results. Small quantities of CO2 are reduced to graphite over cobalt catalyst at an optimal temperature of 605°C. Measured 14C/12C ratios of the resulting targets are affected by machine-induced isotopic fractionation, which appears directly related to the decrease in ion current generated by the smaller sample sizes. It is possible to compensate effectively for this fractionation by measuring samples relative to small standards of identical size. Examination of the various potential sources of background 14C contamination indicates that the sample combustion process is the largest contributor, adding ca. 1 μg of carbon with a less-than-modern 14C concentration.
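A constant combustion blank of the kind quantified above is conventionally removed with a mass-balance correction on the fraction modern (Fm). A minimal sketch (names and numbers are illustrative, not NOSAMS procedure):

```python
def blank_corrected_fm(fm_measured, m_sample_ug, fm_blank, m_blank_ug):
    """Mass-balance blank correction: the measured target contains the
    sample plus ~m_blank micrograms of contaminant carbon of fraction
    modern fm_blank, so
        Fm_meas * (m + mb) = Fm_true * m + Fm_blank * mb.
    Solve for the sample's true fraction modern."""
    total_ug = m_sample_ug + m_blank_ug
    return (fm_measured * total_ug - fm_blank * m_blank_ug) / m_sample_ug
```

For instance, with a ca. 1 μg blank of Fm = 0.2, a 20 μg modern sample (true Fm = 1.0) would read about 0.962 before correction; the routine recovers 1.0. The smaller the sample, the larger this shift, which is why blank characterization dominates the small-sample error budget.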