Populations of Critically Endangered White-rumped Gyps bengalensis and Slender-billed G. tenuirostris Vultures in Nepal declined rapidly during the 2000s, almost certainly because of the effects of the use in livestock of the non-steroidal anti-inflammatory drug diclofenac, which is nephrotoxic to Gyps vultures. In 2006, veterinary use of diclofenac was banned in Nepal and this was followed by the gradual implementation, over most of the geographical range of the two vulture species in Nepal, of a Vulture Safe Zone (VSZ) programme to advocate vulture conservation, raise awareness about diclofenac, provide vultures with NSAID-free food and encourage the veterinary use in livestock of a vulture-safe alternative NSAID (meloxicam). We report the results of long-term monitoring of vulture populations in Nepal before and after this programme was implemented, by means of road transects. Piecewise regression analysis of the count data indicated that a rapid decline of the White-rumped Vulture population from 2002 up to about 2013 gave way to a partial recovery between about 2013 and 2018. More limited data for the Slender-billed Vulture indicated that a rapid decline also gave way to partial recovery from about 2012 onwards. The rates at which populations were increasing in the 2010s exceeded the upper end of the range of increase rates expected in a closed population under optimal conditions. The possibility that immigration from India is contributing to the changes cannot be excluded. We present evidence from open and undercover pharmacy surveys that the VSZ programme had apparently become effective in reducing the availability of diclofenac in a large part of the range of these species in Nepal by about 2011. Hence, community-based advocacy and awareness-raising actions, and possibly also provisioning of safe food, may have made an important contribution to vulture conservation by augmenting the effects of changes in the regulation of toxic veterinary drugs.
Visualizing a scene at high spatial and high spectral resolution is always of interest. However, constraints such as the trade-off between the spatial and spectral resolutions of the sensor, the channel bandwidth, and the on-board storage capacity of a satellite system limit the ability to capture images having both high spectral and high spatial resolution. Because of this, many commercial remote sensing satellites such as Quickbird, Ikonos, and Worldview-2 capture the earth's information as two types of images: a single panchromatic (Pan) image and a number of multispectral (MS) images. The Pan image has high spatial resolution but low spectral resolution, while the MS image has higher spectral resolving capability but low spatial resolution. An image with both high spatial and spectral resolutions, i.e., a fused image of the MS and Pan data, can lead to better land classification, map updating, soil analysis, feature extraction, etc. Also, since the fused image increases the spatial resolution of the MS image, it sharpens the image content, which makes it easier to obtain greater detail in the classified maps. Multi-resolution image fusion, or pan-sharpening, is an algorithmic approach that increases the spatial resolution of the MS image while preserving its spectral content by making use of the high spatial resolution Pan image. This book is an attempt to present the state of the art in current research on this topic.
The book covers recent advances in research in the area of multi-resolution image fusion. Different fusion methods are presented using the spatial and transform domains, in addition to model based fusion approaches. We extend our model based fusion work to super-resolution of natural images, which is one of the important areas of research in the field of computer vision. The book also includes a comprehensive literature review of multi-resolution image fusion techniques.
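To make the pan-sharpening idea concrete, here is a minimal ratio-based (Brovey-style) sketch in NumPy. This is a generic illustration of injecting Pan spatial detail into an upsampled MS image, not the method proposed in this book; the function name and the use of the band mean as the intensity are our own simplifying assumptions.

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-6):
    """Brovey-style pan-sharpening sketch (illustrative only).

    ms_up : (H, W, B) MS image already upsampled to the Pan grid
    pan   : (H, W)    panchromatic image
    Each MS band is scaled by the per-pixel ratio of Pan to the MS
    intensity, so the output inherits the spatial detail of Pan.
    """
    intensity = ms_up.mean(axis=2)        # crude intensity estimate
    ratio = pan / (intensity + eps)       # per-pixel gain from Pan
    return ms_up * ratio[..., None]       # apply gain to every band
```

The spectral shape of each pixel (the ratios between bands) is preserved, which is why ratio-based methods limit color distortion at the cost of possible radiometric bias.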
The book is addressed to an audience including, but not limited to, academicians, practitioners, and industrial researchers, working on various fusion applications, and project managers requiring an overview of the on-going research in the field of multi-resolution image fusion. The topics discussed in this book have been covered with sufficient detail and we have also illustrated them with a large number of figures for better understanding. Finally, much of the material would be of value for post-graduate and doctorate level students who attend related courses or are engaged in research in multi-resolution image fusion and related fields.
In this chapter, we introduce the concept of self-similarity and use it to obtain the initial fused image. We also use a new prior, called the Gabor prior, for regularizing the solution. In Chapter 4, the degradation matrix entries were estimated by modelling the relationship between the Pan-derived initial estimate of the fused MS image and the LR MS image. This may lead to an inaccurate estimate of the final fused image, since we make use of Pan data suffering from low spectral resolution in getting the initial estimate. However, if we derive the initial fused image using the available LR MS image, which has high spectral resolution, the mapping between LR and HR is better and the derived degradation matrix entries are more accurate. The estimated degradation matrix then better represents the aliasing, since we now have an initial estimate that has both high spatial and spectral resolutions. To do this, we need to obtain the initial estimate using only the available LR MS image, since the true fused image is not available. We accomplish this by using the property of natural images that redundant information is highly likely to be available across an image and its downsampled versions. We exploit this self-similarity in the LR observation, together with sparse representation theory, in order to obtain the initial estimate of the fused image. Finally, we solve the Pan-sharpening or multi-resolution image fusion problem using a model based approach in which we regularize the solution with the proposed Gabor prior.
Before we discuss our proposed approach, we review a few works on image fusion that use compressive sensing (CS) theory, since our work also uses the sparse representation involved in CS. Li and Yang applied CS to the fusion of remotely sensed images, constructing a dictionary from sample images having high spatial resolution. They obtained the fused image as a linear combination of the HR patches available in the dictionary. The performance of their method depends on the availability of high resolution MS images that have spectral components similar to those of the test image.
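The general form of a Gabor-based regularization term can be sketched as the summed energy of the image's responses to a small bank of oriented Gabor filters. The kernel parameters (size, sigma, frequency, four orientations) below are illustrative assumptions, not the values used in the book.

```python
import numpy as np

def gabor_kernel(size=7, sigma=2.0, theta=0.0, freq=0.25):
    """Real part of a Gabor filter (illustrative parameter choices)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()        # zero-mean: flat regions incur no penalty

def gabor_prior_energy(img, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Sum of squared Gabor responses over orientations -- a sketch of a
    Gabor-style prior energy that a regularized solver would minimize."""
    energy = 0.0
    for th in thetas:
        k = gabor_kernel(theta=th)
        # circular convolution via FFT (zero-padding k to the image size)
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(k, img.shape)))
        energy += np.sum(resp**2)
    return energy
```

In a MAP-style formulation, this energy would be added to the data-fitting term, penalizing solutions whose oriented band-pass content is excessive.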
One of the major achievements of human beings is the ability to record observational data in the form of photographs, a science which dates back to 1826. Humans have always tried to reach greater heights (treetops, mountains, platforms, and so on) to observe phenomena of interest, to decide on habitable places, farming, and other such activities. Curiosity motivated human beings to take photographs of the earth from elevated platforms. In the early days of photography, balloons, pigeons, and kites were used to capture such photographs. With the invention of the aircraft in 1903, the first aerial photograph from a stable platform was made possible in 1909. In the 1960s and 1970s, the primary platform used to carry remote sensing instruments shifted from aircraft to satellites. It was during this period that the term ‘remote sensing’ replaced the frequently used term ‘aerial photography’. Satellites can cover a wider land area than aircraft and can monitor areas on a regular basis.
A new era in remote sensing began when the United States launched the first earth observation satellite, the Earth Resources Technology Satellite (ERTS-1), dedicated primarily to land observation. This was followed by many other satellites such as Landsat 1–5, Satellite Pour l'Observation de la Terre (SPOT), Indian Remote Sensing (IRS), Quickbird, Ikonos, etc. The change in image format from analog to digital was another major step towards the processing and interpretation of remotely sensed data. The digital format made it possible to display and analyze imagery using computers, a technology that was also undergoing rapid change during this period. Due to the advancement of technology and the development of new sensors, capturing the earth's surface through different portions of the electromagnetic spectrum is possible these days. One can now view the same area by acquiring the data as several images in different portions of the spectrum, beyond what the human eye can see. Remote sensing technology has made it possible to observe things occurring on the earth's surface that may not be detected by the human eye.
The formal definition of remote sensing can be given as follows: it refers to the sensing of the earth's surface from space by making use of the properties of electromagnetic waves emitted, reflected or diffracted by the sensed objects, for the purpose of improving natural resource management, land use and the protection of the environment.
Remote sensing satellites capture data in the form of images, which are processed and utilized in various applications such as land area classification, map updating, weather forecasting, urban planning, etc. However, due to constraints on the sensor hardware and the available transmission bandwidth of the transponder, many commercial satellites provide earth information by capturing images that have complementary characteristics. In this book, we have addressed the problem of multi-resolution image fusion, or Pan-sharpening. Here, the low spatial resolution MS image and the high spatial resolution Pan image are combined to obtain a single fused image that has both high spatial and spectral resolutions. We seek a fused image that has the spectral resolution of the MS image and the spatial resolution of the Pan image. Although the MS and Pan images capture the same geographical area, the complementary nature of these images in terms of spatial and spectral resolutions gives rise to variation between the two images. Because of this, when we fuse the given images using direct pixel intensity values, the resultant fused data suffers from spatial as well as spectral distortions. Another important issue in the problem of multi-resolution image fusion is the registration of the MS and Pan images. Accurate registration is a difficult task and we have not addressed it in this book; instead, we have used registered data. Here, we present the conclusions drawn from the different proposed methods for Pan-sharpening/image fusion.
We began our work by proposing two new fusion techniques based on edge-preserving filters. The Pan image has high frequency details that can be extracted with the help of an edge-preserving filter. These extracted details are injected into the upsampled MS image. In our work, we used two edge-preserving filters, namely, the guided filter and the difference of Gaussians (DoGs), in order to extract the required details present in the Pan image. An extension of the guided filter to multiple stages is introduced, which effectively extracts the details from the Pan and MS images. Similarly, the concept of DoGs is also used to extract the high frequency features from the Pan image. The potential of the proposed methods was evaluated by conducting experiments on original as well as degraded datasets captured by various satellites. The results were compared with state-of-the-art methods.
In this chapter, we discuss image fusion approaches using two edge-preserving filters, namely, the guided filter and the difference of Gaussians (DoGs). Since the MS and Pan images have high spectral and high spatial resolutions, respectively, one can obtain the resultant fused image from these two images by injecting the missing high frequency details from the Pan image into the MS image. The quality of the final fused image then depends on the method used for extracting the high frequency details and also on the technique for injecting those details into the MS image. In the literature on multi-resolution image fusion, various approaches based on the aforementioned process have been proposed, including state-of-the-art methods such as additive wavelet luminance proportional (AWLP) and generalized Laplacian pyramid-context based decision (GLP-CBD). Motivated by these works, we first address the fusion problem by using different edge-preserving filters to extract the high frequency details from the Pan image. Specifically, we have chosen the guided filter and the difference of Gaussians (DoGs) for detail extraction, since these are more versatile in applications involving feature extraction, denoising, etc.
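The extract-and-inject pattern described above can be sketched with a DoG detail extractor. This is a bare-bones illustration of the general scheme, not the book's proposed method; the FFT-based blur, circular boundary handling, and the uniform injection gain are simplifying assumptions.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Circular Gaussian blur via FFT (separable kernel, for brevity)."""
    h, w = img.shape
    y = np.minimum(np.arange(h), h - np.arange(h))   # circular distances
    x = np.minimum(np.arange(w), w - np.arange(w))
    gy = np.exp(-y**2 / (2 * sigma**2)); gy /= gy.sum()
    gx = np.exp(-x**2 / (2 * sigma**2)); gx /= gx.sum()
    kernel_ft = np.fft.fft(gy)[:, None] * np.fft.fft(gx)[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel_ft))

def dog_detail_injection(ms_up, pan, sigma1=1.0, sigma2=2.0, gain=1.0):
    """Extract high-frequency detail from Pan with a difference of
    Gaussians and add it uniformly to each upsampled MS band."""
    detail = gaussian_blur(pan, sigma1) - gaussian_blur(pan, sigma2)
    return ms_up + gain * detail[..., None]
```

A constant Pan image contributes no detail, so the MS image passes through unchanged; in practice, the injection gain is usually chosen per band rather than uniformly.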
A large number of techniques have been proposed for the fusion of Pan and MS images that are based on extracting the high frequency details from the Pan image and injecting them into the MS image. These were discussed in detail in the chapter on the literature survey. They broadly cover categories such as projection substitution methods, that is, those based on principal component analysis (PCA) and intensity hue saturation (IHS) [50, 231], and multi-resolution approaches based on obtaining a scale-by-scale description of the information content of both the MS and Pan images [144, 174]. Among these, the multi-resolution based methods have proven to be successful. Most multi-resolution techniques are based on wavelet decomposition [144, 174], in which the MS and Pan images are decomposed into approximation and detail sub-bands; the detail sub-band coefficients of the Pan image are injected into the corresponding sub-band of the MS image by a predefined rule, with the MS image first interpolated to match the size of the Pan image.
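The projection substitution family can be illustrated with a toy intensity-substitution sketch: compute an intensity component from the upsampled MS image, match the Pan image to its statistics, and swap it in. This is a generic illustration (not any cited method), and the moment-matching step is a simplification of histogram matching.

```python
import numpy as np

def ihs_style_fuse(ms_up, pan, eps=1e-6):
    """Component-substitution sketch: replace the MS intensity component
    with a statistically matched Pan image.

    ms_up : (H, W, B) upsampled MS image; pan : (H, W) Pan image.
    """
    intensity = ms_up.mean(axis=2)                       # intensity proxy
    # match Pan's mean/std to the intensity's (crude histogram matching)
    pan_m = ((pan - pan.mean()) / (pan.std() + eps)
             * intensity.std() + intensity.mean())
    # substitute: add the difference between matched Pan and intensity
    return ms_up + (pan_m - intensity)[..., None]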
Recently, many researchers have attempted to solve the problem of multi-resolution image fusion using model based approaches, with emphasis on improving the fused image quality and reducing color distortion [273, 121]. They model the low resolution (LR) MS image as a blurred and noisy version of its ideal high resolution (HR) fused image. Solving the fusion problem with a model based approach is desirable since the aliasing present due to undersampling of the MS image can be taken care of in the modelling. Fusion using the interpolation of MS images and edge-preserving filters, as given in Chapter 3, does not consider the effect of aliasing caused by undersampling of MS images. The aliasing in the acquired image causes distortion and, hence, there exists degradation in the LR MS image. In this chapter, we propose a model based approach in which a learning based method is used to obtain the required degradation matrix that accounts for aliasing. Using the proposed model, the final solution is obtained by treating fusion as an inverse problem. The proposed approach uses sub-sampled as well as non sub-sampled contourlet transform based learning and a Markov random field (MRF) prior for regularizing the solution.
As stated earlier, many researchers have used the model based approach for fusion, with emphasis on improving fusion quality and reducing color distortion [6, 149, 105, 273, 143, 116, 283, 76, 121]. Aanaes et al. proposed a spectrally consistent method for pixel-level fusion based on a model of the imaging sensor. The fused image is obtained by optimizing an energy function consisting of a data term and a prior term using pixel neighborhood regularization. Image fusion based on a restoration framework was suggested by Li and Leung, who modelled the LR MS image as a blurred and noisy version of its ideal HR counterpart. They also modelled the Pan image as a linear combination of the true MS images. The final fused image was obtained using a constrained least squares (CLS) framework. The same model with a maximum a posteriori (MAP) framework was used by Hardie et al. and Zhang et al. [105, 273]. Hardie et al. used the model based approach to enhance hyper-spectral images using the Pan image.
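The forward model underlying such approaches, y = DBx + n (blur B, decimation D, noise n applied to the ideal HR image x), can be sketched as follows. Approximating the blur-plus-decimation by block averaging is our own simplification for illustration; the book estimates the degradation matrix by learning rather than assuming it.

```python
import numpy as np

def degrade(hr, factor=4, noise_std=0.0, seed=0):
    """Toy forward model for the LR MS observation: y = D B x + n,
    with blur-plus-decimation approximated by block averaging."""
    h, w = hr.shape
    hc, wc = h - h % factor, w - w % factor          # crop to a multiple
    blocks = hr[:hc, :wc]
    # average each factor x factor block (acts as B followed by D)
    lr = blocks.reshape(hc // factor, factor,
                        wc // factor, factor).mean(axis=(1, 3))
    if noise_std > 0:
        lr = lr + np.random.default_rng(seed).normal(0.0, noise_std, lr.shape)
    return lr
```

Model based fusion then inverts this map: given y (LR MS) and a prior, it seeks the x most consistent with both, which is why an accurate degradation matrix matters.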
Increasing the spatial resolution of a given test image is of interest to the image processing community, since the enhanced-resolution image has better details when compared to the corresponding low resolution image. Super-resolution (SR) is an algorithmic approach in which a high spatial resolution image is obtained by using single/multiple low resolution observations or by using a database of LR–HR pairs. The linear image formation model discussed for image fusion in Chapter 4 is extended here to obtain an SR image for a given LR test observation. In the image fusion problem, the available Pan image was used to obtain a high resolution fused image. Like fusion, SR is concerned with the enhancement of spatial resolution; however, we do not have a high resolution image such as the Pan image as an additional observation. Hence, we make use of a database of LR–HR pairs in order to obtain the SR image for the given LR observation. Here, we use contourlet based learning to obtain the initial SR estimate, which is then used in obtaining the degradation as well as the MRF parameter. As in the fusion problem discussed in Chapter 4, an MAP–MRF framework is used to obtain the final SR image. Note that we are not using the self-learning and sparse representation based approach proposed in Chapter 5 to obtain the fused image, since the objective of this chapter is to illustrate a new approach to SR using the data model used in fusion.
Low cost and ease of operation have significantly contributed to the growing popularity of digital imaging systems. Low cost cameras are fitted with low precision optics and lower density detectors. Images captured using such cameras suffer from reduced spatial resolution compared to traditional film cameras. Images captured using a camera fitted with high precision optics and image sensors comprising high density detectors provide better details, which are essential in many imaging applications such as medical imaging, remote sensing and surveillance. However, the cost of such a camera is prohibitively high, and obtaining a high resolution image is thus an important concern in many commercial applications requiring HR imaging. Images captured using a low cost camera represent under-sampled images of a scene containing aliasing, blur and noise.
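The database-driven side of SR can be illustrated with the simplest possible example-based scheme: for each LR query patch, find the nearest LR patch in the database and return its paired HR patch. The book uses contourlet based learning rather than this nearest-neighbour lookup; the sketch below only shows the LR–HR pair-database idea.

```python
import numpy as np

def sr_patch_lookup(lr_patches, hr_patches, lr_query):
    """Example-based SR sketch: nearest-neighbour lookup in a database
    of LR-HR patch pairs.

    lr_patches : (N, p, p)   LR patches in the database
    hr_patches : (N, q, q)   paired HR patches (same index = same pair)
    lr_query   : (p, p)      LR patch to super-resolve
    """
    # squared Euclidean distance from the query to every LR patch
    d = np.sum((lr_patches - lr_query) ** 2, axis=(1, 2))
    return hr_patches[np.argmin(d)]      # HR mate of the closest LR patch
```

A full pipeline would tile the LR image into overlapping patches, look up (or sparsely combine) HR candidates, and blend the overlaps; the initial estimate so obtained is then refined in the MAP–MRF framework.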
Written in an easy-to-follow style, the text will help readers understand the techniques and applications of image fusion for remotely sensed multi-spectral images. It covers important multi-resolution fusion concepts along with state-of-the-art methods, including super-resolution and multi-stage guided filters. It includes in-depth analysis of degradation estimation, the Gabor prior and the Markov random field (MRF) prior. Concepts such as the guided filter and the difference of Gaussians are discussed comprehensively. Novel techniques in multi-resolution fusion making use of regularization are explained in detail. It also includes the different quality assessment measures used in testing the quality of fusion. Real-life applications and plenty of multi-resolution images are provided in the text for enhanced learning.
This experimental study focuses on the effect of horizontal boundaries with pyramid-shaped roughness elements on the heat transfer in rotating Rayleigh–Bénard convection. It is shown that the Ekman pumping mechanism, which is responsible for the heat transfer enhancement under rotation in the case of smooth top and bottom surfaces, is unaffected by the roughness as long as the Ekman layer thickness δ_E is significantly larger than the roughness height k. As the rotation rate increases, and thus δ_E decreases, the roughness elements penetrate the radially inward flow in the interior of the Ekman boundary layer that feeds the columnar Ekman vortices. This perturbation generates additional thermal disturbances which are found to increase the heat transfer efficiency even further. However, when δ_E ≲ k, the Ekman boundary layer is strongly perturbed by the roughness elements and the Ekman pumping mechanism is suppressed. The results suggest that Ekman pumping is re-established for δ_E ≪ k, as the faces of the pyramidal roughness elements then act locally as a sloping boundary on which an Ekman layer can be formed.
We present simultaneous multi-frequency observations of PSR J1822–2256 for the first time, utilizing the unique capabilities of the upgraded Giant Metrewave Radio Telescope (uGMRT). No emission is detected in about 10% of pulses. At least two drift modes, and possibly a third rare mode, occur, for 66%, 21% and 2% of pulses respectively (P3 ~ 17, 7.5 and 5 P0 respectively). The three drift modes and the nulls occur concurrently from 250 to 1500 MHz. The modal average profiles are distinct, with their widths increasing with drift rate. These sub-pulse drift related profile mode-changes can provide independent probes of beam geometry and polar gap physics.
The Crab Pulsar (PSR B0531+21) is known to emit pulsed emission in all bands of the electromagnetic spectrum. It also frequently emits giant radio pulses (GRPs), which are roughly a hundred to a million times brighter than the normal pulses. We aim to study whether there is a significant X-ray enhancement correlated with the occurrence of GRPs, using simultaneous observations with ASTROSAT, the Giant Metrewave Radio Telescope (1300 MHz) and the Ooty Radio Telescope (325 MHz). This required the determination of fixed pipeline offsets between the different instruments. We find the offset between ASTROSAT and the GMRT to be −30.181 ± 0.095 ms and that between ASTROSAT and the ORT to be −18.4 ± 0.2 ms. Our preliminary results with the 1300 MHz data also show a break in the pulse intensity distribution at ~33 Jy in the main pulse and ~28 Jy in the inter-pulse.