The aim of this paper is to introduce a new stochastic order based on the residual lifetimes of two nonnegative dependent random variables and the stochastic precedence order. We develop some characterizations and preservation properties of this stochastic order. In addition, we study some of its reliability properties and its relation to other existing stochastic orders. One possible application in reliability theory is also discussed.
Resistance to colistin, a last-resort antibiotic, has emerged in India. We investigated colistin-resistant Klebsiella pneumoniae (ColR-KP) in a hospital in India to describe infections, characterize resistance of isolates, compare concordance of detection methods, and identify transmission events.
Retrospective observational study.
Case-patients were defined as individuals from whom ColR-KP was isolated from a clinical specimen between January 2016 and October 2017. Isolates resistant to colistin by Vitek 2 were confirmed by broth microdilution (BMD). Isolates underwent colistin susceptibility testing by disk diffusion and whole-genome sequencing. Medical records were reviewed.
Of 846 K. pneumoniae isolates, 34 (4%) were colistin resistant. In total, 22 case-patients were identified. Most (90%) were male; their median age was 33 years. Half were transferred from another hospital; 45% died. Case-patients were admitted for a median of 14 days before detection of ColR-KP. Seven case-patients (32%) received colistin before detection of ColR-KP. All isolates were resistant to carbapenems and susceptible to tigecycline. Isolates resistant to colistin by Vitek 2 were also resistant by BMD; 2 ColR-KP isolates were resistant by disk diffusion. Eight multilocus sequence types were identified. Isolates were negative for mobile colistin resistance (mcr) genes. Based on sequencing analysis, in-hospital transmission may have occurred with 8 case-patients (38%).
Multiple infections caused by highly resistant, mcr-negative ColR-KP with substantial mortality were identified. Disk diffusion correlated poorly with Vitek 2 and BMD for detection of ColR-KP. Sequencing indicated multiple importation and in-hospital transmission events. Enhanced detection for ColR-KP may be warranted in India.
The paper presents a new coplanar waveguide (CPW)-fed rectangular patch antenna with a square-shaped ground plane that can be employed in modern advanced navigation systems. To realize a broad impedance bandwidth in the proposed antenna, a wide slot is introduced in the square ground plane and the rectangular patch is shifted toward the left edge of the ground surface. In addition, by introducing square-shaped stubs near the left and right edges of the ground plane, circular polarization is achieved at the L1, L2, and L5 satellite bands. As per the simulation results, the proposed antenna provides a wide impedance bandwidth (S11 < −10 dB) of 123% (1.12–4.72 GHz) and 3 dB axial-ratio bandwidths of 11% (1.15–1.29 GHz) and 18% (1.5–1.8 GHz), suitable for multipurpose wireless applications. The designed single-feed circularly polarized antenna is low-profile, compact, lightweight, and easily integrable with other high-frequency communication devices. To validate the radiation performance of the proposed structure, the antenna was fabricated and integrated with a commercially available Global Positioning System (GPS) receiver, and the measured values are found to be in close agreement with the desired results.
Optimization problems are used to model many real-life problems. Therefore, solving these problems is one of the most important goals of algorithm design. A general optimization problem can be defined by specifying a set of constraints that defines a subset of some underlying space (like the Euclidean space), called the feasible subset, and an objective function that we are trying to maximize or minimize, as the case may be, over the feasible set. The difficulty of solving such problems typically depends on how ‘complex’ the feasible set and the objective function are. For example, a very important class of optimization problems is linear programming. Here the feasible subset is specified by a set of linear inequalities (in the Euclidean space); the objective function is also linear. A more general class of optimization problems is convex programming, where the feasible set is a convex subset of a Euclidean space and the objective function is also convex. Convex programs (and hence, linear programs) have the nice property that any local optimum is also a global optimum for the objective function. There are a variety of techniques for solving such problems – all of them try to approach a local optimum (which we know would be a global optimum as well). These notions are discussed in greater detail in a later section of this chapter. The more general problems, the so-called non-convex programs, where the objective function and the feasible subset could be arbitrary, can be very challenging to solve. In particular, discrete optimization problems, where the feasible subset could be a (large) discrete set of points, fall into this category.
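The convexity property mentioned above can be illustrated with a small sketch: gradient descent on a convex function converges to its local optimum, which is necessarily the global one. The function, step size, and iteration count here are illustrative choices, not taken from the text.

```python
# Minimize the convex function f(x) = (x - 3)^2 + 1 by gradient descent.
# Because f is convex, the local optimum we approach is also global.

def gradient_descent(grad, x0, step=0.1, iters=200):
    """Repeatedly move against the gradient; converges for this convex f."""
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
    return x

# f(x) = (x - 3)^2 + 1 has gradient f'(x) = 2 * (x - 3).
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))  # converges to the global minimizer x = 3
```

For a non-convex objective the same iteration may stall at a local optimum that is far from the global one, which is exactly the difficulty described above.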
In this chapter, we first discuss some of the most intuitive approaches for solving such problems. We begin with heuristic search approaches, which try to search for an optimal solution by exploring the feasible subset in some principled manner. Subsequently, we introduce the idea of designing algorithms based on the greedy heuristic.
Heuristic Search Approaches
In heuristic search, we explore the search space in a structured manner. Observe that in general, the size of the feasible set (also called the set of feasible solutions) can be infinite.
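As a concrete illustration of exploring a feasible set in a principled manner, the following hill-climbing sketch repeatedly moves to an improving neighbour and stops at a local optimum. The objective function and neighbourhood structure are illustrative choices, not from the text.

```python
# A minimal hill-climbing sketch: explore the feasible set by repeatedly
# moving to a better neighbour, stopping when no neighbour improves.

def hill_climb(objective, neighbours, start):
    current = start
    while True:
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current  # no improving neighbour: a local optimum
        current = best

# Maximize -(x - 5)^2 over the integers, moving by +/- 1 each step.
obj = lambda x: -(x - 5) ** 2
nbrs = lambda x: [x - 1, x + 1]
print(hill_climb(obj, nbrs, start=0))  # reaches the optimum x = 5
```

Note that the loop only ever examines finitely many neighbours per step, so the search remains well-defined even when the feasible set itself is infinite.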
The problem of searching is fundamental in computer science, and a vast amount of literature is devoted to many fascinating aspects of this problem. From searching for a given key in a pre-processed set to the more recent techniques developed for searching documents, modern civilization forges ahead using Google Search. Discussing the latter techniques is outside the scope of this chapter, so we focus on the more traditional framework. Knuth is one of the most comprehensive sources on the earlier techniques; all textbooks on data structures address common techniques like binary search and balanced tree-based dictionaries like AVL (Adelson-Velsky and Landis) trees, red–black trees, B-trees, etc. We expect the reader to be familiar with such basic methods. Instead, we focus on some of the simpler and lesser known alternatives to the traditional data structures. Many of these rely on innovative use of randomized techniques and are easier to generalize for a variety of applications. They are driven by a somewhat different perspective on the problem of searching that gives us a better understanding of it, including of practical scenarios where the universe is much smaller. The underlying assumption in comparison-based searching is that the universe may be infinite, that is, we could be searching among real numbers. While this is a powerful framework, we miss out on many opportunities to develop faster alternatives based on hashing in a bounded universe. We will address both these frameworks so that the reader can make an informed choice for a specific application.
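The two frameworks mentioned above, comparison-based searching over an ordered (possibly unbounded) universe and hashing over a bounded universe, can be contrasted in a small sketch; the data and keys are illustrative.

```python
# Comparison-based searching: O(log n) probes into a sorted list,
# assuming only that keys can be compared.

def binary_search(sorted_list, key):
    lo, hi = 0, len(sorted_list)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_list[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sorted_list) and sorted_list[lo] == key

data = [2, 3, 5, 7, 11, 13]
print(binary_search(data, 7))   # True
print(binary_search(data, 8))   # False

# Hashing: over a bounded universe, a hash table gives O(1) expected
# lookups, trading the ordering assumption for a bounded-universe one.
table = {k: True for k in data}
print(7 in table)               # True
```

The comparison-based routine never inspects the keys' internal structure, which is precisely why it generalizes to infinite universes; hashing exploits that structure for speed.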
Skip-Lists – A Simple Dictionary
The skip-list is a data structure introduced by Pugh as an alternative to balanced binary search trees for handling dictionary operations on ordered lists. The reader may recall that linked lists are very amenable to modifications in O(1) time, although they do not support fast searches the way binary search trees do. We replace the complex book-keeping information used for maintaining balance conditions in binary trees with random sampling techniques. It has been shown that, given access to random bits, the expected search time in a skip-list of n elements is O(log n), which compares very favorably with balanced binary trees.
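A minimal sketch may help fix the idea: each inserted element is promoted to the next level with probability 1/2, so higher levels form progressively sparser "express lanes" over the base list. The fixed maximum level and the insert/search interface below are simplifying assumptions of this sketch, not the analysis in the text.

```python
import random

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)  # one successor per level

class SkipList:
    """Sketch of a skip-list: promotion with probability 1/2 gives
    O(log n) expected search time."""
    MAX_LEVEL = 16

    def __init__(self):
        self.level = 0
        self.head = SkipNode(None, self.MAX_LEVEL)  # sentinel

    def _random_level(self):
        lvl = 0
        while random.random() < 0.5 and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * (self.MAX_LEVEL + 1)
        node = self.head
        for i in range(self.level, -1, -1):      # top-down search
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node                     # last node before key
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl + 1):                 # splice in at each level
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def search(self, key):
        node = self.head
        for i in range(self.level, -1, -1):      # drop down level by level
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key

s = SkipList()
for k in [3, 7, 1, 9, 5]:
    s.insert(k)
print(s.search(7), s.search(4))  # True False
```

The top-down search in `insert` is the same walk used by `search`; the only extra work is remembering, per level, the node after which the new element must be spliced.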
This book embodies a distillation of topics that we, as educators, have frequently covered over the past two decades in various postgraduate and undergraduate courses related to Design and Analysis of Algorithms at IIT Delhi. The primary audience was junior-level (third-year) computer science (CS) students and first-semester computer science postgraduate students. This book can also serve as material for a more advanced algorithms course in which the reader is exposed to alternative, more contemporary computational frameworks that are becoming increasingly common and relevant.
A quick glance through the contents will reveal that about half of the topics are covered by many standard textbooks on algorithms like those by Aho et al., Horowitz et al., Cormen et al., and more recent ones like those by Kleinberg and Tardos and Dasgupta et al. The first classic textbook in this area, viz., that by Aho et al., introduces the subject with the observation ‘The study of algorithms is at the very heart of computer science’, and this observation has been reinforced over the past five decades of rapid development of computer science as well as of the more applied field of information technology. Because of its foundational nature, many of the early algorithms discovered about five decades ago continue to be included in every textbook written, including this one – for example, algorithms like FFT, quicksort, Dijkstra's shortest paths, etc.
What motivated us to write another book on algorithms are the several important and subtle changes in the understanding of many computational paradigms and in the relative importance of techniques emerging from some spectacular discoveries and changing technologies. As teachers and mentors, it is our responsibility to inculcate the right focus in the younger generation so that they continue to enjoy this intellectually critical activity and contribute to the enhancement of the field of study. As more and more human activities become computer-assisted, it becomes obligatory to emphasize and reinforce the importance of efficient and faster algorithms, which are the core of any automated process. We are often limited, and endangered, by the instinctive use of ill-designed and brute-force algorithms, which are often erroneous, leading to fallacious scientific conclusions or incorrect policy decisions.
There is a perpetual need for faster computation that is unlikely to ever be satisfied. With device technologies hitting physical limits, alternate computational models are being explored. The Big Data phenomenon precedes the coinage of the term by many decades. One of the earliest and most natural directions to speed up computation was to deploy multiple processors instead of a single processor for running the same program. The ideal objective is to speed up a program p-fold by using p processors simultaneously. A common caveat is that an egg cannot be boiled faster by employing multiple cooks! Analogously, a program cannot be executed indefinitely faster by using more and more processors. This is not just because of physical limitations but also because of dependencies between various fragments of the code, imposed by precedence constraints.
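The limit alluded to above is commonly quantified by Amdahl's law (a standard result, not stated in the text): if a fraction f of a program is inherently sequential, then p processors yield a speedup of at most 1/(f + (1 − f)/p). A small sketch:

```python
# Amdahl's law: upper bound on parallel speedup when a fraction f of the
# work is inherently sequential and the rest is perfectly parallelizable.

def amdahl_speedup(f, p):
    """Maximum speedup with sequential fraction f and p processors."""
    return 1.0 / (f + (1.0 - f) / p)

# Even with 1000 processors, a 10% sequential fraction caps the speedup
# just below 10x; no number of "cooks" can push past 1/f.
print(round(amdahl_speedup(0.1, 1000), 2))
```

As p grows without bound the speedup approaches 1/f, which is the formal version of the multiple-cooks caveat.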
At a lower level, namely in digital hardware design, parallelism is inherent – any circuit can be viewed as a parallel computational model. Signals travel across different paths and components and combine to yield the desired result. In contrast, a program is coded in a very sequential manner, and the data flows are often dependent on each other – just think of a loop that executes in sequence. Moreover, for a given problem, one may have to re-design a sequential algorithm to extract more parallelism. In this chapter, we focus on designing fast parallel algorithms for fundamental problems.
A very important facet of parallel algorithm design is the underlying architecture of the computer, viz., how the processors communicate with each other and access data concurrently. Moreover, is there a common clock against which we can measure the actual running time? Synchronization is an important property that makes parallel algorithm design somewhat more tractable. In more general asynchronous models, there are additional issues like deadlock and even convergence, which are very challenging to analyze.
In this chapter, we will consider synchronous parallel models (sometimes called SIMD) and look at two important models – parallel random access machine (PRAM) and the interconnection network model. The PRAM model is the parallel counterpart of the popular sequential RAM model where p processors can simultaneously access a common memory called shared memory.
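As a flavour of PRAM-style computation, the classic tree reduction sums n numbers in O(log n) synchronous rounds using n processors. The sketch below simulates the rounds sequentially; it is an illustrative example, with the padding convention a simplifying assumption.

```python
# PRAM-style tree reduction: in each synchronous round, "processor" i adds
# the pair at positions 2i and 2i+1, halving the problem size, so n values
# are summed in ceil(log2 n) rounds. The rounds are simulated sequentially.

def pram_sum(values):
    vals = list(values)
    while len(vals) > 1:
        if len(vals) % 2:            # pad with 0 so pairs line up
            vals.append(0)
        # One synchronous round across all (simulated) processors.
        vals = [vals[2 * i] + vals[2 * i + 1] for i in range(len(vals) // 2)]
    return vals[0]

print(pram_sum(range(1, 9)))  # 36, computed in log2(8) = 3 rounds
```

On a PRAM with shared memory, every pair in a round is read and written concurrently; the sequential simulation preserves the round structure but not the concurrency.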