Bottom-up assembly of nanomaterials using solution-processed methods is ideally suited to the fabrication of large-area optoelectronic devices. Tailorable visible and near-infrared absorption in shaped nanostructured noble metals is strongly influenced by localized plasmon resonance effects. Obtaining sharp and selective absorption with solution-processed methods is a challenge and requires suitable control over the growth kinetics, which ultimately determines the size and morphology of the final product. In this work, a photo-assisted multigenerational growth process for the synthesis of a silver nanotriangle ink with narrow-linewidth absorbance is developed. This technique combines photochemical and seed-mediated growth approaches. The resulting ink exhibits a sharp absorption at 700 nm with a full width at half maximum of ∼170 nm, verified by absorption as well as dynamic light scattering, transmission electron microscopy, and field emission scanning electron microscopy measurements. Numerical modeling using finite-difference time-domain calculations yields a close match with the observed absorption and is used to examine the electric field distribution and enhancement factor resonating at 720 nm. The synthesis technique is potentially usable for the production of highly selective absorbers in the solution phase.
One of the major and widely known small-scale problems with the Lambda-CDM model of cosmology is the “core-cusp” problem. In this study we investigate whether this problem can be resolved using bar instabilities. We see that all the initial bars in our simulations are thin (b/a < 0.3) and that the bar becomes thick (b/a > 0.3) faster in the high-resolution simulations; by increasing the resolution, we mean a larger number of disk particles. The thicker bars in the high-resolution simulations transfer less angular momentum to the halo. Hence, we find that in the high-resolution simulations it takes around 7 Gyr for the bar to remove the inner dark matter cusp, which is too long to be meaningful on galaxy evolution timescales. Physically, the reason is that as the resolution increases, the bar buckles faster and becomes thicker much earlier on.
We investigate minor interactions of two disk galaxies with a mass ratio of 10:1 in fly-by encounters that do not lead to the merging of the galaxies. In our N-body simulations, we vary only the pericenter distances to see the effect of the fly-by on the bulge of the major galaxy over the course of the trajectory. At different time steps of the evolution, we performed two-dimensional fits of the disk, bulge, and bar to trace the variation in the Sérsic index of the bulge. Our results suggest that galaxy bulges can become boxy/disky through fly-by interactions of galaxies.
Optimization problems are used to model many real-life problems. Therefore, solving these problems is one of the most important goals of algorithm design. A general optimization problem can be defined by specifying a set of constraints that defines a subset, called the feasible subset, in some underlying space (like the Euclidean space), and an objective function that we are trying to maximize or minimize, as the case may be, over the feasible set. The difficulty of solving such problems typically depends on how ‘complex’ the feasible set and the objective function are. For example, a very important class of optimization problems is linear programming. Here the feasible subset is specified by a set of linear inequalities (in the Euclidean space); the objective function is also linear. A more general class of optimization problems is convex programming, where the feasible set is a convex subset of a Euclidean space and the objective function is also convex. Convex programs (and hence, linear programs) have the nice property that any local optimum is also a global optimum for the objective function. There are a variety of techniques for solving such problems – all of them try to approach a local optimum (which we know would be a global optimum as well). These notions are discussed in greater detail in a later section of this chapter. The more general problems, the so-called non-convex programs, where the objective function and the feasible subset can be arbitrary, can be very challenging to solve. In particular, discrete optimization problems, where the feasible subset could be a (large) discrete set of points, fall under this category.
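As a small illustration of the linear programming setting described above, the following sketch solves a toy linear program using SciPy's off-the-shelf linprog solver; the particular objective and constraints, and the use of SciPy, are illustrative assumptions rather than material from the text.

    # Hedged sketch: a toy linear program solved with SciPy's linprog.
    # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    # linprog minimizes by convention, so we negate the objective.
    from scipy.optimize import linprog

    c = [-3, -2]                      # coefficients of -(3x + 2y)
    A_ub = [[1, 1], [1, 3]]           # left-hand sides of the inequalities
    b_ub = [4, 6]                     # right-hand sides
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)            # optimal point (4, 0) with value 12

Because the feasible region here is a convex polytope and the objective is linear, the local optimum reached by the solver is guaranteed to be a global optimum, as noted above.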
In this chapter, we first discuss some of the most intuitive approaches for solving such problems. We begin with heuristic search approaches, which try to search for an optimal solution by exploring the feasible subset in some principled manner. Subsequently, we introduce the idea of designing algorithms based on the greedy heuristic.
Heuristic Search Approaches
In heuristic search, we explore the search space in a structured manner. Observe that in general, the size of the feasible set (also called the set of feasible solutions) can be infinite.
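To make the idea concrete, here is a minimal sketch of one principled way of exploring a feasible set: local (hill-climbing) search, which repeatedly moves to a better neighbour until no improvement is possible. The objective function and neighbourhood used below are illustrative assumptions only.

    # A minimal local-search sketch: move to the best neighbour while it improves.
    def hill_climb(objective, neighbours, start, max_steps=10_000):
        current = start
        for _ in range(max_steps):
            best = max(neighbours(current), key=objective, default=current)
            if objective(best) <= objective(current):
                return current          # no improving neighbour: local optimum
            current = best
        return current

    # Example: maximize f(x) = -(x - 7)^2 over the integers, moving by +/- 1.
    f = lambda x: -(x - 7) ** 2
    step = lambda x: [x - 1, x + 1]
    print(hill_climb(f, step, start=0))  # converges to 7

Note that, for a non-convex objective, such a search may stop at a local optimum that is far from the global one, which is why principled exploration strategies matter.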
The problem of searching is fundamental in computer science, and a vast amount of literature is devoted to many fascinating aspects of this problem. From searching for a given key in a pre-processed set to the more recent techniques developed for searching documents, modern civilization forges ahead using Google Search. Discussing the latter techniques is outside the scope of this chapter, so we focus on the more traditional framework. Knuth is one of the most comprehensive sources on the earlier techniques; all textbooks on data structures address common techniques like binary search and balanced tree-based dictionaries like AVL (Adelson-Velsky and Landis) trees, red–black trees, B-trees, etc. We expect the reader to be familiar with such basic methods. Instead, we focus on some of the simpler and lesser known alternatives to the traditional data structures. Many of these rely on innovative use of randomized techniques and are easier to generalize for a variety of applications. They are driven by a somewhat different perspective on the problem of searching, one that gives us a better understanding, including of practical scenarios where the universe is much smaller. The underlying assumption in comparison-based searching is that the universe may be infinite, that is, we could be searching over real numbers. While this is a powerful framework, we miss out on many opportunities to develop faster alternatives based on hashing in a bounded universe. We will address both these frameworks so that the reader can make an informed choice for a specific application.
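As a reminder of the comparison-based baseline against which these alternatives are measured, here is a minimal binary search sketch over a sorted array; the example data is illustrative.

    # Classic comparison-based search: O(log n) probes into a sorted array.
    def binary_search(a, key):
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == key:
                return mid            # found: return its index
            if a[mid] < key:
                lo = mid + 1          # key, if present, lies in the right half
            else:
                hi = mid - 1          # key, if present, lies in the left half
        return -1                     # key is not present

    print(binary_search([2, 3, 5, 7, 11, 13], 7))   # -> 3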
Skip-Lists – A Simple Dictionary
The skip-list is a data structure introduced by Pugh as an alternative to balanced binary search trees for handling dictionary operations on ordered lists. The reader may recall that linked lists are very amenable to modifications in O(1) time, although they do not support fast searches the way binary search trees do. We replace the complex book-keeping information used for maintaining balance conditions in binary trees with random sampling techniques. It has been shown that, given access to random bits, the expected search time in a skip-list of n elements is O(log n), which compares very favorably with balanced binary trees.
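The following is a minimal sketch of a skip-list supporting search and insert, using the usual convention of promoting each node one level with probability 1/2; the constants and class names are illustrative choices, not taken verbatim from Pugh's paper.

    import random

    class _Node:
        __slots__ = ("key", "forward")
        def __init__(self, key, level):
            self.key = key
            self.forward = [None] * level   # forward[i] = next node at level i

    class SkipList:
        MAX_LEVEL = 16   # enough head-room for ~2^16 elements at p = 1/2
        P = 0.5          # probability of promoting a node one level up

        def __init__(self):
            self.head = _Node(None, self.MAX_LEVEL)
            self.level = 1                  # number of levels currently in use

        def _random_level(self):
            # Flip a fair coin until tails; a node reaches level i with
            # probability 2^-(i-1), so expected heights stay logarithmic.
            lvl = 1
            while random.random() < self.P and lvl < self.MAX_LEVEL:
                lvl += 1
            return lvl

        def search(self, key):
            node = self.head
            # Start at the top level and drop down whenever the next key
            # would overshoot; expected O(log n) comparisons overall.
            for i in range(self.level - 1, -1, -1):
                while node.forward[i] is not None and node.forward[i].key < key:
                    node = node.forward[i]
            node = node.forward[0]
            return node is not None and node.key == key

        def insert(self, key):
            update = [self.head] * self.MAX_LEVEL
            node = self.head
            for i in range(self.level - 1, -1, -1):
                while node.forward[i] is not None and node.forward[i].key < key:
                    node = node.forward[i]
                update[i] = node            # last node visited at level i
            lvl = self._random_level()
            self.level = max(self.level, lvl)
            new = _Node(key, lvl)
            for i in range(lvl):            # splice the new node into each level
                new.forward[i] = update[i].forward[i]
                update[i].forward[i] = new

    # Usage: build a small dictionary and query it.
    s = SkipList()
    for x in [5, 1, 9, 4]:
        s.insert(x)
    print(s.search(9), s.search(2))   # -> True False

The randomly chosen node heights play the role that rotations and colour bits play in balanced trees: with high probability, the topmost levels skip over large stretches of the list, so a search touches only O(log n) nodes in expectation.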
This book embodies a distillation of topics that we, as educators, have frequently covered over the past two decades in various postgraduate and undergraduate courses related to Design and Analysis of Algorithms at IIT Delhi. The primary audience was junior-level (third-year) computer science (CS) students and first-semester computer science postgraduate students. This book can also serve as material for a more advanced algorithms course in which the reader is exposed to alternative and more contemporary computational frameworks that are becoming common and more suitable.
A quick glance through the contents will reveal that about half of the topics are covered by many standard textbooks on algorithms, like those by Aho et al., Horowitz et al., and Cormen et al., and more recent ones like those by Kleinberg and Tardos and Dasgupta et al. The first classic textbook in this area, viz., that by Aho et al., introduces the subject with the observation ‘The study of algorithms is at the very heart of computer science’, and this observation has been reinforced over the past five decades of rapid development of computer science as well as of the more applied field of information technology. Because of its foundational nature, many of the early algorithms discovered about five decades ago continue to be included in every textbook written, including this one – for example, algorithms like FFT, quicksort, Dijkstra's shortest paths, etc.
What motivated us to write another book on algorithms are the several important and subtle changes in the understanding of many computational paradigms and in the relative importance of techniques emerging out of some spectacular discoveries and changing technologies. As teachers and mentors, it is our responsibility to inculcate the right focus in the younger generation so that they continue to enjoy this intellectually critical activity and contribute to the enhancement of the field of study. As more and more human activities become computer-assisted, it becomes obligatory to emphasize and reinforce the importance of efficient and faster algorithms, which are the core of any automated process. We are often limited and endangered by the instinctive use of ill-designed and brute-force algorithms, which are often erroneous, leading to fallacious scientific conclusions or incorrect policy decisions.
There is a perpetual need for faster computation, which is unlikely ever to be satisfied. With device technologies hitting physical limits, alternative computational models are being explored. The Big Data phenomenon precedes the coinage of this term by many decades. One of the earliest and most natural directions for speeding up computation was to deploy multiple processors instead of a single processor for running the same program. The ideal objective is to speed up a program p-fold by using p processors simultaneously. A common caveat is that an egg cannot be boiled faster by employing multiple cooks! Analogously, a program cannot be executed indefinitely faster by using more and more processors. This is not just because of physical limitations but also because of dependencies between various fragments of the code, imposed by precedence constraints.
At a lower level, namely in digital hardware design, parallelism is inherent – any circuit can be viewed as a parallel computational model. Signals travel across different paths and components and combine to yield the desired result. In contrast, a program is coded in a very sequential manner, and the data flows are often dependent on each other – just think of a loop that executes in sequence. Moreover, for a given problem, one may have to redesign a sequential algorithm to extract more parallelism. In this chapter, we focus on designing fast parallel algorithms for fundamental problems.
A very important facet of parallel algorithm design is the underlying architecture of the computer, viz., how the processors communicate with each other and access data concurrently. Moreover, is there a common clock against which we can measure the actual running time? Synchronization is an important property that makes parallel algorithm design somewhat more tractable. In more general asynchronous models, there are additional issues like deadlock and even convergence, which are very challenging to analyze.
In this chapter, we will consider synchronous parallel models (sometimes called SIMD) and look at two important models – parallel random access machine (PRAM) and the interconnection network model. The PRAM model is the parallel counterpart of the popular sequential RAM model where p processors can simultaneously access a common memory called shared memory.
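To give a flavor of how PRAM algorithms are described, the following sketch simulates the standard pairwise-summation pattern, in which each synchronous round has a different processor add one pair of cells in shared memory. Both the sequential simulation and the pairwise-sum example are illustrative choices, not constructions from this chapter.

    # Sketch of a synchronous PRAM-style algorithm, simulated sequentially.
    # In each of the O(log n) rounds, every "processor" i adds one pair of
    # cells of the shared array; on a real PRAM these additions would all
    # happen in the same time step.
    def parallel_sum(shared):
        a = list(shared)              # shared memory, one cell per input element
        n = len(a)
        stride = 1
        while stride < n:             # one iteration = one synchronous round
            for i in range(0, n - stride, 2 * stride):
                a[i] += a[i + stride] # done by processor i/(2*stride) in round log(stride)+1
            stride *= 2
        return a[0]                   # the total accumulates in cell 0

    print(parallel_sum([1, 2, 3, 4, 5, 6, 7, 8]))   # -> 36

With n/2 processors, this pattern computes the sum of n numbers in O(log n) parallel time, which is the kind of time–processor trade-off the PRAM model is designed to expose.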