You have the hardware and understand its architecture. You have a large problem to solve. You suspect that a parallel program may be helpful. Where do you begin? Before we can answer that question, an understanding of the software infrastructure is required. In this chapter, we will discuss general organization of parallel programs, that is, typical software architecture. Chapter 5 elaborates this further and discusses how to design solutions to different types of problems.
Question: How are execution engines and data organized into a parallel program?
As we have noted, truly sequential processors hardly exist, but they execute sequential programs perfectly well. Some parts of a sequential program may even be executed in parallel, either directly by the hardware's design or with the help of a parallelizing compiler. On the other hand, relying solely on the hardware and the compiler is likely to yield severely sub-par performance. With only a little more thought, it is often possible to organize a sequential program into multiple components and turn it into a truly parallel program.
Question: What are some common types of parallel programs?
This chapter introduces parallel programming models. Parallel programming models characterize the anatomy or structure of parallel programs. This structure is somewhat more complex than that of a sequential program, and one must understand this structure to develop parallel programs. These programming models will also provide the context for the performance analysis methodology discussed in Chapter 3 as well as the parallel design techniques described in Chapter 5.
We will see in Chapter 7 that many efficient sequential algorithms are not so efficient if trivially parallelized. Many problems instead require specially designed parallel algorithms suitable for the underlying system architecture. These parallel algorithms are often designed directly in terms of these programming models.
A program broadly consists of executable parts and memory where data is held, in addition to input and output. A large parallel program usually performs input and output through a parallel file system. We will discuss parallel file systems in Section 5.4, but in the context of the current discussion they behave much like memory – data of some size can be fetched from an address or written to an address by executable parts.
We are now ready to start implementing parallel programs. This requires us to know:
Question: Where do I begin to program? What building blocks can I program on top of?
• How to create and manage fragments (and tasks).
• How to provide the code for the fragments.
• How to organize, initialize, and access shared memory.
• How to cause tasks to communicate.
• How to synchronize among tasks.
This chapter discusses popular software tools that provide answers to these questions. It offers a broad overview of these tools in order to familiarize the reader with the core concepts employed in tools like these, and their relative strengths. This discussion must be supplemented with detailed documentation and manuals that are available for these tools before one starts to program.
The minimal requirement of a parallel programming platform is that it supports the creation of multiple tasks or threads and allows data communication and synchronization among them. Modern programming languages, such as Java and Python, usually have these facilities, either as part of the language constructs or through standard library functions. We start with OpenMP, which is designed for parallel programming on a single computing system with memory shared across the threads of a processor. It is supported by many C/C++ and Fortran compilers. We will use the C style.
Language-based support for parallel programming is popular, especially for single-node computing systems. Compiling such a program produces a single executable, which can be loaded into a process for execution, just as for a sequential program. The process then generates multiple threads for parallel execution. OpenMP is a compiler-directive-based shared-memory programming model, which allows sequential programmers to quickly graduate to parallel programming. In fact, an OpenMP program stripped of its directives is nothing but a sequential program, and a compiler that does not support the directives can simply ignore them. (For some features, OpenMP provides library functions instead; these are not ignored by the compiler.) Some compilers that support OpenMP pragmas still require a compile-time flag to enable that support.
C and C++ employ #pragma directives to provide instructions to the compiler. OpenMP directives are all prefixed with #pragma omp, followed by the name of the directive and possibly further options for the directive as a sequence of clauses, as shown in Listing 6.1.
This chapter introduces some general principles of parallel algorithm design. We will consider a few case studies to illustrate broad approaches to parallel algorithms. As already discussed in Chapter 5, the underlying goal for these algorithms is to decompose the solution into parcels of relatively independent computation, with occasional interaction. In order to abstract away the details of synchronization, we will assume the parallel RAM (PRAM) or the bulk-synchronous parallel (BSP) model to describe and analyze these algorithms. This is a good time to recall that going from, say, a PRAM algorithm to one that is efficient on a particular architecture requires refinement and careful design for that platform. This is particularly true when “constant time” concurrent read and write operations are assumed. Concurrent reads and writes are particularly inefficient on distributed-memory platforms, and are inefficient on shared-memory platforms as well: they require synchronization of the processors’ views of the shared memory, which can be expensive.
Question: How do parallel algorithms differ from sequential algorithms?
Recall that PRAM models focus mainly on the computational aspect of an algorithm, whereas practical algorithms also require close attention to memory, communication, and synchronization overheads. PRAM algorithms may not always be practical, but they are easier to design than those for more general models. In reality, PRAM algorithms are only the first step toward more practical algorithms, particularly on distributed-memory systems.
Parallel algorithm design often seeks to maximize parallelism and minimize the time complexity. Even if the number of processors actually available is limited, higher parallelism translates to higher scalability in practice. Nonetheless, the work-time scheduling principle (Section 3.5) indicates that low work complexity is paramount for fast execution in practice. In general, if the best sequential complexity for solving the given problem is, say, To(n), we would like the parallel work complexity to be O(To(n)). It is a common algorithm design pattern to assume up to To(n) processors and then try to minimize the time complexity. With maximal parallelism, the target time complexity using To(n) processors is O(1). This is not always achievable, and there is often a trade-off between time and work complexity. We then try to reduce the work complexity to O(To(n)) without significantly increasing the time complexity.
Parallel programming is challenging. There are many parts interacting in a complex manner: algorithm-imposed dependency, scheduling on multiple execution units, synchronization, data communication capacity, network topology, memory bandwidth limit, cache performance in the presence of multiple independent threads accessing memory, program scalability, heterogeneity of hardware, and so on. It is useful to understand each of these aspects separately. We discuss general parallel design principles in this chapter. These ideas largely apply to both shared-memory style and message-passing style programming, as well as task-centric programs.
Question: How to devise the parallel solution to a given problem?
At first cut, there are two approaches to start designing parallel applications:
Question: What is the detailed structure of parallel programs?
1. Given a problem, design and implement a sequential algorithm, and then turn it into a parallel program based on the type of available parallel architecture.
2. Start ab initio. Design a parallel algorithm suitable for the underlying architecture and then implement it.
In either case, performance, correctness, reusability, and maintainability are important goals. We will see that for many problems, starting with a sequential algorithm and then dividing it into independent tasks that can execute in parallel leads to a poor parallel algorithm. Instead, a different algorithm, designed to maximize independent parts, may yield better performance. If a good parallel solution cannot be found (and there do exist inherently sequential problems, for which parallel solutions are not sufficiently faster than sequential ones), the problem may not be worth solving in parallel.
Once a parallel algorithm is designed, it may yet contain parts that are sequential. Further, the parallel parts can also be executed on a sequential machine in an arbitrary sequence. Such “sequentialization” allows the developer to test parts of a parallel program. If a purely sequential version is already available, or can be implemented with only small effort, it can also serve as a starting point for parallel design. The sequential version can be exploited to develop the parallel application incrementally, gradually replacing sequential parts with their parallel versions. The sequential version also provides performance targets for the parallel version and allows debugging by comparing partial results.
Programs need to be correct. Programs also need to be fast. In order to write efficient programs, one surely must know how to evaluate efficiency. One might take recourse to our prior understanding of efficiency in the sequential context and compare observed parallel performance to observed sequential performance. Or, we can define parallel efficiency independent of sequential performance. We may yet draw inspiration from the way efficiency is evaluated in a sequential context. Into that scheme, we would need to incorporate the impact of an increasing number of processors deployed to solve the given problem.
Question: How do you reason about how long an algorithm or program takes?
Efficiency has two metrics. The first is in an abstract setting, for example, the asymptotic analysis of the underlying algorithm. The second is concrete – how well does the algorithm's implementation behave in practice on the available hardware and on data sizes of interest. Both are important.
There is no substitute for measuring the performance of the real implementation on real data. On the other hand, developing and testing iteratively on large parallel systems is prohibitively expensive. Most development occurs on a small scale: using only a few processors, p, on small input of size n. The extrapolation of these tests to a much larger scale is deceptively hard, and we often must resort to simplified models and analysis tools.
Asymptotic analysis on simple models is sometimes criticized because it oversimplifies several complex dynamics (like cache behavior, out-of-order execution on multiple execution engines, instruction dependencies, etc.) and conceals constant multipliers. Nonetheless, with the large input sizes that are common in parallel applications, asymptotic measures do have value. They can be computed somewhat easily, in a standardized setting and without requiring iterations on large supercomputers. And concealing constants is, to some degree, a choice: useful constants can and should be retained. Accordingly, the abstract part of our analysis will employ the big-O notation to describe the number of steps an algorithm takes, as a function of the input size n and the number of processors p.
Asymptotic notation or not, the time t(n, p) to solve a problem in parallel is a function of n and p. For this purpose, we will generally count in p the number of sequential processors – they complete their program instructions in sequence.
This chapter is not designed for a detailed study of computer architecture. Rather, it is a cursory review of concepts that are useful for understanding the performance issues in parallel programs. Readers may well need to refer to a more detailed treatise on architecture to delve deeper into some of the concepts.
There are two distinct facets of parallel architecture: the structure of the processors, that is, the hardware architecture, and the structure of the programs, that is, the software architecture. The hardware architecture has three major components:
Question: What are execution engines and how are instructions executed?
1. Computation engine: it carries out program instructions.
2. Memory system: it provides ways to store values and recall them later.
3. Network: it forms the connections among processors and memory.
An understanding of the organization of each of these components and of their interaction with each other is important for writing efficient parallel programs. This chapter is an introduction to this topic. Some of these hardware architecture details can be hidden from application programs by well-designed programming frameworks and compilers. Nonetheless, a better understanding of them generally leads to more efficient programs. One must similarly understand the components of the program along with the programming environment. In other words, a programmer must ask:
1. How do the multiple processing units operate and interact with each other?
2. How is the program organized so it can start and control all processing units? How is it split into cooperating parts and how do parts merge? How do parts cooperate with other parts (or programs)?
One way to view the organization of hardware as well as software is as graphs (see Sections 1.6 and 2.3). Vertices in these graphs represent processors or program components, and edges represent network connections or program communication. Often, implementation simplicity, higher performance, and cost-effectiveness can be achieved with restrictions on the structure of these graphs. The hardware and software architectures are, in principle, independent of each other. In practice, however, certain software organizations are more suited to certain hardware organizations. We will discuss these graphs and their relationship starting in Section 2.3.
Another way to categorize the hardware organization was proposed by Flynn and is based on the relationship between the instructions different processors execute at a time. This is popularly known as Flynn’s taxonomy.
SISD: Single Instruction, Single Data
A processor executes program instructions, operating on some input to produce some output. An SISD processor is a serial processor.
Lessons in programming often start with a definition of the term algorithm. Webster's dictionary defines algorithm as “a step-by-step procedure for solving a problem.” Not only does this definition lend itself naturally to an imperative programming style, but it often also leads to a focus on sequential programming. However, the truth is that program execution is hardly ever in a step-by-step fashion, even if it may sometimes appear to be so. This nonsequentiality can be due to multiple instructions being in flight simultaneously, that is, the instructions are in various stages of their executions at the same time. This is true even when a program is presented as a linear sequence of instructions, and its correctness depends on their execution in that exact sequence. This is also true when the program is “parallel” instead, that is, the order among instructions is not necessarily specified.
In this book, we focus on this parallel programming, where instructions are neither specified nor expected to be in a single sequence. Further, the execution of these programs is also in a parallel context, where potentially several thousand instructions, or even more, execute at any given time.
Concurrency and Parallelism
Sometimes the terms “concurrent” and “parallel” are informally used interchangeably, but it is important to recognize the distinction. Parallelism may be defined as performing two activities at the same time. These activities may be related in some manner or not. Usually, these activities are not instantaneous: each takes a finite time. Two related activities are said to be concurrent if there is no predetermined order between them – they may or may not overlap in time when they do occur. We will see that in certain situations, concurrency is not desirable, and a relative order is imposed. When such an order is enforced on two activities, they clearly cannot be executed in parallel.
Although our focus in this book is on parallel programming, concurrency must often be managed in a parallel program, and we discuss practical aspects of concurrency as well.
Why Study Parallel Programming
Natural processes are inherently parallel, whether they be molecular and nuclear behavior, weather and geological phenomena, or biological and genetic manifestation. By no means does that imply that their simulation and computation must be parallel.
Interaction between concurrently executing fragments is an essential characteristic of parallel programs and the major source of difference between sequential programming and parallel programming. Synchronization and communication are the two ways in which fragments directly interact, and these are the subjects of this chapter. We begin with a brief review of basic operating system concepts, particularly in the context of parallel and concurrent execution. If you already have a good knowledge of operating systems concepts, browse lightly or skip ahead.
Question: Who controls the executing fragments? How do different executing fragments interact and impact each other’s execution?
Threads and Processes
Computing systems are managed by a program: the operating system. A process is the mechanism operating systems use to start and control the execution of other programs. A process provides one or more ranges of addresses for the executing program to use. Each address has a value (which remains undefined until it is initialized). Each range is mapped to a block of memory (which may reside on one or more attached devices). These blocks of memory are under the management of the operating system. A range of addresses and the data that they map to are collectively called an address space. An address space is divided into fixed-size units called pages. Address spaces and pages provide a logical, or virtual, view of the memory. This view is also called virtual memory. The operating system maintains a mapping between pages and their locations on the device. One advantage of virtual memory is that not all pages need to be resident in the physical memory device: some may be relegated to slower storage (not unlike the cache strategy), while others that remain undefined need not be mapped to any storage at all.
Being an executing program itself, the operating system comprises a set of processes, which start and schedule other processes. For example, an application starts with some running process launching a new process to execute that application's code. These processes may execute concurrently, sharing the available hardware by turns. An executing process may be forced to yield to a waiting process via the mechanism of hardware interrupts.
In modern computer science, there exists no truly sequential computing system; and most advanced programming is parallel programming. This is particularly evident in modern application domains like scientific computation, data science, machine intelligence, etc. This lucid introductory textbook will be invaluable to students of computer science and technology, acting as a self-contained primer to parallel programming. It takes the reader from introduction to expertise, addressing a broad gamut of issues. It covers different parallel programming styles, describes parallel architecture, includes parallel programming frameworks and techniques, presents algorithmic and analysis techniques and discusses parallel design and performance issues. With its broad coverage, the book can be useful in a wide range of courses; and can also prove useful as a ready reckoner for professionals in the field.