The current utilization of immunohistochemistry (IHC) in a diagnostic context is discussed. Modern facility requirements, the various roles IHC is tasked with, and the key concept of standardization are covered. Common terminologies are addressed and explained within an IHC context; the terms 'validation' and 'verification' provide one example of words that may cause confusion. The present status of protocol set-up, antibody clones and epitope retrieval is outlined to emphasize current best practice. IHC's special relationship with emerging molecular technologies is examined, along with how these two analytical approaches are shaping diagnoses and treatment strategies for patients. Specific examples are taken from melanoma, breast, lung and bowel cancers. The reader should be able to ascertain the role of IHC in today's pathology laboratories.
Diamesic variation, a concept coined by variationist researchers in Italy, helps to explain the impact of media on linguistic structures and their evolution. By distinguishing medial, sociolinguistic, and functional aspects of diamesic variation, it becomes clear that media not only affect the (phonic and graphic) materiality of signifiers but also interact strongly with diasystematic variation along regional, social, and situational parameters and with variability of verbalization strategies between the poles of communicative immediacy and communicative distance. The latter, functional aspect of diamesic variation is anthropologically rooted and forms a universal, omnipresent background in the history of the Romance languages. Several paradigmatic analyses of Latin and Romance historical data show how communicative variation (for example, maximal familiarity between partners vs formal, public encounters; a high degree of spontaneity vs a high degree of planning; intensive involvement in a situation vs decontextualized speaking and writing) determines standardization processes of elaboration, centralization, and codification, as well as linguistic change ‘from below’ or ‘from above’.
Until relatively recently, knowledge of the history of Romance languages was based on written sources. Writing traditions are usually conservative and rarely reflect more informal and sociolinguistically lower registers. Nonetheless, one must acknowledge the important function of written documentation in the understanding of a complex, multi-faceted, but partly inaccessible linguistic reality: a careful and circumspect use of written sources remains the main path for a critical interpretation of the linguistic facts of the past, together with historical-comparative reconstructions. From the first century BCE, there was an increasing diaphasic differentiation in the Latin-speaking world between a formal register and an informal one, so-called ‘vulgar (or Late) Latin’. Deviations from the norm often expose the linguistic structures of the emerging Romance languages, the earliest attestations of which date back to the ninth–tenth centuries and show a clear awareness of the difference between Latin and Romance. From the twelfth–thirteenth centuries, some areas began to codify certain scriptae which, despite their importance, present several linguistic problems. In the second half of the nineteenth century, dialectological studies acquired an important role, leading to dialectometry and scriptology, the latter at the crossroads of geolinguistics and corpus linguistics.
In addition to time and place, which are inseparable from sociolinguistic variation, language may vary according to age, social class, sex or (social) gender, ethnicity, medium, style, and register. Contact between speakers often leads to change, and different patterns result according to whether this contact involves first-language (L1) or second language (L2) acquisition. Thus, ‘family tree’ aspects of language change are largely accounted for by transmission (involving L1 acquisition), whilst ‘wave model’ changes can be explained in terms of diffusion (involving L2 acquisition). Languages with a high degree of L2 contact will tend to simplify, whilst stable bilingualism or isolation will often lead to complexification. Contact may be interlinguistic or intralinguistic, sometimes resulting in complex linguistic repertoires, with up to four different levels existing simultaneously (national standard, regional standard, interdialectal koiné, local dialect). Contact may also result in code-switching, the emergence of contact vernaculars, and ‘language death’. The receptiveness of a variety to contact influence depends on the extent to which its social networks are open or closed and on the social attitudes of its speakers. Standard languages emerge through a variety of conscious and unconscious processes, and attempts may be made to give non-standard speech varieties a distinct linguistic identity through codification and the creation of literature.
The information-structural categories of focus and topic are examined with respect to the constructions in which they can feature, including fronting, dislocation, subject inversion, and presentational sentences. The role and effects of illocutionary force distinctions (e.g., declarative, interrogative, exclamative) and different predication types (thetic vs non-thetic predications) are also taken into consideration. Despite the many similarities, (micro)variation in this area proves quite considerable. In relation to Romance comparative data, this chapter shows that this variation can be accounted for by pragmatic-discourse factors and structural licensing principles that are indeed related to information structure.
Many parameters are associated with IHC testing assays. With so many variables, errors can easily accumulate within the system. To make things more manageable, these considerations are categorized into three main groups: pre-analytic aspects occur before the assay, analytic factors concern the staining protocol, and post-analytic elements relate to the interpretation of results. Any one variable can affect the reliability and consistency of the overall IHC assay. In this regard, standardization requirements have been developed to help laboratories achieve optimal results. In addition, proficiency-testing regimens and oversight organizations are in place to ensure that high standards are attained. All these endeavours are known as quality assurance and quality control measures, arranged under the overall umbrella of a facility’s quality management system.
Institutions play an important role in the management of multilingualism and can have a defining impact on language use. By granting more or less official status to certain forms of expression and language varieties, institutions legitimize some forms and varieties as more desirable targets of linguistic accommodation than others, which can affect speakers’ dominant language environments and influence the selection process of language change. This chapter outlines a socio-political approach to language standardization and interprets selected language policy and planning measures in terms of common mechanisms and outcomes of language contact in three western European states: France, Spain, and the United Kingdom. Using the same historical timeline, it proposes a comparison of the circumstances under which national identities emerged in early modern and modern times, state boundaries were expanded through conquest, and more or less cultural homogeneity was achieved and enforced through language use. It is argued that even though ethnolinguistic diversity decreased considerably over time, different institutional responses to multilingualism led to different state-specific compromises that continue to shape language policies and planning in each of the three states today.
In this study, we analyze extensive segmented and standardized agricultural fields in the marginally productive terrain of the Pampa de Guereque in the Jequetepeque Valley on the north coast of Peru. Although portions of the associated canal system were constructed continuously from late Formative to Chimú times, the segmented fields date to the late Chimú–Inka period and were only partially finished, apparently never fully used, and ultimately abandoned. We provide description of field plots and irrigation canals and discuss the implications of state-level construction and labor management of the fields, as well as the probable reasons for their abandonment.
This article looks at the implementation of food standards of identity by the U.S. Food and Drug Administration from the 1930s to the 1960s, a period in the FDA’s history wedged between the “era of adulteration” of the early twentieth century and the agency’s turn to “informational regulation” starting in the 1970s. The article describes the origin of food standards in the early twentieth century and outlines the political economy of government-mandated food standards in the 1930s. While consumer advocates believed government standards would be important to consumer empowerment because they would simplify choices at the grocery store, many in the food industry believed government standards would clash with private brands. The FDA faced challenges in defining what were “customary” standards for foods in an increasingly industrial food economy, and new diet-food marketing campaigns in the 1950s and 1960s ultimately led to the food standards system’s undoing. The article concludes by looking at how FDA food standards came to be framed cynically, even though voluntary food standardization continued and the system of informative labeling that replaced FDA standards led to precisely the problem government standards were intended to solve.
This article addresses the question of how standards were determined and disseminated in an era before the formation of agreed-upon standards or the existence of governing bodies, by examining the case of nineteenth-century Brazil. It argues that the experience in Brazil was similar to that of other nations: individuals engaged in mathematical, scientific, engineering, and statistical organizations created networks of professional societies, intertwined with international diplomacy and domestic legislators, to promote the adoption of the metric system. It analyzes the process from idea to advocacy, culminating in national implementation on the eve of the 1875 International Convention of the Meter, to which Brazil was a signatory.
For human dental cementum research, sample preparation protocols are now widely tested, validated, and standardized, thanks to the low variability in tooth morphology. For non-human mammals, posterior teeth are typically preferred. However, taxonomic diversity implies significant variation in morphology, with specific characteristics for certain species (equids, suids), leading to multiple unstandardized protocols. This work aims to improve thin-section production protocols by optimizing the parameters, minimizing the risk of errors, and offering easily reproducible thin-section quality. Drawing on the results of 26 experiments and 124 analyses across the stages of consolidation (embedding), cutting, gluing, and finishing (grinding), and on the co-authors’ combined experience from multiple laboratories, we propose standardized protocols for humans and ungulates (large teeth) for the systematic analysis of dental research collections.
This paper argues that the Dutch sociolinguistic situation in the 17th and 18th centuries should be analyzed as diaglossic, that is, involving a wide spectrum of variation in between localized spoken dialects and the supposed written standard. In fact, multiple instances of norm selection for writing render this diaglossic situation even more complex. The paper shows that multiple norm selection even occurred in cases when a strict and simple norm was selected early on, that is, in the late 16th–early 17th century. The case study is based on the Letters as Loot Corpus comprising private letters from the 1660s–1670s and the 1770s–1780s and focuses on the object form of the 1st person singular personal pronoun, namely, mij or mijn. Despite the early selection of mij, some language users in the late 17th and 18th century adopted mijn in writing. The analysis shows a normative split in written Dutch of the time, with most language users either converging to or diverging from the supposed standard form mij.
Corruption is a current and complex problem with significant effects on trade. For example, at the time of writing, the US Justice Department was intervening in a case against a large pharmaceutical company. It was alleged that the company was responsible for a scheme of drug price increases in the US, as it “ … bribed doctors and their staffs to increase sales.” The price of the drug, addressing infant seizure disorder, had increased 97,000% since the year 2000. Also, many of the affected sales were driven by Medicare reimbursements. This case suggests that corruption may:
Corruption is also a concern from a global perspective. The UN Secretary-General António Guterres, when addressing the UN Security Council on the issue of corruption in post-war territories, stated that “Corruption robs schools, hospitals and others of vitally needed funds,” with negative effects on people’s rights, foreign investment, and the economy. According to the World Economic Forum, the cost of corruption is at least $2.6 trillion, or five percent of global gross domestic product (GDP). According to the World Bank, businesses and individuals pay more than $1 trillion in bribes every year. Finally, the Organisation for Economic Co-operation and Development (OECD), through its Clean Government Initiative, has identified at least four negative effects of corruption, namely:
The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) builds upon international instruments on corruption. It addresses them differently, by adhering to their principles, encouraging their observance, or mandating their ratification or accession. The list of instruments includes (Art. 26.6)
In today’s broader context of resistance to the expansion of international trade, particularly in the United States (with its rejection of the Trans-Pacific Partnership (TPP)) and the United Kingdom (with its rejection of the European Union), the regulatory coherence chapter of the TPP is highly relevant. Its cross-cutting sectoral approach and use of industry-specific annexes to reduce technical or regulatory barriers to trade are likely to serve as the model for future multilateral trade treaties. Moreover, even though the TPP was rejected by the Trump administration, it later re-emerged as the CPTPP. The United States may still participate in some form, and the United Kingdom recently expressed an interest in becoming a party. Ambitious trade deals like the TPP sometimes take decades to finalize. In any case, both the substantive provisions and the architectural structure of a highly negotiated free trade agreement (FTA) like the TPP are quite enduring, so the TPP’s regulatory coherence mechanics are likely to re-surface in future FTAs. In other words, the current form of the TPP will serve as a model for any future TPP or similar mega-regional FTA.
While some heritage languages enjoy large numbers of speakers and vibrant communities, centuries-old and ongoing sociohistorical and sociolinguistic oppression has resulted in the extreme endangerment of many Indigenous languages. To counter this linguistic and cultural loss, a growing number of communities have engaged in language revitalization efforts that are tied to broader objectives of ethnic reclamation and cultural resistance, aiming not only to maintain but also to strengthen what has been lost. Heritage language revitalization is a long-term project that demands change and engagement across many aspects of community life, work that is ripe with tensions and contradictions. This chapter considers three recurrent questions in heritage language revitalization: what efforts should be prioritized in language revitalization, who should take responsibility in revitalizing a language, and how should revitalization efforts navigate the perceived need to establish linguistic norms and standards while concomitantly supporting linguistic diversity. To date, these questions have been described as tensions or problems that reveal conflicting priorities, often the result of historical inequalities, and that frequently hinder language revitalization efforts. Rather than framing these questions as problems, the present chapter considers how communities have responded to these challenges to create new opportunities for collaboration and new approaches that embrace ambiguity and pluralism.
The primary benefit of piezo-ICSI lies in the standardization and simplification of the ICSI procedure, resulting in a considerably shorter learning curve compared to the conventional technique. The success of the procedure becomes independent of the individual embryologist, providing the clinic with more robust and predictable laboratory output. Piezo may be the first step towards automating the ICSI procedure. Piezo-ICSI also shows promise as the method of choice for more sensitive or fragile oocytes and in older age groups: in patients older than 35 years, evidence suggests further benefits, with a significantly decreased oocyte degeneration rate and an increased blastocyst rate. Clinical experience highlights the spread of the technique in human IVF; however, the operating liquid currently used in the injecting microcapillary remains an obstacle to human registration.
Digitalization in the legal domain is a striking example of the way information technology (IT) can displace or enrich typically human tasks. Fueled by recent progress in artificial intelligence (AI) (big data, machine learning, natural language processing, etc.), this phenomenon of digitalization affects more and more legal tasks and functions. Effective examples of digitalization in the legal domain are very diverse, ranging from the exploration of patent classifications to the prediction of legal cases’ outcomes (e.g., anticipation of foreseeable damages from an action). One can also mention e-discovery, as well as the digitalization of the organization and review of legal documents.
In this chapter, we seek to understand key economic consequences of network effects. First, in Section 3.1, we analyze the impacts that network effects have on the demand for participation on a platform. The main lesson we draw is that the interdependence between individual demands leads to unconventional aggregate demands; in particular, we show that a given price for accessing the platform may be compatible with several levels of participation. Next, in Section 3.2, we explore the pricing of access to a platform, which is made complex by the presence of network effects. Finally, in Section 3.3, we discuss other strategic decisions that platforms need to combine with pricing to manage network effects; in particular, a platform has to decide the extent to which its services are compatible with alternative services.
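The claim in Section 3.1 that a single access price can be compatible with several participation levels can be illustrated with a standard textbook sketch (this is not the chapter's own model; the Rohlfs-style setup with a uniform taste distribution is an assumption chosen for simplicity). Users with network taste theta drawn uniformly from [0,1] join only if their network benefit theta*n covers the price p, which makes expected participation self-fulfilling at several levels:

```python
import math

def participation_levels(p):
    """Self-fulfilling participation levels in a simple Rohlfs-style model.

    A unit mass of users has network taste theta ~ U[0,1]; a user joins
    iff theta * n >= p, where n is expected participation. Fulfilled
    expectations require n = 1 - p/n, i.e. n^2 - n + p = 0, or n = 0.
    """
    levels = [0.0]  # "nobody joins" is self-fulfilling whenever p > 0
    disc = 1.0 - 4.0 * p  # discriminant of n^2 - n + p = 0
    if 0 < p and disc >= 0:
        lo = (1.0 - math.sqrt(disc)) / 2.0
        hi = (1.0 + math.sqrt(disc)) / 2.0
        levels += [lo, hi]  # two positive fulfilled-expectations levels
    return sorted(levels)

# One access price, three compatible participation levels
print(participation_levels(0.2))
```

The multiplicity arises because willingness to pay rises with expected participation: the low positive level is a fragile "critical mass" point, while the high level is the stable outcome a platform hopes to coordinate on.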