This chapter discusses some principal themes of the Cambridge Companion to Music in Digital Culture, emphasising the social and cultural dimensions of digital music. A historical introduction ranges from the embedding of digital technology in everyday life to the emergence of virtual realities, from digital-only genres like vaporwave to Second Life and Hatsune Miku, the virtual diva whose holographic performances are seen as emblematic of posthumanism. I sketch out an aesthetics of digital culture that emphasises continuities across its expressions, from digital multimedia and internet memes to playfulness on Reddit. Attention is also given to the real-world dimensions of digital culture, including the transition from downloads to streaming, internet-based participation, and so-called Web 2.0 businesses. The digital revolution has brought about a radical restructuring of the music industry, culminating in a bizarre situation whereby music is economically underpinned by the collection of commercially valuable personal data on listeners.
The introduction begins with Ghana’s president Kwame Nkrumah (1957-1966) laying the foundation stone of the first reactor building at the new Ghana Atomic Energy Commission in Kwabenya. He promoted “scientific equity” and access to science for all citizens. The nuclear energy project, headed by the engineering professor R.P. Baffour, topped Nkrumah’s plans for scientific development. Nkrumah sent Baffour to the Soviet Union to negotiate with Prime Minister Khrushchev to see what resources Ghana might provide in exchange for technical assistance and a reactor. Nkrumah and his closest advisors asserted a new African vision for nuclear power, predicated on the idea that all countries had citizens with equal intellectual capabilities. As Nkrumah put it, “no country has monopoly of ability.” Ghana was among several independent African nations interested in nuclear energy and the peaceful uses of the atom, including Tanzania, Libya, and Nigeria. Tanzania’s Julius Nyerere and Kenya’s Ali Mazrui stressed that Africans would be more capable of managing nuclear energy than Europeans. The introduction interrogates this assertion through a discussion of scientific equity, manpower and human capacity, and urban dynamics at Atomic Junction. It locates Ghana’s story within scholarship on the rise of nuclear power elsewhere, especially in India and South Africa.
This chapter shows how Ghanaian scientists tapped resources from different countries in their quest for a nuclear reactor, from the United Kingdom and the Soviet Union, to Canada, the United Arab Republic, India, and China. While many people living near Ghana’s Atomic Energy Commission believe the site has hosted a reactor since around 1966, it actually took several more decades to install the first low-power research reactor, the GHARR-1, at Kwabenya. Nkrumah’s bid to obtain a reactor provoked the wrath of France, the United States, and the United Kingdom, all of whom backed the coup d’état against him. The chapter then considers how subsequent governments tried to rekindle the Soviet nuclear reactor initiative and explored other possible reactors from Western powers, but finally settled on a Chinese offer. The chapter relies on a variety of sources including GAEC records, British spy reports, and correspondence between Nkrumah and Khrushchev of the Soviet Union. It provides critical insights, including the difficulties the Ghanaian government had in maintaining payments to the Soviets, problems with the storage of the unfinished reactor components while post-Nkrumah regimes mothballed the nuclear program, and several subsequent contracts for reactors that fell apart due to political instability.
The literature on party change has shown how the advent of the digital revolution and the diffusion of new Information and Communication Technologies (ICTs) in the democracies of the 21st century have influenced the way political parties communicate and perform their functions. Less investigated, however, is the organizational reaction of political parties to the challenges posed by the transformation of the communications environment. The aim of this paper is to scrutinize whether parties evince a transformative tendency towards virtual models in which new digital ICTs are used as ‘functional equivalents’ of the old organizational infrastructures. To this end, the paper focuses on Spanish democracy – a paradigmatic case of the political transformations that European democracies have undergone since the 2008 economic crisis – comparing the organizational models of the main political parties: the Partido Socialista Obrero Español, the Partido Popular, Podemos and Ciudadanos. In particular, the analysis – based on party documents – focuses on whether and how digital tools are used by the Spanish parties in three dimensions: the participants in the organization, the organizational configuration and the decision-making process. The main conclusions are: new challenger parties make a more intense and radical use of new ICTs, introducing ‘disrupting innovations’ in their organization, while old and mainstream parties gradually adapt their organization to the new digital environment by introducing ‘sustaining innovations’; and parties on the left make greater use of ICTs to foster internal democracy than their counterparts on the centre-right.
Economic pressures continue to mount on modern-day livestock farmers, forcing them to increase herd sizes in order to be commercially viable. The natural consequence of this is to drive the farmer and the animal further apart. However, closer attention to the animal not only positively impacts animal welfare and health but can also increase the capacity of the farmer to achieve more sustainable production. State-of-the-art precision livestock farming (PLF) technology is one such means of bringing the animals closer to the farmer in the face of expanding systems. Contrary to some current opinions, it can offer an alternative philosophy to ‘farming by numbers’. This review addresses the key technology-oriented approaches to monitoring animals and demonstrates how image and sound analyses can be used to build ‘digital representations’ of animals, giving an overview of some of the core concepts of PLF tool development and value discovery during PLF implementation. The key to developing such a representation is measuring important behaviours and events in the livestock buildings. The application of image and sound can realise more advanced applications and has enormous potential in the industry. In the end, the importance lies in the accuracy of the developed PLF applications in the commercial farming system, as this will also make the farmer embrace the technological development and ensure progress within the PLF field in favour of the livestock animals and their well-being.
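As a hedged illustration of the kind of measurement such a ‘digital representation’ is built from, the sketch below counts discrete sound events (for example, coughs or calls) in an amplitude envelope by detecting upward threshold crossings. The function, threshold, and signal are invented for illustration; real PLF systems use calibrated audio features and validated classifiers.

```python
# Minimal sketch: count discrete sound events in an amplitude envelope
# by detecting rising threshold crossings. Illustrative only; not a
# description of any specific PLF system.

def count_events(envelope, threshold):
    """Count contiguous runs of samples at or above the threshold."""
    events = 0
    above = False
    for sample in envelope:
        if sample >= threshold and not above:
            events += 1          # rising edge: a new event starts
        above = sample >= threshold
    return events

# Synthetic amplitude envelope with three bursts above a 0.5 threshold.
envelope = [0.1, 0.2, 0.8, 0.9, 0.1, 0.7, 0.1, 0.1, 0.6, 0.6, 0.2]
print(count_events(envelope, 0.5))  # → 3
```

Event counts like this are the raw material from which higher-level behavioural measures can then be derived.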
Kernza® intermediate wheatgrass (Thinopyrum intermedium) is a novel perennial grain and forage crop with the potential to provide multiple ecosystem services, which recently became commercially available to farmers in the USA. The viability and further expansion of this promising crop require understanding how it may fit the needs of farmers’ livelihoods and the structure of their farming systems. However, no prior research has studied the perceptions and experiences of Kernza growers. The goals of this research were to understand why farmers grow Kernza, how Kernza fits into their systems, and to identify challenges for future research. We conducted in-depth interviews with ten growers in the North Central USA during the summer of 2017, who accounted for a third of the Kernza farmers in the USA at the time. All farmers had a positive attitude toward experimentation and trying new practices, and they were interested in Kernza for its simultaneous ecological and economic benefits. Kernza was marginal in terms of area, quality of fields and resources allocated in the farm systems, which also meant that farmers maintained low costs and risks. Growers utilized and valued Kernza as a dual-use crop (grain and forage), sometimes not harvesting grain but almost always grazing or harvesting hay and straw for bedding. Weeds were perceived as a challenge in some cases, but Kernza was valued as a highly weed-suppressive crop in others. Farmers requested information on optimal establishment practices, assessment of forage nutritive value, how to maintain grain yields over years, weed management, markets and economic assessment of Kernza systems. These results agree with other cases of the adoption of sustainable practices in showing that engaging farmers in the research process from the beginning, identifying knowledge gaps and testing management alternatives are critical for the success and expansion of novel agricultural technologies.
From 1946 to 1975, Aristotle Onassis consolidated his position as a charismatic shipping tycoon and international businessman. During this period, he pioneered what would become the basic structure of the global shipping group. Between 1946 and 1975, he purchased 140 vessels of about 3.7 million grt and received more than two hundred and fifty million dollars of finance (in current prices) for these purchases from American banks. This chapter analyzes how he built his shipping business empire, examining his sale and purchase (S&P) methods and showing how he constructed his fleet by gathering capital resources, how he managed his resources and how he exploited the choices given. Onassis confronted both opportunity and crisis as he built his fleet from Liberty ships and tankers, despite the lack of any support from the Greek shipping milieu, to whom he channelled his anger through a long memorandum called the ‘Onassiad’. He expanded his business empire through various conflicts. His most notorious venture in oil was an agreement with Saudi Arabia that brought him into direct confrontation with the oil majors, the US government and the European and American shipping world.
The government is unique among speakers because of its coercive power as sovereign, its considerable resources, its privileged access to key information, and its wide variety of speaking roles as policymaker, commander-in-chief, employer, educator, health care provider, property owner, and more. The government’s expressive choices—and by this I mean the government’s choices about whether and when to speak, what to say, how to say it, and to whom—are neither inevitably good nor evil, but instead vary widely in their effects as well as their motives. Through its speech, the government informs, challenges, teaches, and inspires. Through its speech, the government also threatens, deceives, distracts, and vilifies. This chapter shines a light on the government’s speech in its many manifestations, with its vast array of audiences, topics, means, motives, and consequences. The more we recognize the volume and variety of the government’s speech in our lives, the more thoughtfully we can puzzle over its constitutional implications. After identifying some of the motivations for and consequences of those choices, this chapter introduces available options for constructively influencing them.
Advanced 2-D and 3-D computer visualizations are increasingly being used for recording and documentation, analysis, dissemination, and public engagement purposes. Recent technological advances not only considerably improve data acquisition, processing, and analysis but also enable easy and efficient online presentation. This article evaluates the contributions of advanced 2-D and 3-D computer visualization and discusses the potential of 3-D modeling for recording basketry technology and documenting the state of preservation of baskets. It explores the available analysis, integration, and online dissemination tools, using as case studies recently excavated baskets from Cache Cave in southern California. Results indicate that the proposed methodology—combining reflectance transformation imaging visualizations with photogrammetric 3-D models, which are further processed using 3-D modeling software and integrated analysis tools and then transformed to a Web-based format—is a useful addition to the basketry analysis toolkit.
We sought to assess the presence and reporting quality of peer-reviewed literature concerning the accuracy, precision, and reliability of home monitoring technologies for vital signs and glucose determinations in older adult populations.
A narrative literature review was undertaken searching the databases Medline, Embase, and Compendex. Peer-reviewed publications with keywords related to vital signs, monitoring devices and technologies, independent living, and older adults were searched. Publications between the years 2012 and 2018 were included. Two reviewers independently conducted title and abstract screening, and four reviewers independently undertook full-text screening and data extraction with all disagreements resolved through discussion and consensus.
Two hundred nine articles were included. Our review showed limited assessment and low-quality reporting of evidence concerning the accuracy, precision, and reliability of home monitoring technologies. Of 209 articles describing a relevant device, only 45 percent (n = 95) provided a citation or some evidence to support their validation claim. Of forty-eight articles that described the use of a comparator device, 65 percent (n = 31) used low-quality statistical methods, 23 percent (n = 11) used moderate-quality statistical methods, and only 12 percent (n = 6) used high-quality statistical methods.
Our review found that current validity claims were based on low-quality assessments that do not provide the necessary confidence needed by clinicians for medical decision-making purposes. This narrative review highlights the need for standardized health technology reporting to increase health practitioner confidence in these devices, support the appropriate adoption of such devices within the healthcare system, and improve health outcomes.
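To illustrate the kind of high-quality statistical method the review looks for when a home device is compared with a reference device, the sketch below computes Bland–Altman mean bias and 95% limits of agreement. The paired readings are invented for illustration; this is a minimal sketch, not a full validation protocol.

```python
# Minimal sketch of a Bland–Altman agreement analysis: mean bias and
# 95% limits of agreement between a home device and a reference device.
# The paired readings below are hypothetical.
import statistics

def bland_altman(device, reference):
    """Return (bias, lower_loa, upper_loa) for paired measurements."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)              # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired systolic blood pressure readings (mmHg).
device    = [120, 134, 118, 141, 128]
reference = [118, 130, 121, 138, 127]
bias, lo, hi = bland_altman(device, reference)
print(round(bias, 1))  # mean bias in mmHg → 1.4
```

Reporting the bias together with its limits of agreement, rather than a correlation coefficient alone, is what distinguishes the higher-quality analyses the review describes.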
The aim of this research was to look at the emergence of wearable technology and the internet of things (IoT) and their current and potential use in the health and care area. There is a wide and ever-expanding range of wearables, devices, apps, data aggregators and platforms allowing the measurement, tracking and aggregation of a multitude of health and lifestyle measures, information and behaviours. The use and application of such technology and the corresponding richness of data that it can provide bring the health and care insurance market both potential opportunities and challenges. Insurers across a range of fields are already engaging with this type of technology in their proposition designs in areas such as customer engagement, marketing and underwriting. However, it seems we are just at the start of the journey, on a learning curve to find the optimal practical applications of such technology, with many aspects as yet untried, untested or indeed not backed up with quantifiable evidence. It is clear, though, that technology is only part of the solution; on its own it will not engage or change behaviours, and insurers will need to consider this in terms of implementation and goals. In the first weeks of forming this working party, it became evident that the potential scope of this technology, the information already out there and the pace of its development are almost overwhelming. With many yet-unanswered questions, the paper focuses on pulling together in one place relevant information for the consideration of the health and care actuary, and also on opening the reader’s eyes to potential future innovations by drawing on the use of the technology in other markets and spheres, and the “science fiction–like” new technology that is just around the corner. The paper explores:
an overview of wearables and IoT and available measures,
examples of how this technology is currently being used,
risks and challenges,
future technology developments and
what this may mean for the future of insurance.
Insurers who engage now are likely to be on an evolving business case model and product development journey, over which they can build up their understanding and interpretation of the data that this technology can provide. An exciting area full of potential – when and how will you get involved?
This chapter explains what emoji are, the significance of their name and how they operate as a form of writing system - but a very different one from other established writing systems. It explains the distinction between standardised and non-standardised emoji (i.e. stickers) and how the standardised set works both from a technological and a semiotic perspective. In doing so it introduces the concepts of pictographic, ideographic and logographic writing.
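The technological side of how the standardised set works can be seen directly: each standardised emoji is one or more Unicode code points, and many composite emoji are zero-width-joiner (ZWJ) sequences of several points rendered as a single glyph. A minimal sketch:

```python
# Minimal sketch: standardised emoji are Unicode code points, and many
# composite emoji are zero-width-joiner (ZWJ) sequences.
# "Woman technologist" = WOMAN + ZWJ + PERSONAL COMPUTER.
woman_technologist = "\U0001F469\u200D\U0001F4BB"

# Each constituent code point, in U+XXXX notation.
print([f"U+{ord(ch):04X}" for ch in woman_technologist])
# → ['U+1F469', 'U+200D', 'U+1F4BB']

# Three code points, but platforms render them as one glyph.
print(len(woman_technologist))  # → 3
```

This is why the same emoji sequence can look different across platforms: the standard fixes the code points, while rendering is left to each vendor's font.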
Accounts of how modernity affected the arts frequently draw connections with the aesthetics of modernism. The concept of the modern was broader than this, however, and included all new developments that had produced marked effects on social and cultural life. In the early twentieth century, operetta frequently engaged with everyday life, and related to features of modernity such as trains, cars, planes, factories, cinemas and hotels. Familiar objects of the modern age that members of the audience might either possess or desire feature often on stage. Women’s issues surface in operetta, whether they involve marriage, voting rights, treatment at work, careers, or questions of sexuality. The chapter also includes an account of developments in stage technology and modern mobilities.
It is clear that the productions in the West End and on Broadway of The Merry Widow marked a distinctive new phase in operetta reception. The massive success of The Merry Widow opened up a flourishing market for operettas from Vienna and Berlin. This was confirmed by the huge success of Straus’s The Chocolate Soldier in New York (1909) and London (1910). The Berlin operettas of Jean Gilbert were soon in demand in the West End and on Broadway. Continental European operetta entered a marketplace dominated by musical comedy. The first major blow to the operetta market, especially in the UK, was the outbreak of the First World War. After the war, many creators of operetta were eager to escape to the comfort of historical romances. In the first three decades of the twentieth century, many people were prepared to pay for operetta, and an assortment of theatres and ticket prices enabled a broad social mixture to do so. In addition to critical-aesthetic reception, theatrical productions were open to moral concerns. The chapter ends with reflections on the reasons for the decline in productions on Broadway and in the West End post-1933.
Little is known about the implications of accessing an outdoor range for broiler chicken welfare, particularly in relation to the distance ranged from the shed. Therefore, we monitored individual ranging behaviour of commercial free-range broiler chickens and identified relationships with welfare indicators. The individual ranging behaviour of 305 mixed-sex Ross 308 broiler chickens was tracked on a commercial farm from the second day of range access to slaughter age (from 16 to 42 days of age) by radio frequency identification (RFID) technology. The RFID antennas were placed at pop-holes and on the range at 2.7 and 11.2 m from the home shed to determine the total number of range visits and the distance ranged from the shed. Chickens were categorised into close-ranging (CR) or distant-ranging (DR) categories based on the frequency of visits less than or greater than 2.7 m from the home shed, respectively. Half of the tracked chickens (n=153) were weighed at 7 days of age, and from 14 days of age their body weight, foot pad dermatitis (FPD), hock burn (HB) and gait scores were assessed weekly. The remaining tracked chickens (n=152) were assessed for fear and stress responses before (12 days of age) and after range access was provided (45 days of age) by quantifying their plasma corticosterone response to capture and 12 min confinement in a transport crate, followed by behavioural fear responses to a tonic immobility (TI) test. Distant-ranging chickens could be predicted based on lighter body weight at 7 and 14 days of age (P=0.05), that is, before range access was first provided. After range access was provided, DR chickens weighed less every week (P=0.001), had better gait scores (P=0.01) and reduced corticosterone response to handling and confinement (P<0.05) compared to CR chickens.
Longer and more frequent range visits were correlated with the number of visits further from the shed (P<0.01); hence distant ranging was correlated with the amount of range access, and consequently the relationships between ranging frequency, duration and distance were strong. These relationships indicate that longer, more frequent and greater ranging from the home shed was associated with improved welfare. Further research is required to identify whether these relationships between ranging behaviour and welfare are causal.
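The CR/DR categorisation described above can be sketched from an RFID visit log. The log format, bird IDs, and the simple majority rule below are illustrative assumptions, not the study's exact procedure; the 2.7 m boundary is taken from the abstract.

```python
# Minimal sketch: classify birds as close-ranging ('CR') or
# distant-ranging ('DR') from RFID visit records, using a majority of
# visits beyond the 2.7 m antenna. Log format and rule are illustrative.

def categorise(visits, boundary_m=2.7):
    """visits: list of (bird_id, max_distance_m) -> {bird_id: 'CR'|'DR'}."""
    tallies = {}
    for bird, dist in visits:
        near, far = tallies.get(bird, (0, 0))
        if dist > boundary_m:
            tallies[bird] = (near, far + 1)   # visit beyond the boundary
        else:
            tallies[bird] = (near + 1, far)   # visit at or inside it
    return {b: ("DR" if far > near else "CR")
            for b, (near, far) in tallies.items()}

visits = [("hen1", 2.7), ("hen1", 11.2), ("hen1", 11.2),
          ("hen2", 2.7), ("hen2", 2.7), ("hen2", 11.2)]
print(categorise(visits))  # → {'hen1': 'DR', 'hen2': 'CR'}
```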
Technology use provides great benefits by extending human ability, but the negative effects cannot always be ignored. The author examined the dilemmas of technology use based on Shibata's analysis of McLuhan's extension theory and identified two types of dilemmas in continuous technology use. The first is decreased human ability, as innate functionality is substituted by the technology. The second is excessive utilisation of the technology, which may instil a false sense of naturally extended ability.
Subsequently, the author considered and suggested approaches for mitigating both types of dilemmas. The decreased-ability dilemma might be resolved by continuously utilising the technology and designing technology relevant to the degree of human ability. Meanwhile, the excessive-utilisation dilemma might be resolved by regulating the technology use and designing technology that achieves the desired disposition change in users.
Finally, the possibility of advancing the existing design approaches to further resolve the dilemmas was discussed.
Traditionally, data has been presented in textual format, and interaction with the user has been confined to the keyboard or touch screen to input data and the screen to deliver information. However, with the advent of a data-driven society, an opportunity for more natural and efficient ways of presenting data and interacting with it has emerged.
Although the area of Human-Computer Interaction has existed for a long time, its focus has always been on the interaction with an artefact (the computer). Today, instead, we face the challenge of interacting with an intangible object: data. As a result, a key question emerges: How do we make the enormous amounts of data produced every day legible to ordinary people?
Designers, able to devise natural and smooth interaction experiences, should play a relevant role in this new scenario. However, they might lack the basic technical knowledge required to understand the possibilities of these new systems.
In this paper we present a brief how-to manual for an undergraduate course on data materialisation: the process of transforming an intangible object (data) into an artefact that can be interacted with in a physical way.
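The core step of data materialisation — mapping data values to physical dimensions — can be sketched briefly. The example below scales a data series to bar heights in millimetres and emits OpenSCAD code for a 3D-printable bar chart; the function name, dimensions, and series are illustrative assumptions, not material from the course itself.

```python
# Minimal sketch of a data-materialisation step: map a data series to
# physical bar heights (mm) and emit OpenSCAD code for a 3D-printable
# bar chart. All dimensions and the series are illustrative.

def to_openscad(values, max_height_mm=40, bar_mm=8, gap_mm=2):
    """Emit one OpenSCAD cube per value, heights scaled to the data."""
    peak = max(values)
    cubes = []
    for i, v in enumerate(values):
        h = round(v / peak * max_height_mm, 2)   # linear scale to mm
        x = i * (bar_mm + gap_mm)                # position along the row
        cubes.append(f"translate([{x}, 0, 0]) cube([{bar_mm}, {bar_mm}, {h}]);")
    return "\n".join(cubes)

print(to_openscad([3, 7, 5]))
```

The same mapping logic transfers to other physical channels — LED brightness, servo angle, laser-cut lengths — which is why a single scaling step like this is a useful first exercise in such a course.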