Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- 1 Introduction
- 2 Volume: Data Acquisition, Storage, and Retrieval
- 3 Vagueness: Natural Language and Semantics
- 4 Variety: Classification and Clustering
- 5 Virality: Networks and Information Propagation
- 6 Velocity: Online Methods and Data Streams
- 7 Volunteers: Humanitarian Crowdsourcing
- 8 Veracity: Misinformation and Credibility
- 9 Validity: Biases and Pitfalls of Social Media Data
- 10 Visualization: Crisis Maps and Beyond
- 11 Values: Privacy and Ethics
- 12 Conclusions and Outlook
- Bibliography
- Index
- Terms and Acronyms
2 - Volume: Data Acquisition, Storage, and Retrieval
Published online by Cambridge University Press: 05 July 2016
Summary
The 2010 earthquake in Haiti represented, in more than one sense, a collision between traditional crisis information processing practices and new information dynamics. Emergency relief organizations were not prepared to deal with high-volume data flows coming from two new sources. First, mobile-enabled communication technologies were being used to send a large number of messages by affected populations, who expected an answer from relief organizations. Second, vast quantities of data were being produced by volunteers in technical communities (Harvard Humanitarian Initiative, 2011, p. 19). In general, the amount of data generated during a crisis is overwhelming. Processing crisis-relevant social media messages requires careful attention to scalability issues, particularly because the production and consumption of data often surges unpredictably by several orders of magnitude.
This chapter focuses on data volume, and presents scalable methods to acquire, store, index, and retrieve social media messages, with an emphasis on their textual content. We describe the data sizes that are typical of social media during disasters (§2.1), and methods to acquire (§2.2) and filter (§2.3) data. We then present methods for data representation (§2.4), as well as data indexing and storage (§2.5).
2.1 Social Media Data Sizes
Any characterization of social media risks becoming outdated quickly. The Internet Live Stats project maintains a dizzying display of visual statistics depicting how much content is generated every day by social media users.
Social media platforms usually report the number of users they have in terms of monthly active users, defined as people who interact with the platform at least once during a month. For the large platforms, this figure is usually measured in the hundreds of millions. Every day, the number of messages posted on large social media platforms such as Twitter, Facebook, and Instagram is on the order of tens of millions to hundreds of millions of messages, and hundreds of thousands of hours of video are uploaded to YouTube.
In the case of microtext, while each message is short (e.g., currently a maximum of 140 characters in Twitter, and 420 characters in Facebook status updates), the metadata attached to each message causes a blowup in data sizes. A data record for a Twitter message, typically serialized as a string in JSON, is around 4 KB when all the formatting and metadata attached to each message are included.
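The gap between a short message and its full serialized record, and what that implies for storage at scale, can be sketched as follows. This is an illustrative example, not the real Twitter payload: the field names in the record and the daily message count are assumptions chosen to match the orders of magnitude described above.

```python
import json

# A tweet-like record in which the short text is dwarfed by the
# attached metadata. Field names here are illustrative assumptions,
# not the exact Twitter API schema.
record = {
    "id_str": "123456789012345678",
    "created_at": "Tue Jan 12 21:53:09 +0000 2010",
    "text": "Earthquake felt in Port-au-Prince #haiti",
    "user": {"id_str": "987654321", "screen_name": "example_user",
             "followers_count": 120, "location": "Port-au-Prince"},
    "entities": {"hashtags": [{"text": "haiti", "indices": [35, 41]}],
                 "urls": [], "user_mentions": []},
    "lang": "en",
}
serialized = json.dumps(record)
print(len(record["text"]), "characters of text vs.",
      len(serialized), "bytes serialized")

# Back-of-envelope storage estimate: ~4 KB per full record (as noted
# in the text) times an assumed daily volume in the hundreds of
# millions of messages.
RECORD_SIZE_BYTES = 4 * 1024
MESSAGES_PER_DAY = 500_000_000  # illustrative assumption
daily_tib = RECORD_SIZE_BYTES * MESSAGES_PER_DAY / 1024**4
print(f"~{daily_tib:.1f} TiB of raw JSON per day")
```

Even this toy record is several times larger than its text field alone; at a full record size of roughly 4 KB, archiving a day of traffic at this assumed volume already approaches two tebibytes of raw JSON, which is why the acquisition and storage methods in this chapter must be designed with scalability in mind.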
- Type: Chapter
- Big Crisis Data: Social Media in Disasters and Time-Critical Situations, pp. 18–34
- Publisher: Cambridge University Press
- Print publication year: 2016