Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgments
- PART I INTRODUCTION
- PART II INFRASTRUCTURE SYSTEMS
- 2 Infrastructure Systems
- 3 Disruptions
- 4 Graphs and Networks
- 5 Big Data and Resilience Engineering
- 6 Graphical Models
- 7 Belief Functions
- 8 Tensors Applications
- 9 Resilience Index—Selected Examples
- 10 Epilogue
- Index
- References
5 - Big Data and Resilience Engineering
from PART II - INFRASTRUCTURE SYSTEMS
Published online by Cambridge University Press: 05 March 2016
Summary
Introduction
Big data refers to extremely large volumes of data originating from various sources such as databases, audio and video files, millions of sensors, and other systems. Some of these sources provide structured outputs, but most data are unstructured, semistructured, or poly-structured. Furthermore, these data in some cases stream in at high velocity, and their value can decay quickly after they are generated. Figure 5.1 shows the general framework of big data. Successful application of the big data paradigm relies heavily on the selection of appropriate data science techniques.
Hu et al. (2014) presented an overview of big data analytics. The authors summarized three definitions of big data:
The attribute definition characterizes big data technologies as "a new generation of technologies and architectures designed to economically extract value from very large volumes of a wide variety of data by enabling high-velocity capture, discovery, and/or analysis" (Cooper and Mell 2012).
The second definition is more subjective: big data consists of "data sets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze." This is based on the McKinsey report (Manyika et al. 2011).
The final definition is often referred to as the architectural definition. Per this definition, big data is present where the data volume, acquisition velocity, or data representation limits the ability to perform effective analysis using traditional relational approaches, or requires significant horizontal scaling for efficient processing (Cooper and Mell 2012).
Table 5.1 shows the comparison between big data and traditional data.
Big data analytics can be grouped into two alternative paradigms, both of which are present in resilience engineering:
Streaming processing—The potential value of the data depends on its freshness. The major characteristic is that data arrives as a continuous stream, and only a limited portion can be stored.
Batch processing—In this paradigm, the data are stored first and analyzed later. In some cases, the data are analyzed in subsets.
Table 5.2 compares streaming processing and batch processing.
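The contrast between the two paradigms can be illustrated with a minimal sketch. The code below is an assumption-laden example (the sensor readings, window size, and function names are hypothetical, not from the chapter): streaming processing keeps only a bounded window of recent readings in memory and produces results as data arrives, whereas batch processing stores the full dataset and analyzes it afterward.

```python
from collections import deque

def streaming_mean(stream, window=3):
    """Streaming paradigm: process each reading as it arrives,
    retaining only a bounded window of recent values."""
    buf = deque(maxlen=window)   # only a limited portion can be stored
    results = []
    for reading in stream:
        buf.append(reading)
        results.append(sum(buf) / len(buf))  # fresh result per arrival
    return results

def batch_mean(stored_readings):
    """Batch paradigm: the data are stored first and analyzed later."""
    return sum(stored_readings) / len(stored_readings)

# Hypothetical sensor readings from an infrastructure monitor
readings = [10, 12, 11, 50, 13, 12, 11]
print(streaming_mean(readings))  # one running value per arriving reading
print(batch_mean(readings))      # single value over the stored dataset
```

The streaming function emits an updated result for every arriving reading while bounding memory use; the batch function requires the complete dataset up front, trading latency for a global view.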
The development of advanced sensors and information technologies in critical infrastructure monitoring and control has provided a platform for the expansion and growth of data.
Resilience Engineering: Models and Analysis, pp. 83-93. Publisher: Cambridge University Press. Print publication year: 2016.