
Exploring Data Sonification to Enable, Enhance, and Accelerate the Analysis of Big, Noisy, and Multi-Dimensional Data

Workshop 9

Published online by Cambridge University Press:  29 August 2019

J. Cooke
Affiliation: Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Australia (email: jcooke@astro.swin.edu.au)
W. Díaz-Merced
Affiliation: IAU Office of Astronomy for Development, SAAO, Cape Town, South Africa
G. Foran
Affiliation: Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Australia
J. Hannam
Affiliation: RMIT University, Melbourne, Australia
B. Garcia
Affiliation: CNEA-CONICET-UNSAM, Mendoza, Argentina

Abstract


We explore the properties of sound and human sound recognition as a means to enhance and accelerate visual-only data analysis methods. The aim of this work is to enable and improve the analysis of large data sets, data requiring rapid analysis, multi-dimensional data, and signal detection in data with low signal-to-noise ratios. We present a prototype tool, StarSound, that sonifies data such as astronomical transient light curves, spectra, and power spectra. Stereophonic sound is used to ‘visualise’ and localise the data under examination, and 3-D sound is discussed in conjunction with virtual reality technology as a means to enhance analysis efficiency and efficacy, including rapid data assessment and the training of machine learning software. In addition, we explore the use of higher-order harmonics as a means to examine multi-dimensional data sets simultaneously. Such an approach allows the data to be interpreted holistically and facilitates the discovery of previously unseen connections and relationships. Furthermore, we exploit the human brain’s capability for selective, or focused, hearing, which enables the identification of desired signals in noisy data or amidst similar or more significant signals. Finally, we provide research examples that benefit directly from data sonification. The work presented here aims to help tackle the challenges of the upcoming era of Big Data and to optimise, accelerate, and expand those aspects of data analysis that require human interaction.
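To make the kind of mapping described above concrete, the sketch below shows one plausible way to sonify a transient light curve: flux is mapped to pitch, position in the time series to stereo pan, and a second data dimension to the strength of a higher-order harmonic. This is a minimal illustrative sketch, not the StarSound implementation (whose internals are not described here); the function names, pitch range, and mapping choices are assumptions made for demonstration.

```python
# Hypothetical light-curve sonification sketch: flux -> pitch,
# time -> stereo pan, second dimension -> 2nd-harmonic strength.
# Illustrative only; not the StarSound tool's actual mapping.

import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100          # audio samples per second
NOTE_DURATION = 0.15         # seconds of audio per data point
F_MIN, F_MAX = 220.0, 880.0  # assumed pitch range (A3 to A5)

def sonify_light_curve(flux, second_dim=None):
    """Map each flux value to a tone; optionally let a second data
    dimension control the amplitude of the 2nd harmonic."""
    flux = np.asarray(flux, dtype=float)
    norm = (flux - flux.min()) / (np.ptp(flux) or 1.0)   # scale to 0..1
    freqs = F_MIN + norm * (F_MAX - F_MIN)               # flux -> pitch

    if second_dim is None:
        harm = np.zeros_like(norm)
    else:
        d = np.asarray(second_dim, dtype=float)
        harm = (d - d.min()) / (np.ptp(d) or 1.0)        # 0..1 harmonic weight

    t = np.linspace(0.0, NOTE_DURATION,
                    int(SAMPLE_RATE * NOTE_DURATION), endpoint=False)
    left, right = [], []
    n = len(freqs)
    for i, (f, h) in enumerate(zip(freqs, harm)):
        # Fundamental plus a weighted 2nd harmonic for the extra dimension.
        tone = np.sin(2 * np.pi * f * t) + 0.5 * h * np.sin(2 * np.pi * 2 * f * t)
        tone *= np.hanning(tone.size)                    # fade to avoid clicks
        pan = i / max(n - 1, 1)                          # time -> left-to-right pan
        left.append(tone * (1.0 - pan))
        right.append(tone * pan)

    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    stereo /= np.abs(stereo).max() or 1.0                # normalise to [-1, 1]
    return (stereo * 32767).astype(np.int16)

# Example: a noisy light curve with a transient-like bump, plus a second
# dimension (e.g. a colour index) mapped onto the 2nd harmonic.
rng = np.random.default_rng(0)
x = np.arange(100)
flux = 1.0 + 3.0 * np.exp(-0.5 * ((x - 50) / 5.0) ** 2) + rng.normal(0, 0.3, x.size)
colour = np.linspace(0.0, 1.0, x.size)
wavfile.write("light_curve.wav", SAMPLE_RATE, sonify_light_curve(flux, colour))
```

In the resulting audio, the transient bump is heard as a rising-then-falling pitch sweeping from left to right, while harmonic brightness carries the second dimension without competing for the same perceptual channel as the pitch itself.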

Type: Contributed Papers
Copyright: © International Astronomical Union 2019

Footnotes

Australian Research Council Future Fellow

Australian Research Council Centre of Excellence for Gravitational Wave Discovery
