
GraviDy: a modular, GPU-based, direct-summation N-body code

Published online by Cambridge University Press: 07 March 2016

Cristián Maureira-Fredes
Affiliation:
Max-Planck-Institut für Gravitationsphysik (Albert Einstein Institut), Potsdam-Golm, Germany. Email: cristian.maureira.fredes@aei.mpg.de
Pau Amaro-Seoane
Affiliation:
Max-Planck-Institut für Gravitationsphysik (Albert Einstein Institut), Potsdam-Golm, Germany. Email: cristian.maureira.fredes@aei.mpg.de

Abstract


The direct summation of N gravitational forces is a complex problem for which there is no analytical solution. Dense stellar systems such as galactic nuclei and stellar clusters are the loci of many interesting dynamical problems. In this work we present a new GPU-based, direct-summation N-body integrator, written from scratch and based on the Hermite scheme. The first release of the code consists of the Hermite integrator for a system of N bodies with softening. We find that the GPU version on a single node is faster than the serial, single-CPU version by a factor of about 90. We additionally investigate the impact of softening on the dynamics of a dense cluster: we study how it affects two-body relaxation, as compared with another code, NBODY6, which uses KS regularization, so as to understand the role of softening in the evolution of the system. This initial release is the first step towards increasingly realistic scenarios, starting with a proper treatment of binary evolution, close encounters, and the role of a massive black hole.
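
The abstract's central algorithmic ingredients, an O(N^2) direct-summation force loop with softening and the acceleration/jerk pair required by a Hermite predictor-corrector, can be illustrated with a short CUDA sketch. This is a minimal sketch, not GraviDy's actual source: the Body struct, the kernel name hermite_force, the Plummer form of the softening, and the launch configuration are assumptions made for illustration only.

    // Minimal CUDA sketch (illustrative, not GraviDy's source) of the
    // softened O(N^2) direct-summation loop behind a Hermite integrator:
    // each thread accumulates the acceleration and jerk acting on one body.
    // Compile with: nvcc -o hermite hermite.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    struct Body {
        float4 pos;   // x, y, z, mass
        float4 vel;   // vx, vy, vz, (unused)
    };

    __global__ void hermite_force(const Body *b, float4 *acc, float4 *jrk,
                                  int n, float eps2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float4 pi = b[i].pos, vi = b[i].vel;
        float ax = 0.f, ay = 0.f, az = 0.f;
        float jx = 0.f, jy = 0.f, jz = 0.f;

        for (int j = 0; j < n; ++j) {
            float dx  = b[j].pos.x - pi.x;
            float dy  = b[j].pos.y - pi.y;
            float dz  = b[j].pos.z - pi.z;
            float dvx = b[j].vel.x - vi.x;
            float dvy = b[j].vel.y - vi.y;
            float dvz = b[j].vel.z - vi.z;

            // Plummer softening: r^2 -> r^2 + eps^2 removes the singularity
            // of close encounters; the i == j term then contributes zero.
            float r2   = dx*dx + dy*dy + dz*dz + eps2;
            float rinv = rsqrtf(r2);
            float mr3  = b[j].pos.w * rinv * rinv * rinv;   // m_j / r^3
            float rv   = 3.f * (dx*dvx + dy*dvy + dz*dvz) / r2;

            ax += mr3 * dx;  ay += mr3 * dy;  az += mr3 * dz;
            // Jerk (da/dt), needed by the Hermite predictor-corrector step.
            jx += mr3 * (dvx - rv * dx);
            jy += mr3 * (dvy - rv * dy);
            jz += mr3 * (dvz - rv * dz);
        }
        acc[i] = make_float4(ax, ay, az, 0.f);
        jrk[i] = make_float4(jx, jy, jz, 0.f);
    }

    int main()
    {
        const int n = 2;                 // two-body sanity check
        const float eps2 = 1.0e-4f;      // softening parameter squared
        Body h[n] = {
            {{0.f, 0.f, 0.f, 1.f}, {0.f,  0.5f, 0.f, 0.f}},
            {{1.f, 0.f, 0.f, 1.f}, {0.f, -0.5f, 0.f, 0.f}},
        };
        Body *d_b; float4 *d_a, *d_j;
        cudaMalloc(&d_b, n * sizeof(Body));
        cudaMalloc(&d_a, n * sizeof(float4));
        cudaMalloc(&d_j, n * sizeof(float4));
        cudaMemcpy(d_b, h, n * sizeof(Body), cudaMemcpyHostToDevice);

        hermite_force<<<1, 32>>>(d_b, d_a, d_j, n, eps2);

        float4 a[n];
        cudaMemcpy(a, d_a, n * sizeof(float4), cudaMemcpyDeviceToHost);
        printf("acc on body 0: (%g, %g, %g)\n", a[0].x, a[0].y, a[0].z);
        cudaFree(d_b); cudaFree(d_a); cudaFree(d_j);
        return 0;
    }

A convenient property of this softened form is that the i == j self term vanishes identically (the numerator is zero while the denominator stays finite), so the inner loop needs no branch; production GPU codes additionally tile such loops through shared memory to reduce global-memory traffic.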

Type
Contributed Papers
Copyright
Copyright © International Astronomical Union 2016 

References

Aarseth, S. J. 1999, PASP, 111, 1333–1346.
Spurzem, R. 1999, Journal of Computational and Applied Mathematics, 109, 407–432.
Aarseth, S. J. 2003, Gravitational N-Body Simulations (Cambridge University Press), ISBN 0521432723.
Portegies Zwart, S. F., McMillan, S. L. W., Hut, P., & Makino, J. 2001, MNRAS, 321, 199–226.
Portegies Zwart, S. F., Belleman, R. G., & Geldof, P. M. 2007, New A, 12, 641–650.
Hamada, T. & Iitaka, T. 2007, New A.
Belleman, R. G., Bédorf, J., & Portegies Zwart, S. F. 2008, New A, 13, 103–112.
Berczik, P., Nitadori, K., Zhong, S., Spurzem, R., Hamada, T., Wang, X., Berentzen, I., Veles, A., & Ge, W. 2011, International Conference on High Performance Computing, 8–18.
Nitadori, K. & Aarseth, S. J. 2012, MNRAS, 424, 545–552.
Capuzzo-Dolcetta, R., Spera, M., & Punzo, D. 2013, Journal of Computational Physics, 236, 580–593.
Kokubo, E., Yoshinaga, K., & Makino, J. 1998, MNRAS, 297, 1067–1072.
Konstantinidis, S. & Kokkotas, K. D. 2010, A&A, 522, A70.
Makino, J. & Aarseth, S. J. 1992, PASJ, 44, 141–151.