Exascale Computing in Astrophysics


Centro Stefano Franscini

Monte Verità, Ascona, Switzerland

8 September - 13 September 2013


Computational science is a critically important tool for physicists, having achieved a status equal to theory and experiment. Simulations test theories and enable the assimilation and interpretation of data of ever-increasing complexity. Computational astrophysics is a leading breakthrough area in modern astrophysics. During the last decade, we devoted great effort to developing new algorithms and pushing computer hardware design. This past decade was the era of petascale computing, with the fastest computer in the world now delivering 10 quadrillion (10^16) floating-point operations per second, or 10 petaflops.


We must now prepare for the “exascale era”, with a new generation of million-core machines exploiting GPUs and new interconnect topologies. Power consumption is now the main limitation, so energy-efficient solutions are critical. Million-core systems also require fault-tolerant algorithms, since failures are expected at a rate of roughly one per minute: with each of a million components failing on average once every couple of years, the system as a whole fails about once a minute. We must adapt and rethink our algorithms to exploit such complex and unusual architectures effectively.


Our conference will bring together key players in the computational astrophysics community to address the challenges of exascale computing: to define the science cases whose potential new discoveries require exascale computing power, and to re-examine our basic algorithms so that they fit exascale architectures.


The conference is organized by Romain Teyssier and George Lake (University of Zurich), together with Claudio Gheller and Thomas Schulthess (Swiss National Supercomputing Centre, CSCS). Many thanks to our advisors Ben Moore, Matthias Liebendörfer and Tom Theuns.