Directional Sensitivity In Light-Mass Dark Matter Searches With Single-Electron Resolution Ionization Detectors [CL]

http://arxiv.org/abs/1703.05371


We present a method for using solid state detectors with directional sensitivity to dark matter interactions to detect low-mass Weakly Interacting Massive Particles (WIMPs) originating from galactic sources. Despite a large body of literature on directionally sensitive detectors for high-mass WIMPs, no available technique covers WIMPs in the mass range below 1 GeV. We argue that single-electron resolution semiconductor detectors allow for directional sensitivity once properly calibrated. We examine the response of commonly used semiconductor materials to these low-mass WIMP interactions.

Read this paper on arXiv…

F. Kadribasic, N. Mirabolfathi, K. Nordlund, et al.
Fri, 17 Mar 17
32/50

Comments: N/A

Quantum Nuclear Pasta and Nuclear Symmetry Energy [CL]

http://arxiv.org/abs/1703.01433


Complex and exotic nuclear geometries, collectively referred to as nuclear pasta, are expected to appear naturally in the dense nuclear matter found in the crust of neutron stars and in supernova environments. The pasta geometries depend on the average baryon density, proton fraction and temperature, and are critically important in the determination of many transport properties of matter in supernovae and the crust of neutron stars. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities $0.03 \leq \rho \leq 0.10$ fm$^{-3}$, proton fractions $0.05 \leq Y_p \leq 0.40$, and zero temperature. The full quantum simulations, in particular, allow us to thoroughly investigate the role and impact of the nuclear symmetry energy on pasta configurations. We use the Sky3D code that solves the Skyrme Hartree-Fock equations on a three-dimensional Cartesian grid. For the nuclear interaction we use the state-of-the-art UNEDF1 parametrization, which was introduced to study largely deformed nuclei and is hence suitable for studies of nuclear pasta. The density dependence of the nuclear symmetry energy is simulated by tuning two purely isovector observables that are insensitive to the currently available experimental data. We find that a minimum total number of nucleons $A=2000$ is necessary to prevent the results from containing spurious shell effects and to minimize finite size effects. We find that a variety of nuclear pasta geometries are present in the neutron star crust and that the results strongly depend on the nuclear symmetry energy. The impact of the nuclear symmetry energy becomes less pronounced as the proton fraction increases. Quantum nuclear pasta calculations at $T=0$ MeV are shown to get easily trapped in meta-stable states, and possible remedies to avoid meta-stable solutions are discussed.

Read this paper on arXiv…

F. Fattoyev, C. Horowitz and B. Schuetrumpf
Tue, 7 Mar 17
3/66

Comments: 23 pages, 18 figures, 8 tables

Magnetic Reconnection in Turbulent Diluted Plasmas [CL]

http://arxiv.org/abs/1703.01238


We study magnetic reconnection events in a turbulent plasma within the two-fluid theory. By identifying the diffusive regions, we measure the reconnection rates as a function of the conductivity and the current sheet thickness. We have found that the reconnection rate scales as the inverse square of the current sheet’s thickness and is independent of the aspect ratio of the diffusive region, in contrast to other analytical (e.g. Sweet-Parker and Petschek) and numerical models. Furthermore, while the reconnection rates are also proportional to the inverse square of the conductivity, the aspect ratios of the diffusive regions, which exhibit values in the range of $0.1-0.9$, are not correlated with the latter. Our findings suggest a new expression for the magnetic reconnection rate, which, after experimental verification, can provide a further understanding of the magnetic reconnection process.

Read this paper on arXiv…

N. Offeddu and M. Mendoza
Mon, 6 Mar 17
21/47

Comments: 9 Pages, 6 figures

Higher Order Accurate Space-Time Schemes for Computational Astrophysics — Part I — Finite Volume Methods [IMA]

http://arxiv.org/abs/1703.01241


As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes. Hence the need for a specialized review of higher order schemes for computational astrophysics.
The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of the traditional second order finite volume schemes that are already familiar to most computational astrophysicists. DG schemes, on the other hand, evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger.
Time-dependent astrophysical codes need to be accurate in space and time. This is realized with the help of SSP-RK (strong stability preserving Runge-Kutta) schemes and ADER (Arbitrary DERivative in space and time) schemes. The most popular approaches to SSP-RK and ADER schemes are also described.
The style of this review is to assume that readers have a basic understanding of hyperbolic systems and one-dimensional Riemann solvers. Such an understanding can be acquired from a sequence of prepackaged lectures available from this http URL. We then build on this understanding to give the reader a practical introduction to the schemes described here. The emphasis is on computer-implementable ideas, not necessarily on the underlying theory, because it was felt that this would be most interesting to most computational astrophysicists.
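
The SSP-RK construction mentioned above is easiest to see in the classic third-order Shu-Osher scheme, in which every stage is a convex combination of forward Euler updates, so any stability or monotonicity property of forward Euler carries over to the full step. A minimal sketch (not code from the review; the `rhs` callback stands in for any spatial discretization):

```python
import numpy as np

def ssprk3_step(u, dt, rhs):
    """One third-order strong-stability-preserving RK step (Shu-Osher form).

    Each stage is a convex combination of forward Euler updates, which is
    what preserves the strong stability of the underlying Euler step.
    """
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

# toy check: exponential decay du/dt = -u, integrated to t = 1
u, t, dt = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    u = ssprk3_step(u, dt, lambda v: -v)
    t += dt
```

For a PDE, `rhs` would be the WENO or DG spatial operator; the time stepper itself is unchanged.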

Read this paper on arXiv…

D. Balsara
Mon, 6 Mar 17
28/47

Comments: N/A

Comparative statistics of selected subgrid-scale models in large eddy simulations of decaying, supersonic MHD turbulence [CL]

http://arxiv.org/abs/1703.00858


Large eddy simulations (LES) are a powerful tool in understanding processes that are inaccessible by direct simulations due to their complexity, for example, in the highly turbulent regime. However, their accuracy and success depend on a proper subgrid-scale (SGS) model that accounts for the unresolved scales in the simulation. We evaluate the applicability of two traditional SGS models, namely the eddy-viscosity (EV) and the scale-similarity (SS) model, and one recently proposed nonlinear (NL) SGS model in the realm of compressible MHD turbulence. Using 209 simulations of decaying, supersonic (initial sonic Mach number of ~3) MHD turbulence with a shock-capturing scheme and varying resolution, SGS model and filter, we analyze the ensemble statistics of kinetic and magnetic energy spectra and structure functions. Furthermore, we compare the temporal evolution of lower and higher order statistical moments of the spatial distributions of kinetic and magnetic energy, vorticity, current density, and dilatation magnitudes. We find no statistical influence on the evolution of the flow by any model if grid-scale quantities are used to calculate SGS contributions. In addition, the SS models, which employ an explicit filter, have no impact in general. On the contrary, both EV and NL models change the statistics if an explicit filter is used. For example, they slightly increase the dissipation on the smallest scales. We demonstrate that the nonlinear model improves higher order statistics even with a small explicit filter, i.e. a three-point stencil. The results of e.g. the structure functions or the skewness and kurtosis of the current density distribution are closer to the ones obtained from simulations at higher resolution. We conclude that the nonlinear model with a small explicit filter is suitable for application in more complex scenarios when higher order statistics are important.

Read this paper on arXiv…

P. Grete, D. Vlaykov, W. Schmidt, et al.
Fri, 3 Mar 17
36/62

Comments: 13 pages, 8 figures, accepted for publication in PRE

Detonability of white dwarf plasma: turbulence models at low densities [SSA]

http://arxiv.org/abs/1703.00432


We study the conditions required to produce self-sustained detonations in turbulent, carbon-oxygen degenerate plasma at low densities.
We perform a series of three-dimensional hydrodynamic simulations of turbulence driven with various degrees of compressibility. The average conditions in the simulations are representative of models of merging binary white dwarfs.
We find that material with very short ignition times is abundant in the case that turbulence is driven compressively. This material forms contiguous structures that persist over many ignition times, and that we identify as prospective detonation kernels. Detailed analysis of prospective kernels reveals that these objects are centrally condensed and their shape is characterized by low curvature, supportive of self-sustained detonations. The key characteristic of the newly proposed detonation mechanism is thus a high degree of compressibility of the turbulent driving.
The simulated detonation kernels have sizes notably smaller than the spatial resolution of any white dwarf merger simulation performed to date. The resolution required to resolve kernels is 0.1 km. Our results indicate a high probability of detonations in such well-resolved simulations of carbon-oxygen white dwarf mergers. These simulations will likely produce detonations in systems of lower total mass, thus broadening the population of white dwarf binaries capable of producing Type Ia supernovae. Consequently, we expect a downward revision of the lower limit of the total merger mass that is capable of producing a prompt detonation.
We review the application of the new detonation mechanism to various explosion scenarios for single, Chandrasekhar-mass white dwarfs.

Read this paper on arXiv…

D. Fenn and T. Plewa
Thu, 2 Mar 17
43/44

Comments: 13 pages, MNRAS in press

Multi-Dimensional Vlasov-Poisson Simulations with High-Order Monotonicity and Positivity Preserving Schemes [CL]

http://arxiv.org/abs/1702.08521


We develop new numerical schemes for Vlasov–Poisson equations with high-order accuracy. Our methods are based on a spatially monotonicity-preserving (MP) scheme, and are modified suitably so that positivity of the distribution function is also preserved. We adopt an efficient semi-Lagrangian time integration scheme which is more accurate and computationally less expensive than the three-stage TVD Runge-Kutta integration. We apply our spatially fifth- and seventh-order schemes to a suite of simulations of collisionless self-gravitating systems and electrostatic plasma simulations, including linear and nonlinear Landau damping in one dimension and Vlasov–Poisson simulations in a six-dimensional phase space. The high-order schemes achieve significantly improved accuracy in comparison with the third-order positive-flux-conserved scheme adopted in our previous study. With the semi-Lagrangian time integration, the computational cost of our high-order schemes does not significantly increase, but remains roughly the same as that of the third-order scheme.
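
The semi-Lagrangian update at the heart of such schemes traces each grid point back along its characteristic and interpolates the distribution function at the departure point, which removes the CFL time-step restriction. A minimal 1D sketch with linear interpolation (the paper's schemes use fifth/seventh-order MP reconstruction instead; this is only the skeleton):

```python
import numpy as np

def semi_lagrangian_step(f, v, dt, dx):
    """Advect f by a constant velocity v for one step on a periodic grid.

    Semi-Lagrangian: each grid point takes the value at its upstream
    departure point x - v*dt, found here by linear interpolation.
    There is no CFL restriction: dt may exceed dx/|v|.
    """
    n = f.size
    x = np.arange(n)
    xd = (x - v * dt / dx) % n          # departure points in index space
    i0 = np.floor(xd).astype(int)
    w = xd - i0
    return (1.0 - w) * f[i0] + w * f[(i0 + 1) % n]
```

A quick sanity check: a displacement of an integer number of cells reproduces a pure shift of the data.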

Read this paper on arXiv…

S. Tanaka, K. Yoshikawa, T. Minoshima, et al.
Wed, 1 Mar 17
31/67

Comments: 24 pages, 19 figures. Submitted to the Astrophysical Journal

The equilibrium-diffusion limit for radiation hydrodynamics [CL]

http://arxiv.org/abs/1702.07300


The equilibrium-diffusion approximation (EDA) is used to describe certain radiation-hydrodynamic (RH) environments. When this is done the RH equations reduce to a simplified set of equations. The EDA can be derived by asymptotically analyzing the full set of RH equations in the equilibrium-diffusion limit. We derive the EDA this way and show that it and the associated set of simplified equations are both first-order accurate with transport corrections occurring at second order. Having established the EDA’s first-order accuracy we then analyze the grey nonequilibrium-diffusion approximation and the grey Eddington approximation and show that they both preserve this first-order accuracy. Further, these approximations preserve the EDA’s first-order accuracy when made in either the comoving-frame (CMF) or the lab-frame (LF). While analyzing the Eddington approximation, we found that the CMF and LF radiation-source equations are equivalent when neglecting ${\cal O}(\beta^2)$ terms and compared in the LF. Of course, the radiation pressures are not equivalent. It is expected that simplified physical models and numerical discretizations of the RH equations that do not preserve this first-order accuracy will not retain the correct equilibrium-diffusion solutions. As a practical example, we show that nonequilibrium-diffusion radiative-shock solutions devolve to equilibrium-diffusion solutions when the asymptotic parameter is small.

Read this paper on arXiv…

J. Ferguson, J. Morel and R. Lowrie
Fri, 24 Feb 17
49/50

Comments: 16 pages, 1 figure, submitted for publication to the Journal of Quantitative Spectroscopy and Radiative Transfer

SHARP: A Spatially Higher-order, Relativistic Particle-in-Cell Code [CL]

http://arxiv.org/abs/1702.04732


Numerical heating in particle-in-cell (PIC) codes currently precludes the accurate simulation of cold, relativistic plasma over long periods, severely limiting their applications in astrophysical environments. We present a spatially higher order accurate relativistic PIC algorithm in one spatial dimension which conserves charge and momentum exactly. We utilize the smoothness implied by the usage of higher order interpolation functions to achieve a spatially higher order accurate algorithm (up to 5th order). We validate our algorithm against several test problems: thermal stability of stationary plasma, stability of linear plasma waves, and the two-stream instability in the relativistic and non-relativistic regimes. Comparing our simulations to exact solutions of the dispersion relations, we demonstrate that SHARP can quantitatively reproduce important kinetic features of the linear regime. Our simulations have a superior ability to control energy non-conservation and avoid numerical heating in comparison to common second order schemes. We provide a natural definition for convergence of a general PIC algorithm: the complement of physical modes captured by the simulation, i.e., those that lie above the Poisson noise, must grow commensurately with the resolution. This implies that it is necessary to simultaneously increase the number of particles per cell and decrease the cell size. We demonstrate that traditional ways of testing for convergence fail, leading to a plateau in the energy error. This new PIC code enables faithful study of the long-term evolution of plasma problems that require absolute control of energy and momentum conservation.
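
The higher order interpolation functions the abstract refers to are B-spline particle shapes. As an illustration (not SHARP's actual implementation), here is charge deposition with the quadratic B-spline ("triangular-shaped cloud"), one order smoother than the common cloud-in-cell weighting; for any particle offset the three weights sum to one, so total charge is conserved exactly:

```python
import numpy as np

def deposit_tsc(x, q, n):
    """Deposit charges q at positions x (in grid units) on a periodic grid.

    Quadratic B-spline (TSC) weights around the nearest grid point;
    higher-order PIC schemes use the same construction with wider splines.
    """
    rho = np.zeros(n)
    for xi, qi in zip(np.atleast_1d(x), np.atleast_1d(q)):
        i = int(np.floor(xi + 0.5))     # nearest grid point
        d = xi - i                      # offset in [-0.5, 0.5)
        w = (0.5 * (0.5 - d) ** 2,      # weight at i - 1
             0.75 - d * d,              # weight at i
             0.5 * (0.5 + d) ** 2)      # weight at i + 1
        for k, wk in zip((i - 1, i, i + 1), w):
            rho[k % n] += qi * wk
    return rho
```

The weights sum to one identically in the offset `d`, which is the algebraic reason charge conservation holds at any order of spline.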

Read this paper on arXiv…

M. Shalaby, A. Broderick, P. Chang, et al.
Fri, 17 Feb 17
28/43

Comments: 25 pages, 18 figures, submitted to ApJ

Factorized Runge-Kutta-Chebyshev Methods [CL]

http://arxiv.org/abs/1702.03818


The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) class of explicit schemes for the integration of large systems of PDEs with diffusive terms is presented. FRKC2 schemes are straightforward to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures.
Preserving 7 digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability at acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The stability domains have approximately the same extents as those of RKC schemes, and are a third longer than those of RKL2 schemes. Extension of FRKC methods to fourth order, by both complex splitting and Butcher composition techniques, is discussed.
A publicly available implementation of the FRKC2 class of schemes may be obtained from maths.dit.ie/frkc
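
The "forward Euler steps with complex stepsizes" idea is easy to demonstrate on the scalar test equation y' = λy: a conjugate pair of Euler steps composes to a real, second-order stability polynomial. A toy illustration only; the actual FRKC stepsize sequences are long and chosen for extended stability, which is not shown here:

```python
import numpy as np

def complex_euler_pair(y, h, lam):
    """Two forward Euler steps with complex-conjugate stepsizes.

    For y' = lam*y the pair composes to the real polynomial
    1 + lam*h + (lam*h)**2 / 2, i.e. second-order accuracy built from
    nothing but Euler steps -- the mechanism FRKC schemes build on.
    """
    a = h * (1.0 + 1.0j) / 2.0
    y = y + a * lam * y             # Euler step with stepsize h(1+i)/2
    y = y + np.conj(a) * lam * y    # Euler step with stepsize h(1-i)/2
    return y.real                   # the conjugate pair yields a real result

# toy check: decay y' = -y integrated to t = 1 in 100 pairs
y, lam, h = 1.0, -1.0, 0.01
for _ in range(100):
    y = complex_euler_pair(y, h, lam)
```

Multiplying out (1 + aλ)(1 + āλ) with a = h(1+i)/2 gives 1 + hλ + (hλ)²/2 exactly, so each pair matches the Taylor series of exp(hλ) to second order.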

Read this paper on arXiv…

S. O'Sullivan
Thu, 16 Feb 17
11/45

Comments: 9 pages, 6 figures, accepted to the proceedings of Astronum 2016 – 11th Annual International Conference on Numerical Modeling of Space Plasma Flows, June 6-10, 2016

Numerical aspects of Giant Impact simulations [EPA]

http://arxiv.org/abs/1701.08296


In this paper we present solutions to three shortcomings of Smoothed Particle Hydrodynamics (SPH) encountered in previous work when applying it to Giant Impacts. First, we introduce a novel method to obtain accurate SPH representations of a planet’s equilibrium initial conditions based on equal area tessellations of the sphere. This allows one to imprint an arbitrary density and internal energy profile with very low noise, which substantially reduces computation because these models require no relaxation prior to use. As a consequence one can significantly increase the resolution and more flexibly change the initial bodies to explore larger parts of the impact parameter space in simulations. The second issue addressed is the proper treatment of the matter/vacuum boundary at a planet’s surface with a modified SPH density estimator that properly calculates the density, stabilizing the models and avoiding an artificially low density atmosphere prior to impact. Further, we present a novel SPH scheme that simultaneously conserves both energy and entropy for an arbitrary equation of state. This prevents loss of entropy during the simulation and further ensures that the material does not evolve into unphysical states. Application of these modifications to impact simulations for different resolutions up to $6.4 \cdot 10^6$ particles shows general agreement with prior results. However, we observe resolution dependent differences in the evolution and composition of post collision ejecta. This strongly suggests that the use of more sophisticated equations of state also demands a large number of particles in such simulations.

Read this paper on arXiv…

C. Reinhardt and J. Stadel
Tue, 31 Jan 17
22/58

Comments: N/A

Unifying the micro and macro properties of AGN feeding and feedback [HEAP]

http://arxiv.org/abs/1701.07030


We unify the feeding and feedback of supermassive black holes with the global properties of galaxies, groups, and clusters, by linking for the first time the physical mechanical efficiency at the horizon and Mpc scales. The macro hot halo is tightly constrained by the absence of overheating and overcooling as probed by X-ray data and hydrodynamic simulations ($\varepsilon_{\rm BH} \simeq$ 10$^{-3}\,T_{\rm x,7.4}$). The micro flow is shaped by general relativistic effects tracked by state-of-the-art GR-RMHD simulations ($\varepsilon_\bullet \simeq$ 0.03). The SMBH properties are tied to the X-ray halo temperature $T_{\rm x}$, or related cosmic scaling relations (such as $L_{\rm x}$). The model is minimally based on first principles, such as conservation of energy and mass recycling. The inflow occurs via chaotic cold accretion (CCA), the rain of cold clouds condensing out of the quenched cooling flow and recurrently funneled via inelastic collisions. Within 100 gravitational radii, the accretion energy is transformed into ultrafast 10$^4$ km s$^{-1}$ outflows (UFOs) ejecting most of the inflowing mass. At larger radii the energy-driven outflow entrains progressively more mass: at kpc scale, the velocities of the hot/warm/cold outflows are a few 10$^3$, 1000, 500 km s$^{-1}$, with median mass rates ~10, 100, several 100 M$_\odot$ yr$^{-1}$, respectively. The unified CCA model is consistent with the observations of nuclear UFOs, and of ionized, neutral, and molecular macro outflows. We provide a step-by-step implementation for subgrid simulations, (semi)analytic works, or observational interpretations which require self-regulated AGN feedback at coarse scales, avoiding the a posteriori fine-tuning of efficiencies.

Read this paper on arXiv…

M. Gaspari and A. Sadowski
Thu, 26 Jan 17
37/68

Comments: 10 pages, 2 figures; submitted to ApJ – comments welcome

Kinetic and radiative power from optically thin accretion flows [HEAP]

http://arxiv.org/abs/1701.07033


We perform a set of general relativistic, radiative, magneto-hydrodynamical (GR-RMHD) simulations to study the transition from the radiatively inefficient to the efficient state of accretion on a non-rotating black hole. We study ion to electron temperature ratios ranging from $T_{\rm i}/T_{\rm e}=$ 10 to 100, and simulate flows corresponding to accretion rates as low as 10$^{-6}\,\dot M_{\rm Edd}$ and as high as 10$^{-2}\,\dot M_{\rm Edd}$. We have found that the radiative output of accretion flows increases with accretion rate, and that the transition occurs earlier for hotter electrons (lower $T_{\rm i}/T_{\rm e}$ ratio). At the same time, the mechanical efficiency hardly changes and amounts to $\approx$ 3% of the accreted rest mass energy flux, even at the highest simulated accretion rates. This is particularly important for the mechanical AGN feedback regulating massive galaxies, groups, and clusters. Comparison with recent observations of radiative and mechanical AGN luminosities suggests that the ion to electron temperature ratio in the inner, collisionless accretion flow should fall within 10 $<T_{\rm i}/T_{\rm e}<$ 30, i.e., the electron temperature should be several percent of the ion temperature.

Read this paper on arXiv…

A. Sadowski and M. Gaspari
Thu, 26 Jan 17
56/68

Comments: 7 pages, 3 figures; submitted to MNRAS | feedback is welcome

PATCHWORK: A Multipatch Infrastructure for Multiphysics/Multiscale/Multiframe Fluid Simulations [IMA]

http://arxiv.org/abs/1701.05610


We present a “multipatch” infrastructure for numerical simulation of fluid problems in which sub-regions require different gridscales, different grid geometries, different physical equations, or different reference frames. Its key element is a sophisticated client-router-server framework for efficiently linking processors supporting different regions (“patches”) that must exchange boundary data. This infrastructure may be used with a wide variety of fluid dynamics codes; the only requirement is that their primary dependent variables be the same in all patches, e.g., fluid mass density, internal energy density, and velocity. Its structure can accommodate either Newtonian or relativistic dynamics. The overhead imposed by this system depends on both the problem and the computer cluster architecture. Compared to a conventional simulation using the same number of cells and processors, the increase in runtime can be anywhere from negligible to a factor of a few; however, one of the infrastructure’s advantages is that it can lead to a very large reduction in the total number of zone-updates.

Read this paper on arXiv…

H. Shiokawa, R. Cheng, S. Noble, et al.
Mon, 23 Jan 17
25/55

Comments: 17 pages, 9 figures, submitted to ApJ

A Formulation of Consistent Particle Hydrodynamics in Strong Form [IMA]

http://arxiv.org/abs/1701.05316


In fluid dynamical simulations in astrophysics, large deformations are common and surface tracking is sometimes necessary. The Smoothed Particle Hydrodynamics (SPH) method has been used in many such simulations. Recently, however, it has been shown that SPH cannot handle contact discontinuities or free surfaces accurately. There are several reasons for this problem. The first is that SPH requires the density to be continuous and differentiable. The second is that SPH lacks consistency, and thus its accuracy is zeroth order in space. In addition, accurate boundary conditions cannot be expressed with SPH. In this paper, we propose a novel, high-order scheme for particle-based hydrodynamics of compressible fluid. Our method is based on a kernel-weighted high-order fitting polynomial for intensive variables. With this approach, we can construct a scheme which solves all three problems described above. For shock capturing, we use a tensor form of the von Neumann-Richtmyer artificial viscosity. We have applied our method to many test problems and obtained excellent results. Our method is not conservative, since particles do not have mass or energy, but only their densities. However, because of the Lagrangian nature of our scheme, the violation of the conservation laws turns out to be small. We name this method Consistent Particle Hydrodynamics in Strong Form (CPHSF).
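
The kernel-weighted polynomial fit underlying this kind of scheme can be sketched as a one-dimensional moving-least-squares problem (an illustration of the idea only; the function below is ours, not from the paper): fit a polynomial to nearby particle values with a kernel weight, then read off the value, or in a full scheme the derivatives, at the evaluation point. Such a fit reproduces polynomials up to the chosen order exactly, which is precisely the consistency that standard SPH lacks.

```python
import numpy as np

def mls_fit(x_eval, x_part, f_part, h, order=2):
    """Kernel-weighted least-squares polynomial fit evaluated at x_eval.

    Builds the basis (1, dx, dx^2, ...) in coordinates centered on the
    evaluation point and solves the weighted normal equations, so the
    constant coefficient is the fitted value at x_eval itself.
    """
    dx = x_part - x_eval
    w = np.exp(-(dx / h) ** 2)                     # Gaussian kernel weight
    A = np.vander(dx, order + 1, increasing=True)  # columns 1, dx, dx^2, ...
    AtW = A.T * w
    c = np.linalg.solve(AtW @ A, AtW @ f_part)     # (A^T W A) c = A^T W f
    return c[0]                                    # polynomial value at dx = 0
```

Because the weighted residual can be driven to zero for polynomial data, the fit is exact for any quadratic when `order=2`, regardless of the particle spacing.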

Read this paper on arXiv…

S. Yamamoto and J. Makino
Fri, 20 Jan 17
47/51

Comments: 42 pages, 34 figures

PAMOP project: computations in support of experiments and astrophysical applications [CL]

http://arxiv.org/abs/1701.03962


Our computation effort is primarily concentrated on support of current and future measurements being carried out at various synchrotron radiation facilities around the globe, and on photodissociation computations for astrophysical applications. In our work we solve the Schrödinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach from first principles. The time dependent close-coupling (TDCC) method is also used in our work. A brief summary of the methodology and ongoing developments implemented in the R-matrix suite of Breit-Pauli and Dirac-Atomic R-matrix codes (DARC) is presented.

Read this paper on arXiv…

B. McLaughlin, C. Ballance, M. Pindzola, et al.
Tue, 17 Jan 17
13/81

Comments: 17 pages, 10 figures; chapter in the book High Performance Computing in Science and Engineering ’16, edited by W. E. Nagel, D. B. Kröner, and M. Reich (Springer, New York and Berlin, 2017)

The Origins of Asteroidal Rock Disaggregation: Interplay of Thermal Fatigue and Microstructure [EPA]

http://arxiv.org/abs/1701.03510


The distributions of size and chemical composition in the regolith on airless bodies provide clues to the evolution of the solar system. Recently, the regolith on asteroid (25143) Itokawa, visited by the JAXA Hayabusa spacecraft, was observed to contain millimeter- to centimeter-sized particles. Itokawa boulders commonly display well-rounded profiles and surface textures that appear inconsistent with mechanical fragmentation during meteorite impact; the rounded profiles have been hypothesized to arise from rolling and movement on the surface as a consequence of seismic shaking. We provide a possible explanation of these observations by exploring the primary crack propagation mechanisms during thermal fatigue of a chondrite. We present the in situ evolution of the full-field strains on the surface as a function of temperature and microstructure, and observe and quantify the crack growth during thermal cycling. We observe that the primary fatigue crack path preferentially follows the interfaces between monominerals, leaving them intact after fragmentation. These observations are explained through a microstructure-based finite element model that is quantitatively compared with our experimental results. These results on the interactions of thermal fatigue cracking with the microstructure may ultimately allow us to distinguish between thermally induced fragments and impact products.

Read this paper on arXiv…

K. Hazeli, C. Mir, S. Papanikolaou, et al.
Mon, 16 Jan 17
17/55

Comments: 23 pages, 7 figures

Massively Parallel Computation of Accurate Densities for N-body Dark Matter Simulations using the Phase-Space-Element Method [CL]

http://arxiv.org/abs/1612.09491


In 2012 a method to analyze N-body dark matter simulations using a tetrahedral tessellation of the three-dimensional dark matter manifold in six-dimensional phase space was introduced. This paper presents an accurate density computation approach for large N-body datasets that is based on this technique and designed for massively parallel GPU clusters. The densities are obtained by intersecting the tessellation with the cells of a spatially adaptive grid structure. We speed up this computationally expensive part with an intersection algorithm tailored to modern GPU architectures. We discuss different communication and dynamic load-balancing strategies and compare their weak and strong scaling efficiencies for several large N-body simulations.
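
The core of the density estimate in this family of methods is elementary: each tetrahedron of the tessellation carries a fixed mass set by the initial conditions of the dark matter sheet, so its density contribution is mass over volume, with the volume given by a 3x3 determinant. A sketch of just that step (the expensive part in the paper, intersecting tetrahedra with an adaptive grid on GPUs, is not shown):

```python
import numpy as np

def tetra_density(verts, mass):
    """Density of one tracer tetrahedron of the phase-space tessellation.

    verts: four corner positions (the tracer particles); mass is fixed
    per tetrahedron, so density = mass / |volume|, and the volume is a
    determinant of the edge vectors divided by 6.
    """
    a, b, c, d = (np.asarray(v, dtype=float) for v in verts)
    vol = abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0
    return mass / vol
```

As the sheet folds, tetrahedra shrink and their density diverges at caustics, which is why this estimator resolves structure that particle-count binning smooths away.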

Read this paper on arXiv…

R. Kaehler
Mon, 2 Jan 17
25/45

Comments: N/A

Performance Optimisation of Smoothed Particle Hydrodynamics Algorithms for Multi/Many-Core Architectures [CL]

http://arxiv.org/abs/1612.06090


We describe a strategy for code modernisation of Gadget, a widely used community code for computational astrophysics. The focus of this work is on node-level performance optimisation, targeting current multi/many-core Intel architectures. We identify and isolate a sample code kernel, which is representative of a typical Smoothed Particle Hydrodynamics (SPH) algorithm. The code modifications include threading parallelism optimisation, change of the data layout into Structure of Arrays (SoA), auto-vectorisation and algorithmic improvements in the particle sorting. We measure lower execution time and improved threading scalability both on Intel Xeon ($2.6 \times$ on Ivy Bridge) and Xeon Phi ($13.7 \times$ on Knights Corner) systems. First tests on second generation Xeon Phi (Knights Landing) demonstrate the portability of the devised optimisation solutions to upcoming architectures.
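
The Structure-of-Arrays change is the easiest of these optimisations to illustrate. In AoS layout the fields of one particle are interleaved, so a loop over positions strides through memory; in SoA each field is contiguous, giving the unit-stride access that auto-vectorisation needs. A schematic in NumPy (Gadget itself is C; the layouts are analogous):

```python
import numpy as np

# Array of Structures: fields of each particle interleaved in memory,
# so reading only "x" strides over the full record size
aos = np.zeros(1000, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"), ("h", "f8")])
aos["x"] = np.arange(1000.0)

# Structure of Arrays: one contiguous, unit-stride array per field --
# the layout SIMD units and auto-vectorising compilers want
soa = {name: np.ascontiguousarray(aos[name]) for name in aos.dtype.names}

# a kernel touching only positions now streams three dense arrays
r2 = soa["x"]**2 + soa["y"]**2 + soa["z"]**2
```

In C the same transformation replaces an array of `struct particle` with one plain array per member, which is what enables the reported vectorisation gains.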

Read this paper on arXiv…

F. Baruffa, L. Iapichino, N. Hammer, et al.
Tue, 20 Dec 16
85/88

Comments: 18 pages, 5 figures, submitted

Imprints of the ejecta-companion interaction in Type Ia supernovae: main sequence, subgiant, and red giant companions [SSA]

http://arxiv.org/abs/1612.04684


We study supernova ejecta-companion interactions in a sample of realistic semidetached binary systems representative of Type Ia supernova progenitor binaries in the single-degenerate scenario. We model the interaction process with the help of a high-resolution hydrodynamic code assuming cylindrical symmetry. We find that the ejecta hole has a half-opening angle of 40–50$^\circ$, with the density lower by a factor of 2-4, in good agreement with previous studies. Quantitative differences from past results in the amounts and kinematics of the stripped companion material and in the levels of contamination of the companion with the ejecta material can be explained by different model assumptions and effects due to numerical diffusion. We analyse and, for the first time, provide simulation-based estimates of the amounts and of the thermal characteristics of the shock-heated material responsible for producing a prompt, soft X-ray emission. Besides the shocked ejecta material, considered in the original model by Kasen, we also account for the stripped, shock-heated envelope material of stellar companions, which we predict partially contributes to the prompt emission. The amount of energy deposited in the envelope is comparable to the energy stored in the ejecta. The total energy budget available for the prompt emission is a factor of about 2-4 smaller than originally predicted by Kasen. Although the shocked envelope has a higher characteristic temperature than the shocked ejecta, the temperature estimates of the shocked material are in good agreement with Kasen’s model. The hottest shocked plasma is produced in the subgiant companion case.

Read this paper on arXiv…

P. Boehner, T. Plewa and N. Langer
Thu, 15 Dec 16
44/59

Comments: 18 pages, version as published

A numerical scheme for the compressible low-Mach number regime of ideal fluid dynamics [CL]

http://arxiv.org/abs/1612.03910


Based on the Roe solver, a new technique that allows low Mach number flows to be represented correctly with a discretization of the compressible Euler equations was proposed in Miczek et al., “New numerical solver for flows at various Mach numbers”, A&A 576, A50 (2015). We analyze properties of this scheme and demonstrate that its limit yields a discretization of the continuous limit system. Furthermore we perform a linear stability analysis for the case of explicit time integration and study the performance of the scheme under implicit time integration via the evolution of its condition number. A numerical implementation demonstrates the capabilities of the scheme on the example of the Gresho vortex, which can be accurately followed down to Mach numbers of ~1e-10.

Read this paper on arXiv…

W. Barsukow, P. Edelmann, C. Klingenberg, et al.
Wed, 14 Dec 16
18/67

Comments: N/A

The Runge-Kutta-Wentzel-Kramers-Brillouin Method [CL]

http://arxiv.org/abs/1612.02288


We demonstrate the effectiveness of a novel scheme for numerically solving linear differential equations whose solutions exhibit extreme oscillation. We take a standard Runge-Kutta approach, but replace the Taylor expansion formula with a Wentzel-Kramers-Brillouin method. The method is demonstrated by application to the Airy equation, along with a more complicated burst-oscillation case. Finally, we compare our scheme to existing approaches.
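The core idea, stepping with an oscillatory WKB ansatz instead of a polynomial expansion, can be illustrated on the Airy equation $y'' + xy = 0$, whose oscillatory solution is $\mathrm{Ai}(-x)$. The sketch below compares the leading-order WKB form against SciPy's exact Airy function; `wkb_airy` is a hypothetical helper name, not the authors' code.

```python
import numpy as np
from scipy.special import airy

def wkb_airy(x):
    """Leading-order WKB approximation to Ai(-x) for x > 0.

    The WKB ansatz gives amplitude x**(-1/4) and phase
    zeta = (2/3) * x**(3/2); cf. the classical Airy asymptotics.
    """
    zeta = (2.0 / 3.0) * x**1.5
    return np.sin(zeta + np.pi / 4.0) / (np.sqrt(np.pi) * x**0.25)

# The approximation improves as the phase zeta grows, which is why a
# WKB-based stepper can take large steps precisely where a Runge-Kutta
# stepper must resolve every oscillation.
exact = airy(-10.0)[0]   # airy() returns (Ai, Ai', Bi, Bi')
approx = wkb_airy(10.0)
```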

Read this paper on arXiv…

W. Handley, A. Lasenby and M. Hobson
Thu, 8 Dec 16
54/69

Comments: 8 pages, 5 figures, submitted to the Journal of Computational Physics

Evolution of perturbed dynamical systems: analytical computation with time independent accuracy [IMA]

http://arxiv.org/abs/1611.08165


An analytical method for investigating the evolution of dynamical systems with time-independent accuracy is developed for perturbed Hamiltonian systems. Error-free estimation by means of computer algebra enables the application of the method to complex multi-dimensional Hamiltonian and dissipative systems. It also opens up opportunities for the qualitative study of chaotic trajectories. The performance of the method is demonstrated on perturbed two-oscillator systems. It can be applied to various non-linear physical and astrophysical systems, e.g. to long-term planetary dynamics.

Read this paper on arXiv…

A. Gurzadyan and A. Kocharyan
Mon, 28 Nov 16
75/75

Comments: 5 pages, 4 figures, Eur. Phys. J. C in press

No double detonations but core carbon ignitions in high-resolution, grid-based simulations of binary white dwarf mergers [SSA]

http://arxiv.org/abs/1611.05730


We study the violent phase of the merger of massive binary white dwarf systems. Our aim is to characterize the conditions for explosive burning to occur, and identify a possible explosion mechanism of Type Ia supernovae. The primary components of our model systems are carbon-oxygen (C/O) white dwarfs, while the secondaries are made either of C/O or of pure helium. We account for tidal effects in the initial conditions in a self-consistent way, and consider initially well-separated systems with slow inspiral rates. We study the merger evolution using an adaptive mesh refinement, reactive, Eulerian code in three dimensions, assuming symmetry across the orbital plane. We use a co-rotating reference frame to minimize the effects of numerical diffusion, and solve for self-gravity using a multi-grid approach. We find a novel detonation mechanism in C/O mergers with massive primaries. Here the detonation occurs in the primary's core and relies on the combined action of tidal heating, accretion heating, and self-heating due to nuclear burning. The exploding structure is compositionally stratified, with a reverse shock formed at the surface of the dense ejecta. The existence of such a shock has not been reported elsewhere. The explosion energy ($1.6\times 10^{51}$ erg) and $^{56}$Ni mass (0.86 M$_\odot$) are consistent with a SN Ia at the bright end of the luminosity distribution, with an approximate decline rate of $\Delta m_{15}(B)\approx 0.99$. Our study does not support double-detonation scenarios in the case of a system with a 0.6 M$_\odot$ helium secondary and a 0.9 M$_\odot$ primary. Although the accreted helium detonates, it fails to ignite carbon at the base of the boundary layer or in the primary's core.

Read this paper on arXiv…

D. Fenn, T. Plewa and A. Gawryszczak
Fri, 18 Nov 16
37/60

Comments: 22 pages, version as published

A numerical relativity scheme for cosmological simulations [CL]

http://arxiv.org/abs/1611.03437


Fully non-linear cosmological simulations may prove relevant in understanding relativistic/non-linear features and, therefore, in taking full advantage of the upcoming survey data. We propose a new 3+1 integration scheme which is based on the presence of a perfect fluid (hydro) field, evolves only physical states by construction, and passes the robustness test on an FLRW space-time. Although we use General Relativity as an example, the idea behind the scheme is applicable to any generally-covariant modified gravity theory and/or matter content, including an N-body sector.

Read this paper on arXiv…

D. Daverio, Y. Dirian and E. Mitsou
Fri, 11 Nov 16
2/40

Comments: 6 pages, 1 figure

Magnetohydrodynamic Simulation Code CANS+: Assessments and Applications [IMA]

http://arxiv.org/abs/1611.01775


We present a new magnetohydrodynamic (MHD) simulation code with the aim of providing accurate numerical solutions to astrophysical phenomena where discontinuities, shock waves, and turbulence are inherently important. The code implements the HLLD approximate Riemann solver, a fifth-order monotonicity-preserving interpolation scheme, and the hyperbolic divergence cleaning method for the magnetic field. This choice of schemes significantly improves numerical accuracy and stability, and reduces computational costs in multidimensional problems. Numerical tests of one- and two-dimensional problems show the advantages of the high-order scheme in comparison with results from a standard second-order TVD scheme. The present code enabled us to explore the long-term evolution of a three-dimensional global accretion disk, in which compressible MHD turbulence driven by the magneto-rotational instability saturated at much higher levels than with the second-order scheme, owing to the adoption of high-resolution, numerically robust algorithms.

Read this paper on arXiv…

Y. Matsumoto, Y. Asahina, Y. Kudoh, et al.
Tue, 8 Nov 16
36/75

Comments: 32 pages, 11 figures, submitted to Publ. Astron. Soc. Japan

Predicted reentrant melting of dense hydrogen at ultra-high pressures [CL]

http://arxiv.org/abs/1611.01418


The phase diagram of hydrogen poses one of the most important challenges in high-pressure physics and astrophysics. In particular, the melting of dense hydrogen is complicated by dimer dissociation, metallization, and nuclear quantum effects of protons, which together lead to a cold melting of dense hydrogen above 500 GPa. Nonetheless, the variation of the melting curve at higher pressures is virtually uncharted. Here we report that, using ab initio molecular dynamics and path integral simulations based on density functional theory, a new atomic phase is discovered which yields an uplifted melting curve of dense hydrogen beyond 2 TPa, resulting in a reentrant solid-liquid transition before entering the Wigner crystalline phase of protons. The findings greatly extend the phase diagram of dense hydrogen and put metallic hydrogen into the group of alkali metals, with its melting curve closely resembling those of lithium and sodium.

Read this paper on arXiv…

H. Geng and Q. Wu
Mon, 7 Nov 16
43/48

Comments: 27 pages, 10 figures

GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling [IMA]

http://arxiv.org/abs/1610.07279


The tree method is a widely implemented algorithm for collisionless $N$-body simulations in astrophysics and is well suited for GPUs. Adopting hierarchical time stepping can accelerate $N$-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named \texttt{GOTHIC}, which adopts both the tree method and hierarchical time steps. The code adopts adaptive optimizations by monitoring the execution time of each function on-the-fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, representative GPUs of the Fermi, Kepler, and Maxwell generations, show that hierarchical time stepping achieves a speedup of around 3–5 compared to shared time steps. The measured elapsed time per step of \texttt{GOTHIC} is 0.30~s or 0.44~s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with $2^{24} =$~16,777,216 particles. The averaged performance of the code corresponds to 10–30\% of the theoretical single precision peak performance of the GPU.
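The hierarchical (block) time-step idea can be sketched in a few lines: each particle's accuracy-limited step is snapped down to a power-of-two subdivision of the global step, so particles sharing a level can be advanced together. This is an illustrative sketch, not GOTHIC's actual implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def block_timestep_levels(dt_particle, dt_max, max_level=20):
    """Snap per-particle time steps onto a power-of-two hierarchy.

    Particle i is integrated with dt_max / 2**level[i], the largest
    hierarchical step not exceeding its accuracy-limited dt_particle[i].
    """
    dt = np.minimum(dt_particle, dt_max)
    level = np.ceil(np.log2(dt_max / dt)).astype(int)
    return np.clip(level, 0, max_level)
```

Grouping particles by level is what avoids forcing the whole system onto the smallest individual time step, the source of the 3-5x speedup quoted above.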

Read this paper on arXiv…

Y. Miki and M. Umemura
Tue, 25 Oct 16
20/69

Comments: 22 pages, 10 figures, 4 tables, accepted for publication in New Astronomy

Infrared Opacities in Dense Atmospheres of Cool White Dwarf Stars [SSA]

http://arxiv.org/abs/1610.07357


Dense, He-rich atmospheres of cool white dwarfs represent a modeling challenge, because these atmospheres consist of a dense fluid in which strong multi-atomic interactions determine the physics and chemistry. The ideal-gas-based description of absorption is therefore no longer adequate, which makes the opacities of these atmospheres difficult to model. This is illustrated by severe problems in fitting the spectra of cool, He-rich stars. A good description of the infrared (IR) opacity is essential for a proper assignment of the atmospheric parameters of these stars. Using methods of computational quantum chemistry, we simulate the IR absorption of dense He/H media. We find a significant IR absorption from He atoms (He-He-He collision-induced absorption, CIA) and a strong pressure distortion of the H$_2$-He CIA. We discuss the implications of these results for the interpretation of the spectra of cool stars.

Read this paper on arXiv…

P. Kowalski, S. Blouin and P. Dufour
Tue, 25 Oct 16
54/69

Comments: 6 pages, 5 figures, Proceedings of the EUROWD2016 workshop. To be published in ASPCS

The turbulent life of dust grains in the supernova-driven, multi-phase interstellar medium [GA]

http://arxiv.org/abs/1610.06579


Dust grains are an important component of the interstellar medium (ISM) of galaxies. We present the first direct measurement of the residence times of interstellar dust in the different ISM phases, and of the transition rates between these phases, in realistic hydrodynamical simulations of the multi-phase ISM. Our simulations include a time-dependent chemical network that follows the abundances of H^+, H, H_2, C^+ and CO and take into account self-shielding by gas and dust using a tree-based radiation transfer method. Supernova explosions are injected either at random locations, at density peaks, or as a mixture of the two. For each simulation, we investigate how matter circulates between the ISM phases and find more sizeable transitions than are considered in the simple mass-exchange schemes in the literature. The derived residence times in the ISM phases are characterised by broad distributions, in particular for the molecular, warm and hot medium. The most realistic simulations with random and mixed driving have median residence times in the molecular, cold, warm and hot phase around 17, 7, 44 and 1 Myr, respectively. The transition rates measured in the random driving run are in good agreement with observations of Ti gas-phase depletion in the warm and cold phases in a simple depletion model, although the depletion in the molecular phase is under-predicted. ISM phase definitions based on chemical abundance rather than temperature cuts are physically more meaningful, but lead to significantly different transition rates and residence times because there is no direct correspondence between the two definitions.

Read this paper on arXiv…

T. Peters, S. Zhukovska, T. Naab, et al.
Mon, 24 Oct 16
4/53

Comments: submitted to MNRAS, movies this https URL

Scaling Laws of Passive-Scalar Diffusion in the Interstellar Medium [GA]

http://arxiv.org/abs/1610.06590


Passive scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood (and usually unresolved in numerical simulations) in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic MHD turbulence, including shear. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disk). The evolution of the scalar distribution is not the same as obtained using a simple, constant "effective diffusivity" as in Smagorinsky models, because the scale-dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that only apply to ensemble behaviors (assuming many different, random scalar injection sites): individual Lagrangian "patches" remain coherent (poorly-mixed) and simply advect for a large number of turbulent flow-crossing times.
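The Richardson-diffusion phenomenology invoked above, an eddy diffusivity growing with separation as $\kappa \propto r^{4/3}$, implies super-diffusive growth $\langle r^2\rangle \propto t^3$ at late times. A minimal numerical check of that scaling (illustrative only, not the authors' code; the function name and parameter values are made up):

```python
def mean_square_separation(t, r0=1e-3, c=1.0, n=100000):
    """Forward-Euler integration of d<r^2>/dt = c * <r^2>**(2/3),
    i.e. a scale-dependent diffusivity kappa ~ r**(4/3) (Richardson)."""
    dt = t / n
    y = r0**2  # y = <r^2>, starting from a small initial separation r0
    for _ in range(n):
        y += dt * c * y ** (2.0 / 3.0)
    return y
```

The log-log slope of `mean_square_separation` between t = 5 and t = 10 comes out close to 3, whereas a constant diffusivity (Smagorinsky-like closure) would give slope 1, the qualitative difference the abstract emphasizes.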

Read this paper on arXiv…

M. Colbrook, X. Ma, P. Hopkins, et al.
Mon, 24 Oct 16
7/53

Comments: submitted to MNRAS, 8 pages, 4 figures, comments welcome

The SILCC project — IV. Impact of dissociating and ionising radiation on the interstellar medium and Halpha emission as a tracer of the star formation rate [GA]

http://arxiv.org/abs/1610.06569


We present three-dimensional radiation-hydrodynamical simulations of the impact of stellar winds, photoelectric heating, photodissociating and photoionising radiation, and supernovae on the chemical composition and star formation in a stratified disc model. Star formation is followed with a sink-based model for star clusters with populations of individual massive stars. Stellar winds and ionising radiation regulate the star formation rate at a level a factor of ~10 below that of the simulation with only supernova feedback, owing to their immediate impact on the ambient interstellar medium after star formation. Ionising radiation (with winds and supernovae) significantly reduces the ambient densities for most supernova explosions to rho < 10^-25 g cm^-3, compared to 10^-23 g cm^-3 for the model with only winds and supernovae. Radiation from massive stars reduces the amount of molecular hydrogen and increases the neutral hydrogen mass and volume filling fraction. Only this model results in a molecular gas depletion time scale of 2 Gyr and shows the best agreement with observations. In the radiative models, the Halpha emission is dominated by radiative recombination as opposed to collisional excitation (the dominant emission process in non-radiative models), which contributes only ~1-10% of the total Halpha emission. Individual massive stars (M >= 30 M_sun) with short lifetimes are responsible for significant fluctuations in the Halpha luminosities. The corresponding inferred star formation rates can underestimate the true instantaneous star formation rate by factors of ~10.

Read this paper on arXiv…

T. Peters, T. Naab, S. Walch, et al.
Mon, 24 Oct 16
28/53

Comments: submitted to MNRAS, movies this https URL

3D Hydrodynamic Simulations of Carbon Burning in Massive Stars [SSA]

http://arxiv.org/abs/1610.05173


We present the first detailed three-dimensional (3D) hydrodynamic implicit large eddy simulations of turbulent convection of carbon burning in massive stars. The simulations start with initial radial profiles mapped from a carbon burning shell within a 15$\,\textrm{M}_\odot$ 1D stellar evolution model. We consider four resolutions, from $128^3$ to $1024^3$ zones. The turbulent flow properties of these carbon burning simulations are very similar to those of the oxygen burning case. We performed a mean field analysis of the kinetic energy budgets within the Reynolds-averaged Navier-Stokes framework. For the upper convective boundary region, we find that the inferred numerical dissipation is insensitive to resolution for linear mesh resolutions between 512 and 1024 grid points. For the stiffer, more stratified lower boundary, our highest resolution model still shows signs of decreasing dissipation, suggesting that it is not yet fully resolved numerically. We estimate the widths of the upper and lower boundaries to be roughly 30% and 10% of the local pressure scale heights, respectively. The shape of the boundaries is significantly different from those used in stellar evolution models, which assume strict Ledoux or Schwarzschild boundaries. Entrainment rates derived for the carbon shell are consistent with those derived for the oxygen shells and with the entrainment law commonly used in the meteorological and atmosphere science communities. The entrainment rate is roughly inversely proportional to the bulk Richardson number, Ri$_{\rm B}$. We thus suggest the use of Ri$_{\rm B}$ as a means to apply the results of 3D hydrodynamics simulations to 1D stellar evolution modelling.

Read this paper on arXiv…

A. Cristini, C. Meakin, R. Hirschi, et al.
Tue, 18 Oct 16
66/70

Comments: Submitted to MNRAS – 17/10/16

How the nonlinear coupled oscillators modelization explains the Blazhko effect, the synchronisation of layers, the mode selection, the limit cycle, and the red limit of the instability strip [SSA]

http://arxiv.org/abs/1610.03323


Context. The Blazhko effect, in RR Lyrae type stars, is a century-old mystery. Dozens of theories exist, but none has been able to entirely reproduce the observational facts associated with this modulation phenomenon. Existing theories all rely on the usual continuous modelization of the star. Aims. We present a new paradigm which not only explains the Blazhko effect, but at the same time gives alternative explanations for the red limit of the instability strip, the synchronisation of layers, the mode selection, and the existence of a limit cycle for radially pulsating stars. Methods. We describe RR Lyrae type pulsating stars as a system of coupled nonlinear oscillators. Considering a spatial discretisation of the star and assuming spherical symmetry, we develop the equations of motion and energy up to third order in the radial and adiabatic case. We then include the influence of the ionization region as a relaxation oscillator, drawing on elements from synchronisation theory. Results. This discrete approach allows us to exploit existing results in the field of coupled nonlinear oscillators. For instance, the study of synchronicity leads to an explanation of the mode selection, the synchronisation of layers, the limit cycle, and the red limit of the instability strip. Most of all, the analogy with the Fermi-Pasta-Ulam (FPU) experiment enables us to understand the Blazhko effect. The transfer of energy between different modes, as induced by solitons, not only gives a plausible theory for light-curve modulation, but also explains the asymmetry of the sidelobes.
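The FPU analogy invoked in the Results can be reproduced in miniature: a chain of nonlinearly coupled oscillators initialized in its lowest normal mode slowly leaks energy to higher modes. The sketch below follows the classic FPU-$\alpha$ setup with illustrative parameters; it is not the author's stellar model, and all names are hypothetical.

```python
import numpy as np

def fpu_alpha_energy_transfer(n=32, alpha=0.25, amp=4.0, dt=0.05, steps=20000):
    """Leapfrog integration of the FPU-alpha chain (fixed ends).

    Starts with all energy in the lowest normal mode and returns the
    fraction of harmonic energy remaining in that mode at the end.
    """
    j = np.arange(1, n + 1)
    x = amp * np.sin(np.pi * j / (n + 1))  # lowest mode shape
    v = np.zeros(n)

    def force(x):
        # spring extensions, including the walls at both ends
        d = np.diff(np.concatenate(([0.0], x, [0.0])))
        return (d[1:] - d[:-1]) + alpha * (d[1:] ** 2 - d[:-1] ** 2)

    def mode1_fraction(x, v):
        k = np.arange(1, n + 1)
        modes = np.sin(np.pi * np.outer(k, j) / (n + 1))
        a, adot = modes @ x, modes @ v
        omega2 = (2.0 * np.sin(np.pi * k / (2 * (n + 1)))) ** 2
        e = adot**2 + omega2 * a**2  # harmonic mode energies (unnormalized)
        return e[0] / e.sum()

    f = force(x)
    for _ in range(steps):          # kick-drift-kick leapfrog
        v += 0.5 * dt * f
        x += dt * v
        f = force(x)
        v += 0.5 * dt * f
    return mode1_fraction(x, v)
```

The quadratic coupling feeds the initial mode-1 energy into neighbouring modes, the same soliton-mediated transfer the abstract proposes as the origin of the light-curve modulation.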

Read this paper on arXiv…

C. Zalian
Wed, 12 Oct 16
61/64

Comments: N/A

Symplectic fourth-order maps for the collisional N-body problem [CL]

http://arxiv.org/abs/1609.09375


We study analytically and experimentally certain symplectic and time-reversible N-body integrators which employ a Kepler solver for each pair-wise interaction, including the method of Hernandez & Bertschinger (2015). Owing to the Kepler solver, these methods treat close two-body interactions correctly, while close three-body encounters contribute to the truncation error at second order and above. The second-order errors can be corrected to obtain a fourth-order scheme with little computational overhead. We generalise this map to an integrator which employs a Kepler solver only for selected interactions and yet retains fourth-order accuracy without backward steps. In this case, however, two-body encounters not treated via a Kepler solver contribute to the truncation error.
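The paper's fourth-order correction is its own construction, but the generic route from a second-order symplectic map to fourth order, composing three substeps of one of which runs backward, is the Yoshida triple jump, sketched below for a general acceleration field. This is an illustrative contrast, not the paper's scheme: note the negative substep $w_0 < 0$, which is exactly what the authors' fourth-order map avoids.

```python
import numpy as np

def leapfrog(x, v, acc, dt):
    """One kick-drift-kick step of the second-order leapfrog map."""
    v = v + 0.5 * dt * acc(x)
    x = x + dt * v
    v = v + 0.5 * dt * acc(x)
    return x, v

def yoshida4(x, v, acc, dt):
    """Fourth-order symplectic step built from three leapfrog substeps."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1          # negative: a backward substep
    for w in (w1, w0, w1):
        x, v = leapfrog(x, v, acc, w * dt)
    return x, v
```

On a smooth test problem such as the harmonic oscillator, halving the step of `yoshida4` reduces the error by roughly a factor of 16, the fourth-order signature.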

Read this paper on arXiv…

W. Dehnen and D. Hernandez
Fri, 30 Sep 16
25/75

Comments: 17 pages, re-submitted to MNRAS

Analysis of ground penetrating radar data from the tunnel beneath the Temple of the Feathered Serpent in Teotihuacan, Mexico, using new multi-cross algorithms [CL]

http://arxiv.org/abs/1609.08736


As multichannel equipment is increasingly used, we developed two algorithms for the multivariable wavelet analysis of GPR signals (the multi-cross wavelet, MCW, and the Fourier multi-cross function, FMC) and applied them to raw GPR traces of archeological subsurface strata. The traces were from the tunnel located beneath the Temple of the Feathered Serpent (The Citadel, Teotihuacan, Mexico). The MCW and FMC algorithms determined the periods of the subsurface strata of the tunnel. GPR traces inside-and-outside the tunnel/chamber, outside the tunnel/chamber only, and inside the tunnel/chamber only, analyzed with the MCW and filtered FMC algorithms, yielded the periods of the tunnel and chamber fillings, the clay, and the matrix (limestone-clay compound). The tunnel filling period obtained by MCW analysis (14.37 ns) reflects the mixed limestone-clay compound of this stratum, since its value is close to the period of the matrix (15.22 ns); the periods of the chamber filling (11.40 ns) and the matrix (11.40 ns) were almost identical. FMC analysis of the tunnel obtained a period (5.08 ns) close to that of the chamber (4.27 ns), suggesting the tunnel and chamber are filled with similar materials. The use of both algorithms allows a deeper analysis, since the similarities of the tunnel and chamber filling periods could not have been determined with the MCW algorithm alone. The successful application of the new multi-cross algorithms to archeological GPR data suggests they may also be used to search for water and other resources in celestial bodies.

Read this paper on arXiv…

F. Lopez-Rodriguez, V. Velasco-Herrera, R. Alvarez-Bejar, et al.
Fri, 30 Sep 16
47/75

Comments: 27 pages, 1 table, 14 figures

Exoplanetary Detection By Multifractal Spectral Analysis [EPA]

http://arxiv.org/abs/1609.06148


Owing to technological advances, the number of exoplanets discovered has risen dramatically in the last few years. However, when trying to observe Earth analogs, it is often difficult to test the veracity of a detection. We have developed a new approach to the analysis of exoplanetary spectral observations based on temporal multifractality, which identifies time scales that characterize planetary orbital motion around the host star. Without fitting spectral data to stellar models, we show how the planetary signal can be robustly detected from noisy data using the noise amplitude as a source of information. For observations of transiting planets, combining this method with simple geometry allows us to relate the time scales obtained to the primary transit and secondary eclipse of the exoplanets. Making use of data obtained with ground-based and space-based observations, we have tested our approach on HD 189733b. Moreover, we have investigated the use of this technique in measuring planetary orbital motion via Doppler shift detection. Finally, we have analyzed numerical data obtained using the SOAP 2.0 tool, which simulates a stellar spectrum and the influence of the presence of a planet or a spot on that spectrum over one orbital period. We have demonstrated that, so long as the signal-to-noise ratio is $\ge$ 100, our approach reconstructs the planetary orbital period, as well as the rotation period of a spot on the stellar surface.

Read this paper on arXiv…

S. Agarwal, F. Sordo and J. Wettlaufer
Wed, 21 Sep 16
47/53

Comments: N/A

Extreme Scale-out SuperMUC Phase 2 – lessons learned [CL]

http://arxiv.org/abs/1609.01507


In spring 2015, the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) installed their new Peta-Scale System SuperMUC Phase2. Selected users were invited for a 28-day extreme scale-out block operation during which they were allowed to use the full system for their applications. The following projects participated in the extreme scale-out workshop: BQCD (Quantum Physics), SeisSol (Geophysics, Seismics), GPI-2/GASPI (Toolkit for HPC), Seven-League Hydro (Astrophysics), ILBDC (Lattice Boltzmann CFD), Iphigenie (Molecular Dynamic), FLASH (Astrophysics), GADGET (Cosmological Dynamics), PSC (Plasma Physics), waLBerla (Lattice Boltzmann CFD), Musubi (Lattice Boltzmann CFD), Vertex3D (Stellar Astrophysics), CIAO (Combustion CFD), and LS1-Mardyn (Material Science). The projects were allowed to use the machine exclusively during the 28-day period, which corresponds to a total of 63.4 million core-hours, of which 43.8 million core-hours were used by the applications, resulting in a utilization of 69%. The top 3 users were using 15.2, 6.4, and 4.7 million core-hours, respectively.

Read this paper on arXiv…

N. Hammer, F. Jamitzky, H. Satzger, et al.
Wed, 7 Sep 16
46/61

Comments: 10 pages, 5 figures, presented at ParCo2015 – Advances in Parallel Computing, held in Edinburgh, September 2015. The final publication is available at IOS Press through this http URL

SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics [HEAP]

http://arxiv.org/abs/1609.00098


We introduce a new relativistic astrophysics code, SpECTRE, that combines a discontinuous Galerkin method with a task-based parallelism model. SpECTRE’s goal is to achieve more accurate solutions for challenging relativistic astrophysics problems such as core-collapse supernovae and binary neutron star mergers. The robustness of the discontinuous Galerkin method allows for the use of high-resolution shock capturing methods in regions where (relativistic) shocks are found, while exploiting high-order accuracy in smooth regions. A task-based parallelism model allows efficient use of the largest supercomputers for problems with a heterogeneous workload over disparate spatial and temporal scales. We argue that the locality and algorithmic structure of discontinuous Galerkin methods will exhibit good scalability within a task-based parallelism framework. We demonstrate the code on a wide variety of challenging benchmark problems in (non)-relativistic (magneto)-hydrodynamics. We demonstrate the code’s scalability including its strong scaling on the NCSA Blue Waters supercomputer up to the machine’s full capacity of 22,380 nodes using 671,400 threads.

Read this paper on arXiv…

L. Kidder, S. Field, F. Foucart, et al.
Fri, 2 Sep 16
6/49

Comments: 39 pages, 13 figures, and 7 tables

Improved Performances in Subsonic Flows of an SPH Scheme with Gradients Estimated using an Integral Approach [IMA]

http://arxiv.org/abs/1608.08361


In this paper we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular we consider the Gresho-Chan vortex problem, the growth of Kelvin-Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disc problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.

Read this paper on arXiv…

R. Valdarnini
Wed, 31 Aug 16
24/61

Comments: 25 pages, 11 figures, accepted for publication in ApJ

Raining on black holes and massive galaxies: the top-down multiphase condensation model [GA]

http://arxiv.org/abs/1608.08216


The atmospheres filling massive galaxies, groups, and clusters display remarkable similarities with rainfalls. Such plasma halos are shaped by AGN heating and subsonic turbulence (~150 km/s), as probed by Hitomi. Our new 3D high-resolution simulations show that the soft X-ray (< 1 keV) plasma cools rapidly via radiative emission at the high-density interfaces of the turbulent eddies, stimulating a top-down condensation cascade of warm, $10^4$ K filaments. The ionized (optical/UV) filaments extend up to several kpc and form a skin enveloping the neutral filaments (optical/IR/21-cm). The peaks of the warm filaments further condense into cold molecular clouds (<50 K; radio) with total mass up to several $10^7$ M$_\odot$, i.e., 5/50$\times$ the neutral/ionized masses. The multiphase structures inherit the chaotic kinematics and are dynamically supported. In the inner 500 pc, the clouds collide inelastically, mixing angular momentum and leading to chaotic cold accretion (CCA). The black hole accretion rate (BHAR) can be modeled via quasi-spherical viscous accretion with a collisional mean free path of ~100 pc. Beyond the inner kpc region, pressure torques drive the angular momentum transport. In CCA, the BHAR is recurrently boosted by up to 2 dex compared with the disc evolution, which arises when turbulence is subdominant. The CCA BHAR distribution is lognormal with a pink-noise power spectrum characteristic of fractal phenomena. The rapid self-similar CCA variability can explain the light-curve variability of AGN and HMXBs. An improved criterion to trace thermal instability is proposed. The 3-phase CCA reproduces crucial observations of cospatial multiphase gas in massive galaxies, such as Chandra X-ray images, SOAR H$\alpha$ warm filaments and kinematics, Herschel [C$^+$] emission, and ALMA giant molecular associations. CCA plays a key role in AGN feedback, AGN unification/obscuration, and the evolution of BHs, galaxies, and clusters.

Read this paper on arXiv…

M. Gaspari, P. Temi and F. Brighenti
Wed, 31 Aug 16
58/61

Comments: 27 pages, 29 figures; feedback welcome

Massive Computation for Understanding Core-Collapse Supernova Explosions [HEAP]

http://arxiv.org/abs/1608.08069


How do massive stars explode? Progress toward the answer is driven by increases in compute power. Petascale supercomputers are enabling detailed three-dimensional simulations of core-collapse supernovae. These are elucidating the role of fluid instabilities, turbulence, and magnetic field amplification in supernova engines.

Read this paper on arXiv…

C. Ott
Tue, 30 Aug 16
36/78

Comments: 12 pages, 8 figures. Refereed overview article published in Computing in Science & Engineering (CiSE; number of references limited due to magazine format). this http URL Non-copyedited version prepared by the author

A new insight into the consistency of smoothed particle hydrodynamics [CL]

http://arxiv.org/abs/1608.05883


In this paper the problem of consistency of smoothed particle hydrodynamics (SPH) is solved. A novel error analysis is developed in $n$-dimensional space using the Poisson summation formula, which enables the treatment of the kernel and particle approximation errors in a combined fashion. New consistency integral relations are derived for the particle approximation which correspond to the cosine Fourier transform of the classically known consistency conditions for the kernel approximation. The functional dependence of the error bounds on the SPH interpolation parameters, namely the smoothing length $h$ and the number of particles within the kernel support ${\cal{N}}$, is demonstrated explicitly, from which consistency conditions are seen to follow naturally. As ${\cal{N}}\to\infty$, the particle approximation converges to the kernel approximation independently of $h$ provided that the particle mass scales with $h$ as $m\propto h^{\beta}$, with $\beta >n$. This implies that as $h\to 0$, the joint limit $m\to 0$, ${\cal{N}}\to\infty$, and $N\to\infty$ is necessary for complete convergence to the continuum, where $N$ is the total number of particles. The analysis also reveals the presence of a dominant error term of the form $(\ln {\cal{N}})^{n}/{\cal{N}}$, which tends asymptotically to $1/{\cal{N}}$ when ${\cal{N}}\gg 1$, as has long been conjectured based on the similarity between the SPH and quasi-Monte Carlo estimates.
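
The kernel-plus-particle approximation that the analysis treats can be illustrated with a minimal 1D SPH interpolation sketch (a hypothetical toy example, not the authors' analysis code): a cubic-spline kernel estimate of a smooth function from $N$ equal-mass particles, whose error shrinks as $h\to 0$ and ${\cal N}$ stays fixed relative to the spacing.

```python
import math

def cubic_spline_1d(r, h):
    """Standard 1D cubic-spline SPH kernel, normalized so its integral is 1."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

def sph_interpolate(x, positions, values, h, volume):
    """Particle approximation <f>(x) = sum_j V_j f_j W(x - x_j, h)."""
    return sum(v * cubic_spline_1d(x - xj, h) * volume
               for xj, v in zip(positions, values))

def interpolation_error(n, eta=4.0):
    """Max SPH error for f(x) = sin(pi x) on a uniform particle lattice in [0, 1]."""
    dx = 1.0 / n                 # particle spacing = volume (mass/density) per particle
    h = eta * dx                 # smoothing length tied to the spacing
    xs = [(j + 0.5) * dx for j in range(n)]
    fs = [math.sin(math.pi * xj) for xj in xs]
    # sample well inside the domain to avoid kernel truncation at the boundaries
    pts = [0.3, 0.5, 0.7]
    return max(abs(sph_interpolate(p, xs, fs, h, dx) - math.sin(math.pi * p))
               for p in pts)
```

Refining the lattice (larger `n`, hence smaller `h` at fixed neighbor count) shrinks the error roughly as $h^2$, consistent with the kernel-approximation error dominating in this regime.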

Read this paper on arXiv…

L. Sigalotti, O. Rendon, J. Klapp, et. al.
Tue, 23 Aug 16
6/51

Comments: 27 pages

Tree Code for Collision Detection of Large Numbers of Particles: Application for the Breit-Wheeler Process [CL]

http://arxiv.org/abs/1608.01125


Collision detection of a large number N of particles can be challenging. Directly testing N particles for collision among each other leads to N^2 queries. Especially in scenarios where fast, densely packed particles interact, challenges arise for classical methods like Particle-in-Cell or Monte Carlo. Modern collision detection methods utilising bounding volume hierarchies are suitable to overcome these challenges and allow a detailed analysis of the interaction of large numbers of particles. This approach is applied to the analysis of the collision of two photon beams leading to the creation of electron-positron pairs.
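
The broad-phase idea can be sketched with a uniform spatial hash rather than the paper's bounding-volume hierarchy (a hedged toy example, not the authors' tree code), but the principle is the same: prune the N^2 pair test by comparing only particles in the same or adjacent spatial cells.

```python
import random
from collections import defaultdict

def close_pairs_brute(pos, d):
    """O(N^2) reference: all index pairs (i < j) closer than d."""
    return {(i, j)
            for i in range(len(pos)) for j in range(i + 1, len(pos))
            if (pos[i][0] - pos[j][0])**2 + (pos[i][1] - pos[j][1])**2 < d * d}

def close_pairs_grid(pos, d):
    """Broad phase via spatial hashing: bin particles into cells of side d,
    then distance-test only particles in the same or adjacent cells."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(pos):
        cells[(int(x // d), int(y // d))].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), ()):
                    for i in members:
                        if i < j and (pos[i][0] - pos[j][0])**2 + \
                                     (pos[i][1] - pos[j][1])**2 < d * d:
                            pairs.add((i, j))
    return pairs

random.seed(1)
particles = [(random.random(), random.random()) for _ in range(200)]
```

Because any pair within distance d differs by at most one cell index per axis, the 3x3 neighborhood scan finds exactly the same pairs as the brute-force test, at far lower cost for dense particle sets.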

Read this paper on arXiv…

O. Jansen, E. dHumieres, X. Ribeyre, et. al.
Thu, 4 Aug 16
38/70

Comments: Preprint version

A Second-order Divergence-constrained Multidimensional Numerical Scheme for Relativistic Two-Fluid Electrodynamics [HEAP]

http://arxiv.org/abs/1607.08487


A new multidimensional simulation code for relativistic two-fluid electrodynamics (RTFED) is described. The basic equations consist of the full set of Maxwell’s equations coupled with relativistic hydrodynamic equations for two separate charged fluids, representing the dynamics of either an electron-positron or an electron-proton plasma. It can be recognized as an extension of conventional relativistic magnetohydrodynamics (RMHD). Finite resistivity may be introduced as a friction between the two species, which reduces to resistive RMHD in the long-wavelength limit without suffering from a singularity at infinite conductivity. A numerical scheme based on the HLL (Harten-Lax-van Leer) Riemann solver is proposed that exactly preserves the two divergence constraints of Maxwell’s equations simultaneously. Several benchmark problems demonstrate that it is capable of describing RMHD shocks/discontinuities in the long-wavelength limit, as well as dispersive characteristics due to the two-fluid effect appearing at small scales. This shows that the RTFED model is a promising tool for high-energy astrophysics applications.
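
The HLL building block at the heart of such schemes is easy to state for a 1D scalar conservation law (a hedged sketch for illustration only; the paper's solver operates on the full coupled Maxwell/two-fluid system):

```python
def hll_flux(uL, uR, f, sL, sR):
    """HLL numerical flux for a scalar conservation law u_t + f(u)_x = 0,
    given left/right states and wave-speed bounds sL <= sR."""
    if sL >= 0.0:          # all waves move right: pure upwind from the left
        return f(uL)
    if sR <= 0.0:          # all waves move left: pure upwind from the right
        return f(uR)
    # subsonic case: single averaged intermediate state between the two waves
    return (sR * f(uL) - sL * f(uR) + sL * sR * (uR - uL)) / (sR - sL)

# Burgers flux with simple wave-speed estimates f'(u) = u
burgers = lambda u: 0.5 * u * u
uL, uR = 1.0, 0.0
sL, sR = min(uL, uR), max(uL, uR)
F = hll_flux(uL, uR, burgers, sL, sR)   # equals f(uL) = 0.5 for this right-moving shock
```

With symmetric speed bounds the intermediate-state formula reduces to the familiar Rusanov (local Lax-Friedrichs) flux, which is why HLL is robust, if diffusive, across shocks and discontinuities.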

Read this paper on arXiv…

T. Amano
Fri, 29 Jul 16
5/44

Comments: 23 pages, accepted for publication in ApJ

Apsara: A multi-dimensional unsplit fourth-order explicit Eulerian hydrodynamics code for arbitrary curvilinear grids [CL]

http://arxiv.org/abs/1607.04272


We present a new fourth-order finite-volume hydrodynamics code named Apsara. The code employs the high-order finite-volume method for mapped coordinates developed by Colella et al. (2011) with extensions for non-linear hyperbolic conservation laws by McCorquodale & Colella (2011) and Guzik et al. (2012). Using the mapped-grid technique, Apsara can handle arbitrary structured curvilinear meshes in three spatial dimensions. The code has successfully passed several hydrodynamic test problems, including the advection of a Gaussian density profile and a non-linear vortex, as well as the propagation of linear acoustic waves. For these test problems Apsara produces fourth-order accurate results in the case of smooth grid mappings. The order of accuracy is reduced to first order when using the non-smooth circular grid mapping of Calhoun et al. (2008). When applying the high-order method of McCorquodale & Colella (2011) to simulations of low-Mach-number flows, e.g. the Gresho vortex and the Taylor-Green vortex, we find that Apsara delivers superior results to codes based on the dimensionally split PPM method widely used in astrophysics. Hence, Apsara is a suitable tool for simulating highly subsonic flows in astrophysics. As a first astrophysical application, we perform ILES simulations of anisotropic turbulence in the context of core-collapse supernovae, obtaining results similar to those of Radice et al. (2015).

Read this paper on arXiv…

A. Wongwathanarat, H. Grimm-Strele and E. Muller
Mon, 18 Jul 16
39/50

Comments: 16 pages, 9 figures, 8 tables; accepted by Astronomy & Astrophysics

On the equivalence between the Scheduled Relaxation Jacobi method and Richardson's non-stationary method [CL]

http://arxiv.org/abs/1607.03712


The Scheduled Relaxation Jacobi (SRJ) method is an extension of the classical Jacobi iterative method to solve linear systems of equations ($Au=b$) associated with elliptic problems. It inherits its robustness and accelerates its convergence rate by computing a set of $P$ relaxation factors that result from a minimization problem. In a typical SRJ scheme, this set of factors is employed in cycles of $M$ consecutive iterations until a prescribed tolerance is reached. We present the analytic form for the optimal set of relaxation factors for the case in which all of them are different, and find that the resulting algorithm is equivalent to a non-stationary generalized Richardson’s method. Our method to estimate the weights has the advantage that the explicit computation of the maximum and minimum eigenvalues of the matrix $A$ is replaced by the (much easier) calculation of the maximum and minimum frequencies derived from a von Neumann analysis. This set of weights is also optimal for the general problem, resulting in the fastest convergence of all possible SRJ schemes for a given grid structure. We also show that, with the set of weights computed for the optimal SRJ scheme for a fixed cycle size, it is possible in some cases to estimate numerically the optimal value of the parameter $\omega$ in the Successive Overrelaxation (SOR) method. Finally, we demonstrate with practical examples that our method also works very well for Poisson-like problems in which a high-order discretization of the Laplacian operator is employed. This is of interest since such discretizations do not yield consistently ordered $A$ matrices. Furthermore, the optimal SRJ schemes deduced here are advantageous over existing SOR implementations for high-order discretizations of the Laplacian operator inasmuch as they do not need to resort to multi-coloring schemes for their parallel implementation. (abridged)
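
The connection to non-stationary Richardson iteration can be illustrated on a model 1D Poisson problem (a sketch using illustrative Chebyshev-node relaxation factors, not the paper's optimal SRJ weights): a cycle of $M$ distinct factors applied to weighted Jacobi sweeps converges far faster than the stationary scheme with the same total number of sweeps.

```python
import math

def weighted_jacobi_solve(weights, cycles, n=32):
    """Solve -u'' = 1 on [0,1], u(0)=u(1)=0 (exact u = x(1-x)/2) with
    weighted Jacobi sweeps u <- u + w * D^{-1} (b - A u); returns max error."""
    h = 1.0 / n
    u = [0.0] * (n + 1)
    for _ in range(cycles):
        for w in weights:
            new = u[:]
            for i in range(1, n):
                resid = 1.0 - (2*u[i] - u[i-1] - u[i+1]) / h**2
                new[i] = u[i] + w * resid * h**2 / 2.0
            u = new
    return max(abs(u[i] - i*h*(1.0 - i*h)/2.0) for i in range(n + 1))

n, M = 32, 16
# Eigenvalues of D^{-1}A lie in [mu_min, mu_max]; choose relaxation factors as
# reciprocals of Chebyshev nodes on that interval (non-stationary Richardson).
mu_min = 1.0 - math.cos(math.pi / n)
mu_max = 1.0 - math.cos((n - 1) * math.pi / n)
mid, rad = (mu_max + mu_min) / 2.0, (mu_max - mu_min) / 2.0
weights = [1.0 / (mid + rad * math.cos((2*m - 1) * math.pi / (2*M)))
           for m in range(1, M + 1)]

err_cycle = weighted_jacobi_solve(weights, 4)    # 4 cycles of M = 16 sweeps
err_plain = weighted_jacobi_solve([1.0], 4 * M)  # plain Jacobi, same 64 sweeps
```

As generated, the weights are applied smallest-first (damping high-frequency error first), which keeps the intermediate iterates bounded within each cycle; weight ordering is a known practical concern for such schemes.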

Read this paper on arXiv…

J. Adsuara, I. Cordero-Carrion, P. Cerda-Duran, et. al.
Thu, 14 Jul 16
52/72

Comments: 28 pages, 5 figures, submitted to JCP

Hybrid Adaptive Ray-Moment Method (HARM$^2$): A Highly Parallel Method for Radiation Hydrodynamics on Adaptive Grids [IMA]

http://arxiv.org/abs/1607.01802


We present a highly-parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially-adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM$^2$) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to $\mathcal{O}(10^3)$ processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.
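
The core long-characteristics operation, accumulating optical depth from a point source along a ray through a gridded opacity field, can be sketched as follows (a hypothetical toy example, unrelated to the HARM$^2$ implementation):

```python
import math

def optical_depth(kappa, src, dst, nsamp=400):
    """Integrate opacity along the straight ray from src to dst through a
    unit-square grid kappa[iy][ix] (units: inverse length)."""
    ny, nx = len(kappa), len(kappa[0])
    length = math.hypot(dst[0] - src[0], dst[1] - src[1])
    ds = length / nsamp
    tau = 0.0
    for k in range(nsamp):
        t = (k + 0.5) / nsamp            # midpoint sampling along the ray
        x = src[0] + t * (dst[0] - src[0])
        y = src[1] + t * (dst[1] - src[1])
        ix = min(int(x * nx), nx - 1)
        iy = min(int(y * ny), ny - 1)
        tau += kappa[iy][ix] * ds
    return tau

# Uniform opacity kappa = 2 everywhere: tau = 2 * (ray length); flux ~ exp(-tau)
grid = [[2.0] * 16 for _ in range(16)]
tau = optical_depth(grid, (0.1, 0.5), (0.9, 0.5))
attenuation = math.exp(-tau)
```

A production long-characteristics method traces many such rays per source and, as the abstract emphasizes, the hard part is doing this scalably across distributed adaptive grids rather than the per-ray integral itself.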

Read this paper on arXiv…

A. Rosen, M. Krumholz, J. Oishi, et. al.
Fri, 8 Jul 16
4/56

Comments: 27 pages, 8 figures, submitted to Journal of Computational Physics. Referee comments received and will be addressed in final version

The world's largest turbulence simulations [SSA]

http://arxiv.org/abs/1607.00630


Understanding turbulence is critical for a wide range of terrestrial and astrophysical applications. Here we present first results of the world’s highest-resolution simulation of turbulence to date. The current simulation has a grid resolution of 10048^3 points and was performed on 65536 compute cores on SuperMUC at the Leibniz Supercomputing Centre (LRZ). We present a scaling test of our modified version of the FLASH code, which updates the hydrodynamical equations in less than 3 microseconds per cell per time step. A first look at the column density structure of the 10048^3 simulation is presented, and a detailed analysis will be provided in a forthcoming paper.

Read this paper on arXiv…

C. Federrath, R. Klessen, L. Iapichino, et. al.
Tue, 5 Jul 16
12/80

Comments: 2 pages, 3 figures, book contribution to “High Performance Computing in Science und Engineering – Garching/Munich 2016”, eds. S. Wagner, A. Bode, H. Brüchle, and M. Brehm