A Hybrid Riemann Solver for Large Hyperbolic Systems of Conservation Laws [CL]

http://arxiv.org/abs/1607.05721


We are interested in the numerical solution of large systems of hyperbolic conservation laws, or systems in which the characteristic decomposition is expensive to compute. Solving such equations with finite-volume or discontinuous Galerkin methods requires a numerical flux function that solves local Riemann problems at cell interfaces. There are various ways to define this numerical flux. At one end of the spectrum lies the robust but very diffusive Lax-Friedrichs solver; at the other end, the upwind Godunov solver, which resolves all resulting waves. The drawback of the latter method is the costly computation of the eigensystem.
This work presents a family of simple first-order Riemann solvers, named HLLX$\omega$, which avoid solving the eigensystem. The new method reproduces all waves of the system with less dissipation than other solvers with similar input and effort, such as HLL and FORCE. The family of Riemann solvers can be seen as an extension or generalization of the methods introduced by Degond et al. (1999). We require only the same input values as HLL, namely the globally fastest wave speeds in both directions, or an estimate of these speeds. Thus, the new family of Riemann solvers is particularly efficient for large systems of conservation laws when the spectral decomposition is expensive to compute or no explicit expression for the eigensystem is available.
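For orientation, here is a minimal sketch (in Python, with illustrative names not taken from the paper) of the classical HLL flux, the incomplete solver that HLLX$\omega$ is designed to improve on: like the new family, it needs only estimates of the fastest wave speeds in both directions plus the physical flux at the left and right states.

```python
import numpy as np

def hll_flux(u_L, u_R, flux, s_L, s_R):
    """Classical HLL numerical flux for a 1D hyperbolic system.

    u_L, u_R : conserved states on either side of the cell interface
    flux     : callable returning the physical flux F(u)
    s_L, s_R : estimates of the fastest wave speeds to the left/right
    """
    F_L, F_R = flux(u_L), flux(u_R)
    if s_L >= 0.0:          # all waves move to the right: pure upwinding
        return F_L
    if s_R <= 0.0:          # all waves move to the left
        return F_R
    # otherwise: a single averaged intermediate state between s_L and s_R
    return (s_R * F_L - s_L * F_R + s_L * s_R * (u_R - u_L)) / (s_R - s_L)
```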

Read this paper on arXiv…

B. Schmidtmann and M. Torrilhon
Thu, 21 Jul 16
46/48

Comments: arXiv admin note: text overlap with arXiv:1606.08040

Hybrid Riemann Solvers for Large Systems of Conservation Laws [CL]

http://arxiv.org/abs/1606.08040


In this paper we present a new family of approximate Riemann solvers for the numerical approximation of solutions of hyperbolic conservation laws. They are approximate, also referred to as incomplete, in the sense that they avoid computing the characteristic decomposition of the flux Jacobian. Instead, they require only an estimate of the globally fastest wave speeds in both directions. Thus, this family of solvers is particularly efficient for large systems of conservation laws, i.e., systems with many different propagation speeds, and when no explicit expression for the eigensystem is available. Even though only the fastest wave speeds are needed as input values, the new family of Riemann solvers reproduces all waves with less dissipation than HLL, which has the same prerequisites, while requiring only one additional flux evaluation.
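To show where such a two-point numerical flux enters a scheme, here is a hedged sketch of one step of a first-order finite-volume update; the `num_flux` argument could be the `hll_flux` sketch above, the boundary treatment is deliberately omitted, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def fv_step(u, dx, dt, flux, s_L, s_R, num_flux):
    """One forward-Euler step of a first-order finite-volume scheme.

    u        : cell averages, shape (n_cells, n_vars)
    num_flux : a two-point numerical flux such as hll_flux above
    """
    # numerical fluxes at the interfaces between neighbouring cells
    F = np.array([num_flux(u[i], u[i + 1], flux, s_L, s_R)
                  for i in range(len(u) - 1)])
    u_new = u.copy()
    # conservative update of the interior cells (boundary cells left untouched)
    u_new[1:-1] -= dt / dx * (F[1:] - F[:-1])
    return u_new
```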

Read this paper on arXiv…

B. Schmidtmann, M. Astrakhantceva and M. Torrilhon
Tue, 28 Jun 16
56/58

Comments: 9 pages

Data Acquisition and Control System for High-Performance Large-Area CCD Systems [CL]

http://arxiv.org/abs/1507.05391


Astronomical CCD systems based on second-generation DINACON controllers were developed at the SAO RAS Advanced Design Laboratory more than seven years ago and since then have been in constant operation at the 6-meter and Zeiss-1000 telescopes. Such systems use monolithic large-area CCDs. We describe the software developed for the control of a family of large-area CCD systems equipped with a DINACON-II controller. The software suite serves for acquisition, primary reduction, visualization, and storage of video data, and also for the control, setup, and diagnostics of the CCD system.

Read this paper on arXiv…

I. Afanasieva
Tue, 21 Jul 15
33/74

Comments: 6 pages, 5 figures

ASTROMLSKIT: A New Statistical Machine Learning Toolkit: A Platform for Data Analytics in Astronomy [CL]

http://arxiv.org/abs/1504.07865


Astroinformatics is a new impact area in the world of astronomy, occasionally called the final frontier, where astrophysicists, statisticians, and computer scientists work together to tackle various data-intensive astronomical problems. Exponential growth in data volume and the increasing complexity of the data add difficult questions to the existing challenges. Classical problems in astronomy are compounded by the accumulation of astronomical volumes of complex data, rendering the task of classification and interpretation incredibly laborious. The presence of noise in the data makes analysis and interpretation even more arduous. Machine learning algorithms and data analytic techniques provide the right platform for the challenges posed by these problems. A diverse range of open problems is discussed, including star-galaxy separation, detection and classification of exoplanets, and classification of supernovae. The focus of the paper is the applicability and efficacy of various machine learning algorithms such as K-Nearest Neighbor (KNN), Random Forest (RF), Decision Tree (DT), Support Vector Machine (SVM), Naïve Bayes, and Linear Discriminant Analysis (LDA) in the analysis and inference of decision-theoretic problems in astronomy. The machine learning algorithms, integrated into ASTROMLSKIT, a toolkit developed in the course of the work, have been used to analyze HabCat data and supernovae data. Accuracy has been found to be appreciably good.
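The kind of classifier comparison described can be illustrated with a short scikit-learn sketch; this is not the ASTROMLSKIT code, and the features and labels below are random placeholders standing in for catalog data such as HabCat or supernova samples.

```python
# Hedged sketch: compare the classifiers named in the abstract on placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))             # placeholder features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder binary labels

models = {
    "KNN": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name:15s} accuracy = {scores.mean():.3f}")
```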

Read this paper on arXiv…

S. Saha, S. Agrawal, M. R, et al.
Thu, 30 Apr 15
43/43

Comments: Habitability Catalog (HabCat), Supernova classification, data analysis, Astroinformatics, Machine learning, ASTROMLS toolkit, Naïve Bayes, SVD, PCA, Random Forest, SVM, Decision Tree, LDA

Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models [CL]

http://arxiv.org/abs/1502.07758


Simulating a binary black hole coalescence by solving Einstein’s equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate’s training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second depending on the number of output modes and the sampling rate. Our model includes all spherical-harmonic ${}_{-2}Y_{\ell m}$ waveform modes that can be resolved by the NR code up to $\ell=8$, including modes that are typically difficult to model with other approaches. We assess the model’s uncertainty, which could be useful in parameter estimation studies seeking to incorporate model error. We anticipate NR surrogate models to be useful for rapid NR waveform generation in multiple-query applications like parameter estimation, template bank construction, and testing the fidelity of other waveform models.
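The reduced-order-modeling idea behind such a surrogate can be sketched in a few lines: compress a set of training waveforms into a small basis, fit the basis coefficients as smooth functions of the physical parameters, and evaluate new waveforms by interpolation. The sketch below is schematic and uses synthetic chirp-like signals in place of NR waveforms; it is not the paper's pipeline.

```python
# Hedged illustration of a waveform surrogate via SVD basis + coefficient fits.
import numpy as np
from scipy.interpolate import CubicSpline

q_train = np.linspace(1.0, 10.0, 20)      # training mass ratios (illustrative)
t = np.linspace(-1000.0, 0.0, 2048)       # time samples before merger

# stand-in "waveforms": chirp-like signals whose phase depends on q
train = np.array([np.cos(2 * np.pi * (0.01 + 0.001 * q) * t * (1 - t / t[0]))
                  for q in q_train])

# reduced basis from an SVD of the training set, truncated to n_basis modes
U, S, Vt = np.linalg.svd(train, full_matrices=False)
n_basis = 8
basis = Vt[:n_basis]                      # (n_basis, n_times)
coeffs = train @ basis.T                  # projection coefficients per training q

# fit each basis coefficient as a smooth function of the mass ratio q
fits = [CubicSpline(q_train, coeffs[:, k]) for k in range(n_basis)]

def surrogate(q):
    """Evaluate the surrogate waveform at mass ratio q (no PDE solve needed)."""
    c = np.array([f(q) for f in fits])
    return c @ basis

h = surrogate(3.7)   # prediction at a mass ratio not in the training set
```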

Read this paper on arXiv…

J. Blackman, S. Field, C. Galley, et al.
Mon, 2 Mar 15
37/39

Comments: 6 pages, 6 figures

The Murchison Widefield Array Correlator [IMA]

http://arxiv.org/abs/1501.05992


The Murchison Widefield Array (MWA) is a Square Kilometre Array (SKA) Precursor. The telescope is located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia (WA). The MWA consists of 4096 dipoles arranged into 128 dual-polarisation aperture arrays forming a connected element interferometer that cross-correlates signals from all 256 inputs. A hybrid approach to the correlation task is employed, with some processing stages being performed by bespoke hardware, based on Field Programmable Gate Arrays (FPGAs), and others by Graphics Processing Units (GPUs) housed in general purpose rack mounted servers. The correlation capability required is approximately 8 TFLOPS (Tera FLoating point Operations Per Second). The MWA has commenced operations and the correlator is generating 8.3 TB/day of correlation products, which are subsequently transferred 700 km from the MRO to Perth (WA) in real time for storage and offline processing. In this paper we outline the correlator design, signal path, and processing elements and present the data format for the internal and external interfaces.
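The cross-correlation task itself follows the standard FX pattern: channelise each input with an FFT (the F stage, here done in hardware), then cross-multiply and accumulate every pair of inputs (the X stage, here done on GPUs). The sketch below illustrates that pattern only; the sizes and data are placeholders, not the MWA configuration or its FPGA/GPU implementation.

```python
# Hedged sketch of FX correlation: FFT per input, then pairwise multiply-accumulate.
import numpy as np

n_inputs, n_chan, n_spectra = 8, 256, 100
rng = np.random.default_rng(1)
voltages = rng.normal(size=(n_inputs, n_chan * n_spectra))   # dummy sampled voltages

# F stage: channelise each input into n_spectra spectra of n_chan samples each
spectra = np.fft.rfft(
    voltages.reshape(n_inputs, n_spectra, n_chan), axis=-1)  # (inputs, spectra, chans)

# X stage: cross-multiply and accumulate every pair of inputs per channel
vis = np.einsum("isc,jsc->ijc", spectra, np.conj(spectra))   # visibility matrix
print(vis.shape)   # (n_inputs, n_inputs, n_chan // 2 + 1)
```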

Read this paper on arXiv…

S. Ord, B. Crosse, D. Emrich, et al.
Tue, 27 Jan 15
30/79

Comments: 17 pages, 9 figures. Accepted for publication in PASA. Some figures altered to meet astro-ph submission requirements

Achieving 100,000,000 database inserts per second using Accumulo and D4M [CL]

http://arxiv.org/abs/1406.4923


The Apache Accumulo database is an open-source, relaxed-consistency database that is widely used for government applications. Accumulo is designed to deliver high performance on unstructured data such as graphs of network data. This paper tests the performance of Accumulo using data from the Graph500 benchmark. The Dynamic Distributed Dimensional Data Model (D4M) software is used to implement the benchmark on a 216-node cluster running the MIT SuperCloud software stack. A peak performance of over 100,000,000 database inserts per second was achieved, which is 100x larger than the highest previously published value for any other database. The performance scales linearly with the number of ingest clients, number of database servers, and data size. The performance was achieved by adapting several supercomputing techniques to this application: distributed arrays, domain decomposition, adaptive load balancing, and single-program-multiple-data programming.
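The domain-decomposition and SPMD pattern behind the ingest can be sketched as follows: split the records across worker processes, and have each worker push its shard to the database in fixed-size batches. This is a hedged illustration only; the `insert_batch` stub stands in for a real client call (e.g. an Accumulo BatchWriter) and is not the D4M implementation.

```python
# Hedged sketch of SPMD, domain-decomposed ingest with batched inserts.
from multiprocessing import Pool
import numpy as np

def insert_batch(rows):
    # placeholder for a real database client call; here we just count the rows
    return len(rows)

def worker(shard):
    # each SPMD worker ingests its own shard in fixed-size batches
    batch_size, inserted = 10_000, 0
    for start in range(0, len(shard), batch_size):
        inserted += insert_batch(shard[start:start + batch_size])
    return inserted

if __name__ == "__main__":
    edges = np.arange(1_000_000)                  # stand-in for Graph500 edge data
    n_workers = 8
    shards = np.array_split(edges, n_workers)     # static domain decomposition
    with Pool(n_workers) as pool:                 # one ingest client per worker
        totals = pool.map(worker, shards)
    print("total inserts:", sum(totals))
```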

Read this paper on arXiv…

J. Kepner, W. Arcand, D. Bestor, et al.
Fri, 20 Jun 14
2/48

Comments: 6 pages; to appear in IEEE High Performance Extreme Computing (HPEC) 2014