Parametric analysis of Cherenkov light LDF from EAS in the range 30-3000 TeV for primary gamma rays and nuclei [IMA]

http://arxiv.org/abs/1702.07796


A simple ‘knee-like’ approximation of the Lateral Distribution Function (LDF) of Cherenkov light emitted by extensive air showers (EAS) in the atmosphere is proposed for solving various data-analysis tasks in HiSCORE and other wide-angle ground-based experiments designed to detect gamma rays and cosmic rays with energies above tens of TeV. Simulation-based parametric analysis of individual LDF curves revealed that at radial distances of 20-500 m the 5-parameter ‘knee-like’ approximation fits individual LDFs, as well as the mean LDF, with very good accuracy. In this paper we demonstrate the efficiency and flexibility of the ‘knee-like’ LDF approximation for various primary particles and shower parameters, and the advantages of applying it to suppressing the proton background and selecting primary gamma rays.
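
As an illustration of what fitting such a parametrization involves, here is a minimal Python sketch. The paper's actual 5-parameter function is not given in the abstract, so the smoothly-broken power law below (amplitude, knee radius, two slopes, knee sharpness) is a hypothetical stand-in, fitted to synthetic densities.

import numpy as np
from scipy.optimize import curve_fit

def knee_ldf(r, A, r_knee, g1, g2, s):
    # Hypothetical knee-like form: slope g1 inside the knee radius,
    # bending with sharpness s to slope g2 outside it.
    x = r / r_knee
    return A * x**(-g1) * (1.0 + x**s) ** ((g1 - g2) / s)

r = np.linspace(20.0, 500.0, 60)          # radial range quoted in the abstract
rng = np.random.default_rng(1)
q_obs = knee_ldf(r, 1e4, 120.0, 0.7, 2.5, 4.0) * rng.normal(1.0, 0.1, r.size)

popt, pcov = curve_fit(knee_ldf, r, q_obs, p0=[1e4, 100.0, 1.0, 2.0, 3.0])
print("fitted (A, r_knee, g1, g2, s):", popt)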

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov, E. Korosteleva, et al.
Tue, 28 Feb 17
2/69

Comments: 7 pages, 1 table, 2 figures; Bulletin of the Russian Academy of Sciences: Physics, 81, 4 (2017), in press

Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application [IMA]

http://arxiv.org/abs/1702.08390


In this paper we propose a ‘knee-like’ approximation of the lateral distribution of Cherenkov light from extensive air showers in the energy range 30-3000 TeV and study the possibility of its practical application in high-energy ground-based gamma-ray astronomy experiments (in particular, in TAIGA-HiSCORE). The approximation is very accurate for individual showers and can easily be simplified for practical application in the HiSCORE wide-angle timing array under the condition of a limited number of triggered stations.

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov, E. Korosteleva, et al.
Tue, 28 Feb 17
24/69

Comments: 4 pages, 5 figures, proceedings of ISVHECRI 2016 (19th International Symposium on Very High Energy Cosmic Ray Interactions)

Background rejection method for tens of TeV gamma-ray astronomy applicable to wide angle timing arrays [IMA]

http://arxiv.org/abs/1702.07756


A ‘knee-like’ approximation of Cherenkov light Lateral Distribution Functions, which we developed earlier, is now applied to the practical task of background rejection in high energy (tens and hundreds of TeV) gamma-ray astronomy. In this work we apply the technique to the HiSCORE wide-angle timing array, consisting of Cherenkov light detectors with a spacing of 100 m and covering 0.2 km$^2$ at present and up to 5 km$^2$ in the future; however, it can be applied to other similar arrays. We also show that a multivariate approach (using 3 parameters of the knee-like approximation) allows us to reach a high level of background rejection, although the result depends strongly on the number of hit detectors.

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov and L. Sveshnikova
Tue, 28 Feb 17
33/69

Comments: 5 pages, 3 figures; proceedings of the 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016)

Primary gamma ray selection in a hybrid timing/imaging Cherenkov array [IMA]

http://arxiv.org/abs/1702.07768


This work is a methodical study of hybrid reconstruction techniques for combined imaging/timing Cherenkov observations. A hybrid array of this type is to be realized at the TAIGA gamma-ray observatory, intended for very-high-energy gamma-ray astronomy (>30 TeV); it aims at combining the cost-effective timing-array technique with imaging telescopes. Hybrid operation of the two techniques offers a relatively inexpensive way to develop a large-area array. The joint approach to gamma-ray event selection was investigated on both types of simulated data: image parameters from the telescopes, and shower parameters reconstructed from the timing array. The optimal set of imaging and shower parameters to combine is identified, and the cosmic-ray background suppression factor is calculated as a function of distance and energy. The optimal selection technique suppresses the cosmic-ray background by about two orders of magnitude at distances up to 450 m for energies greater than 50 TeV.

Read this paper on arXiv…

E. Postnikov, A. Grinyuk, L. Kuzmichev, et al.
Tue, 28 Feb 17
57/69

Comments: 4 pages, 5 figures; proceedings of the 19th International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2016)

Hybrid method for identifying mass groups of primary cosmic rays in the joint operation of IACTs and wide angle Cherenkov timing arrays [IMA]

http://arxiv.org/abs/1702.08302


This work is a methodical study of a further application of the hybrid method originally aimed at gamma/hadron separation in the TAIGA experiment. In the present paper the technique is adapted to distinguish between different mass groups of cosmic rays in the energy range 200 TeV - 500 TeV. The study is based on simulation data for the TAIGA prototype and includes analysis of the geometrical form of the images produced by different nuclei in the IACT simulation, as well as of the shower core parameters reconstructed from the timing-array simulation. We show that the hybrid method can be sufficiently effective to distinguish precisely between mass groups of cosmic rays.

Read this paper on arXiv…

E. Postnikov, A. Grinyuk, L. Kuzmichev, et al.
Tue, 28 Feb 17
60/69

Comments: 6 pages, 3 figures; proceedings of the 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016)

Methodology to create a new Total Solar Irradiance record: Making a composite out of multiple data records [SSA]

http://arxiv.org/abs/1702.02341


Many observational records critically rely on our ability to merge different (and not necessarily overlapping) observations into a single composite. We provide a novel and fully traceable approach for doing so, which relies on a multi-scale maximum likelihood estimator. This approach overcomes the problem of data gaps in a natural way and uses data-driven estimates of the uncertainties. We apply it to the total solar irradiance (TSI) composite, which is currently being revised and is critical to our understanding of solar radiative forcing. While the final composite is pending decisions on what corrections to apply to the original observations, we find that the new composite is in closest agreement with the PMOD composite and the NRLTSI2 model. In addition, we evaluate long-term uncertainties in the TSI, which reveal a 1/f scaling.
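
For intuition, here is a single-scale toy version of such merging (the paper's estimator is multi-scale with data-driven uncertainties; the noise levels below are assumed known and the data are synthetic). With Gaussian errors, the maximum likelihood composite in each time bin is the inverse-variance weighted mean of whichever instruments have data there, which handles gaps naturally.

import numpy as np

t = np.arange(100)
truth = 1361.0 + 0.5 * np.sin(2 * np.pi * t / 27.0)   # toy TSI signal
rng = np.random.default_rng(0)

# two instruments with different noise levels and gaps (NaN marks missing data)
a = truth + rng.normal(0.0, 0.1, t.size); a[60:] = np.nan
b = truth + rng.normal(0.0, 0.3, t.size); b[:40] = np.nan

series = np.vstack([a, b])
w = np.where(np.isnan(series), 0.0, 1.0 / np.array([0.1, 0.3])[:, None] ** 2)
composite = (np.nan_to_num(series) * w).sum(axis=0) / w.sum(axis=0)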

Read this paper on arXiv…

T. Wit, G. Kopp, C. Frohlich, et al.
Thu, 9 Feb 17
13/67

Comments: slightly expanded version of a manuscript to appear in Geophysical Research Letters (2017)

Corral Framework: Trustworthy and Fully Functional Data Intensive Parallel Astronomical Pipelines [IMA]

http://arxiv.org/abs/1701.05566


Data processing pipelines are among the most common pieces of astronomical software: chains of processes that transform raw data into valuable information. In this work a Python framework for generating astronomical pipelines is presented. It features a design pattern (Model-View-Controller) on top of a SQL relational database, capable of handling custom data models, processing stages, and result-communication alerts, as well as producing automatic quality and structural measurements. This pattern provides separation of concerns between the user logic, the data models, and the processing flow inside the pipeline, delivering multiprocessing and distributed computing capabilities for free. For the astronomical community this means an improvement on previous data processing pipelines, freeing the programmer from dealing with the processing flow and parallelization issues and letting them focus solely on the algorithms involved in the successive data transformations. This software, together with working examples of pipelines, is available to the community at https://github.com/toros-astro.

Read this paper on arXiv…

J. Cabral, B. Sanchez, M. Beroiz, et al.
Mon, 23 Jan 17
15/55

Comments: 8 pages, 2 figures, submitted for consideration at Astronomy and Computing. Code available at this https URL

From Blackbirds to Black Holes: Investigating Capture-Recapture Methods for Time Domain Astronomy [HEAP]

http://arxiv.org/abs/1701.03801


In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron stars or black holes accreting from a stellar companion), under a range of observing strategies. We first generate realistic light curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods (the Lincoln-Petersen and Schnabel estimators and related techniques) and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced, and demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostatistics community can be readily called from within Python.
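
The simplest member of this family fits in a few lines; the Blackbirds simulations and the Rcapture methods go well beyond it, but the two-visit Lincoln-Petersen estimator (here with the Chapman bias correction) is the common starting point.

def chapman_estimate(n1, n2, m):
    # n1 sources detected in visit 1, n2 in visit 2, m seen in both visits
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# toy example: 30 transients in visit 1, 25 in visit 2, 10 recaptured
print(chapman_estimate(30, 25, 10))   # ~72 sources estimated in total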

Read this paper on arXiv…

S. Laycock
Tue, 17 Jan 17
75/81

Comments: Accepted to New Astronomy. 11 pages, 8 figures (refereed version prior to editorial process)

Quasi-oscillatory dynamics observed in ascending phase of the flare on March 6, 2012 [SSA]

http://arxiv.org/abs/1612.09562


Context. The dynamics of the flaring loops in active region (AR) 11429 are studied. The observed dynamics consist of several evolution stages of the flaring loop system during both the ascending and descending phases of the registered M-class flare. The dynamical properties can also be classified by different types of magnetic reconnection, related plasma ejection and aperiodic flows, quasi-periodic oscillatory motions, and rapid temperature and density changes, among others. The focus of the present paper is on a specific time interval during the ascending (pre-flare) phase.
Aims. The goal is to understand the quasi-periodic behavior, in both space and time, of the magnetic loop structures during the considered time interval.
Methods. We studied the characteristic location, motion, and periodicity properties of the flaring loops by examining space-time diagrams and intensity variations along the coronal magnetic loops, using AIA intensity and HMI magnetogram images (from the Solar Dynamics Observatory (SDO)).
Results. We detected bright plasma blobs along the coronal loop during the ascending phase of the solar flare, the intensity variations of which clearly show quasi-periodic behavior, and we determined the periods of these oscillations.
Conclusions. Two different interpretations of the observed dynamics are presented. First, the oscillations are interpreted as the manifestation of non-fundamental harmonics of longitudinal standing acoustic oscillations driven by the thermodynamically non-equilibrium background (with time-variable density and temperature). Second, the observed bright blobs could be the signature of a strongly twisted coronal loop that is kink unstable.

Read this paper on arXiv…

E. Philishvili, B. Shergelashvili, T. Zaqarashvili, et al.
Mon, 2 Jan 17
15/45

Comments: 12 pages, 10 figures, A&A, in press

Method of frequency dependent correlations: investigating the variability of total solar irradiance [SSA]

http://arxiv.org/abs/1612.07494


This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites.
We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in frequency domain.
Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves.
We find a major discrepancy between the historical proxies and the satellite-measured targets: a large variance is detected between the low-frequency parts of the targets, while the low-frequency behavior of the different proxy series is consistent to high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in the FDC spectrum.
The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
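
A minimal sketch of the filtering building block, assuming a Gaussian transfer function applied in the frequency domain (a band-pass follows by differencing two low-passes; the de-trending and envelope steps are omitted):

import numpy as np

def gaussian_lowpass(y, dt, f_cut):
    f = np.fft.rfftfreq(y.size, dt)
    H = np.exp(-0.5 * (f / f_cut) ** 2)        # Gaussian transfer function
    return np.fft.irfft(np.fft.rfft(y) * H, n=y.size)

def band_correlation(x, y, dt, f_lo, f_hi):
    # correlate two series within one frequency band
    xb = gaussian_lowpass(x, dt, f_hi) - gaussian_lowpass(x, dt, f_lo)
    yb = gaussian_lowpass(y, dt, f_hi) - gaussian_lowpass(y, dt, f_lo)
    return np.corrcoef(xb, yb)[0, 1]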

Read this paper on arXiv…

J. Pelt, M. Kapyla and N. Olspert
Fri, 23 Dec 16
2/60

Comments: 19 pages, 5 figures, accepted for publication in Astronomy & Astrophysics

When "Optimal Filtering" Isn't [CL]

http://arxiv.org/abs/1611.07856


The so-called “optimal filter” analysis of a microcalorimeter’s x-ray pulses is statistically optimal only if all pulses have the same shape, regardless of energy. The shapes of pulses from a nonlinear detector can and do depend on the pulse energy, however. A pulse-fitting procedure that we call “tangent filtering” accounts for the energy dependence of the shape and should therefore achieve superior energy resolution. We take a geometric view of the pulse-fitting problem and give expressions to predict how much the energy resolution stands to benefit from such a procedure. We also demonstrate the method with a case study of K-line fluorescence from several 3d transition metals. The method improves the resolution from 4.9 eV to 4.2 eV at the Cu K$\alpha$ line (8.0 keV).

Read this paper on arXiv…

J. Fowler, B. Alpert, W. Doriese, et al.
Thu, 24 Nov 16
39/54

Comments: Submitted to the Proceedings of the 2016 Applied Superconductivity Conference

Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples [IMA]

http://arxiv.org/abs/1611.05806


We extend the common mixtures-of-Gaussians density estimation approach to account for known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing Expectation-Maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and it recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at https://github.com/pmelchior/pyGMMis
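
A quick way to see why the correction matters: fitting a plain Gaussian mixture to truncated samples biases the recovered parameters, which is the failure mode that imputation from the current model is designed to fix. This demo uses scikit-learn's standard GMM on synthetic data, not the pyGMMis code itself.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
full = rng.normal(loc=[0.0, 0.0], scale=[1.0, 1.0], size=(20000, 2))
observed = full[full[:, 0] > -0.5]           # truncation: only x > -0.5 observed

gmm = GaussianMixture(n_components=1).fit(observed)
print("true mean ~ [0 0], fitted mean:", gmm.means_[0])   # biased toward +x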

Read this paper on arXiv…

P. Melchior and A. Goulding
Fri, 18 Nov 16
49/60

Comments: 12 pages, 6 figures, submitted to Computational Statistics & Data Analysis

A model independent safeguard for unbinned Profile Likelihood [CL]

http://arxiv.org/abs/1610.02643


We present a general method to include residual unmodeled background shape uncertainties in profile-likelihood-based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over-constrained confidence interval, and allows a natural transition to unbinned space. An unbinned likelihood enhances the sensitivity and allows optimal use of the information in the data and the models.
We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

Read this paper on arXiv…

N. Priel, L. Rauch, H. Landsman, et al.
Tue, 11 Oct 16
55/78

Comments: N/A

Long-period oscillations of active region patterns: least-squares mapping on second-order curves [SSA]

http://arxiv.org/abs/1610.01509


Active regions (ARs) are the main sources of variety in solar dynamic events. Automated detection and identification tools for solar features need to be developed for a deeper understanding of the solar cycle. Of particular interest here are the dynamical properties of ARs, regardless of their internal structure and sunspot distribution. We studied the oscillatory dynamics of two ARs, NOAA 11327 and NOAA 11726, using two different methods of pattern recognition. We developed a novel method of automated AR border detection and compared it to an existing method as a proof of concept. The first method uses least-squares fitting of the smallest ellipse enclosing the AR, while the second method applies regression on the convex hull. After processing the data, we found that the axes and the inclination angle of the ellipse and the convex hull oscillate in time. These oscillations are interpreted as the second harmonic of standing long-period kink oscillations (with a node at the apex) of the magnetic flux tube connecting the two main sunspots of the AR. In both ARs we estimated the distribution of the phase speed magnitude along the magnetic tube (along the two main spots) by interpreting the obtained oscillation of the inclination angle as the standing second-harmonic kink mode. After comparing the results for fast and slow kink modes, we conclude that both modes are good candidates to explain the observed oscillations of the AR inclination angles, since in the high plasma $\beta$ regime the phase speeds of these modes are comparable and of the order of the Alfvén speed. Based on the properties of the observed oscillations, we estimated the depth of the sunspot patterns, which coincides with estimations made by helioseismic methods. This analysis can be used as a basis for developing a magneto-seismological tool for ARs.

Read this paper on arXiv…

G. Dumbadze, B. Shergelashvili, V. Kukhianidze, et al.
Thu, 6 Oct 16
22/67

Comments: 10 pages, 6 figures, Accepted for publication in A&A

Solar Activity and Transformer Failures in the Greek National Electric Grid [CL]

http://arxiv.org/abs/1307.1149


We study both the short-term and long-term effects of solar activity on the large transformers (150 kV and 400 kV) of the Greek national electric grid, using data analysis and various analytic and statistical methods and models. Contrary to the common belief in PPC Greece, we find considerable short-term (immediate) and long-term effects of solar activity on large transformers in a mid-latitude country like Greece (latitude approx. 35-41 degrees North). Our results can be summarized as follows. For the short-term effects: during 1989-2010 there were 43 stormy days (days with, for example, Ap greater than or equal to 100), and we had 19 failures occurring within plus or minus 3 days of a stormy day and 51 failures occurring within plus or minus 7 days. All these failures can be directly related to Geomagnetically Induced Currents (GICs); explicit cases are presented. For the long-term effects we have two main results. First, the annual transformer failure number for the period of study 1989-2010 follows the solar activity pattern (11-year periodicity, bell-shaped graph), yet the maximum number of transformer failures occurs 3-4 years after the maximum of solar activity. Second, there is a statistical correlation between solar activity, expressed using various newly defined long-term solar activity indices, and the annual number of transformer failures. These new long-term solar activity indices were defined using both local (from geomagnetic stations in Greece) and global (planetary average) geomagnetic data. Applying both linear and non-linear statistical regression, we compute the regression equations and the corresponding coefficients of determination.

Read this paper on arXiv…

I. Zois
Tue, 13 Sep 16
50/91

Comments: 45 pages; a summary will be presented at the International Conference on Mathematical Modeling in Physical Sciences, 1-5 September 2013, Prague, Czech Republic. Some preliminary results were presented during the 8th European Space Weather Week in Namur, Belgium, 2011. Another part was presented at the 9th European Space Weather Week at the Académie Royale de Belgique, Brussels, Belgium, 2012

Model-independent inference on compact-binary observations [HEAP]

http://arxiv.org/abs/1608.08223


The recent advanced LIGO detections of gravitational waves from merging binary black holes enhance the prospect of exploring binary evolution via gravitational-wave observations of a population of compact-object binaries. In the face of uncertainty about binary formation models, model-independent inference provides an appealing alternative to comparisons between observed and modelled populations. We describe a procedure for clustering in the multi-dimensional parameter space of observations that are subject to significant measurement errors. We apply this procedure to a mock data set of population-synthesis predictions for the masses of merging compact binaries convolved with realistic measurement uncertainties, and demonstrate that we can accurately distinguish subpopulations of binary neutron stars, binary black holes, and mixed black hole-neutron star binaries.

Read this paper on arXiv…

I. Mandel, W. Farr, A. Colonna, et al.
Wed, 31 Aug 16
48/61

Comments: N/A

The chaotic four-body problem in Newtonian gravity I: Identical point-particles [SSA]

http://arxiv.org/abs/1608.07286


In this paper, we study the chaotic four-body problem in Newtonian gravity. Assuming point particles and total encounter energies $\le 0$, the problem has three possible outcomes. We describe each outcome as a series of discrete transformations in energy space, using the diagrams first presented in Leigh & Geller (2012; see the Appendix). Furthermore, we develop a formalism for calculating probabilities for these outcomes to occur, expressed using the density of escape configurations per unit energy, and based on the Monaghan description originally developed for the three-body problem. We compare this analytic formalism to results from a series of binary-binary encounters with identical point particles, simulated using the FEWBODY code. Each of our three encounter outcomes produces a unique velocity distribution for the escaping star(s). Thus, these distributions can potentially be used to constrain the origins of dynamically formed populations, via a direct comparison between the predicted and observed velocity distributions. Finally, we show that, for encounters that form stable triples, the simulated single-star escape velocity distributions are the same as for the three-body problem. This is also the case for the other two encounter outcomes, but only at low virial ratios. This suggests that single and binary stars processed via single-binary and binary-binary encounters in dense star clusters should have a unique velocity distribution relative to the underlying Maxwellian distribution (provided the relaxation time is sufficiently long), which can be calculated analytically.

Read this paper on arXiv…

N. Leigh, N. Stone, A. Geller, et al.
Mon, 29 Aug 16
4/41

Comments: 18 pages, 12 figures; accepted for publication in MNRAS

Does the Planetary Dynamo Go Cycling On? Re-examining the Evidence for Cycles in Magnetic Reversal Rate [EPA]

http://arxiv.org/abs/1608.07303


The record of reversals of the geomagnetic field has played an integral role in the development of plate tectonic theory. Statistical analyses of the reversal record are aimed at detailing patterns and linking those patterns to core-mantle processes. The geomagnetic polarity timescale is a dynamic record, and new paleomagnetic and geochronologic data provide additional detail. In this paper, we examine the periodicity revealed in the reversal record back to 375 Ma using Fourier analysis. Four significant peaks were found in the reversal power spectra within the 16-40 Myr range. Plotting the function constructed from the sum of the frequencies of the proximal peaks yields a transient 26 Myr periodicity, suggesting chaotic motion with a periodic attractor. The possible 16 Myr periodicity, a previously recognized result, may be correlated with “pulsation” of mantle plumes.

Read this paper on arXiv…

A. Melott, A. Pivarunas, J. Meert, et al.
Mon, 29 Aug 16
20/41

Comments: 4 figures. Submitted to Earth and Planetary Science Letters

Uncertainties in the Sunspot Numbers: Estimation and Implications [SSA]

http://arxiv.org/abs/1608.05261


Sunspot number series are subject to various uncertainties, which are still poorly known. The need for their better understanding was recently highlighted by the major makeover of the international Sunspot Number [Clette et al., Space Science Reviews, 2014]. We present the first thorough estimation of these uncertainties, which behave as Poisson-like random variables with a multiplicative coefficient that is time- and observatory-dependent. We provide a simple expression for these uncertainties and reveal how their evolution in time coincides with changes in the observations and in the processing of the data. Knowing their value is essential for properly building composites out of multiple observations and for preserving the stability of the composites in time.

Read this paper on arXiv…

T. Wit, L. Lefevre and F. Clette
Fri, 19 Aug 16
25/45

Comments: accepted in Solar Physics (2016), 24 pages

Uncertainty Limits on Solutions of Inverse Problems over Multiple Orders of Magnitude using Bootstrap Methods: An Astroparticle Physics Example [IMA]

http://arxiv.org/abs/1607.07226


Astroparticle experiments such as IceCube or MAGIC require a deconvolution of their measured data with respect to the response function of the detector to provide the distributions of interest, e.g. energy spectra. In this paper, appropriate uncertainty limits that also allow one to draw conclusions on the geometric shape of the underlying distribution are determined using bootstrap methods, which are frequently applied in statistical applications. Bootstrap is a collective term for resampling methods that can be employed to approximate unknown probability distributions or features thereof. A clear advantage of bootstrap methods is their wide range of applicability. For instance, they yield reliable results even if the usual normality assumption is violated.
We discuss the use, meaning and construction of uncertainty limits, in the form of confidence intervals and levels, at any user-specified confidence level. The precise algorithms for the implementation of these methods, applicable to any deconvolution algorithm, are given. The proposed methods are applied to Monte Carlo simulations to show their feasibility and their precision, in comparison to the statistical uncertainties calculated with the deconvolution software TRUEE.
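
The basic bootstrap ingredient is compact: resample the data with replacement, recompute the estimate, and read quantiles off the replicates. A minimal percentile-interval sketch (the deconvolution-specific machinery of the paper is not reproduced):

import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=5000, level=0.68, seed=0):
    rng = np.random.default_rng(seed)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])

data = np.random.default_rng(1).exponential(2.0, 200)   # non-Gaussian sample
print(bootstrap_ci(data))                               # 68% interval for the mean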

Read this paper on arXiv…

S. Einecke, K. Proksch, N. Bissantz, et al.
Tue, 26 Jul 16
19/75

Comments: N/A

Deep Recurrent Neural Networks for Supernovae Classification [IMA]

http://arxiv.org/abs/1606.07442


We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae. The observational time and filter fluxes are used as inputs to the network, but since the network is agnostic about the nature of its inputs, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC dataset (around $10^4$ supernovae) we obtain a Type Ia versus non-Type Ia classification accuracy of 94.8%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986, and an SPCC figure of merit $F_1 = 0.64$. We also apply a pre-trained model to obtain classification probabilities as a function of time, and show that it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.

Read this paper on arXiv…

T. Charnock and A. Moss
Mon, 27 Jun 16
27/43

Comments: 6 pages, 3 figures

Tests for Comparing Weighted Histograms. Review and Improvements [CL]

http://arxiv.org/abs/1606.06591


Histograms with weighted entries are used to estimate probability density functions; computer simulation is the main application of this type of histogram. A review of chi-square tests for comparing weighted histograms is presented in this paper, and improvements to these tests that give a size closer to the nominal value are proposed. Numerical examples are presented for the evaluation and demonstration of various applications of the tests.

Read this paper on arXiv…

N. Gagunashvili
Wed, 22 Jun 16
21/50

Comments: 23 pages, 2 figures. arXiv admin note: text overlap with arXiv:0905.4221

Information Gain in Cosmology: From the Discovery of Expansion to Future Surveys [CEA]

http://arxiv.org/abs/1606.06273


Facing the advent of the next generation of cosmological surveys, we present a method to forecast knowledge gain on cosmological models. We propose this as a well-defined and general tool to quantify the performance of different experiments in relation to different theoretical models. In particular, the assessment of experimental performance benefits enormously from the fact that this method is invariant under re-parametrization of the model. We apply it to future surveys and compare expected knowledge advancements to those of the most relevant experiments performed over the history of modern cosmology. When considering the standard cosmological model, we show that it will rapidly reach knowledge saturation in the near future, and forthcoming improvements will not match past ones. On the contrary, we find that new observations have the potential for unprecedented knowledge jumps when extensions of the standard scenario are considered.

Read this paper on arXiv…

M. Raveri, M. Martinelli, G. Zhao, et al.
Tue, 21 Jun 16
44/75

Comments: 6 pages, 2 figures

Evidence for periodicity in 43-year-long monitoring of NGC 5548 [HEAP]

http://arxiv.org/abs/1606.04606


We present an analysis of 43 years (1972 to 2015) of spectroscopic observations of the Seyfert 1 galaxy NGC 5548, including 12 years of new, unpublished observations (2003 to 2015). We compiled about 1600 H$\beta$ spectra and analyzed the long-term spectral variations of the 5100 \AA\ continuum and the H$\beta$ line. Our analysis is based on standard procedures, such as the Lomb-Scargle method, which is known to be of rather limited use for such heterogeneous data sets, as well as on a new method, developed specifically for this project, that is more robust and reveals a $\sim$5700-day periodicity in the continuum light curve, the H$\beta$ light curve, and the radial velocity curve of the red wing of the H$\beta$ line. The data are consistent with orbital motion inside the broad emission line region of the source. We discuss several possible mechanisms that can explain this periodicity, including orbiting dusty and dust-free clouds, a binary black hole system, tidal disruption events, and the effect of an orbiting star periodically passing through an accretion disc.
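
For reference, the baseline periodogram mentioned above takes a few lines with scipy (the series here is synthetic and irregularly sampled; the paper's new, more robust method is not reproduced):

import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 43 * 365.25, 1600))   # ~43 yr, uneven sampling
y = np.sin(2 * np.pi * t / 5700.0) + rng.normal(0.0, 0.5, t.size)

periods = np.linspace(1000.0, 10000.0, 2000)       # trial periods in days
power = lombscargle(t, y - y.mean(), 2 * np.pi / periods)
print("peak near", periods[np.argmax(power)], "days")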

Read this paper on arXiv…

E. Bon, S. Zucker, H. Netzer, et al.
Thu, 16 Jun 16
20/67

Comments: Accepted in ApJS, 65 pages, 10 figures and 4 tables

DNest4: Diffusive Nested Sampling in C++ and Python [CL]

http://arxiv.org/abs/1606.03757


In probabilistic (Bayesian) inference, we typically want to compute properties of the posterior distribution, describing knowledge of unknown quantities in the context of a particular dataset and the assumed prior information. The marginal likelihood, also known as the “evidence”, is a key quantity in Bayesian model selection. The Diffusive Nested Sampling algorithm, a variant of Nested Sampling, is a powerful tool for generating posterior samples and estimating marginal likelihoods. It is effective at solving complex problems, including many where the posterior distribution is multimodal or has strong dependencies between variables. DNest4 is an open-source (MIT licensed), multi-threaded implementation of this algorithm in C++11, along with associated utilities including: (i) RJObject, a class template for finite mixture models; (ii) a Python package allowing basic use without C++ coding; and (iii) experimental support for models implemented in Julia. In this paper we demonstrate DNest4 usage through examples including simple Bayesian data analysis, finite mixture models, and Approximate Bayesian Computation.

Read this paper on arXiv…

B. Brewer and D. Foreman-Mackey
Tue, 14 Jun 16
40/67

Comments: Submitted. 31 pages, 9 figures

Detecting Damped Lyman-$α$ Absorbers with Gaussian Processes [CEA]

http://arxiv.org/abs/1605.04460


We develop an automated technique for detecting damped Lyman-$\alpha$ absorbers (DLAs) along spectroscopic sightlines to quasi-stellar objects (QSOs or quasars). The detection of DLAs in large-scale spectroscopic surveys such as SDSS-III sheds light on galaxy formation at high redshift, showing the nucleation of galaxies from diffuse gas. We use nearly 50 000 QSO spectra to learn a novel tailored Gaussian process model for quasar emission spectra, which we apply to the DLA detection problem via Bayesian model selection. We propose models for identifying an arbitrary number of DLAs along a given line of sight. We demonstrate our method’s effectiveness using a large-scale validation experiment, with excellent performance. We also provide a catalog of our results applied to 162 861 spectra from SDSS-III data release 12.

Read this paper on arXiv…

R. Garnett, S. Ho, S. Bird, et al.
Tue, 17 May 16
19/65

Comments: N/A

Track reconstruction through the application of the Legendre Transform on ellipses [CL]

http://arxiv.org/abs/1605.04738


We propose a pattern recognition method that identifies the common tangent lines of a set of ellipses. The detection of the tangent lines is attained by applying the Legendre transform on a given set of ellipses. As context, we consider a hypothetical detector made out of layers of chambers, each of which returns an ellipse as an output signal. The common tangent of these ellipses represents the trajectory of a charged particle crossing the detector. The proposed method is evaluated using ellipses constructed from Monte Carlo generated tracks.
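
A toy version of the idea in slope-intercept space (the paper works in the Legendre-transform line-parameter space; the axis-aligned geometry below is a simplification): each ellipse $(x_0, y_0, a, b)$ maps, for every slope $m$, to two tangent intercepts $c = y_0 - m x_0 \pm \sqrt{a^2 m^2 + b^2}$, and the common tangent appears as the most-voted cell of a Hough-style accumulator.

import numpy as np

# three ellipses (x0, y0, a, b) constructed to share the tangent line y = 0
ellipses = [(0.0, 1.0, 0.5, 1.0), (3.0, 2.0, 0.7, 2.0), (6.0, 1.5, 0.6, 1.5)]
m_grid = np.linspace(-2.0, 2.0, 801)
c_edges = np.linspace(-10.0, 10.0, 801)
acc = np.zeros((m_grid.size, c_edges.size - 1))

for x0, y0, a, b in ellipses:
    for sign in (+1.0, -1.0):
        c = y0 - m_grid * x0 + sign * np.sqrt(a**2 * m_grid**2 + b**2)
        idx = np.digitize(c, c_edges) - 1
        ok = (idx >= 0) & (idx < acc.shape[1])
        acc[np.arange(m_grid.size)[ok], idx[ok]] += 1

i, j = np.unravel_index(acc.argmax(), acc.shape)
print("common tangent: y = %.2f x + %.2f" % (m_grid[i], c_edges[j]))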

Read this paper on arXiv…

T. Alexopoulos, Y. Bristogiannis and S. Leontsinis
Tue, 17 May 16
22/65

Comments: 17 pages, 12 figures

Application of Bayesian Neural Networks to Energy Reconstruction in EAS Experiments for ground-based TeV Astrophysics [IMA]

http://arxiv.org/abs/1604.06532


A toy detector array has been designed to simulate the detection of cosmic rays in Extensive Air Shower (EAS) experiments for ground-based TeV astrophysics. The primary energies of protons from the Monte Carlo simulation have been reconstructed with a Bayesian neural network (BNN) algorithm and with a standard method like that of the LHAASO experiment, respectively. The result of the energy reconstruction using BNNs has been compared with the one using the standard method: the energy resolutions are significantly improved using BNNs, and the improvement is more pronounced for high-energy protons than for low-energy ones.

Read this paper on arXiv…

Y. Bai, Y. Xu, J. Lan, et al.
Mon, 25 Apr 16
13/40

Comments: 10 pages, 3 figures

Joint signal extraction from galaxy clusters in X-ray and SZ surveys: A matched-filter approach [CEA]

http://arxiv.org/abs/1604.06107


The hot ionized gas of the intra-cluster medium emits thermal radiation in the X-ray band and also distorts the cosmic microwave radiation through the Sunyaev-Zel’dovich (SZ) effect. Combining these two complementary sources of information through innovative techniques can therefore potentially improve the cluster detection rate when compared to using only one of the probes. Our aim is to build such a joint X-ray-SZ analysis tool, which will allow us to detect fainter or more distant clusters while maintaining high catalogue purity. We present a method based on matched multifrequency filters (MMF) for extracting cluster catalogues from SZ and X-ray surveys. We first designed an X-ray matched-filter method, analogous to the classical MMF developed for SZ observations. Then, we built our joint X-ray-SZ algorithm by combining our X-ray matched filter with the classical SZ-MMF, for which we used the physical relation between SZ and X-ray observations. We show that the proposed X-ray matched filter provides correct photometry results, and that the joint matched filter also provides correct photometry when the $F_{\rm X}/Y_{500}$ relation of the clusters is known. Moreover, the proposed joint algorithm provides a better signal-to-noise ratio than single-map extractions, which improves the detection rate even if we do not exactly know the $F_{\rm X}/Y_{500}$ relation. The proposed methods were tested using data from the ROSAT all-sky survey and from the Planck survey.

Read this paper on arXiv…

P. Tarrio, J. Melin, M. Arnaud, et al.
Fri, 22 Apr 16
45/54

Comments: 22 pages (before appendices), 19 figures, 3 tables, 5 appendices. Accepted for publication in A&A

Using Extreme Value Theory for Determining the Probability of Carrington-Like Solar Flares [CL]

http://arxiv.org/abs/1604.03325


Space weather events can negatively affect satellites, the electricity grid, satellite navigation systems and human health; as a consequence, extreme space weather has been added to the UK and other national risk registers. However, by their very nature, extreme events occur rarely, and statistical methods are required to determine the probability of occurrence of solar storms. Space weather events can be characterised by a number of natural phenomena such as X-ray (solar) flares, solar energetic particle (SEP) fluxes, coronal mass ejections and various geophysical indices (Dst, Kp, F10.7). Here we use extreme value theory (EVT) to investigate the probability of extreme solar flares. Previous work has suggested that the distribution of solar flares follows a power law. However, such an approach can lead to overly “fat” tails in the probability distribution function and thus to an underestimation of the return time of such events. Using EVT and GOES X-ray flux data, we find that the expected 150-year return level is an X60 flare ($6\times10^{-3}$ W m$^{-2}$, 1-8 Å X-ray flux). We also show that the EVT results are consistent with flare data from the Kepler space telescope mission.
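
The peaks-over-threshold recipe behind such return-level estimates can be sketched as follows; the fluxes below are synthetic, not GOES data, and the return-level formula is the standard generalized-Pareto one (e.g. Coles 2001).

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
flux = rng.pareto(2.5, 20000) * 1e-6        # synthetic daily peak fluxes
u = np.quantile(flux, 0.98)                 # high threshold
exc = flux[flux > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0.0) # fit GPD to the exceedances

zeta = exc.size / flux.size                 # exceedance probability per day
m = 150 * 365.25                            # 150-year return period in days
level = u + sigma / xi * ((m * zeta) ** xi - 1.0)
print("150-yr return level: %.2e" % level)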

Read this paper on arXiv…

S. Elvidge and M. Angling
Wed, 13 Apr 16
43/60

Comments: 10 pages, 3 figures, submitted to Nature

LikeDM: likelihood calculator of dark matter detection [CL]

http://arxiv.org/abs/1603.07119


With the considerable progress in searches for dark matter (DM) particles by indirect and direct methods, we have developed a numerical tool that enables fast calculation of the likelihood of specified DM particle models, given a number of observational data sets such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), $\gamma$-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool, LikeDM (likelihood calculator of dark matter detection), is to bridge the particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi $\gamma$-ray data, and the DM velocity distribution and nuclear form factor, are dealt with in the code. We release the first version (v1.0), focusing on the constraints from charged cosmic rays and gamma rays; the direct detection part will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

Read this paper on arXiv…

X. Huang, Y. Tsai and Q. Yuan
Thu, 24 Mar 16
1/60

Comments: 26 pages, 5 figures, LikeDM version 1

$K$-corrections: an Examination of their Contribution to the Uncertainty of Luminosity Measurements [GA]

http://arxiv.org/abs/1603.07299


In this paper we provide formulae that can be used to determine the uncertainty contributed to a measurement by a $K$-correction and, thus, valuable information about which flux measurement will provide the most accurate $K$-corrected luminosity. All of this is done at the level of a Gaussian approximation of the statistics involved, that is, where the galaxies in question can be characterized by a mean spectral energy distribution (SED) and a covariance function (spectral 2-point function). This paper also includes approximations of the SED mean and covariance for galaxies, and the three common subclasses thereof, based on applying the templates from Assef et al. (2010) to the objects in zCOSMOS bright 10k (Lilly et al. 2009) and photometry of the same field from Capak et al. (2007), Sanders et al. (2007), and the AllWISE source catalog.

Read this paper on arXiv…

S. Lake and E. Wright
Thu, 24 Mar 16
33/60

Comments: 10 pages, 6 figures, 6 tables (1 extended)

PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms [CL]

http://arxiv.org/abs/1603.01876


The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and has developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. It builds on prior scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to implementation using the GraphBLAS standard. The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context for each kernel that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed PageRank pipeline benchmark is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Serial implementations in C++, Python, Python with Pandas, Matlab, Octave, and Julia have been written, and their single-threaded performance has been measured.
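
The central kernel is small enough to state exactly. Here is a dense power-iteration PageRank in Python; the benchmark itself targets sparse, scalable implementations, so this is only the mathematical core.

import numpy as np

def pagerank(A, d=0.85, tol=1e-10):
    # A[i, j] = 1 if page j links to page i
    n = A.shape[0]
    col = A.sum(axis=0)
    M = np.where(col > 0, A / np.where(col == 0, 1.0, col), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1.0 - d) / n + d * (M @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

A = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))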

Read this paper on arXiv…

P. Dreher, C. Byun, C. Hill, et al.
Tue, 8 Mar 16
82/83

Comments: 9 pages, 7 figures, to appear in IPDPS 2016 Graph Algorithms Building Blocks (GABB) workshop

Superplot: a graphical interface for plotting and analysing MultiNest output [CL]

http://arxiv.org/abs/1603.00555


We present an application, Superplot, for calculating and plotting statistical quantities relevant to parameter inference from a “chain” of samples drawn from a parameter space, produced by e.g. MultiNest. A simple graphical interface allows one to browse a chain of many variables quickly and make publication-quality plots of, inter alia, the profile likelihood, posterior pdf, confidence intervals and credible regions. In this short manual, we document installation and basic usage, and define all statistical quantities and conventions.
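
As a sketch of one of the quantities involved, a 1D profile likelihood can be computed from a chain in a few lines, assuming the chain provides parameter samples with their log-likelihoods: bin in the parameter and keep the best likelihood per bin (posterior weights are ignored, which is exactly how it differs from a marginalized pdf).

import numpy as np

def profile_likelihood(x, loglike, bins=40):
    edges = np.linspace(x.min(), x.max(), bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    prof = np.full(bins, -np.inf)
    np.maximum.at(prof, idx, loglike)           # best log-likelihood per bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.exp(prof - prof.max())   # normalized to a maximum of 1

rng = np.random.default_rng(0)                  # toy chain, Gaussian likelihood
x = rng.normal(1.0, 0.5, 10000)
centers, prof = profile_likelihood(x, -0.5 * ((x - 1.0) / 0.5) ** 2)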

Read this paper on arXiv…

A. Fowlie and M. Bardsley
Thu, 3 Mar 16
39/75

Comments: 13 pages, 2 colour figures

Unfolding problem clarification and solution validation [CL]

http://arxiv.org/abs/1602.05834


The formulation of the unfolding problem for correcting experimental data distortions due to finite resolution and limited detector acceptance is discussed, and a novel validation of the problem's solution is proposed. Attention is drawn to the fact that different unfolded distributions may satisfy the validation criteria, in which case a conservative approach using entropy is suggested. The importance of the analysis of residuals is demonstrated.

Read this paper on arXiv…

N. Gagunashvili
Fri, 19 Feb 16
22/50

Comments: 9 pages,4 figures

Gravitational wave astrophysics, data analysis and multimessenger astronomy [IMA]

http://arxiv.org/abs/1602.05573


This paper reviews gravitational wave sources and their detection. One of the most exciting potential sources of gravitational waves is coalescing binary black hole systems. They can occur on all mass scales and be formed in numerous ways, many of which are not understood. They are generally invisible in electromagnetic waves, and they provide opportunities for deep investigation of Einstein’s general theory of relativity. Sect. 1 of this paper considers ways that binary black holes can be created in the universe, and includes the prediction that binary black hole coalescence events are likely to be the first gravitational wave sources to be detected. The next parts of this paper address the detection of chirp waveforms from coalescence events in noisy data; such analysis is computationally intensive. Sect. 2 reviews a new and powerful method of signal detection based on GPU-implemented summed parallel infinite impulse response filters. Such filters are intrinsically real-time algorithms that can be used to rapidly detect and localise signals. Sect. 3 reviews the use of GPU processors for rapid searching for gravitational wave bursts that can arise from black hole births and coalescences. Sect. 4 reviews the use of GPU processors to enable fast, efficient statistical significance testing of gravitational wave event candidates. Sect. 5 addresses the method of multimessenger astronomy, in which the discovery of electromagnetic counterparts of gravitational wave events can be used to identify sources, understand their nature, and obtain much greater science outcomes from each identified event.

Read this paper on arXiv…

H. Lee, E. Bigot, Z. Du, et al.
Fri, 19 Feb 16
42/50

Comments: N/A

Practical Introduction to Clustering Data [CL]

http://arxiv.org/abs/1602.05124


Data clustering is an approach for seeking structure in sets of complex data, i.e., sets of “objects”. The main objective is to identify groups of objects that are similar to each other, e.g., for classification. Here, an introduction to clustering is given and three basic approaches are introduced: the k-means algorithm, neighbour-based clustering, and an agglomerative clustering method. For all cases, C source code examples are given, allowing for an easy implementation.
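
The book's examples are in C; for orientation, here is the first of the three approaches, Lloyd's k-means algorithm, in compact numpy form: assign each point to its nearest centre, move each centre to the mean of its points, and repeat until nothing changes.

import numpy as np

def kmeans(X, k, seed=0, iters=100):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(X.shape[0], k, replace=False)]   # random initial centres
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])  # keep empty clusters put
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers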

Read this paper on arXiv…

A. Hartmann
Wed, 17 Feb 16
30/55

Comments: 22 pages. All source code in anc directory included. Section 8.5.6 of book: A.K. Hartmann, Big Practical Guide to Computer Simulations, World-Scientifc, Singapore (2015)

Looking for a Needle in a Haystack? Look Elsewhere! A statistical comparison of approximate global p-values [CL]

http://arxiv.org/abs/1602.03765


The search for new significant peaks over an energy spectrum often involves a statistical multiple-hypothesis-testing problem. Separate tests of hypothesis are conducted at different locations, producing an ensemble of local p-values, the smallest of which is reported as evidence for the new resonance. Unfortunately, controlling the false detection rate (type I error rate) of such procedures may lead to excessively stringent acceptance criteria. In the recent physics literature, two promising statistical tools have been proposed to overcome these limitations. In 2005, a method to “find needles in haystacks” was introduced by Pilla et al. [1], and a second method was later proposed by Gross and Vitells [2] in the context of the “look elsewhere effect” and trial factors. We show that, for relatively small sample sizes, the former leads to an artificial inflation of statistical power that stems from an increase in the false detection rate, whereas the two methods exhibit similar performance for large sample sizes. Finally, we provide general guidelines for selecting between statistical procedures for signal detection with respect to the specifics of the physics problem under investigation.

Read this paper on arXiv…

S. Algeri, J. Conrad, D. Dyk, et al.
Fri, 12 Feb 16
6/48

Comments: Submitted to EPJ C

Dynamic system classifier [CL]

http://arxiv.org/abs/1601.07901


Stochastic differential equations describe many physical, biological and sociological systems well, despite the simplifications often made in their derivation. Here the use of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework, and to this end we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation; thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency $\omega(t)$ and damping factor $\gamma(t)$. Although real systems might be more complex, this simple oscillator captures many characteristic features. The $\omega(t)$ and $\gamma(t)$ timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.
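
The forward model underlying such a classifier can be simulated directly. Below is a semi-implicit Euler-Maruyama sketch of a noisy oscillator with time-dependent $\omega(t)$ and $\gamma(t)$; the drift laws and noise level are invented for illustration, and the DSC solves the inverse problem of recovering these timelines from data.

import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 20000
omega = lambda t: 2 * np.pi * (5.0 + 0.5 * t)   # assumed slow chirp
gamma = lambda t: 0.5 + 0.2 * t                 # assumed growing damping
sigma = 0.3                                     # assumed noise level

x, v, t = 1.0, 0.0, 0.0
xs = np.empty(n)
for i in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    v += (-gamma(t) * v - omega(t) ** 2 * x) * dt + sigma * dW   # velocity first
    x += v * dt                                                  # then position
    t += dt
    xs[i] = x                                   # xs is the simulated signal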

Read this paper on arXiv…

D. Pumpe, M. Greiner, E. Muller, et. al.
Fri, 29 Jan 16
23/52

Comments: 11 pages, 8 figures

Processing of X-ray Microcalorimeter Data with Pulse Shape Variation using Principal Component Analysis [CL]

http://arxiv.org/abs/1601.01651


We present a method using principal component analysis (PCA) to process x-ray pulses with severe shape variation where traditional optimal filter methods fail. We demonstrate that PCA is able to noise-filter and extract energy information from x-ray pulses despite their different shapes. We apply this method to a dataset from an x-ray thermal kinetic inductance detector which has severe pulse shape variation arising from position-dependent absorption.
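
The core of the approach fits in a few lines: stack the pulse records as rows, subtract the mean pulse, and project onto the leading principal components; the low-order component amplitudes then carry the energy information despite shape variation. A sketch on toy pulses:

import numpy as np

def pca_scores(pulses, n_comp=3):
    mean = pulses.mean(axis=0)
    U, S, Vt = np.linalg.svd(pulses - mean, full_matrices=False)
    return (pulses - mean) @ Vt[:n_comp].T      # per-pulse component amplitudes

t = np.arange(500)
shape = np.exp(-t / 80.0) - np.exp(-t / 10.0)   # toy pulse shape
rng = np.random.default_rng(0)
pulses = rng.normal(1.0, 0.05, (200, 1)) * shape + rng.normal(0.0, 0.01, (200, 500))
scores = pca_scores(pulses)                     # scores[:, 0] tracks pulse energy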

Read this paper on arXiv…

D. Yan, T. Cecil, L. Gades, et al.
Fri, 8 Jan 16
13/51

Comments: Accepted for publication in J. Low Temperature Physics, Low Temperature Detectors 16 (LTD-16) conference

On the Solar Component in the Observed Global Temperature Anomalies [CL]

http://arxiv.org/abs/1512.01075


In this paper, starting from the updated time series of global temperature anomalies, Ta, we show how the solar component affects the observed behavior, using the solar sunspot number (SSN) as an indicator of solar activity. The results clearly show that the solar component plays an important role and significantly affects the currently observed stationary behavior of the global temperature anomalies. The behavior of solar activity, and its future role, will therefore be decisive in determining whether or not the increase of temperature anomalies observed since 1975 will resume.

Read this paper on arXiv…

S. Sello
Fri, 4 Dec 15
23/64

Comments: 9 pages, 7 figures

Frequentist tests for Bayesian models [IMA]

http://arxiv.org/abs/1511.02363


Analogues of the frequentist chi-square and $F$ tests are proposed for testing goodness-of-fit and consistency for Bayesian models. Simple examples exhibit these tests’ detection of inconsistency between consecutive experiments with identical parameters, when the first experiment provides the prior for the second. In a related analysis, a quantitative measure is derived for judging the degree of tension between two different experiments with partially overlapping parameter vectors.

Read this paper on arXiv…

L. Lucy
Tue, 10 Nov 15
27/62

Comments: 8 pages, 4 figures

On the universality of interstellar filaments: theory meets simulations and observations [SSA]

http://arxiv.org/abs/1510.05654


Filaments are ubiquitous in the universe. They are seen in cosmological structures, in the Milky Way centre, and in dense interstellar gas. Recent observations have revealed that stars and star clusters form preferentially at the intersection of dense filaments. Understanding the formation and properties of filaments is therefore a crucial step in understanding star formation. Here we perform three-dimensional high-resolution magnetohydrodynamical simulations that follow the evolution of molecular clouds and the formation of filaments and stars within them. We apply a filament detection algorithm and compare simulations with different combinations of physical ingredients: gravity, turbulence, magnetic fields, and jet/outflow feedback. We find that gravity-only simulations produce significantly narrower filament profiles than observed, while simulations that include at least turbulence produce realistic filament properties. For these turbulence simulations, we find a remarkably universal filament width of $(0.10 \pm 0.02)$ pc, which is independent of the evolutionary stage or the star formation history of the clouds. We derive a theoretical model that provides a physical explanation for this characteristic filament width, based on the sonic scale ($\lambda_{\rm sonic}$) of molecular cloud turbulence. Our derivation provides $\lambda_{\rm sonic}$ as a function of the cloud diameter $L$, the velocity dispersion $\sigma_v$, the gas sound speed $c_s$, and the strength of the magnetic field parameterised by the plasma $\beta$. For typical cloud conditions in the Milky Way spiral arms, we find theoretically that $\lambda_{\rm sonic} = 0.04-0.16$ pc, in excellent agreement with the filament width of 0.05-0.15 pc found in observations.
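
The scaling argument behind this result fits on one line; the following is a hedged reconstruction (the square-root velocity scaling and the magnetic correction factor are assumptions made here for illustration, not quoted from the paper). Assuming supersonic turbulence with $\sigma_v(\ell) = \sigma_v\,(\ell/L)^{1/2}$, the sonic scale is where the scale-dependent velocity dispersion drops to the sound speed:

$$
\sigma_v(\lambda_{\rm sonic}) = c_s
\quad\Longrightarrow\quad
\lambda_{\rm sonic} \simeq L \left(\frac{c_s}{\sigma_v}\right)^{2} \left(1 + \beta^{-1}\right),
$$

where the factor $(1+\beta^{-1})$ stands in for the magnetic correction. For example, $L = 10$ pc, $\sigma_v = 2$ km s$^{-1}$, $c_s = 0.2$ km s$^{-1}$ and $\beta \gg 1$ give $\lambda_{\rm sonic} \approx 0.1$ pc, inside the quoted 0.04-0.16 pc range.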

Read this paper on arXiv…

C. Federrath
Wed, 21 Oct 15
48/66

Comments: 13 pages, 8 figures, submitted to MNRAS, comments welcome

Effect of data gaps on correlation dimension computed from light curves of variable stars [IMA]

http://arxiv.org/abs/1410.4454


Observational data, especially astrophysical data, are often limited by gaps that arise from a lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, the smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially into synthetic data derived from standard chaotic systems, such as the Rössler and Lorenz systems, with the frequency of occurrence and the size of the missing data drawn from two Gaussian distributions. We then study the changes in the correlation dimension as the distributions of gap position and size change. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Our study thus introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to size and position. This is illustrated for data from the light curves of three variable stars: R Scuti, U Monocerotis and SU Tauri. We also demonstrate how a cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as chaotic in origin; this is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated $D_2$ value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the reliable range for $D_2$ values.
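
The quantity being tested is the Grassberger-Procaccia correlation sum $C(r)$ on a delay-embedded series, whose slope in $\log C$ versus $\log r$ over the scaling range estimates $D_2$. A compact sketch (the embedding dimension and delay below are illustrative choices):

import numpy as np

def correlation_sum(x, dim=3, tau=5, n_radii=20):
    n = x.size - (dim - 1) * tau
    emb = np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]              # all pairwise distances
    radii = np.logspace(-2, 0, n_radii) * d.max()
    return radii, np.array([(d < r).mean() for r in radii])

r, C = correlation_sum(np.sin(0.05 * np.arange(1200)))
# D2 estimate: slope of np.log(C) against np.log(r) over the scaling range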

Read this paper on arXiv…

S. George, G. Ambika and R. Misra
Tue, 13 Oct 15
63/64

Comments: 13 pages, 15 figures

Resolution enhancement by extrapolation of coherent diffraction images: a quantitative study about the limits and a numerical study of non-binary and phase objects [CL]

http://arxiv.org/abs/1510.01654


In coherent diffractive imaging (CDI) the resolution with which the reconstructed object can be obtained is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study of achieving super-resolution by post-extrapolation of coherent diffraction images, such as diffraction patterns or holograms. We prove that a diffraction pattern can be unambiguously extrapolated from just a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. While other methods could in principle achieve such extrapolation, we devote our discussion to employing phase retrieval methods and demonstrate their limits. We present two numerical studies, namely the extrapolation of diffraction patterns of non-binary objects and of phase objects, together with a discussion of the optimal extrapolation procedure.
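
As a toy illustration of phase-retrieval-based extrapolation (the simplest error-reduction flavour; the paper's procedures and limit analysis go further), the sketch below "measures" only the central part of the diffraction pattern of a non-binary object and lets the iteration fill in the moduli outside the measured region. Object, support and mask sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
y, x = np.mgrid[:N, :N]

# Non-binary test object: a few Gaussian blobs inside a compact support
obj = np.zeros((N, N))
for cx, cy, a, w in [(120, 130, 1.0, 8), (140, 122, 0.6, 5), (126, 146, 0.8, 6)]:
    obj += a * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * w**2))
support = (x - 128)**2 + (y - 128)**2 < 45**2

measured = np.abs(np.fft.fftshift(np.fft.fft2(obj)))   # full-pattern moduli
mask = (x - 128)**2 + (y - 128)**2 < 40**2             # "measured" (low-NA) region

# Error-reduction loop: enforce measured moduli inside the mask, keep the
# running Fourier estimate outside it (the extrapolation), enforce the support
est = measured * mask * np.exp(2j * np.pi * rng.random((N, N)))
for _ in range(500):
    o = np.fft.ifft2(np.fft.ifftshift(est)).real * support  # real-space constraints
    F = np.fft.fftshift(np.fft.fft2(o))
    est = np.where(mask, measured * np.exp(1j * np.angle(F)), F)

ring = ~mask & ((x - 128)**2 + (y - 128)**2 < 80**2)   # extrapolated band
err = np.abs(np.abs(est[ring]) - measured[ring]).sum() / measured[ring].sum()
print("relative modulus error in the extrapolated band:", round(err, 3))
```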

Read this paper on arXiv…

T. Latychevskaia and H. Fink
Wed, 7 Oct 15
37/72

Comments: N/A

Testing a Novel Self-Assembling Data Paradigm in the Context of IACT Data [IMA]

http://arxiv.org/abs/1509.02202


The process of gathering and associating data from multiple sensors or sub-detectors due to a common physical event (the process of event-building) is used in many fields, including high-energy physics and $\gamma$-ray astronomy. Fault tolerance in event-building is a challenging problem that increases in difficulty with higher data throughput rates and increasing numbers of sub-detectors. We draw on biological self-assembly models in the development of a novel event-building paradigm that treats each packet of data from an individual sensor or sub-detector as if it were a molecule in solution. Just as molecules are capable of forming chemical bonds, “bonds” can be defined between data packets using metadata-based discriminants. A database — which plays the role of a beaker of solution — continually selects pairs of assemblies at random to test for bonds, which allows single packets and small assemblies to aggregate into larger assemblies. During this process higher-quality associations supersede spurious ones. The database thereby becomes fluid, dynamic, and self-correcting rather than static. We will describe tests of the self-assembly paradigm using our first fluid database prototype and data from the VERITAS $\gamma$-ray telescope.
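
The bond-and-aggregate idea is easy to prototype. The toy sketch below is a hypothetical in-memory stand-in for the fluid database (it omits the error-correcting step in which higher-quality associations supersede spurious ones): packets bond when their timestamps are compatible and no sub-detector appears twice, and randomly drawn pairs of assemblies aggregate.

```python
import random
from dataclasses import dataclass, field

WINDOW = 5.0   # hypothetical coincidence window defining a "bond"

@dataclass
class Assembly:
    packets: list = field(default_factory=list)   # (detector_id, timestamp) pairs

    def bonds_with(self, other):
        """Metadata discriminant: compatible timestamps, no detector repeated."""
        both = self.packets + other.packets
        ts = [t for _, t in both]
        ids = [d for d, _ in both]
        return max(ts) - min(ts) < WINDOW and len(ids) == len(set(ids))

rng = random.Random(0)

# Toy data: 20 physical events, each leaving one packet in 4 sub-detectors
packets = [(d, 100.0 * e + rng.gauss(0.0, 0.5)) for e in range(20) for d in range(4)]
pool = [Assembly([p]) for p in packets]           # the "beaker of solution"

# Randomly draw pairs of assemblies and let bonded ones aggregate
for _ in range(20000):
    if len(pool) < 2:
        break
    a, b = rng.sample(pool, 2)
    if a.bonds_with(b):
        pool.remove(a)
        pool.remove(b)
        pool.append(Assembly(a.packets + b.packets))

print(len(pool), "assemblies; largest sizes:", sorted(len(a.packets) for a in pool)[-5:])
```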

Read this paper on arXiv…

A. Weinstein, L. Fortson, T. Brantseg, et. al.
Wed, 9 Sep 15
2/56

Comments: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands

Machine Learning Model of the Swift/BAT Trigger Algorithm for Long GRB Population Studies [HEAP]

http://arxiv.org/abs/1509.01228


To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of $\gtrsim97\%$ ($\lesssim 3\%$ error), a significant improvement over a simple cut in GRB flux, which has an accuracy of $89.6\%$ ($10.4\%$ error). These models are then used to measure the detection efficiency of Swift as a function of redshift $z$, which in turn is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $n_0 \sim 0.48^{+0.41}_{-0.23} \ {\rm Gpc}^{-3} {\rm yr}^{-1}$ with power-law indices of $n_1 \sim 1.7^{+0.6}_{-0.5}$ and $n_2 \sim -5.9^{+5.7}_{-0.1}$ for GRBs above and below a break point of $z_1 \sim 6.8^{+2.8}_{-3.2}$. This methodology improves upon earlier studies by more accurately modeling the Swift detection process and using it for fully Bayesian model fitting. The code used in this analysis is publicly available online (https://github.com/PBGraff/SwiftGRB_PEanalysis).
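
The emulation step is straightforward to reproduce in outline. The sketch below trains a random forest (scikit-learn) on a synthetic stand-in for the simulated sample, with a toy threshold playing the role of the expensive trigger code, and then reads the detection efficiency off the classifier as a function of redshift. Features, thresholds and rates are invented for illustration and are not those of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 20000

# Synthetic stand-in sample (invented features and rates, NOT the paper's data)
z = rng.uniform(0.1, 10.0, n)                     # redshift
log_flux = rng.normal(-7.0, 1.0, n)               # log peak flux
duration = rng.lognormal(3.0, 1.0, n)             # burst duration

# Toy "expensive trigger": a smeared threshold on distance-dimmed flux
score = log_flux + 0.1 * np.log10(duration) - 2.0 * np.log10(1.0 + z)
triggered = score + rng.normal(0.0, 0.3, n) > -8.5

X = np.column_stack([log_flux, duration, z])
Xtr, Xte, ytr, yte = train_test_split(X, triggered, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("emulator accuracy:", round(clf.score(Xte, yte), 3))

# Detection efficiency vs redshift, read off the trained emulator
for lo in range(0, 10, 2):
    sel = (z >= lo) & (z < lo + 2)
    print(f"z in [{lo},{lo+2}): efficiency = {clf.predict(X[sel]).mean():.2f}")
```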

Read this paper on arXiv…

P. Graff, A. Lien, J. Baker, et. al.
Fri, 4 Sep 15
52/58

Comments: 16 pages, 18 figures, 5 tables, submitted to ApJ

Comparing non-nested models in the search for new physics [CL]

http://arxiv.org/abs/1509.01010


Searches for unknown physics and decisions between competing physical models to explain data rely on statistical hypothesis testing. A common approach, used for example in the discovery of the Brout-Englert-Higgs boson, is based on the statistical Likelihood Ratio Test (LRT) and its asymptotic properties. In the common situation where neither of the two models under comparison is a special case of the other, i.e., when the hypotheses are non-nested, this test is not applicable, and so far no efficient solution exists. In physics, this problem occurs when two models that reside in different parameter spaces are to be compared. An important example is the recently reported excess emission in astrophysical $\gamma$-rays and the question of whether its origin is known astrophysics or dark matter. We develop and study a new, generally applicable, frequentist method and validate its statistical properties using a suite of simulation studies. We exemplify it on realistic simulated data of the Fermi-LAT $\gamma$-ray satellite, where non-nested hypothesis testing arises in the search for particle dark matter.
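
To make the non-nested issue concrete, the sketch below compares an exponential and a log-normal model, neither of which nests the other, and calibrates the likelihood-ratio statistic by parametric bootstrap because Wilks' asymptotics do not apply. This is a generic textbook workaround shown for contrast, not the new frequentist method developed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def loglik_expon(x):
    lam = 1.0 / x.mean()                          # exponential MLE
    return np.sum(np.log(lam) - lam * x)

def loglik_lognorm(x):
    mu, sig = np.log(x).mean(), np.log(x).std()   # log-normal MLEs
    return np.sum(stats.lognorm.logpdf(x, s=sig, scale=np.exp(mu)))

def lr(x):
    """Log-likelihood ratio of two NON-nested models."""
    return loglik_lognorm(x) - loglik_expon(x)

x = rng.exponential(2.0, 300)                     # data generated under H0
t_obs = lr(x)

# Wilks' theorem does not apply to non-nested hypotheses, so calibrate
# the LR statistic under the fitted H0 with a parametric bootstrap
scale_hat = x.mean()
t_boot = np.array([lr(rng.exponential(scale_hat, len(x))) for _ in range(2000)])
print("observed LR:", round(t_obs, 2), " bootstrap p-value:", (t_boot >= t_obs).mean())
```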

Read this paper on arXiv…

S. Algeri, J. Conrad and D. Dyk
Fri, 4 Sep 15
53/58

Comments: We welcome examples of non-nested models testing problems

Performance analysis of the Least-Squares estimator in Astrometry [IMA]

http://arxiv.org/abs/1509.00677


We characterize the performance of the widely used least-squares estimator in astrometry in terms of a comparison with the Cramér-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) in which both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum-variance Cramér-Rao bound. Based on our results, we show that, in the high signal-to-noise regime, the performance of the least-squares estimator is significantly poorer than the Cramér-Rao bound, and we characterize this gap analytically. On the positive side, we show that in the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramér-Rao bound. However, we also demonstrate that, in general, there is no unbiased estimator of the astrometric position that can precisely reach the Cramér-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector where intra-pixel response changes are neglected and flat-fielding is achieved with very high accuracy.
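
A Monte Carlo comparison of the least-squares estimator against the Cramér-Rao bound is easy to set up for an idealized one-dimensional array of the kind considered in the paper. In the sketch below only the source position is fit (flux and background are held at their true values, a simplification the paper does not make), and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

pix = np.arange(-20.0, 21.0)                  # idealized 1-D detector, 1-pixel bins
F, B, sigma, xc = 5000.0, 100.0, 1.5, 0.3     # flux, background/pixel, PSF width, position

def model(x0):
    g = np.exp(-0.5 * ((pix - x0) / sigma)**2)
    return F * g / g.sum() + B                # expected counts per pixel

def ls_fit(counts):
    loss = lambda x0: np.sum((counts - model(x0))**2)
    return minimize_scalar(loss, bounds=(-5.0, 5.0), method="bounded").x

# Empirical mean-square error of the LS position estimate under Poisson noise
lam = model(xc)
est = np.array([ls_fit(rng.poisson(lam)) for _ in range(2000)])
mse = np.mean((est - xc)**2)

# Cramer-Rao bound for Poisson counts: Var >= 1 / sum((dlam/dx)^2 / lam)
eps = 1e-4
dlam = (model(xc + eps) - model(xc - eps)) / (2 * eps)
crb = 1.0 / np.sum(dlam**2 / lam)
print(f"LS rms error = {np.sqrt(mse):.4f} pix,  CR bound = {np.sqrt(crb):.4f} pix")
```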

Read this paper on arXiv…

R. Lobos, J. Silva, R. Mendez, et. al.
Thu, 3 Sep 15
17/58

Comments: 35 pages, 8 figures. Accepted for publication by PASP

Time Series with Tailored Nonlinearities [CL]

http://arxiv.org/abs/1509.00223


It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between adjacent Fourier phases and (static and dynamic) measures of nonlinearity are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power-law character of the intensity distributions, typical of, e.g., turbulence and financial data, can thus be explained in terms of phase correlations.
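
A minimal illustration of the idea (one simple phase constraint among many possible; the paper's specific constraints and nonlinearity measures differ): keep the Fourier amplitudes of a Gaussian white-noise series but let the phases perform a random walk, which correlates adjacent phases and yields a heavier-tailed signal than the original.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2**14
x = rng.normal(size=n)                    # linear, uncorrelated Gaussian series

X = np.fft.rfft(x)
amp = np.abs(X)                           # keep the linear (spectral) properties

# One simple tailored constraint: adjacent phases correlated via a random walk
phi = np.cumsum(rng.normal(0.0, 0.3, len(X)))
y = np.fft.irfft(amp * np.exp(1j * phi), n)

def excess_kurtosis(u):
    u = (u - u.mean()) / u.std()
    return np.mean(u**4) - 3.0

print("excess kurtosis: original", round(excess_kurtosis(x), 3),
      "| phase-correlated", round(excess_kurtosis(y), 3))
```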

Read this paper on arXiv…

C. Raeth and I. Laut
Wed, 2 Sep 15
72/87

Comments: 5 pages, 5 figures, Phys. Rev. E, Rapid Communication, accepted