# The NIFTY way of Bayesian signal inference [IMA]

We introduce NIFTY, “Numerical Information Field Theory”, a software package for the development of Bayesian signal inference algorithms that operate independently of any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, and 3D tomography appear formally similar, yet one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, so that algorithms prototyped in 1D can be applied to real-world problems in higher-dimensional settings. As a versatile library, NIFTY is applicable to, and has already been applied in, 1D, 2D, 3D, and spherical settings. A recent application is the D3PO algorithm, which targets the non-trivial task of denoising, deconvolving, and decomposing photon observations in high-energy astronomy.
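The grid-independent abstraction described above can be sketched as follows; the class names and the smoothing operator are illustrative assumptions, not the actual NIFTY API:

```python
import numpy as np

class RegularGrid:
    """A regular grid in any number of dimensions with a cell volume."""
    def __init__(self, shape, distances=1.0):
        self.shape = tuple(shape)
        self.vol = float(distances) ** len(self.shape)

class Field:
    """Values living on a space; operations never mention the grid."""
    def __init__(self, space, values):
        self.space = space
        self.values = np.asarray(values, dtype=float).reshape(space.shape)
    def dot(self, other):
        # discretised L2 inner product: sum(f * g) * dV
        return float(np.sum(self.values * other.values) * self.space.vol)
    def norm(self):
        return self.dot(self) ** 0.5

def smooth(field, weight=0.5):
    """Toy grid-agnostic smoothing: blend with the nearest-neighbour mean."""
    v = field.values
    acc = v.copy()
    for ax in range(v.ndim):
        acc = acc + np.roll(v, 1, axis=ax) + np.roll(v, -1, axis=ax)
    acc /= 1 + 2 * v.ndim
    return Field(field.space, (1 - weight) * v + weight * acc)

# the same algorithm, prototyped in 1D ...
f1 = smooth(Field(RegularGrid((8,), distances=0.1), np.arange(8)))
# ... runs unchanged on a 2D grid
f2 = smooth(Field(RegularGrid((8, 8), distances=0.1),
                  np.outer(np.arange(8), np.arange(8))))
```

Because `smooth` only touches `Field` operations, the inference code need not change when the space does.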

M. Selig
Wed, 24 Dec 14
18/37

Comments: 6 pages, 2 figures, refereed proceeding of the 33rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2013), software available at this http URL and this http URL

|

# A Novel, Fully Automated Pipeline for Period Estimation in the EROS 2 Data Set [IMA]

We present a new method to discriminate periodic from non-periodic irregularly sampled lightcurves. We introduce a periodic kernel and maximize a similarity measure derived from information theory to estimate the periods and a discriminator factor. We tested the method on a dataset containing 100,000 synthetic periodic and non-periodic lightcurves with various periods, amplitudes, and shapes generated using a multivariate generative model. We correctly identified periodic and non-periodic lightcurves with a completeness of 90% and a precision of 95% for lightcurves with a signal-to-noise ratio (SNR) larger than 0.5. We characterized the efficiency and reliability of the model using these synthetic lightcurves and applied the method to the EROS-2 dataset. A crucial consideration is the speed at which the method can be executed. Using a hierarchical search and some simplifications of the parameter search, we were able to analyze 32.8 million lightcurves in 18 hours on a cluster of GPGPUs. Using the sensitivity analysis on the synthetic dataset, we infer that 0.42% of the sources in the LMC and 0.61% in the SMC show periodic behavior. The training set, the catalogs, and the source code are all available at this http URL.
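Kernel-based period scoring on irregular sampling can be illustrated with a much-simplified toy; the Gaussian kernel, the phase-smoothness score, and all parameters below are illustrative stand-ins, not the authors' information-theoretic estimator:

```python
import numpy as np

def period_score(t, y, period, sigma=0.1):
    """Fold at a trial period and reward smoothness in phase:
    points that are neighbours in phase should have similar magnitudes."""
    phase = (t / period) % 1.0
    order = np.argsort(phase)
    dy = np.diff(y[order])
    return float(np.mean(np.exp(-dy**2 / (2 * sigma**2))))  # Gaussian kernel

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))     # irregular sampling
true_period = 3.7
y = np.sin(2 * np.pi * t / true_period) + 0.02 * rng.normal(size=t.size)

trial = np.linspace(1.0, 10.0, 2000)
scores = np.array([period_score(t, y, p) for p in trial])
best = float(trial[np.argmax(scores)])
```

Folding at the wrong period scrambles the lightcurve and the score collapses; a grid (or hierarchical) search over trial periods then recovers the period, up to the usual integer-multiple aliases.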

P. Protopapas, P. Huijse, P. Estevez, et al.
Mon, 8 Dec 14
5/61

Comments: N/A

|

# On spin scale-discretised wavelets on the sphere for the analysis of CMB polarisation [IMA]

A new spin wavelet transform on the sphere is proposed to analyse the polarisation of the cosmic microwave background (CMB), a spin $\pm 2$ signal observed on the celestial sphere. The scalar directional scale-discretised wavelet transform on the sphere is extended to analyse signals of arbitrary spin. The resulting spin scale-discretised wavelet transform probes the directional intensity of spin signals. A procedure is presented using this new spin wavelet transform to recover E- and B-mode signals from partial-sky observations of CMB polarisation.
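The wavelet construction rests on a harmonic tiling that sums to unity across scales. A sketch in the axisymmetric scalar case, where the raised-cosine taper is an illustrative substitute for the Schwartz-function taper of the actual scale-discretised construction:

```python
import numpy as np

L, lam = 256, 2.0                    # band-limit and dilation parameter

def lowpass(ell, cutoff):
    """Smooth lowpass taper: 1 well inside the band, 0 outside,
    with a raised-cosine transition (illustrative choice)."""
    t = ell / cutoff
    out = np.zeros_like(t)
    out[t <= 0.5] = 1.0
    band = (t > 0.5) & (t < 1.0)
    out[band] = 0.5 * (1.0 + np.cos(2.0 * np.pi * (t[band] - 0.5)))
    return out

ell = np.arange(L).astype(float)
J = int(np.ceil(np.log(2 * L) / np.log(lam)))    # coarsest scale covers [0, L)
phi2 = [lowpass(ell, lam**j) ** 2 for j in range(J + 1)]

# wavelet bands fill the gap between consecutive lowpass bands
kappa2 = [phi2[j + 1] - phi2[j] for j in range(J)]

# resolution of identity: scaling part + all wavelet bands sum to one
total = phi2[0] + sum(kappa2)
```

The telescoping sum guarantees exact signal reconstruction from the scaling and wavelet coefficients, which is the property the spin, directional transform in the paper must also satisfy.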

J. McEwen, M. Buttner, B. Leistedt, et al.
Thu, 4 Dec 14
3/82

Comments: 4 pages, Proceedings IAU Symposium No. 306, 2014 (A. F. Heavens, J.-L. Starck, A. Krone-Martins eds.)

|

# Application of Lossless Data Compression Techniques to Radio Astronomy Data flows [CL]

The modern practice of radio astronomy is characterized by extremes of data volume and rate, principally because the achievable signal-to-noise ratio is tied directly to the RF bandwidth, which must be Nyquist sampled. The transport of these data flows is costly. By examining the statistical nature of typical data flows and applying well-known techniques from the field of Information Theory, the following work shows that lossless compression of typical radio astronomy data flows is in theory possible. The key parameter in determining the degree of compression possible is the standard deviation of the data. The practical application of compression could prove beneficial in reducing the costs of data transport and (arguably) storage for new-generation instruments such as the Square Kilometre Array.
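The role of the standard deviation follows from the entropy of (approximately Gaussian) quantised voltage samples, H ≈ log2(σ√(2πe)) bits per sample for unit quantisation steps. A quick numerical check, with the 16-bit native sample width an assumption:

```python
import numpy as np

def empirical_entropy_bits(samples):
    """Shannon entropy (bits/sample) of integer-quantised data."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
native_bits = 16                     # assumed native sample width
entropies = {}
for sigma in (64.0, 256.0, 1024.0):
    # Gaussian noise quantised to integer levels (unit step)
    q = np.round(rng.normal(0.0, sigma, 200_000)).astype(np.int64)
    entropies[sigma] = empirical_entropy_bits(q)
    # theory: H ~ log2(sigma * sqrt(2*pi*e)); smaller sigma => fewer
    # bits/sample needed, i.e. more lossless compression headroom
```

Each factor of 4 in σ costs exactly 2 extra bits per sample, so the compression achievable against the fixed native width is set by how small the data's standard deviation is.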

T. Natusch
Fri, 23 May 14
12/44

Comments: In preparation for submission

|

# Slepian Spatial-Spectral Concentration on the Ball [CL]

We formulate and solve the Slepian spatial-spectral concentration problem on the three-dimensional ball. Both the standard Fourier-Bessel and the Fourier-Laguerre spectral domains are considered, since the latter exhibits a number of practical advantages (spectral decoupling and exact computation). The Slepian spatial and spectral concentration problems are formulated as eigenvalue problems, the eigenfunctions of which form an orthogonal family of concentrated functions. Equivalence between the spatial and spectral problems is shown. The spherical Shannon number on the ball is derived, which acts as the analog of the space-bandwidth product in the Euclidean setting, giving an estimate of the number of concentrated eigenfunctions and thus the dimension of the space of functions that can be concentrated in both the spatial and spectral domains simultaneously. Various symmetries of the spatial region are considered that reduce considerably the computational burden of recovering eigenfunctions, either by decoupling the problem into smaller subproblems or by affording analytic calculations. The family of concentrated eigenfunctions forms a Slepian basis that can be used to represent concentrated signals efficiently. We illustrate our results with numerical examples and show that the Slepian basis indeed permits a sparse representation of concentrated signals.
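The eigenvalue formulation and the Shannon number can be illustrated in the classical 1D Euclidean analogue (discrete prolate spheroidal sequences); the ball case in the paper replaces this sinc kernel with its spherical counterpart:

```python
import numpy as np

N, W = 128, 0.1                          # length and half-bandwidth (cycles/sample)
diff = np.subtract.outer(np.arange(N), np.arange(N))
K = 2 * W * np.sinc(2 * W * diff)        # band-limiting (sinc) kernel

# concentration eigenproblem: each eigenvalue is the fraction of a
# band-limited sequence's energy concentrated in the N-sample window
eigvals = np.linalg.eigvalsh(K)[::-1]    # descending order

shannon = 2 * N * W                      # space-bandwidth product (Shannon number)
well_concentrated = int((eigvals > 0.5).sum())
```

The eigenvalue spectrum is nearly a step function: about `shannon` eigenvalues are close to one and the rest plunge to zero, which is exactly the counting role the spherical Shannon number plays on the ball.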

Z. Khalid, R. Kennedy and J. McEwen
Mon, 24 Mar 14
49/50

|

# On the art and theory of self-calibration [IMA]

Calibration is the process of inferring how much measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate the “art” of self-calibration, which augments an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration, which can be understood as maximizing their joint probability. Thus, the full uncertainty structure of this probability around its maximum is not taken into account by these schemes. Therefore, better schemes, in the sense of minimal squared error, can be designed that also reflect the uncertainties of the signal and calibration reconstructions. We argue that at least the signal uncertainty should not be neglected in typical measurement situations, since otherwise the calibration solutions suffer from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that non-parametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
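The joint-maximization scheme the abstract critiques can be reproduced in a toy model: several detectors with unknown gains observe the same Gaussian signal, and signal and gain estimates are alternated to a self-consistent solution. All priors and parameters here are illustrative; note in particular that the multiplicative gain-signal degeneracy is only weakly broken by the priors, so the joint maximum drifts slowly along it:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch, n_t = 4, 200
sig_s, sig_n, sig_g = 1.0, 0.1, 0.3      # signal, noise, gain-prior std

g_true = np.array([0.8, 1.1, 0.95, 1.2])
s_true = rng.normal(0.0, sig_s, n_t)     # one signal seen by all detectors
d = g_true[:, None] * s_true[None, :] + rng.normal(0.0, sig_n, (n_ch, n_t))

g = np.ones(n_ch)                        # start from the external calibration
for _ in range(10):
    # signal step: Wiener filter for s given the current gains
    s = (g @ d / sig_n**2) / (np.sum(g**2) / sig_n**2 + 1.0 / sig_s**2)
    # calibration step: per-detector MAP gain with Gaussian prior N(1, sig_g^2)
    g = (d @ s / sig_n**2 + 1.0 / sig_g**2) / (s @ s / sig_n**2 + 1.0 / sig_g**2)
```

Each step maximizes the joint posterior in one variable with the other held fixed, i.e. neither step propagates the other's uncertainty; that neglected uncertainty is the source of the bias the paper analyses.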

Fri, 6 Dec 13
27/55

|

# D3PO – Denoising, Deconvolving, and Decomposing Photon Observations [IMA]

The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. The primary goal is the simultaneous reconstruction of the diffuse and point-like photon flux from a given photon count image. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution does not depend on the underlying position space, the implementation of the D3PO algorithm uses the NIFTY package to ensure applicability on various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 x 32 arcmin^2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components.
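As a classical point of comparison for the denoising and deconvolution part of this problem (not the decomposition, and not D3PO's Bayesian machinery), a Richardson-Lucy deconvolution of simulated 1D Poisson counts with a mixed diffuse plus point-like flux:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
x = np.arange(n)
diffuse = 30.0 * np.exp(-0.5 * ((x - 32) / 12.0) ** 2)   # smooth flux
flux = diffuse.copy()
flux[[16, 48]] += 200.0                                  # point sources

psf = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
psf /= psf.sum()                                         # normalised PSF

def blur(f):
    return np.convolve(f, psf, mode="same")

counts = rng.poisson(blur(flux))                         # noisy observation

est = np.full(n, counts.mean())                          # flat initial guess
for _ in range(50):
    # Richardson-Lucy update: est *= PSF^T (counts / PSF est)
    ratio = counts / np.maximum(blur(est), 1e-12)
    est *= np.convolve(ratio, psf[::-1], mode="same")
```

The multiplicative update keeps the estimate non-negative and approximately flux-conserving while re-sharpening the point sources; what it cannot do, and what D3PO adds, is split the estimate into separate diffuse and point-like components with their own priors.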

Mon, 11 Nov 13
31/39

|