The pipeline for the ExoMars DREAMS scientific data archiving [IMA]

http://arxiv.org/abs/1703.05301


DREAMS (Dust Characterisation, Risk Assessment, and Environment Analyser on the Martian Surface) is a payload accommodated on the Schiaparelli Entry and Descent Module (EDM) of ExoMars 2016, the ESA and Roscosmos mission to Mars (Esposito (2015), Bettanini et al. (2014)). It is a meteorological station with the additional capability to perform measurements of the atmospheric electric fields close to the surface of Mars. The instrument package will make the first measurements of electric fields on Mars, providing data that will be of value in planning the second ExoMars mission in 2020, as well as possible future human missions to the red planet. This paper describes the pipeline that converts the raw telemetry to the final data products for the archive, with associated metadata.

Read this paper on arXiv…

P. Schipani, L. Marty, M. Mannetta, et al.
Fri, 17 Mar 17
5/50

Comments: 4 pages, to appear in the Proceedings of ADASS 2016, Astronomical Society of the Pacific (ASP) Conference Series

LAGO: the Latin American Giant Observatory [IMA]

http://arxiv.org/abs/1703.05337


The Latin American Giant Observatory (LAGO) is an extended cosmic ray observatory composed of a network of water-Cherenkov detectors (WCD) spanning over different sites located at significantly different altitudes (from sea level up to more than $5000$\,m a.s.l.) and latitudes across Latin America, covering a wide range of geomagnetic rigidity cut-offs and atmospheric absorption/reaction levels. The LAGO WCD is simple and robust, and incorporates several integrated devices to allow time synchronization, autonomous operation, on board data analysis, as well as remote control and automated data transfer.
This detection network is designed to make detailed measurements of the temporal evolution of the radiation flux coming from outer space at ground level. LAGO is mainly oriented to perform basic research in three areas: high energy phenomena, space weather and atmospheric radiation at ground level. It is an observatory designed, built and operated by the LAGO Collaboration, a non-centralized collaborative union of more than 30 institutions from ten countries.
In this paper we describe the scientific and academic goals of the LAGO project – illustrating its present status with some recent results – and outline its future perspectives.

Read this paper on arXiv…

I. Sidelnik, H. Asorey and the LAGO Collaboration
Fri, 17 Mar 17
8/50

Comments: 4 pages, 2 figures, Proceedings of the 9th International Workshop on Ring Imaging Cherenkov Detectors (RICH 2016), Lake Bled, Slovenia

An investigation of pulsar searching techniques with the Fast Folding Algorithm [IMA]

http://arxiv.org/abs/1703.05581


Here we present an in-depth study of the behaviour of the Fast Folding Algorithm, an alternative pulsar searching technique to the Fast Fourier Transform. Weaknesses in the Fast Fourier Transform, including a susceptibility to red noise, leave it insensitive to pulsars with long rotational periods (P > 1 s). This sensitivity gap has the potential to bias our understanding of the period distribution of the pulsar population. The Fast Folding Algorithm, a time-domain based pulsar searching technique, has the potential to overcome some of these biases. Modern distributed-computing frameworks now allow for the application of this algorithm to all-sky blind pulsar surveys for the first time. However, many aspects of the behaviour of this search technique remain poorly understood, including its responsiveness to variations in pulse shape and the presence of red noise. Using a custom CPU-based implementation of the Fast Folding Algorithm, ffancy, we have conducted an in-depth study into its behaviour, both in an ideal white-noise regime and in a trial on observational data from the HTRU-S Low Latitude pulsar survey, including a comparison to the behaviour of the Fast Fourier Transform. We are able both to confirm and to expand upon earlier studies that demonstrate the ability of the Fast Folding Algorithm to outperform the Fast Fourier Transform under ideal white noise conditions, and demonstrate a significant improvement in sensitivity to long-period pulsars in real observational data through the use of the Fast Folding Algorithm.
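The fold-and-score idea behind the algorithm can be sketched in a few lines. This brute-force toy is not the authors' ffancy code (which uses the recursive FFA ladder of partial sums to avoid redundant additions); it simply folds a time series at every integer trial period and scores each folded profile:

```python
import numpy as np

def fold(series, period_bins):
    """Fold a time series at an integer trial period (in samples)."""
    n = (len(series) // period_bins) * period_bins
    return series[:n].reshape(-1, period_bins).sum(axis=0)

def profile_snr(profile):
    """Peak height of a folded profile over its off-pulse scatter."""
    return (profile.max() - np.median(profile)) / profile.std()

def ffa_search(series, min_p, max_p):
    """Brute-force stand-in for the FFA: fold at every integer period
    and keep the signal-to-noise of each folded profile."""
    return {p: profile_snr(fold(series, p)) for p in range(min_p, max_p + 1)}

# Toy data: a weak 3-sigma pulse every 100 samples, buried in white noise.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 10_000)
x[::100] += 3.0
scores = ffa_search(x, 90, 110)
best_period = max(scores, key=scores.get)
```

On this toy series the individual pulses are invisible, but they add coherently only at the true period, so `best_period` comes out at 100 samples.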

Read this paper on arXiv…

A. Cameron, E. Barr, D. Champion, et al.
Fri, 17 Mar 17
19/50

Comments: 19 pages, 15 figures, 3 tables

Astrophysics and Big Data: Challenges, Methods, and Tools [IMA]

http://arxiv.org/abs/1703.05084


Nowadays there is no field of research that is not flooded with data. Among the sciences, Astrophysics has always been driven by the analysis of massive amounts of data. The development of new and more sophisticated observation facilities, both ground-based and spaceborne, has made data more and more complex (Variety) and has driven exponential growth in both data Volume (i.e., of the order of petabytes) and Velocity of production and transmission. Therefore, new and advanced processing solutions will be needed to process this huge amount of data. We investigate some of these solutions, based on machine learning models as well as tools and architectures for Big Data analysis that can be exploited in the astrophysical context.

Read this paper on arXiv…

M. Garofalo, A. Botta and G. Ventre
Thu, 16 Mar 17
23/92

Comments: 4 pages, 1 figure, proceedings of the IAU-325 symposium on Astroinformatics, Cambridge University Press

Aqua MODIS Band 24 Crosstalk Striping [IMA]

http://arxiv.org/abs/1703.04719


Aqua MODIS, unlike its predecessor on board the Terra spacecraft, had always been thought to have been spared from significant deleterious impacts of electronic crosstalk on its imagery. However, recent efforts brought to our attention the presence of striping artifacts in Aqua MODIS images from band 24 (4.47$\mu$m), which upon further inspection proved to have a noticeable impact on the quality of the L1B product and to have been present since the beginning of the mission, in 2002. Using images of the Moon from scheduled lunar observations, we linked the artifacts with electronic crosstalk contamination of the response of detector 1 of band 24 by signal sent from detector 10 of band 26 (1.375$\mu$m), a neighboring band in the same focal plane assembly. In this paper, we report on these findings, on the artifact mitigation strategy we adopted, and on our success in restoring the behavior and image quality of band 24 detector 1.
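The linear contamination model implied above can be sketched as follows; the 2% coefficient and the flat-scene slope estimator are illustrative assumptions, not the MODIS values or the authors' lunar-based procedure:

```python
import numpy as np

def estimate_crosstalk_coeff(receiver, sender):
    """Least-squares slope of the receiver's signal against the sending
    detector's signal, for a scene whose true receiver signal is flat."""
    s = sender - sender.mean()
    return float(np.dot(s, receiver - receiver.mean()) / np.dot(s, s))

def correct_crosstalk(receiver, sender, coeff):
    """Subtract the linear electronic-crosstalk contamination."""
    return receiver - coeff * sender

# Illustrative scan lines: a flat true signal contaminated by 2% of the
# sending detector's signal (both numbers are made up for this sketch).
rng = np.random.default_rng(1)
sender = rng.uniform(0.0, 1000.0, 2048)
true_signal = np.full(2048, 150.0)
receiver = true_signal + 0.02 * sender + rng.normal(0.0, 0.5, 2048)

c = estimate_crosstalk_coeff(receiver, sender)
corrected = correct_crosstalk(receiver, sender, c)
```

After subtraction the striping (scatter correlated with the sender) collapses back to the read-noise floor.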

Read this paper on arXiv…

G. Keller, Z. Wang, A. Wu, et al.
Thu, 16 Mar 17
24/92

Comments: N/A

Protecting the Dark Skies of Chile: Initiatives, Education and Coordination [IMA]

http://arxiv.org/abs/1703.04684


During the next decade, Chile will consolidate its place as the ‘World Capital of Astronomy’. By 2025, more than 70% of the world’s infrastructure for conducting professional astronomical observations will be installed in the Atacama Desert in the north of the country. The amazing scientific discoveries these telescopes produce have a direct impact on our understanding of the cosmos, and protecting this ‘window to the universe’ is fundamental in order to ensure humanity’s right to contemplate the night sky and decipher our origins. As a country, Chile faces the challenge of fighting light pollution and protecting its dark skies in a context of sprawling urban growth and an ever-expanding mining industry that shares the same territory with astronomical observatories.
The Chilean Astronomical Society (Sociedad Chilena de Astronomia, SOCHIAS) plays an active role in protecting dark skies through a series of initiatives involving educational programmes, aiding in the development and enforcement of public policy and regulation, and seeking the declaration of Chile’s best astronomical sites as protected heritage areas, both at the national and international levels. Whilst describing our experiences, I highlight the importance of approaching the problem of light pollution from all sides, involving all the relevant actors (communities, national and local governments, lighting industry, environmentalists, astronomers and others). I also discuss how communication and timely coordination with potential problematic actors (like industries, cities and some government agencies) can be an effective tool to transform potential enemies into allies in the fight for the protection of the night sky.

Read this paper on arXiv…

G. Blanc
Thu, 16 Mar 17
29/92

Comments: 9 pages, 3 figures. Published as part of the proceedings of the “The Right to Dark Skies” conference, organized by UNESCO, Mexico City, January 2016

Characterizing a CCD detector for astronomical purposes: OAUNI Project [IMA]

http://arxiv.org/abs/1703.04836


This work verifies the instrumental characteristics of the CCD detector which is part of the UNI astronomical observatory. We measured the linearity of the CCD detector of the SBIG STXL6303E camera, along with the associated gain and readout noise. The detector's response to incident light is extremely linear (R^2 = 99.99%), its effective gain is 1.65 +/- 0.01 e-/ADU and its readout noise is 12.2 e-. These values are in agreement with the manufacturer's specifications. We confirm that this detector is sufficiently precise for astronomical measurements.
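A gain of this kind is commonly recovered with the standard photon-transfer estimator from a pair of flat fields. The sketch below simulates Poisson-limited flats and neglects read noise (a reasonable approximation at high signal levels); it is the textbook method, not necessarily the authors' exact procedure:

```python
import numpy as np

def photon_transfer_gain(flat1, flat2):
    """Gain (e-/ADU) from a pair of identical flat-field exposures.
    For Poisson-limited flats, var(flat1 - flat2) = 2 * mean_ADU / gain,
    hence gain = (mean1 + mean2) / var(flat1 - flat2)."""
    return float((flat1.mean() + flat2.mean()) / (flat1 - flat2).var())

# Simulated flats for a detector with gain 1.65 e-/ADU at ~5000 e-/pixel.
rng = np.random.default_rng(2)
gain_true = 1.65
f1 = rng.poisson(5000.0, 512 * 512) / gain_true
f2 = rng.poisson(5000.0, 512 * 512) / gain_true
gain_est = photon_transfer_gain(f1, f2)
```

Differencing the two flats cancels the fixed-pattern (pixel-to-pixel sensitivity) component, leaving only the shot noise from which the gain follows.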

Read this paper on arXiv…

A. Pereyra, M. Zevallos, J. Ricra, et al.
Thu, 16 Mar 17
60/92

Comments: 6 pages, 8 figures, Published by TECNIA (UNI)

Enabling New ALMA Science with Improved Support for Time-Domain Observations [IMA]

http://arxiv.org/abs/1703.04692


While the Atacama Large Millimeter/submillimeter Array (ALMA) is a uniquely powerful telescope, its impact in certain fields of astrophysics has been limited by observatory policies rather than the telescope’s innate technical capabilities. In particular, several observatory policies present challenges for observations of variable, mobile, and/or transient sources — collectively referred to here as “time-domain” observations. In this whitepaper we identify some of these policies, describe the scientific applications they impair, and suggest changes that would increase ALMA’s science impact in Cycle 6 and beyond.
Parties interested in time-domain science with ALMA are encouraged to join the ALMA Time-domain Special Interest Group (ATSIG) by signing up for the ATSIG mailing list at https://groups.google.com/group/alma-td-sig .

Read this paper on arXiv…

K. Alexander, E. Berger, G. Bower, et al.
Thu, 16 Mar 17
66/92

Comments: 9 pages; whitepaper submitted to the ALMA Science Advisory Council; corresponding author P. K. G. Williams (pwilliams@cfa.harvard.edu)

Exponential Distance Relation and Near Resonances in the Trappist-1 Planetary System [IMA]

http://arxiv.org/abs/1703.04545


We report in this paper a new exponential distance relation for the planets in the newly discovered exoplanetary system of the Trappist-1 star, and we comment on near orbital mean motion resonances among the seven planets. We predict that possible smaller planets could be found inside the orbit of the innermost discovered Planet b.

Read this paper on arXiv…

V. Pletser and L. Basano
Thu, 16 Mar 17
68/92

Comments: 6 pages, 2 figures, 5 Tables

LSDCat: Detection and cataloguing of emission line sources in integral-field spectroscopy datacubes [IMA]

http://arxiv.org/abs/1703.05166


We present a robust, efficient, and user-friendly algorithm for detecting faint emission line sources in large integral-field spectroscopic datacubes, together with the public release of the software package LSDCat (Line Source Detection and Cataloguing). LSDCat uses a 3-dimensional matched filter approach, combined with thresholding in signal-to-noise, to build a catalogue of individual line detections. In a second pass, the detected lines are grouped into distinct objects, and the positions, spatial extents, and fluxes of the detected lines are determined. LSDCat requires only a small number of input parameters, and we provide guidelines for choosing appropriate values. The software is coded in Python and capable of processing very large datacubes in a short time. We verify the implementation with a source insertion and recovery experiment utilising a real datacube taken with the MUSE instrument at the ESO Very Large Telescope.
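The first pass (matched filtering plus signal-to-noise thresholding) can be sketched for the simplified case of homogeneous white noise and a separable Gaussian template; LSDCat itself propagates a full variance cube and uses distinct spatial and spectral templates:

```python
import numpy as np

def gauss_kernel(sigma):
    """Normalized 1D Gaussian sampled out to ~4 sigma."""
    t = np.arange(-int(4 * sigma) - 1, int(4 * sigma) + 2)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def sn_cube(cube, noise_sigma, sigmas):
    """Cross-correlate a (wavelength, y, x) cube with a separable 3D
    Gaussian template; return the filtered cube in signal-to-noise units."""
    filtered = cube.astype(float)
    norm2 = 1.0
    for axis, sigma in enumerate(sigmas):
        k = gauss_kernel(sigma)
        filtered = np.apply_along_axis(
            lambda v, k=k: np.convolve(v, k, mode="same"), axis, filtered)
        norm2 *= (k ** 2).sum()  # ||k_3d||^2 factorizes for separable kernels
    # White noise filtered with kernel k has std noise_sigma * ||k||_2,
    # so dividing by that yields a signal-to-noise cube.
    return filtered / (noise_sigma * np.sqrt(norm2))

# Toy cube: white noise plus one unresolved emission line.
rng = np.random.default_rng(3)
cube = rng.normal(0.0, 1.0, (40, 30, 30))
cube[20, 15, 15] += 200.0
sn = sn_cube(cube, 1.0, (2.0, 1.5, 1.5))
detections = np.argwhere(sn > 8.0)
```

Voxels above the threshold cluster around the injected line; in LSDCat's second pass such clusters would then be merged into one object with measured position, extent, and flux.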

Read this paper on arXiv…

E. Herenz and L. Wisotzki
Thu, 16 Mar 17
80/92

Comments: 14 pages. Accepted for publication in Astronomy & Astrophysics. The LSDCat software is available at this https URL

Digital receivers for low-frequency radio telescopes UTR-2, URAN, GURT [IMA]

http://arxiv.org/abs/1703.04384


This paper describes digital radio astronomical receivers used for decameter and meter wavelength observations. Since 1998, digital receivers performing on-the-fly dynamic spectrum calculations or waveform data recording without data loss have been used at the UTR-2 radio telescope, the URAN VLBI system, and the GURT new generation radio telescope. Here we detail these receivers developed for operation in the strong interference environment that prevails in the decameter wavelength range. Data collected with these receivers allowed us to discover numerous radio astronomical objects and phenomena at low frequencies, a summary of which is also presented.

Read this paper on arXiv…

V. Zakharenko, A. Konovalenko, P. Zarka, et al.
Tue, 14 Mar 17
1/74

Comments: 24 pages, 15 figures

Gain factor and parameter settings optimization of the new gamma-ray burst polarimeter POLAR [IMA]

http://arxiv.org/abs/1703.04210


As a space-borne detector, POLAR is designed to conduct hard X-ray polarization measurements of gamma-ray bursts on a statistically significant sample of events and with an unprecedented accuracy. During its development phase a number of tests, calibration runs and verification measurements were carried out in order to validate instrument functionality and optimize operational parameters. In this article we present results on gain optimization together with verification data obtained in the course of broad laboratory and environmental tests. In particular we focus on exposures to the $^{137}$Cs radioactive source and determination of the gain dependence on the high voltage for all 1600 detection channels of the polarimeter. Performance of the instrument is described in detail with respect to the dynamic range, energy resolution and temperature dependence. Gain optimization algorithms and response non-uniformity studies are also broadly discussed. The results presented below constitute important inputs for the development of the POLAR calibration and operation database.

Read this paper on arXiv…

X. Zhang, W. Hajdas, H. Xiao, et al.
Tue, 14 Mar 17
17/74

Comments: 22 pages, 14 figures

Design and experimental test of an optical vortex coronagraph [IMA]

http://arxiv.org/abs/1703.04228


The optical vortex coronagraph (OVC) is one of the promising ways to directly image exoplanets because of its small inner working angle and high throughput. This paper presents the design and laboratory demonstration performance at 633nm and 1520nm of the OVC based on liquid crystal polymers (LCP). Two LCPs have been manufactured in partnership with a commercial vendor. The OVC delivers a good performance in laboratory tests, achieving a contrast of the order of 10^-6 at an angular distance of 3{\lambda}/D, which is sufficient to image giant exoplanets at a young stage in combination with extreme adaptive optics.

Read this paper on arXiv…

C. Liu, D. Ren, Y. Zhu, et al.
Tue, 14 Mar 17
45/74

Comments: 8 pages and 6 figures

Canvas and Cosmos: Visual Art Techniques Applied to Astronomy Data [IMA]

http://arxiv.org/abs/1703.04183


Bold colour images from telescopes act as extraordinary ambassadors for research astronomers because they pique the public’s curiosity. But are they snapshots documenting physical reality? Or are we looking at artistic spacescapes created by digitally manipulating astronomy images? This paper provides a tour of how original black and white data, from all regimes of the electromagnetic spectrum, are converted into the colour images gracing popular magazines, numerous websites, and even clothing. The history and method of the technical construction of these images is outlined. However, the paper focuses on introducing the scientific reader to visual literacy (e.g. human perception) and techniques from art (e.g. composition, colour theory), since these techniques can produce not only striking but politically powerful public outreach images. When created by research astronomers, the cultures of science and visual art can be balanced and the image can illuminate scientific results sufficiently strongly that the images are also used in research publications. Included are reflections on how such images could feed back into astronomy research endeavours and future forms of visualization, as well as on the relevance of outreach images to visual art.

Read this paper on arXiv…

J. English
Tue, 14 Mar 17
52/74

Comments: This is the submitted version (and lacks a couple of references, has lower quality figures, etc). 51 pages, 26 images. The paper has been published in IJMPD. For images by the author see this https URL

Spatial Linear Dark Field Control: Stabilizing Deep Contrast for Exoplanet Imaging Using Bright Speckles [IMA]

http://arxiv.org/abs/1703.04259


Direct imaging of exoplanets requires the ability to build and maintain a high contrast dark hole (DH) within the science image to a high degree of precision. Current techniques, such as electric field conjugation (EFC), have been demonstrated in the lab and have shown that they are capable of generating a DH with high contrast. To do so, such techniques require continuous wavefront estimate updates that are acquired by interrupting the DH, thereby competing with the science measurement. In this paper, we introduce and demonstrate spatial linear dark field control (LDFC) as a new technique by which the DH contrast can be controlled and maintained without any disruption to the science image. Instead of rebuilding the DH using EFC after it degrades over time, spatial LDFC locks the high contrast dark field (DF) after EFC using the bright field (BF) that responds linearly to wavefront variations that modify both the BF and the DH.

Read this paper on arXiv…

K. Miller, O. Guyon and J. Males
Tue, 14 Mar 17
68/74

Comments: 9 pages, 11 images

The EBEX Balloon Borne Experiment – Optics, Receiver, and Polarimetry [IMA]

http://arxiv.org/abs/1703.03847


The E and B Experiment (EBEX) was a long-duration balloon-borne cosmic microwave background polarimeter that flew over Antarctica in 2013. We describe the experiment’s optical system, receiver, and polarimetric approach, and report on their in-flight performance. EBEX had three frequency bands centered on 150, 250, and 410~GHz. To make efficient use of limited mass and space we designed a 115~cm$^{2}$sr high throughput optical system that had two ambient temperature mirrors and four anti-reflection coated polyethylene lenses per focal plane. All frequency bands shared the same optical train. Polarimetry was achieved with a continuously rotating achromatic half-wave plate (AHWP) that was levitated with a superconducting magnetic bearing (SMB). Rotation stability was 0.45~\% over a period of 10~hours, and angular position accuracy was 0.01~degrees. This is the first use of an SMB in astrophysics. The measured modulation efficiency was above 90~\% for all bands. To our knowledge the 109~\% fractional bandwidth of the AHWP was the broadest implemented to date. The receiver, which contained one lens and the AHWP at a temperature of 4~K, the polarizing grid and other lenses at 1~K, and the two focal planes at 0.25~K, performed according to specifications, giving focal plane temperature stability with a fluctuation power spectrum whose $1/f$ knee was at 2~mHz. EBEX was the first balloon-borne instrument to implement technologies characteristic of modern CMB polarimeters, including high throughput optical systems and large arrays of transition edge sensor bolometric detectors with multiplexed readouts.

Read this paper on arXiv…

EBEX Collaboration, A. Aboobaker, P. Ade, et al.
Tue, 14 Mar 17
71/74

Comments: 49 pages, 32 figures, to be submitted to The Astrophysical Journal Supplement

AGILIS: Agile Guided Interferometer for Longbaseline Imaging Synthesis – Demonstration and concepts of reconfigurable optical imaging interferometers [IMA]

http://arxiv.org/abs/1703.03919


In comparison to the radio and sub-millimetric domains, imaging with optical interferometry is still in its infancy. Due to the limited number of telescopes in existing arrays, image generation is a demanding process that relies on time-consuming reconfiguration of the interferometer array and super-synthesis. Using single mode optical fibres for the coherent transport of light from the collecting telescopes to the focal plane, a new generation of interferometers optimized for imaging can be designed. To support this claim, we report on the successful completion of the `OHANA Iki project: an end-to-end, on-sky demonstration of a two-telescope interferometer, built around near-infrared single mode fibres, carried out as part of the `OHANA project. Having demonstrated that coherent transport by single-mode fibres is feasible, we explore the concepts, performances, and limitations of a new imaging facility with single mode fibres at its heart: Agile Guided Interferometer for Longbaseline Imaging Synthesis (AGILIS). AGILIS has the potential of becoming a next generation facility or a precursor to a much larger project like the Planet Formation Imager (PFI).

Read this paper on arXiv…

J. Woillez, O. Lai, G. Perrin, et al.
Tue, 14 Mar 17
74/74

Comments: 16 pages, 10 figures, 2 tables, accepted in A&A

Polynomial Apodizers for Centrally Obscured Vortex Coronagraphs [IMA]

http://arxiv.org/abs/1703.02994


Several coronagraph designs have been proposed over the last two decades to directly image exoplanets. Among these designs, the vector vortex coronagraphs provide theoretically perfect starlight cancellation along with small inner working angles when deployed on telescopes with unobstructed pupils. However, current and planned space missions and ground-based extremely large telescopes present complex pupil geometries, including secondary mirror central obscurations, that prevent vortex coronagraphs from rejecting on-axis sources entirely. Recent solutions combining the vortex phase mask with a ring-apodized pupil have been proposed to circumvent this issue, but provide a limited throughput for vortex charges $>2$. We present a family of pupil plane apodizations that compensate for pupil geometries with circularly symmetric central obstructions caused by on-axis secondary mirrors for charge 2, 4, and 6 vector vortex coronagraphs. These apodizations are derived analytically and allow the vortex coronagraph to retain theoretically perfect nulling in the presence of central obscurations. For a charge 4 vortex, we design polynomial apodization functions assuming a greyscale apodizing filter that represent a substantial gain in throughput over the ring-apodized vortex coronagraph design, while for a charge 6 vortex, we design polynomial apodized vortex coronagraphs that have $\gtrsim 70\%$ total energy throughput for the entire range of central obscuration sizes studied. We propose methods for optimizing apodizations produced with either greyscale apodizing filters or shaped mirrors. We conclude by demonstrating how this design may be combined with apodizations numerically optimized for struts and segment gaps in telescope pupils to design terrestrial exoplanet imagers for complex pupils.

Read this paper on arXiv…

K. Fogarty, L. Pueyo, J. Mazoyer, et al.
Fri, 10 Mar 17
24/52

Comments: 18 pages, 12 figures, submitted to ApJ

Black Holes and Vacuum Cleaners: Using Metaphor, Relevance, and Inquiry in Labels for Space Images [IMA]

http://arxiv.org/abs/1703.02927


This study extended research on the development of explanatory labels for astronomical images for the non-expert lay public. The research questions addressed how labels with leading questions/metaphors and relevance to everyday life affect comprehension of the intended message for deep space images, the desire to learn more, and the aesthetic appreciation of images. Participants were a convenience sample of 1,921 respondents solicited from a variety of websites and through social media who completed an online survey that used four high-resolution images as stimuli: Sagittarius A*, Solar Flare, Cassiopeia A, and the Pinwheel Galaxy (M101). Participants were randomly assigned initially to 1 of 3 label conditions: the standard label originally written for the image, a label with a leading question containing a metaphor related to the information for the image, or a label that contained a fact about the image relevant to everyday life. Participants were randomly assigned to 1 image and compared all labels for that image. Open-ended items at various points asked participants to pose questions to a hypothetical astronomer. Main findings were that the relevance condition was significantly more likely to increase wanting to learn more; the original label was most likely to increase overall appreciation; and, smart phone users were more likely to want to learn more and report increased levels of appreciation. Results are discussed in terms of the need to examine individual viewer characteristics and goals in creating different labels for different audiences.

Read this paper on arXiv…

L. Smith, K. Arcand, B. Smith, et al.
Thu, 9 Mar 17
32/54

Comments: 50 pages, 7 tables, 2 figures, accepted by the journal “Psychology of Aesthetics, Creativity, and the Arts”

Multi-GPU maximum entropy image synthesis for radio astronomy [IMA]

http://arxiv.org/abs/1703.02920


The maximum entropy method (MEM) is a well known deconvolution technique in radio-interferometry. This method solves a non-linear optimization problem with an entropy regularization term. Other heuristics such as CLEAN are faster but highly user dependent. Nevertheless, MEM has the following advantages: it is unsupervised, it has a statistical basis, and it achieves better resolution and better image quality under certain conditions. This work presents a high performance GPU version of non-gridded MEM, which is tested using interferometric and simulated data. We propose a single-GPU and a multi-GPU implementation for single and multi-spectral data, respectively. We also make use of the Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to be exploited transparently and efficiently. Several ALMA data sets are used to demonstrate the effectiveness in imaging and to evaluate GPU performance. The results show that a speedup of 1000 to 5000 times over a sequential version can be achieved, depending on data and image size. This has allowed us to reconstruct the HD142527 CO(6-5) short baseline data set in 2.1 minutes, instead of the 2.5 days it takes on CPU.
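The optimization problem at the heart of MEM can be illustrated with a 1D, gridded, CPU-only toy. The entropy prior and projected gradient step below are a generic sketch of the technique, not the non-gridded multi-GPU solver of the paper:

```python
import numpy as np

def blur(x, kernel):
    """Symmetric convolution standing in for the measurement operator A
    (for a symmetric kernel, A^T = A)."""
    return np.convolve(x, kernel, mode="same")

def mem_deconvolve(dirty, kernel, lam=1e-3, m=1e-3, step=0.1, iters=500):
    """Minimize ||A x - d||^2 - lam * S(x), with the entropy
    S(x) = -sum_i x_i (log(x_i / m) - 1), keeping x positive."""
    x = np.full_like(dirty, m)
    for _ in range(iters):
        # Gradient of the data term plus gradient of -lam * S(x).
        grad = 2.0 * blur(blur(x, kernel) - dirty, kernel) + lam * np.log(x / m)
        x = np.clip(x - step * grad, 1e-12, None)  # enforce positivity
    return x

# Toy problem: a single point source blurred by a Gaussian beam.
t = np.arange(-10, 11)
kernel = np.exp(-0.5 * (t / 2.0) ** 2)
kernel /= kernel.sum()
true = np.zeros(64)
true[30] = 1.0
dirty = blur(true, kernel)
model = mem_deconvolve(dirty, kernel)
```

The entropy term pulls unconstrained pixels toward the small default level `m`, so flux concentrates where the data demand it; the model peak lands back at the source position.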

Read this paper on arXiv…

M. Carcamo, P. Roman, S. Casassus, et al.
Thu, 9 Mar 17
36/54

Comments: 11 pages, 13 figures

CMU DeepLens: Deep Learning For Automatic Image-based Galaxy-Galaxy Strong Lens Finding [IMA]

http://arxiv.org/abs/1703.02642


Galaxy-scale strong gravitational lensing is not only a valuable probe of the dark matter distribution of massive galaxies, but can also provide valuable cosmological constraints, either by studying the population of strong lenses or by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale strongly lensed systems, fast and reliable automated lens finding methods will be essential in the era of large surveys such as LSST, Euclid, and WFIRST. To tackle this challenge, we introduce CMU DeepLens, a new fully automated galaxy-galaxy lens finding method based on Deep Learning. This supervised machine learning approach does not require any tuning after the training step which only requires realistic image simulations of strongly lensed systems. We train and validate our model on a set of 20,000 LSST-like mock observations including a range of lensed systems of various sizes and signal-to-noise ratios (S/N). We find on our simulated data set that for a rejection rate of non-lenses of 99%, a completeness of 90% can be achieved for lenses with Einstein radii larger than 1.4″ and S/N larger than 20 on individual $g$-band LSST exposures. Finally, we emphasize the importance of realistically complex simulations for training such machine learning methods by demonstrating that the performance of models of significantly different complexities cannot be distinguished on simpler simulations. We make our code publicly available at https://github.com/McWilliamsCenter/CMUDeepLens .
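The forward pass of such a classifier reduces to convolutions, nonlinearities, pooling, and a sigmoid score. This untrained, single-filter numpy toy only illustrates those mechanics; CMU DeepLens itself is a deep residual network whose many filters are learned from the mock lens simulations:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, the basic CNN building block."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i, j] = float((img[i:i + kh, j:j + kw] * kernel).sum())
    return out

def lens_score(img, kernels, weights, bias):
    """conv -> ReLU -> global average pooling -> sigmoid score in (0, 1)."""
    feats = np.array([np.maximum(conv2d(img, k), 0.0).mean() for k in kernels])
    return float(1.0 / (1.0 + np.exp(-(feats @ weights + bias))))

# Hand-built zero-mean "arc" template as the single filter; a trained
# network would learn many such filters from the labelled mocks.
yy, xx = np.mgrid[-3:4, -3:4]
ring = (np.hypot(yy, xx) > 1.5) & (np.hypot(yy, xx) < 3.0)
kernel = ring.astype(float) - ring.mean()

img_ring = np.pad(ring.astype(float), 8)   # a ring-like (lensed) source
img_flat = np.zeros_like(img_ring)         # empty sky
s_ring = lens_score(img_ring, [kernel], np.array([1.0]), -1.0)
s_flat = lens_score(img_flat, [kernel], np.array([1.0]), -1.0)
```

Even this crude matched-template filter scores the ring-like image above the empty one; the point of the deep network is to learn far more discriminative filters, and their hierarchy, directly from the simulations.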

Read this paper on arXiv…

F. Lanusse, Q. Ma, N. Li, et al.
Thu, 9 Mar 17
46/54

Comments: 12 pages, 9 figures, submitted to MNRAS

$C^{3}$ : A Command-line Catalogue Cross-matching tool for modern astrophysical survey data [IMA]

http://arxiv.org/abs/1703.02300


In the current data-driven science era, data analysis techniques must evolve quickly to cope with data whose dimensions have grown to the Petabyte scale. In particular, since modern astrophysics is based on multi-wavelength data organized into large catalogues, it is crucial that astronomical catalogue cross-matching methods, whose cost depends strongly on catalogue size, ensure efficiency, reliability and scalability. Furthermore, multi-band data are archived and reduced in different ways, so that the resulting catalogues may differ from each other in format, resolution, data structure, etc., thus requiring the highest generality of cross-matching features. We present $C^{3}$ (Command-line Catalogue Cross-match), a multi-platform application designed to efficiently cross-match massive catalogues from modern surveys. Conceived as a stand-alone command-line process or as a module within a generic data reduction/analysis pipeline, it provides maximum flexibility in terms of portability, configuration, coordinate systems and cross-matching types, ensuring high performance by using a multi-core parallel processing paradigm and a sky partitioning algorithm.
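The sky-partitioning idea can be sketched in a flat-sky toy: bin the second catalogue into cells the size of the match radius, then test each source of the first catalogue only against the nine neighbouring cells. $C^{3}$ itself works in spherical coordinates, supports several match types, and parallelizes over the partition:

```python
import math
from collections import defaultdict

def crossmatch(cat1, cat2, radius):
    """Nearest-neighbour positional match within `radius` (flat-sky,
    coordinates in the same units), avoiding the O(N*M) all-pairs scan."""
    # Partition the second catalogue into radius-sized cells: every
    # candidate within `radius` of a source lies in its 3x3 neighbourhood.
    grid = defaultdict(list)
    for j, (x, y) in enumerate(cat2):
        grid[(int(x // radius), int(y // radius))].append(j)
    matches = []
    for i, (x, y) in enumerate(cat1):
        cx, cy = int(x // radius), int(y // radius)
        best, best_d = None, radius
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    d = math.hypot(x - cat2[j][0], y - cat2[j][1])
                    if d <= best_d:
                        best, best_d = j, d
        if best is not None:
            matches.append((i, best))
    return matches

# Two toy catalogues: only the first source has a counterpart.
cat1 = [(10.0, 5.0), (50.0, 50.0)]
cat2 = [(10.0005, 5.0), (80.0, 80.0)]
matches = crossmatch(cat1, cat2, radius=0.001)
```

Because the cells have the size of the match radius, each source touches a bounded number of candidates, so the scan stays close to linear in the catalogue sizes.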

Read this paper on arXiv…

G. Riccio, M. Brescia, S. Cavuoti, et al.
Wed, 8 Mar 17
8/60

Comments: 6 pages, 4 figures, proceedings of the IAU-325 symposium on Astroinformatics, Cambridge University Press

METAPHOR: Probability density estimation for machine learning based photometric redshifts [IMA]

http://arxiv.org/abs/1703.02292


We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but allowing MLPQNA to be easily replaced by any other method to predict photo-z’s and their PDF. We present here the results of a validation test of the workflow on galaxies from SDSS-DR9, demonstrating the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
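The PDF recipe (perturb the photometry, re-run the already trained estimator, histogram the outputs) can be sketched with any regressor standing in for MLPQNA; the toy predictor and error model below are purely illustrative:

```python
import numpy as np

def photo_z_pdf(mags, predict, mag_err, n_real=1000, z_bins=None, seed=4):
    """Empirical photo-z PDF from photometry perturbations: re-run an
    already trained estimator on noisy realizations of the magnitudes."""
    if z_bins is None:
        z_bins = np.arange(0.0, 1.0, 0.01)
    rng = np.random.default_rng(seed)
    zs = [predict(mags + rng.normal(0.0, mag_err, size=mags.size))
          for _ in range(n_real)]
    pdf, edges = np.histogram(zs, bins=z_bins, density=True)
    return pdf, edges

def toy_predict(m):
    """Stand-in for a trained regressor: z rises with mean magnitude."""
    return m.mean() / 50.0

mags = np.full(5, 20.0)            # five bands, all 20th magnitude
pdf, edges = photo_z_pdf(mags, toy_predict, mag_err=0.5)
peak_z = edges[np.argmax(pdf)]
```

Because only the input photometry is perturbed, the same wrapper works unchanged when the internal engine is swapped for KNN or Random Forest, which is the modularity the abstract describes.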

Read this paper on arXiv…

V. Amaro, S. Cavuoti, M. Brescia, et al.
Wed, 8 Mar 17
14/60

Comments: proceedings of the International Astronomical Union, IAU-325 symposium, Cambridge University Press

Sensitivity Characterization of a Parametric Transducer for Gravitational Wave Detection Through Optomechanical Spring Effect [IMA]

http://arxiv.org/abs/1703.02179


We present the characterization of the most recent parametric transducers designed to enhance the Mario Schenberg Gravitational Wave Detector sensitivity. The transducer is composed of a microwave re-entrant cavity that attaches to the gravitational wave antenna via a rigid spring. It functions as a three-mode mass-spring system; motion of the spherical antenna couples to a 50 $\mu$m thick membrane, which converts its mechanical motion into a frequency shift of the cavity resonance. Through the optomechanical spring effect, the microwave transducer frequency-displacement sensitivity was measured to be 726 MHz/$\mu$m at 4 K. The spherical antenna detection sensitivity is determined analytically using the transducer amplification gain and equivalent displacement noise in the test setup to be $5.5 \times 10^{-19}\sqrt{Hz}^{-1}$.

Read this paper on arXiv…

N. Carvalho, J. Bourhill, O. Aguiar, et al.
Wed, 8 Mar 17
26/60

Comments: N/A

Space variant deconvolution of galaxy survey images [IMA]

http://arxiv.org/abs/1703.02305


Removing the aberrations introduced by the Point Spread Function (PSF) is a fundamental aspect of astronomical image processing. The presence of noise in observed images makes deconvolution a nontrivial task that necessitates the use of regularisation. This task is particularly difficult when the PSF varies spatially, as is the case for the Euclid telescope. New surveys will provide images containing thousands of galaxies, and the deconvolution regularisation problem can be considered from a completely new perspective. In fact, one can assume that galaxies belong to a low-dimensional space. This work introduces the use of the low-rank matrix approximation as a regularisation prior for galaxy image deconvolution and compares its performance with a standard sparse regularisation technique. This new approach leads to a natural way to handle a space variant PSF. Deconvolution is performed using a Python code that implements a primal-dual splitting algorithm. The data set considered is a sample of 10 000 space-based galaxy images convolved with a known spatially varying Euclid-like PSF and including various levels of Gaussian additive noise. Performance is assessed by examining the deconvolved galaxy image pixels and shapes. The results demonstrate that for small samples of galaxies sparsity performs better in terms of pixel and shape recovery, while for larger samples of galaxies it is possible to obtain more accurate estimates of the galaxy shapes using the low-rank approximation.
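The low-rank prior itself is easy to sketch, separately from the paper's primal-dual deconvolution in which it is applied: flatten each galaxy stamp into a matrix row and truncate the stack's SVD. A minimal numpy version (illustrative, not the paper's code):

```python
import numpy as np

def low_rank_project(stack, rank):
    """Project a stack of galaxy postage stamps onto a low-rank space.

    stack: array of shape (n_gal, ny, nx). Each image is flattened into a
    row of a matrix, the matrix is truncated to `rank` singular values,
    and the rows are reshaped back into images. Inside an iterative
    deconvolution, a step like this acts as the low-rank regularisation.
    """
    n, ny, nx = stack.shape
    m = stack.reshape(n, ny * nx)                 # one galaxy per row
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]  # keep leading components
    return approx.reshape(n, ny, nx)
```

The intuition from the abstract is that with many galaxies the stack is well approximated by far fewer components than there are images, so the projection suppresses noise without per-galaxy sparsity assumptions.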

Read this paper on arXiv…

S. Farrens, J. Starck and F. Mboula
Wed, 8 Mar 17
40/60

Comments: 12 pages and 8 figures. To be published in A&A

Robust Estimation of Scattering in Pulsar Timing Analysis [IMA]

http://arxiv.org/abs/1703.02108


We present a robust approach to incorporating models for the time-variable broadening of the pulse profile due to scattering in the ionized interstellar medium into profile-domain pulsar timing analysis. We use this approach to simultaneously estimate temporal variations in both the dispersion measure (DM) and scattering, together with a model for the pulse profile that includes smooth evolution as a function of frequency, and the pulsar’s timing model. We show that fixing the scattering timescales when forming time-of-arrival estimates, as has been suggested in the context of traditional pulsar timing analysis, can significantly underestimate the uncertainties in both DM and the arrival time of the pulse, leading to bias in the timing parameters. We apply our method using a new, publicly available, GPU-accelerated code, both to simulations and to observations of the millisecond pulsar PSR J1643$-$1224. This pulsar is known to exhibit significant scattering variability compared to typical millisecond pulsars, and we find that including low-frequency ($< 1$ GHz) data without a model for these scattering variations leads to significant periodic structure in the DM, and also biases the astrometric parameters at the $4\sigma$ level, for example, changing the proper motion in right ascension by $0.50 \pm 0.12$. If low frequency observations are to be included when significant scattering variations are present, we conclude it is necessary to not just model those variations, but also to sample the parameters that describe the variations simultaneously with all other parameters in the model, a task for which profile domain pulsar timing is ideally suited.

Read this paper on arXiv…

L. Lentati, M. Kerr, S. Dai, et al.
Wed, 8 Mar 17
42/60

Comments: 12 pages, 6 figures, Accepted to MNRAS

Correlation of AOT with Relative Frequency of Air Showers with energy 10^{15} – 10^{16} eV by Yakutsk Data [IMA]

http://arxiv.org/abs/1703.01902


Long-term series of measurements of the spectral transparency of the atmosphere ($\lambda = 430$ nm) and of the atmospheric optical thickness (AOT), measured by the multimode photometer CE 318 in the region of the Yakutsk array, are analyzed. A correlation of AOT with the intensity of air showers at small energies, 10^{15} – 10^{16} eV, is found. The variability of the aerosol composition of the atmosphere during the registration period of the Cherenkov light should be taken into account, since it may affect the quality of the determined characteristics of air showers.

Read this paper on arXiv…

S. Knurenko and I. Petrov
Tue, 7 Mar 17
6/66

Comments: XXV ECRS 2016 Proceedings – eConf C16-09-04.3

Astrometric calibration and performance of the Dark Energy Camera [IMA]

http://arxiv.org/abs/1703.01679


We characterize the ability of the Dark Energy Camera (DECam) to perform relative astrometry across its 500 Mpix, 3 deg^2 science field of view, and across 4 years of operation. This is done using internal comparisons of ~4x10^7 measurements of high-S/N stellar images obtained in repeat visits to fields of moderate stellar density, with the telescope dithered to move the sources around the array. An empirical astrometric model includes terms for: optical distortions; stray electric fields in the CCD detectors; chromatic terms in the instrumental and atmospheric optics; shifts in CCD relative positions of up to ~10 $\mu$m when the DECam temperature cycles; and low-order distortions to each exposure from changes in atmospheric refraction and telescope alignment. Errors in this astrometric model are dominated by stochastic variations with typical amplitudes of 10-30 mas (in a 30 s exposure) and 5-10 arcmin coherence length, plausibly attributed to Kolmogorov-spectrum atmospheric turbulence. The size of these atmospheric distortions is not closely related to the seeing. Given an astrometric reference catalog at density ~0.7 arcmin^{-2}, e.g. from Gaia, the typical atmospheric distortions can be interpolated to 7 mas RMS accuracy (for 30 s exposures) with 1 arcmin coherence length for residual errors. Remaining detectable error contributors are 2-4 mas RMS from unmodelled stray electric fields in the devices, and another 2-4 mas RMS from focal plane shifts between camera thermal cycles. Thus the astrometric solution for a single DECam exposure is accurate to 3-6 mas (0.02 pixels, or 300 nm) on the focal plane, plus the stochastic atmospheric distortion.
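The interpolation step can be pictured with a simple stand-in: given residual vectors measured at reference stars, estimate the distortion at other positions with a kernel-weighted average whose width matches the ~1 arcmin coherence length quoted above. This is an illustrative sketch, not the paper's actual interpolator:

```python
import numpy as np

def interp_distortion(ref_xy, ref_dxy, query_xy, scale_arcmin=1.0):
    """Kernel-smoothed estimate of an astrometric distortion field.

    ref_xy: (n, 2) reference-star positions (arcmin); ref_dxy: (n, 2)
    their measured astrometric residuals (mas); query_xy: (q, 2) points
    at which to predict the distortion. Each prediction is a Gaussian
    kernel-weighted mean of the reference residuals, with the kernel
    width set by the distortion coherence length.
    """
    d2 = ((query_xy[:, None, :] - ref_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2 / scale_arcmin**2)
    w /= w.sum(axis=1, keepdims=True)   # normalize weights per query point
    return w @ ref_dxy
```

With the ~0.7 arcmin^{-2} reference density mentioned in the abstract, several stars fall within one kernel width of any query point, which is what makes interpolation to the quoted 7 mas RMS plausible.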

Read this paper on arXiv…

G. Bernstein, R. Armstrong, A. Plazas, et al.
Tue, 7 Mar 17
15/66

Comments: Submitted to PASP

A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope's Wide Field Camera 3 Near-IR Detector and Applications to Transiting Exoplanets and Brown Dwarfs [IMA]

http://arxiv.org/abs/1703.01301


The Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy and brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits “to help the telescope reach a thermal equilibrium”. We show that the ramp effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different datasets, and we provide best-fit values. Our model is tested with more than 120 orbits ($\sim40$ visits) of WFC3 observations and is shown to provide near photon noise limited corrections for observations of transiting exoplanets made in both staring and scanning modes, as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit need no longer be discarded. Near IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model, if similar systematic profiles are observed.

Read this paper on arXiv…

Y. Zhou, D. Apai, B. Lew, et al.
Tue, 7 Mar 17
42/66

Comments: 16 pages, 13 figures, accepted to Astronomical Journal

Improved Point Source Detection in Crowded Fields using Probabilistic Cataloging [IMA]

http://arxiv.org/abs/1703.01303


Cataloging is challenging in crowded fields because sources are extremely covariant with their neighbors and blending makes even the number of sources ambiguous. We present the first optical probabilistic catalog, cataloging a crowded (~0.1 sources per pixel brighter than 22nd magnitude in F606W) Sloan Digital Sky Survey r band image from M2. Probabilistic cataloging returns an ensemble of catalogs inferred from the image and thus can capture source-source covariance and deblending ambiguities. By comparing to a traditional catalog of the same image and a Hubble Space Telescope catalog of the same region, we show that our catalog ensemble better recovers sources from the image. It goes more than a magnitude deeper than the traditional catalog while having a lower false discovery rate brighter than 20th magnitude. We also present an algorithm for reducing this catalog ensemble to a condensed catalog that is similar to a traditional catalog, except it explicitly marginalizes over source-source covariances and nuisance parameters. We show that this condensed catalog has a similar completeness and false discovery rate to the catalog ensemble. Future telescopes will be more sensitive, and thus more of their images will be crowded. Probabilistic cataloging performs better than existing software in crowded fields and so should be considered when creating photometric pipelines in the Large Synoptic Survey Telescope era.

Read this paper on arXiv…

S. Portillo, B. Lee, T. Daylan, et al.
Tue, 7 Mar 17
50/66

Comments: 18 pages, 17 figures, submitted to the Astrophysical Journal

Uncertain Photometric Redshifts with Deep Learning Methods [IMA]

http://arxiv.org/abs/1703.01979


Accurate photometric redshift estimation is of fundamental importance in astronomy, given the need to obtain redshift information efficiently without spectroscopic analysis. We propose a method for determining accurate multimodal photo-z probability density functions (PDFs) using Mixture Density Networks (MDN) and Deep Convolutional Networks (DCN). A comparison with a Random Forest (RF) is performed.
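An MDN's output layer parameterizes a Gaussian mixture (component weights, means and widths), which is what makes the multimodal PDFs mentioned above possible. Evaluating such a mixture on a redshift grid is straightforward; a minimal sketch (generic MDN output, not the paper's network):

```python
import numpy as np

def mdn_pdf(z_grid, weight_logits, means, sigmas):
    """Evaluate a Mixture Density Network photo-z PDF on a redshift grid.

    weight_logits, means, sigmas: per-component outputs of the network's
    final layer. The logits are softmax-normalized into mixture weights,
    and the PDF is the weighted sum of Gaussian components -- so two
    well-separated means naturally give a bimodal photo-z PDF.
    """
    w = np.exp(weight_logits - weight_logits.max())
    w /= w.sum()                                   # softmax over components
    z = z_grid[:, None]
    comp = np.exp(-0.5 * ((z - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return comp @ w
```

In training, the network's loss is the negative log of this mixture evaluated at the spectroscopic redshift, but the evaluation step above is all that is needed at prediction time.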

Read this paper on arXiv…

A. D'Isanto
Tue, 7 Mar 17
56/66

Comments: 4 pages, 1 figure, Astroinformatics 2016 conference proceeding

Pulsar Timing at the Deep Space Network [IMA]

http://arxiv.org/abs/1703.01342


The DSN’s 70-m Deep Space Station antenna 14 (DSS-14) at Goldstone has recently been outfitted with instrumentation to enable pulsar searching and timing operations. Systems capable of similar operations are being installed at DSS-63 and are planned for DSS-43. The Goldstone system is the first of these to become operational, with a 640 MHz bandwidth stretching from 1325 to 1965 MHz. Initial results from the pulsar timing pipeline show short-term residuals of < 100 ns for pulsar B1937+21. Commissioning observations at DSS-14 to obtain a baseline set of TOA measurements on several millisecond pulsars are currently underway.

Read this paper on arXiv…

J. Kocz, W. Majid, L. White, et al.
Tue, 7 Mar 17
59/66

Comments: 7 pages, 7 figures

Gamma-ray Observations Under Bright Moonlight with VERITAS [IMA]

http://arxiv.org/abs/1703.01307


Imaging atmospheric Cherenkov telescopes (IACTs) are equipped with sensitive photomultiplier tube (PMT) cameras. Exposure to high levels of background illumination degrades the efficiency of and potentially destroys these photo-detectors over time, so IACTs cannot be operated in the same configuration in the presence of bright moonlight as under dark skies. Since September 2012, observations have been carried out with the VERITAS IACTs under bright moonlight (defined as about three times the night-sky background (NSB) of a dark extragalactic field, typically occurring when Moon illumination > 35%) in two observing modes: firstly by reducing the voltage applied to the PMTs and, secondly, with the addition of ultra-violet (UV) bandpass filters to the cameras. This has allowed observations at up to about 30 times previous NSB levels (around 80% Moon illumination), resulting in 30% more observing time between the two modes over the course of a year. These additional observations have already allowed for the detection of a flare from 1ES 1727+502 and for an observing program targeting a measurement of the cosmic-ray positron fraction. We provide details of these new observing modes and their performance relative to the standard VERITAS observations.

Read this paper on arXiv…

S. Archambault, A. Archer, W. Benbow, et al.
Tue, 7 Mar 17
62/66

Comments: N/A

Understanding NaI(Tl) crystal background for dark matter searches [IMA]

http://arxiv.org/abs/1703.01982


We have developed ultra-low-background NaI(Tl) crystals to reproduce the DAMA results, with the ultimate goal of achieving purity levels that are comparable to or better than those of the DAMA/LIBRA crystals. Even though the achieved background level does not approach that of DAMA/LIBRA, it is crucial to have a quantitative understanding of the backgrounds. We describe the contributions of background sources quantitatively by performing Geant4 Monte Carlo simulations that are fitted to the measured data to quantify the unknown fractions of the background compositions. The overall simulated background spectrum describes the data measured with a 9.16-kg NaI(Tl) crystal well and shows that the background is dominated by surface $^{210}$Pb and internal $^{40}$K in the 2–6 keV energy interval, which produce 2.31 counts/day/keV/kg (dru) and 0.48 dru, respectively.

Read this paper on arXiv…

G. Adhikari, P. Adhikari, C. Ha, et al.
Tue, 7 Mar 17
66/66

Comments: N/A

X-ray Spectro-polarimetry with Photoelectric Polarimeters [IMA]

http://arxiv.org/abs/1703.00949


We derive a generalization of forward fitting for X-ray spectroscopy to include linear polarization of X-ray sources, appropriate for the anticipated next generation of space-based photoelectric polarimeters. We show that the inclusion of polarization sensitivity requires joint fitting to three observed spectra, one for each of the Stokes parameters, I(E), U(E), and Q(E). The equations for Stokes I(E) (the total intensity spectrum) are identical to the familiar case with no polarization sensitivity, and for which the model-predicted spectrum is obtained by a convolution of the source spectrum, F(E’), with the familiar energy response function, e(E’)*R(E’, E), where e(E’) and R(E’, E) are the effective area and energy redistribution matrix, respectively. In addition to the energy spectrum, the two new relations for U(E) and Q(E) include the source polarization fraction and position angle versus energy, a(E’) and psi_0(E’), respectively, and the model-predicted spectra for these relations are obtained by a convolution with the “modulated” energy response function, m(E’)*e(E’)*R(E’, E), where m(E’) is the energy-dependent modulation fraction that quantifies a polarimeter’s angular response to 100% polarized radiation. We present results of simulations with response parameters appropriate for the proposed PRAXyS Small Explorer observatory to illustrate the procedures and methods, and we discuss some aspects of photoelectric polarimeters with relevance to understanding their calibration and operation.
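The forward-folding relations described above can be sketched directly with the response treated as a matrix over energy bins. The Q/U decomposition via cos/sin of twice the position angle is the standard Stokes convention, assumed here; binning and units are omitted, and this is not the paper's code:

```python
import numpy as np

def fold_stokes(F, a, psi0, eff, R, m):
    """Forward-fold a polarized source model through a photoelectric
    polarimeter response.

    F: source spectrum F(E'); a, psi0: polarization fraction and position
    angle versus E' (radians); eff: effective area e(E'); R: energy
    redistribution matrix R[E', E]; m: modulation fraction m(E').
    Returns the model-predicted I(E), Q(E), U(E): the intensity channel
    uses e*R, while Q and U use the "modulated" response m*e*R.
    """
    I = (eff * F) @ R
    Q = (m * eff * F * a * np.cos(2.0 * psi0)) @ R
    U = (m * eff * F * a * np.sin(2.0 * psi0)) @ R
    return I, Q, U
```

Setting a(E') = 0 recovers the familiar unpolarized forward fit: Q and U vanish and I is the usual convolution of the spectrum with the energy response.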

Read this paper on arXiv…

T. Strohmayer
Mon, 6 Mar 17
16/47

Comments: 27 pages, 8 figures, accepted for publication in The Astrophysical Journal

Higher Order Accurate Space-Time Schemes for Computational Astrophysics — Part I — Finite Volume Methods [IMA]

http://arxiv.org/abs/1703.01241


As computational astrophysics comes under pressure to become a precision science, there is an increasing need to move to high accuracy schemes; hence the need for a specialized review of higher order schemes for computational astrophysics.
The focus here is on weighted essentially non-oscillatory (WENO) schemes, discontinuous Galerkin (DG) schemes and PNPM schemes. WENO schemes are higher order extensions of traditional second order finite volume schemes which are already familiar to most computational astrophysicists. DG schemes, on the other hand, evolve all the moments of the solution, with the result that they are more accurate than WENO schemes. PNPM schemes occupy a compromise position between WENO and DG schemes. They evolve an Nth order spatial polynomial, while reconstructing higher order terms up to Mth order. As a result, the timestep can be larger.
Time-dependent astrophysical codes need to be accurate in space and time. This is realized with the help of SSP-RK (strong stability preserving Runge-Kutta) schemes and ADER (Arbitrary DERivative in space and time) schemes. The most popular approaches to SSP-RK and ADER schemes are also described.
The style of this review is to assume that readers have a basic understanding of hyperbolic systems and one-dimensional Riemann solvers. Such an understanding can be acquired from a sequence of prepackaged lectures available from this http URL. We then build on this understanding to give the reader a practical introduction to the schemes described here. The emphasis is on computer-implementable ideas, not necessarily on the underlying theory, because it was felt that this would be most interesting to most computational astrophysicists.
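As a concrete example of the SSP-RK schemes mentioned above, the widely used third-order SSP Runge-Kutta method of Shu and Osher advances the solution through convex combinations of forward-Euler substeps, which is what preserves the stability (and non-oscillatory property) of the underlying spatial scheme:

```python
def ssp_rk3_step(u, dt, L):
    """One step of the third-order SSP Runge-Kutta scheme (Shu-Osher).

    L(u) is the spatial operator, e.g. a WENO flux difference; u may be
    a scalar or an array. Each stage is a convex combination of
    forward-Euler substeps, so any stability bound that holds for
    forward Euler carries over to the full third-order step.
    """
    u1 = u + dt * L(u)                              # stage 1: Euler step
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))        # stage 2: 3/4 + 1/4 blend
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))  # stage 3: 1/3 + 2/3 blend
```

For the linear test problem u' = -u this reproduces the third-order Taylor expansion of exp(-dt), which is a quick way to verify an implementation.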

Read this paper on arXiv…

D. Balsara
Mon, 6 Mar 17
28/47

Comments: N/A

Near UV Imager with an MCP Based Photon Counting Detector [IMA]

http://arxiv.org/abs/1703.01116


We are developing a compact UV imager using lightweight components that can be flown on a small CubeSat or a balloon platform. The system has lens-based optics that can provide an aberration-free image over a wide field of view. The backend instrument is a photon counting detector with an off-the-shelf MCP, CMOS sensor and electronics. We are using a Z-stack MCP with a compact high voltage power supply and a phosphor screen anode, which is read out by a CMOS sensor and the associated electronics. The instrument can be used to observe solar system objects and detect bright transients from the upper atmosphere with the help of CubeSats or high altitude balloons. We have designed the imager to be capable of working in direct frame transfer mode as well as in photon-counting mode for single photon event detection. The identification and centroiding of each photon event are done using an FPGA-based data acquisition and real-time processing system.

Read this paper on arXiv…

S. Ambily, J. Mathew, M. Sarpotdar, et al.
Mon, 6 Mar 17
30/47

Comments: 8 pages, 4 figures, presented at Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray conference, July 18, 2016 at Edinburgh, UK

The Receiver System for the Ooty Wide Field Array [IMA]

http://arxiv.org/abs/1703.00625


The legacy Ooty Radio Telescope (ORT) is being reconfigured as a 264-element synthesis telescope, called the Ooty Wide Field Array (OWFA). Its antenna elements are the contiguous 1.92 m sections of the parabolic cylinder. It will operate in a 38-MHz frequency band centred at 326.5 MHz and will be equipped with a digital receiver including a 264-element spectral correlator with a spectral resolution of 48 kHz. OWFA is designed to retain the benefits of the legacy telescope (its equatorial mount, continuous 9-hour tracking ability and large collecting area) and to use modern digital techniques to enhance the instantaneous field of view by more than an order of magnitude. OWFA has unique advantages for contemporary investigations related to large scale structure, transient events and space weather watch. In this paper, we describe the RF subsystems, digitizers and fibre optic communication of OWFA and highlight some specific aspects of the system relevant for the observations planned during the initial operation.

Read this paper on arXiv…

C. Subrahmanya, P. Prasad, B. Girish, et al.
Fri, 3 Mar 17
2/62

Comments: 10 pages, 5 figures, 1 table, (Accepted for publication in J. Astrophysics and Astronomy)

The Ooty Wide Field Array [IMA]

http://arxiv.org/abs/1703.00621


We describe here an ongoing upgrade to the legacy Ooty Radio Telescope (ORT). The ORT is a parabolic cylinder, 530 m x 30 m in size, operating at a frequency of 326.5 MHz (corresponding to z ~ 3.35 for the HI 21cm line). The telescope has been constructed on a north-south hill slope whose gradient equals the latitude of the hill, making it effectively equatorially mounted. The feed consists of an array of 1056 dipoles. The key feature of this upgrade is the digitisation and cross-correlation of the signals of every set of 4 dipoles. This converts the ORT into a 264-element interferometer with a field of view of 2 degrees x 27 cos(delta) degrees. This upgraded instrument is called the Ooty Wide Field Array (OWFA). This paper briefly describes the salient features of the upgrade, as well as its main science drivers. There are three main science drivers, viz. (1) observations of the large scale distribution of HI in the post-reionisation era, (2) studies of the propagation of plasma irregularities through the inner heliosphere and (3) blind surveys for transient sources. More details on the upgrade, as well as on the expected science uses, can be found in other papers in this special issue.

Read this paper on arXiv…

C. Subrahmanya, P. Manoharan and J. Chengalur
Fri, 3 Mar 17
3/62

Comments: To appear in the special section of the JAA on the Ooty Wide Field Array

SKA Aperture Array Verification System: Electromagnetic modeling and beam pattern measurements using a micro UAV [IMA]

http://arxiv.org/abs/1703.00537


In this paper we present the electromagnetic modeling and beam pattern measurements of a 16-element ultra wideband sparse random test array for the low frequency instrument of the Square Kilometre Array telescope. We discuss the importance of a small array test platform for the development of technologies and techniques towards the final telescope, highlighting the most relevant aspects of its design. We also describe the electromagnetic simulations and modeling work, as well as the embedded-element and array pattern measurements using an Unmanned Aerial Vehicle system. The latter are helpful both for the validation of the models and the design, as well as for the future calibration of the telescope. At this stage of the design, these measurements have shown a general agreement between experimental results and numerical data and have revealed the localized effect of uncalibrated cable lengths in the inner side-lobes of the array pattern.

Read this paper on arXiv…

E. Acedo, P. Bolli, F. Paonessa, et al.
Fri, 3 Mar 17
8/62

Comments: 18 pages, 17 figures. Submitted to Experimental Astronomy

DSN Transient Observatory [IMA]

http://arxiv.org/abs/1703.00584


The DSN Transient Observatory (DTO) is a signal processing facility that can monitor up to four DSN downlink bands for astronomically interesting signals. The monitoring is done commensally with reception of deep space mission telemetry. The initial signal processing is done with two CASPER ROACH1 boards, each handling one or two baseband signals. Each ROACH1 has a 10 GbE interface with a GPU-equipped Debian Linux workstation for additional processing. The initial science programs include monitoring Mars for electrostatic discharges, radio spectral line observations, searches for fast radio bursts and pulsars, and SETI. The facility will be available to the scientific community through a peer review process.

Read this paper on arXiv…

T. Kuiper, R. Monroe, L. White, et al.
Fri, 3 Mar 17
11/62

Comments: N/A

Multi-Level Pre-Correlation RFI Flagging for Real-Time Implementation on UniBoard [IMA]

http://arxiv.org/abs/1703.00473


Because of the denser active use of the spectrum, and because of radio telescopes’ higher sensitivity, radio frequency interference (RFI) mitigation has become a sensitive topic for current and future radio telescope designs. Although quite sophisticated approaches have been proposed in recent years, the majority of operational RFI mitigation procedures are based on post-correlation flagging of corrupted data. Moreover, given the huge amount of data delivered by current and next generation radio telescopes, all these RFI detection procedures have to be at least automatic and, if possible, real-time.
In this paper, the implementation of a real-time pre-correlation RFI detection and flagging procedure on generic high-performance computing platforms based on Field Programmable Gate Arrays (FPGA) is described, simulated and tested. One of these boards, UniBoard, developed under a Joint Research Activity in the RadioNet FP7 European programme, is based on eight FPGAs interconnected by a high speed transceiver mesh. It provides up to ~4 TMACs with Altera Stratix IV FPGAs and a 160 Gbps data rate for the input data stream.
Considering the high in-out data rate in the pre-correlation stages, only real-time and go-through detectors (i.e. no iterative processing) can be implemented. In this paper, a real-time and adaptive detection scheme is described.
An ongoing case study has been set up with the Electronic Multi-Beam Radio Astronomy Concept (EMBRACE) radio telescope facility at Nan\c{c}ay Observatory. The objective is to evaluate the performance of this concept in terms of hardware complexity, detection efficiency and the additional RFI metadata rate cost. The UniBoard implementation scheme is described.
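The "go-through" (single-pass, non-iterative) constraint described above can be illustrated with a minimal robust threshold detector: estimate a per-channel baseline and spread with median and MAD, then flag samples exceeding a fixed number of sigma. This is a generic sketch of such a detector, not the UniBoard scheme:

```python
import numpy as np

def rfi_flags(power, n_sigma=5.0):
    """Flag RFI in a power spectrogram before correlation.

    power: (n_time, n_chan) array of detected power. A robust per-channel
    baseline (median) and spread (MAD, scaled to a Gaussian-equivalent
    sigma) are estimated, and samples exceeding
    baseline + n_sigma * sigma are flagged. Median/MAD resist being
    pulled up by the interference itself, unlike mean/std.
    """
    med = np.median(power, axis=0)
    mad = np.median(np.abs(power - med), axis=0)
    sigma = 1.4826 * mad                # MAD -> Gaussian-equivalent sigma
    return power > med + n_sigma * sigma
```

A hardware version would replace the batch medians with streaming or windowed estimates, since a real-time FPGA pipeline cannot buffer and revisit the full data stream.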

Read this paper on arXiv…

D. Cedric, W. Rodolphe and R. Philippe
Fri, 3 Mar 17
18/62

Comments: 16 pages, 13 figures

The Aesthetics of Astrophysics: How to Make Appealing Color-Composite Images that Convey the Science [IMA]

http://arxiv.org/abs/1703.00490


Astronomy has a rich tradition of using color photography and imaging, for visualization in research as well as for sharing scientific discoveries in formal and informal education settings (i.e., for “public outreach.”) In the modern era, astronomical research has benefitted tremendously from electronic cameras that allow data and images to be generated and analyzed in a purely digital form with a level of precision not previously possible. Advances in image-processing software have also enabled color-composite images to be made in ways much more complex than with darkroom techniques, not only at optical wavelengths but across the electromagnetic spectrum. And the internet has made it possible to rapidly disseminate these images to eager audiences.
Alongside these technological advances, there have been gains in understanding how to make images that are scientifically illustrative as well as aesthetically pleasing. Studies have also given insights into how the public interprets astronomical images, and how that can differ from the interpretations of professional astronomers. An understanding of these differences will help in the creation of images that are meaningful to both groups.
In this invited review we discuss the techniques behind making color-composite images as well as examine the factors one should consider when doing so, whether for data visualization or public consumption. We also provide a brief history of astronomical imaging with a focus on the origins of the “modern era” during which distribution of high-quality astronomical images to the public is a part of nearly every professional observatory’s public outreach. We review relevant research into the expectations and misconceptions that often affect the public’s interpretation of these images.

Read this paper on arXiv…

T. Rector, Z. Levay, L. Frattare, et al.
Fri, 3 Mar 17
24/62

Comments: Accepted, PASP, Invited Review for special focus issue concerning the topic of Data Visualization in Astronomy

Prowess – a software model for the Ooty Wide Field Array [IMA]

http://arxiv.org/abs/1703.00643


One of the scientific objectives of the Ooty Wide Field Array (OWFA) is to observe the redshifted HI emission from z ~ 3.35. Although predictions spell out optimistic outcomes in reasonable integration times, these studies were based purely on analytical assumptions, without accounting for limiting systematics. A complete software model for OWFA has therefore been developed with a view to understanding the instrument-induced systematics. This model has been implemented through a suite of programs, together called Prowess, which has been conceived with the dual role of an emulator and observatory data analysis software. The programming philosophy followed in building Prowess enables any user to define their own set of functions and add new functionality. This paper describes a co-ordinate system suitable for OWFA, in which the baselines are defined. The foregrounds are simulated from their angular power spectra. The visibilities are then computed from the foregrounds. These visibilities are then used for further processing, such as calibration and power spectrum estimation. The package allows for rich visualisation features in multiple output formats in an interactive fashion, giving the user an intuitive feel for the data. Prowess has been extensively used for numerical predictions of the foregrounds for the OWFA HI experiment.
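The step of computing visibilities from a simulated foreground sky can be sketched with the standard flat-sky measurement equation (this is the textbook relation, not Prowess code; the primary beam and w-term are omitted):

```python
import numpy as np

def visibilities(baselines_uv, src_lm, src_flux):
    """Compute visibilities from a discrete foreground sky model.

    baselines_uv: (n_bl, 2) baseline coordinates in wavelengths;
    src_lm: (n_src, 2) direction cosines of the sources;
    src_flux: (n_src,) flux densities. Each visibility is
    V(u, v) = sum_i S_i * exp(-2*pi*j*(u*l_i + v*m_i)),
    i.e. the Fourier transform of the sky sampled at the baselines.
    """
    phase = -2j * np.pi * (baselines_uv @ src_lm.T)   # (n_bl, n_src)
    return np.exp(phase) @ src_flux
```

An emulator like Prowess wraps a relation of this kind with the instrument specifics: the fixed OWFA baseline geometry, bandpass, and per-element gains to be solved for in calibration.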

Read this paper on arXiv…

V. Marthi
Fri, 3 Mar 17
51/62

Comments: To appear in the Special Section of the JAA on the Ooty Wide Field Array

Managing the Public to Manage Data: Citizen Science and Astronomy [IMA]

http://arxiv.org/abs/1703.00037


Citizen science projects recruit members of the public as volunteers to process and produce datasets. These datasets must win the trust of the scientific community. The task of securing credibility involves, in part, applying standard scientific procedures to clean these datasets. However, effective management of volunteer behavior also makes a significant contribution to enhancing data quality. Through a case study of Galaxy Zoo, a citizen science project set up to generate datasets based on volunteer classifications of galaxy morphologies, this paper explores how those involved in running the project manage volunteers. The paper focuses on how methods for crediting volunteer contributions motivate volunteers to provide higher quality contributions and to behave in a way that better corresponds to statistical assumptions made when combining volunteer contributions into datasets. These methods have made a significant contribution to the success of the project in securing trust in these datasets, which have been well used by other scientists. Implications for practice are then presented for citizen science projects, providing a list of considerations to guide choices regarding how to credit volunteer contributions to improve the quality and trustworthiness of citizen science-produced datasets.

Read this paper on arXiv…

P. Darch
Thu, 2 Mar 17
4/44

Comments: 16 pages, 0 figures, published in International Journal of Digital Curation

Image Subtraction Reduction of Open Clusters M35 & NGC 2158 In The K2 Campaign-0 Super-Stamp [IMA]

http://arxiv.org/abs/1703.00030


Observations were made of the open clusters M35 and NGC 2158 during the initial K2 campaign (C0). Reducing these data to high-precision photometric time-series is challenging due to the wide point spread function (PSF) and the blending of stellar light in such dense regions. We developed an image-subtraction-based K2 reduction pipeline that is applicable to both crowded and sparse stellar fields. We applied our pipeline to the data-rich C0 K2 super-stamp, containing the two open clusters, as well as to the neighboring postage stamps. In this paper, we present our image subtraction reduction pipeline and demonstrate that this technique achieves ultra-high photometric precision for sources in the C0 super-stamp. We extract the raw light curves of 3960 stars taken from the UCAC4 and EPIC catalogs and de-trend them for systematic effects. We compare our photometric results with the prior reductions published in the literature. For detrended, TFA-corrected sources in the 12–12.25 $\rm K_{p}$ magnitude range, we achieve a best 6.5-hour window running rms of 35 ppm, falling to 100 ppm for fainter stars in the 14–14.25 $\rm K_{p}$ magnitude range. For stars with $\rm K_{p} > 14$, our detrended and 6.5-hour binned light curves achieve the highest photometric precision. Moreover, all our TFA-corrected sources have higher precision on all time scales investigated. This work represents the first published image subtraction analysis of a K2 super-stamp. This method will be particularly useful for analyzing the Galactic bulge observations carried out during K2 campaign 9. The raw light curves and the final results of our detrending processes are publicly available at \url{this http URL}.
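The 6.5-hour precision metric quoted above can be sketched as the scatter of the light curve after binning into 6.5-hour windows. The window length (13 points at K2's ~29.4-minute long cadence) and the ppm conversion are assumptions for illustration; the paper's exact estimator may differ.

```python
import numpy as np

# Illustrative 6.5-hour running-rms metric: bin the median-normalized
# light curve into ~6.5 h windows (13 long-cadence points assumed) and
# take the scatter of the bin means, in parts per million.

def running_rms_ppm(flux, window=13):
    rel = flux / np.median(flux) - 1.0          # relative flux
    nbins = len(rel) // window
    binned = rel[:nbins * window].reshape(nbins, window).mean(axis=1)
    return binned.std() * 1e6                    # ppm

# White noise at 300 ppm per point should bin down by ~sqrt(13) to ~83 ppm
rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0.0, 300e-6, size=13 * 200)
print(running_rms_ppm(flux))
```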

Read this paper on arXiv…

M. Soares-Furtado, J. Hartman, G. Bakos, et al.
Thu, 2 Mar 17
10/44

Comments: Accepted for publication in PASP. 14 pages, 5 figures, 2 tables. Light curves available from this http URL

The Plastic Scintillator Detector at DAMPE [IMA]

http://arxiv.org/abs/1703.00098


The DArk Matter Particle Explorer (DAMPE) is a general-purpose satellite-borne high-energy $\gamma$-ray and cosmic-ray detector; among its scientific objectives are the search for the origin of cosmic rays and an understanding of Dark Matter particles. As one of the four detectors on DAMPE, the Plastic Scintillator Detector (PSD) plays an important role in particle charge measurement and photon/electron separation. The PSD has 82 modules, each consisting of a long organic plastic scintillator bar read out by a PMT at each end, arranged in two layers and covering an overall active area larger than 82 cm $\times$ 82 cm. It can identify the charge states of relativistic ions from H to Fe, and the detection efficiency for Z=1 particles can reach 0.9999. The PSD was successfully launched with DAMPE on Dec. 17, 2015. In this paper, the design, assembly, and qualification tests of the PSD, as well as some of the performance measured on the ground, are described in detail.
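The charge measurement the PSD performs rests on the fact that, to first order, the energy deposit of a relativistic ion in a scintillator scales as Z². A toy estimate of the charge from the signal relative to a proton reference looks like this; the quenching and saturation corrections used in the real PSD calibration are ignored, so this is a sketch, not the flight algorithm.

```python
import math

# Toy charge identification: energy deposit of a relativistic ion scales
# roughly as Z^2, so Z can be estimated from the signal relative to a
# proton (Z = 1) reference. Real PSD calibration corrects for quenching
# and PMT saturation, which this sketch ignores.

def charge_estimate(signal, proton_signal):
    return math.sqrt(signal / proton_signal)

# A signal 676 times the proton reference corresponds to Z = 26 (Fe)
print(round(charge_estimate(676.0, 1.0)))   # 26
```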

Read this paper on arXiv…

Y. Yu, Z. Sun, H. Su, et al.
Thu, 2 Mar 17
25/44

Comments: N/A

The Hubble Catalog of Variables [IMA]

http://arxiv.org/abs/1703.00258


The Hubble Catalog of Variables (HCV) is a 3-year, ESA-funded project that aims to develop a set of algorithms to identify variables among the sources included in the Hubble Source Catalog (HSC) and produce the HCV. We will process all HSC sources with more than a predefined number of measurements in a single filter/instrument combination and compute a range of lightcurve features to determine the variability status of each source. At the end of the project, the first release of the Hubble Catalog of Variables will be made available at the Mikulski Archive for Space Telescopes (MAST) and the ESA Science Archives. The variability detection pipeline will be implemented at the Space Telescope Science Institute (STScI) so that updated versions of the HCV may be created following future releases of the HSC.
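A minimal example of the kind of lightcurve feature such a pipeline computes is the reduced chi-squared of the magnitudes about their median, given per-point uncertainties; sources above a threshold are flagged as variable candidates. The statistic and the threshold below are illustrative assumptions, not the HCV algorithm itself.

```python
import numpy as np

# Sketch of a lightcurve-feature-based variability flag: reduced
# chi-squared of the magnitudes about their median. Statistic choice and
# the threshold of 3 are illustrative assumptions.

def reduced_chi2(mag, err):
    med = np.median(mag)
    return np.sum(((mag - med) / err) ** 2) / (len(mag) - 1)

def is_variable(mag, err, threshold=3.0):
    return reduced_chi2(mag, err) > threshold

mag = np.array([18.00, 18.01, 17.99, 18.02, 18.00])
err = np.full(5, 0.01)
print(is_variable(mag, err))   # scatter consistent with errors: False
```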

Read this paper on arXiv…

P. Gavras, A. Bonanos, I. Bellas-Velidis, et al.
Thu, 2 Mar 17
26/44

Comments: 4 pages, 5 figures, To appear in the conference proceedings of the IAU Symposium 325 AstroInformatics (2016 October 20-24, Sorrento, Italy)

Antarctic Surface Reflectivity Measurements from the ANITA-3 and HiCal-1 Experiments [IMA]

http://arxiv.org/abs/1703.00415


The primary science goal of the NASA-sponsored ANITA project is the measurement of ultra-high energy neutrinos and cosmic rays, observed via radio-frequency signals resulting from a neutrino or cosmic-ray interaction with terrestrial matter (e.g., atmospheric or ice molecules). Accurate inference of the energies of these cosmic rays requires understanding the transmission/reflection of radio wave signals across the ice-air boundary. Satellite-based measurements of Antarctic surface reflectivity, using a co-located transmitter and receiver, have been performed more-or-less continuously for the last few decades. Satellite-based reflectivity surveys, at frequencies ranging from 2–45 GHz and at near-normal incidence, yield generally consistent reflectivity maps across Antarctica. Using the Sun as an RF source, and the balloon-borne ANITA-3 radio-frequency antenna array as the RF receiver, we have also measured the surface reflectivity over the interval 200–1000 MHz, at elevation angles of 12–30 degrees, finding agreement with the Fresnel equations within systematic errors. To probe low incidence angles, inaccessible to the Antarctic Solar technique and not probed by previous satellite surveys, a novel experimental approach (“HiCal-1”) was devised. Unlike previous measurements, HiCal and ANITA constitute a bi-static transmitter-receiver pair separated by hundreds of kilometers. Data taken with HiCal between 200–600 MHz show a significant departure from the Fresnel equations, constant with frequency over that band, with the deficit increasing with obliquity of incidence, which we attribute to the combined effects of possible surface roughness, surface grain effects, radar clutter and/or shadowing of the reflection zone due to Earth curvature effects.
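The Fresnel baseline against which the HiCal/ANITA data are compared can be written down directly: the unpolarized power reflectance of the air-surface boundary as a function of elevation angle. The firn refractive index n = 1.35 used below is an assumed illustrative value, not a number from the paper.

```python
import math

# Unpolarized Fresnel power reflectivity of the air-firn boundary versus
# elevation angle (incidence angle = 90 deg - elevation). The refractive
# index n = 1.35 for surface firn is an assumed illustrative value.

def fresnel_reflectivity(elevation_deg, n=1.35):
    theta_i = math.radians(90.0 - elevation_deg)   # angle of incidence
    sin_t = math.sin(theta_i) / n                  # Snell's law
    cos_i, cos_t = math.cos(theta_i), math.sqrt(1.0 - sin_t ** 2)
    r_s = (cos_i - n * cos_t) / (cos_i + n * cos_t)
    r_p = (n * cos_i - cos_t) / (n * cos_i + cos_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)             # unpolarized average

# Reflectivity grows toward grazing incidence (low elevation angle)
print(fresnel_reflectivity(30.0) < fresnel_reflectivity(12.0))   # True
```

At normal incidence (elevation 90°) this reduces to the familiar ((n-1)/(n+1))², which is where the near-normal satellite surveys quoted above operate.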

Read this paper on arXiv…

P. Gorham, P. Allison, O. Banerjee, et al.
Thu, 2 Mar 17
28/44

Comments: N/A

Phylogenetic Tools in Astrophysics [IMA]

http://arxiv.org/abs/1703.00286


Multivariate clustering in astrophysics is a recent development justified by ever-larger surveys of the sky. The phylogenetic approach is probably the most unexpected technique to have appeared for the unsupervised classification of galaxies, stellar populations or globular clusters. On one hand, this is a somewhat natural way of classifying astrophysical entities, which are all evolving objects. On the other hand, several conceptual and practical difficulties arise, such as the hierarchical representation of astrophysical diversity, the continuous nature of the parameters, and the adequacy of the result to the usual practice of physical interpretation. Most of these have now been solved through studies of limited samples of stellar clusters and galaxies. Up to now, only Maximum Parsimony (cladistics) has been used, since it is the simplest and most general phylogenetic technique. Probabilistic and network approaches are obvious extensions that should be explored in the future.
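The core of Maximum Parsimony scoring can be shown with Fitch's small-parsimony algorithm: given a fixed tree and one discretized character per object, count the minimum number of state changes the tree implies. The four objects, their states, and the tree below are invented for the example; real analyses also search over tree topologies and handle continuous parameters, which this sketch does not.

```python
# Toy illustration of Maximum Parsimony scoring (Fitch's small-parsimony
# algorithm) on a fixed binary tree. Objects, states, and topology are
# invented for the example.

def fitch(tree, states):
    """Return (state set, parsimony cost) for one discretized character."""
    if isinstance(tree, str):                 # leaf: its observed state
        return {states[tree]}, 0
    left, right = tree
    s1, c1 = fitch(left, states)
    s2, c2 = fitch(right, states)
    common = s1 & s2
    if common:                                # intersection: no new change
        return common, c1 + c2
    return s1 | s2, c1 + c2 + 1               # union: one change implied

# Tree ((A,B),(C,D)) with one discretized character state per object
tree = (("A", "B"), ("C", "D"))
states = {"A": 0, "B": 0, "C": 1, "D": 1}
print(fitch(tree, states)[1])   # 1: a single change separates the clades
```

A parsimony search would repeat this scoring over many candidate topologies (and many characters) and keep the tree minimizing the total cost.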

Read this paper on arXiv…

D. Fraix-Burnet
Thu, 2 Mar 17
38/44

Comments: N/A