Frequency stability characterization of a broadband fiber Fabry-Perot interferometer [IMA]

An optical etalon illuminated by a white light source provides a broadband comb-like spectrum that can be employed as a calibration source for astronomical spectrographs in radial velocity (RV) surveys for extrasolar planets. For this application the frequency stability of the etalon is critical, as its transmission spectrum is susceptible to frequency fluctuations due to changes in cavity temperature, optical power, and input polarization. In this paper we present a laser frequency comb measurement technique to characterize the frequency stability of a custom-designed fiber Fabry-Perot interferometer (FFP). By simultaneously probing the stability of two etalon resonance modes, we assess both the absolute stability of the etalon and the long-term stability of the cavity dispersion. We measure mode positions with MHz precision, which corresponds to splitting the FFP resonances by a part in 500 and to an RV precision of ~1 m/s. We address limiting systematic effects, including the presence of parasitic etalons, that must be overcome to push the metrology of this system to an equivalent RV precision of 10 cm/s. Our results demonstrate a means to characterize environmentally driven perturbations of etalon resonance modes across broad spectral bandwidths, and they highlight both the benefits and the challenges of FFPs as spectrograph calibrators.
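
The quoted MHz-to-RV correspondence follows directly from the Doppler relation dv = c * dnu / nu. A minimal sketch (the 1000 nm wavelength below is an illustrative assumption, not a value from the paper):

```python
# Relate etalon frequency stability to radial-velocity (RV) precision via
# the Doppler relation dv = c * dnu / nu.
C = 299_792_458.0  # speed of light, m/s

def rv_precision(dnu_hz: float, wavelength_m: float) -> float:
    """Equivalent RV uncertainty (m/s) for a frequency uncertainty dnu_hz at a given wavelength."""
    nu = C / wavelength_m      # optical frequency, Hz
    return C * dnu_hz / nu     # equivalent Doppler shift, m/s

print(rv_precision(1e6, 1000e-9))  # 1 MHz at 1000 nm -> ~1.0 m/s
```

Note that dv = dnu * wavelength, so the same 1 MHz stability is a gentler RV requirement at shorter wavelengths.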

Read this paper on arXiv…

J. Jennings, S. Halverson, R. Terrien, et al.
Thu, 2 Mar 17

Comments: 13 pages, 7 figures, submitted to Opt. Express

First Data Release of the Hyper Suprime-Cam Subaru Strategic Program [IMA]

The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is a three-layered imaging survey aimed at addressing some of the most outstanding questions in astronomy today, including the nature of dark matter and dark energy. The survey has been awarded 300 nights of observing time at the Subaru Telescope and started in March 2014. This paper presents the first public data release of HSC-SSP. This release includes data taken in the first 1.7 years of observations (61.5 nights). The Wide, Deep, and UltraDeep layers cover about 108, 26, and 4 square degrees down to depths of i~26.4, ~26.5, and ~27.2 mag, respectively (5 sigma for point sources). All the layers are observed in five broad-bands (grizy), and the Deep and UltraDeep layers are observed in narrow-bands as well. We achieve an image quality of 0.6 arcsec in the i-band in the Wide layer. We show that we achieve 1-2 per cent PSF photometry (rms) both internally and externally (against Pan-STARRS1), and ~10 mas and ~40 mas internal and external astrometric accuracy, respectively. Both the calibrated images and catalogs are made available to the community through dedicated user interfaces and database servers. In addition to the pipeline products, we also provide value-added products such as photometric redshifts and a collection of public spectroscopic redshifts. Detailed descriptions of all the data can be found online at the data release website.

Read this paper on arXiv…

H. Aihara, R. Armstrong, S. Bickerton, et al.
Wed, 1 Mar 17

Comments: 33 pages, 19 figures. Draft paper to be submitted to PASJ special issue in a month. Data available at this https URL

Composite Reflective/Absorptive IR-Blocking Filters Embedded in Metamaterial Antireflection Coated Silicon [IMA]

Infrared (IR) blocking filters are crucial for controlling the radiative loading on cryogenic systems and for optimizing the sensitivity of bolometric detectors in the far-IR. We present a new IR filter approach based on a combination of patterned frequency selective structures on silicon and a thin (50 $\mu \textrm{m}$ thick) absorptive composite based on powdered reststrahlen absorbing materials. For a 300 K blackbody, this combination reflects $\sim$50\% of the incoming light and blocks \textgreater 99.8\% of the total power with negligible thermal gradients and excellent low frequency transmission. This allows for a reduction in the IR thermal loading to negligible levels in a single cold filter. These composite filters are fabricated on silicon substrates which provide excellent thermal transport laterally through the filter and ensure that the entire area of the absorptive filter stays near the bath temperature. A metamaterial antireflection coating cut into these substrates reduces in-band reflections to below 1\%, and the in-band absorption of the powder mix is below 1\% for signal bands below 750 GHz. This type of filter can be directly incorporated into silicon refractive optical elements.

Read this paper on arXiv…

C. Munson, S. Choi, K. Coughlin, et al.
Wed, 1 Mar 17

Comments: N/A

Estimating Extinction using Unsupervised Machine Learning [IMA]

Dust extinction is the most robust tracer of the gas distribution in the interstellar medium, but measuring extinction is limited by the systematic uncertainties involved in estimating the intrinsic colors to background stars. In this paper we present a new technique, PNICER, that estimates intrinsic colors and extinction for individual stars using unsupervised machine learning algorithms. This new method aims to be free from any priors with respect to the column density and intrinsic color distribution. It is applicable to any combination of parameters and works in arbitrary numbers of dimensions. Furthermore, it is not restricted to color space. Extinction towards single sources is determined by fitting Gaussian Mixture Models along the extinction vector to (extinction-free) control field observations. In this way it becomes possible to describe the extinction for observed sources with probability densities. PNICER effectively eliminates known biases found in similar methods and outperforms them in cases of deep observational data where the number of background galaxies is significant, or when a large number of parameters is used to break degeneracies in the intrinsic color distributions. This new method remains computationally competitive, making it possible to correctly de-redden millions of sources within a matter of seconds. With the ever-increasing number of large-scale high-sensitivity imaging surveys, PNICER offers a fast and reliable way to efficiently calculate extinction for arbitrary parameter combinations without prior information on source characteristics. PNICER also offers access to the well-established NICER technique in a simple unified interface and is capable of building extinction maps including the NICEST correction for cloud substructure. PNICER is offered to the community as an open-source software solution and is entirely written in Python.
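
The core idea — de-reddening an observed color back along the reddening vector until it matches the extinction-free control field — can be illustrated with a toy one-color version. The real PNICER fits Gaussian Mixture Models and returns a full probability density per star; all numbers below are synthetic and the reddening coefficient is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

k = 0.5                                  # assumed color excess per unit A_V (hypothetical)
control = rng.normal(0.3, 0.05, 5000)    # extinction-free control-field colors (synthetic)
observed_color = 1.05                    # one reddened science star's measured color

# Slide the star back along the reddening vector until it matches the
# control-field mean; PNICER replaces this point estimate with a GMM fit
# along the extinction vector, yielding a probability density for A_V.
a_v = (observed_color - control.mean()) / k
print(round(a_v, 2))
```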

Read this paper on arXiv…

S. Meingast, M. Lombardi and J. Alves
Wed, 1 Mar 17

Comments: Accepted for publication in A&A, source code available at this http URL

Target-based Optimization of Advanced Gravitational-Wave Detector Network Operations [IMA]

We introduce two novel time-dependent figures of merit for both online and offline optimizations of advanced gravitational-wave (GW) detector network operations with respect to (i) detecting continuous signals from known source locations and (ii) detecting GWs of neutron star binary coalescences from known local galaxies, which thereby have the highest potential for electromagnetic counterpart detection. For each of these scientific goals, we characterize an $N$-detector network, and all its $(N-1)$-detector subnetworks, to identify subnetworks and individual detectors (key contributors) that contribute the most to achieving the scientific goal. Our results show that aLIGO-Hanford is expected to be the key contributor in 2017 to the goal of detecting GWs from the Crab pulsar within the network of LIGO and Virgo detectors. For the same time period and for the same network, both LIGO detectors are key contributors to the goal of detecting GWs from the Vela pulsar, as well as to detecting signals from 10 high interest pulsars. Key contributors to detecting continuous GWs from the Galactic Center can only be identified for finite time intervals within each sidereal day with either the 3-detector network of the LIGO and Virgo detectors in 2017, or the 4-detector network of the LIGO, Virgo, and KAGRA detectors in 2019-2020. Characterization of the LIGO-Virgo detectors with respect to goal (ii) identified the two LIGO detectors as key contributors. Additionally, for all analyses, we identify time periods within a day when lock losses or scheduled service operations would result in the smallest loss of signal-to-noise ratio or transient detection probability for a detector network.

Read this paper on arXiv…

A. Szolgyen, G. Dalya, L. Gondan, et al.
Wed, 1 Mar 17

Comments: 16 pages, 5 figures

Calibration of the Large Area X-ray Proportional Counter (LAXPC) instrument on-board AstroSat [IMA]

We present the calibration and background model for the Large Area X-ray Proportional Counter (LAXPC) detectors on-board AstroSat. The LAXPC instrument has three nominally identical detectors to achieve a large collecting area. These detectors are independent of each other, and in the event analysis mode they record the arrival time and energy of each detected photon. The detectors have a time resolution of 10 $\mu$s and a dead-time of about 42 $\mu$s, which makes LAXPC ideal for timing studies. The energy resolution and peak-channel-to-energy mapping were obtained from calibration on the ground using radioactive sources coupled with GEANT4 simulations of the detectors. The response matrix was further refined from observations of the Crab X-ray source after launch. At around 20 keV the energy resolution of each detector is about 10–15\%, while the combined effective area of the three detectors is about 6000 cm$^2$.

Read this paper on arXiv…

H. Antia, J. Yadav, P. Agrawal, et al.
Wed, 1 Mar 17

Comments: submitted for publication

Parametric analysis of Cherenkov light LDF from EAS in the range 30-3000 TeV for primary gamma rays and nuclei [IMA]

A simple ‘knee-like’ approximation of the Lateral Distribution Function (LDF) of Cherenkov light emitted by extensive air showers (EAS) in the atmosphere is proposed for solving various data analysis tasks in HiSCORE and other wide-angle ground-based experiments designed to detect gamma rays and cosmic rays with energies above tens of TeV. Simulation-based parametric analysis of individual LDF curves revealed that over radial distances of 20-500 m the 5-parameter ‘knee-like’ approximation fits individual LDFs, as well as the mean LDF, with very good accuracy. In this paper we demonstrate the efficiency and flexibility of the ‘knee-like’ LDF approximation for various primary particles and shower parameters, and the advantages of its application to suppressing the proton background and selecting primary gamma rays.
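
The abstract does not give the functional form, so the following is only a plausible 5-parameter smoothly-broken ("knee-like") power law of the same general kind, with made-up parameter values; the authors' actual parameterization may differ:

```python
import numpy as np

def knee_ldf(r, amplitude, r_knee, slope_in, slope_out, sharpness):
    """Smoothly-broken power law: logarithmic slope -slope_in inside the knee
    at r_knee, rolling over to -slope_out outside; sharpness sets the width
    of the transition.  Illustrative only, not the paper's exact formula."""
    x = np.asarray(r, dtype=float) / r_knee
    return amplitude * x**(-slope_in) * (1.0 + x**sharpness) ** ((slope_in - slope_out) / sharpness)

r = np.linspace(20.0, 500.0, 5)                  # core distances, m
print(knee_ldf(r, 1e5, 120.0, 0.7, 2.5, 4.0))    # photon density, arbitrary units
```

Fitting these five parameters per shower (e.g. with least squares) is what makes the shape usable for per-event gamma/hadron discrimination.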

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov, E. Korosteleva, et al.
Tue, 28 Feb 17

Comments: 7 pages, 1 table, 2 figures; Bulletin of the Russian Academy of Sciences: Physics, 81, 4 (2017), in press

Astronomical technology – the past and the future [IMA]

The past fifty years have been an epoch of impressive progress in the field of astronomical technology. Practically all the technical tools that we use today have been developed during that time span. While the first half of this period was dominated by advances in detector technologies, during the past two decades innovative telescope concepts have been developed for practically all wavelength ranges where astronomical observations are possible. Further important advances can be expected in the next few decades. Based on the experience of the past, some of the main sources of technological progress can be identified.

Read this paper on arXiv…

I. Appenzeller
Tue, 28 Feb 17

Comments: 9 pages, 9 figures, Review article associated with the Karl Schwarzschild Award Lecture 2015, Published in Astron. Nachrichten; doi: 10.1002/asna.201612360

Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application [IMA]

In this paper we propose a ‘knee-like’ approximation of the lateral distribution of Cherenkov light from extensive air showers in the energy range 30-3000 TeV and study the possibility of its practical application in high energy ground-based gamma-ray astronomy experiments (in particular, in TAIGA-HiSCORE). The approximation is very accurate for individual showers and can easily be simplified for practical application in the HiSCORE wide-angle timing array under the condition of a limited number of triggered stations.

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov, E. Korosteleva, et al.
Tue, 28 Feb 17

Comments: 4 pages, 5 figures, proceedings of ISVHECRI 2016 (19th International Symposium on Very High Energy Cosmic Ray Interactions)

VaST: a variability search toolkit [IMA]

Variability Search Toolkit (VaST) is a software package designed to find variable objects in a series of sky images. It relies on source-list matching (as opposed to image subtraction), using SExtractor for source detection and aperture or PSF-fitting photometry (using PSFEx). Variability indices that characterize the scatter and smoothness of a lightcurve are computed for all objects. Candidate variables are identified as objects having a variability index value significantly higher than that of other objects of similar brightness. The distinguishing features of VaST are its ability to conduct accurate aperture photometry of images obtained with non-linear detectors and to handle complex image distortions. The software has been successfully applied to images obtained with telescopes ranging from 0.08 to 2.5 m in diameter, equipped with a variety of detectors including CCDs, CMOS sensors, MICs and photographic plates. About 1800 variable stars have been discovered with VaST. It is used as the transient detection engine in the NMW nova patrol. The code is written in C and can work on many UNIX-like systems. VaST is free software available at this http URL
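
A minimal example of the kind of scatter-based variability index described above (not VaST's actual code): the reduced chi-square of a lightcurve about its weighted mean, which hovers near 1 for constant stars and grows large for variables. All data below are synthetic:

```python
import numpy as np

def reduced_chi2(mag: np.ndarray, err: np.ndarray) -> float:
    """Reduced chi-square of a lightcurve about its weighted mean magnitude."""
    w = 1.0 / err**2
    mean = np.sum(w * mag) / np.sum(w)
    return float(np.sum(w * (mag - mean) ** 2) / (len(mag) - 1))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
err = np.full_like(t, 0.02)                                   # photometric errors, mag
constant = 12.0 + rng.normal(0.0, 0.02, t.size)               # non-variable star
variable = 12.0 + 0.3 * np.sin(2 * np.pi * t / 3.0) \
                + rng.normal(0.0, 0.02, t.size)               # pulsating star
print(reduced_chi2(constant, err), reduced_chi2(variable, err))
```

Selecting objects whose index exceeds that of similar-brightness neighbors (rather than a fixed cut) is what makes such indices robust to magnitude-dependent noise.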

Read this paper on arXiv…

K. Sokolovsky and A. Lebedev
Tue, 28 Feb 17

Comments: 20 pages, 6 figures, 1 table; posted for public review before submitting to A&C – comments welcome until March 24 2017

Background rejection method for tens of TeV gamma-ray astronomy applicable to wide angle timing arrays [IMA]

A ‘knee-like’ approximation of Cherenkov light Lateral Distribution Functions, which we developed earlier, is now used for the practical task of background rejection in high energy (tens and hundreds of TeV) gamma-ray astronomy. In this work we apply this technique to the HiSCORE wide-angle timing array, consisting of Cherenkov light detectors with a spacing of 100 m and covering 0.2 km$^2$ at present and up to 5 km$^2$ in the future; however, it can be applied to other similar arrays. We also show that a multivariable approach (using 3 parameters of the knee-like approximation) allows us to reach a high level of background rejection, although the rejection power depends strongly on the number of hit detectors.

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov and L. Sveshnikova
Tue, 28 Feb 17

Comments: 5 pages, 3 figures; proceedings of the 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016)

Primary gamma ray selection in a hybrid timing/imaging Cherenkov array [IMA]

This work is a methodical study of hybrid reconstruction techniques for combined imaging/timing Cherenkov observations. This type of hybrid array is to be realized at the gamma-observatory TAIGA, intended for very high energy gamma-ray astronomy (>30 TeV), which aims at combining the cost-effective timing-array technique with imaging telescopes. Hybrid operation of these two techniques offers a relatively cheap way to develop a large-area array. The joint approach to gamma-event selection was investigated on both types of simulated data: the image parameters from the telescopes, and the shower parameters reconstructed from the timing array. The optimal set of imaging and shower parameters to be combined is identified. The cosmic-ray background suppression factor is calculated as a function of distance and energy. The optimal selection technique yields cosmic-ray background suppression of about two orders of magnitude at distances up to 450 m for energies greater than 50 TeV.

Read this paper on arXiv…

E. Postnikov, A. Grinyuk, L. Kuzmichev, et al.
Tue, 28 Feb 17

Comments: 4 pages, 5 figures; proceedings of the 19th International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2016)

Hybrid method for identifying mass groups of primary cosmic rays in the joint operation of IACTs and wide angle Cherenkov timing arrays [IMA]

This work is a methodical study of another application of the hybrid method originally aimed at gamma/hadron separation in the TAIGA experiment. In the present paper the technique is used to distinguish between different mass groups of cosmic rays in the energy range 200 TeV – 500 TeV. The study is based on simulation data for the TAIGA prototype and includes analysis of the geometrical form of images produced by different nuclei in the IACT simulation, as well as shower-core parameters reconstructed using the timing-array simulation. We show that the hybrid method is effective enough to distinguish precisely between mass groups of cosmic rays.

Read this paper on arXiv…

E. Postnikov, A. Grinyuk, L. Kuzmichev, et al.
Tue, 28 Feb 17

Comments: 6 pages, 3 figures; proceedings of the 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016)

HARPO: 1.7 – 74 MeV gamma-ray beam validation of a high angular resolution, high linear polarisation dilution, gas time projection chamber telescope and polarimeter [IMA]

This SciNeGHE conference presentation reviews the past achievements, present activities, and future prospects of the HARPO project: the development of a time projection chamber as a high-performance gamma-ray telescope and linear polarimeter in the e+e- pair-creation regime.

Read this paper on arXiv…

D. Bernard
Tue, 28 Feb 17

Comments: Presented at SciNeGHE 2016 “11th Workshop on Science with the New generation of High Energy Gamma-ray Experiments : High-energy gamma-ray experiments at the dawn of gravitational wave astronomy” 18-21 October 2016, Pisa, Italy. Proceedings to be submitted to Il Nuovo Cimento

Polarization in Monte Carlo radiative transfer and dust scattering polarization signatures of spiral galaxies [IMA]

Polarization is an important tool to further the understanding of interstellar dust and the sources behind it. In this paper we describe our implementation of polarization that is due to scattering of light by spherical grains and electrons in the dust Monte Carlo radiative transfer code SKIRT. In contrast to the implementations of other Monte Carlo radiative transfer codes, ours uses co-moving reference frames that rely solely on the scattering processes. It fully supports the peel-off mechanism that is crucial for the efficient calculation of images in 3D Monte Carlo codes. We develop reproducible test cases that push the limits of our code. The results of our program are validated by comparison with analytically calculated solutions. Additionally, we compare results of our code to previously published results. We apply our method to models of dusty spiral galaxies at near-infrared and optical wavelengths. We calculate polarization degree maps and show them to contain signatures that trace characteristics of the dust arms independent of the inclination or rotation of the galaxy.

Read this paper on arXiv…

C. Peest, P. Camps, M. Stalevski, et al.
Mon, 27 Feb 17

Comments: 15 pages, 10 figures, accepted by Astronomy and Astrophysics

Effects of Pre-ionisation in Radiative Shocks I: Self-Consistent Models [IMA]

In this paper we treat the pre-ionisation problem in shocks over the velocity range $10 < v_{\rm s} < 1500$\,km/s in a self-consistent manner. We identify four distinct classes of solution controlled by the value of the shock precursor parameter, $\Psi = {\cal Q}/v_s$, where ${\cal Q}$ is the ionization parameter of the UV photons escaping upstream. This parameter determines both the temperature and the degree of ionisation of the gas entering the shock. In increasing velocity the shock solution regimes are cold neutral precursors ($v_s \lesssim 40$\,km/s), warm neutral precursors ($40 \lesssim v_s \lesssim 75$\,km/s), warm partly-ionized precursors ($75 \lesssim v_s \lesssim 120$\,km/s), and fast shocks in which the pre-shock gas is in photoionisation equilibrium, and is fully ionized. The main effect of a magnetic field is to push these velocity ranges to higher values, and to limit the post-shock compression. In order to facilitate comparison with observations of shocks, we provide a number of convenient scaling relationships for parameters such as post-shock temperature, compression factors, cooling lengths, and H$\beta$ and X-ray luminosity.
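
A representative member of the scaling family referred to above is the standard strong-shock (Rankine-Hugoniot) post-shock temperature relation; the coefficient below assumes a mean molecular weight mu ~ 0.6 for ionized gas and is not necessarily the paper's fitted value:

```latex
% Strong-shock post-shock temperature (mu ~ 0.6 assumed for ionized gas):
T_{\rm s} \;=\; \frac{3}{16}\,\frac{\mu\, m_{\rm H}}{k}\, v_{\rm s}^{2}
\;\approx\; 1.4 \times 10^{5}\ {\rm K}\,
\left(\frac{v_{\rm s}}{100\ {\rm km\,s^{-1}}}\right)^{2}
```

The quadratic dependence on shock velocity is why the solution regimes listed in the abstract divide so sharply over a modest velocity range.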

Read this paper on arXiv…

R. Sutherland and M. Dopita
Mon, 27 Feb 17

Comments: 30 pages, 19 figures, extended tables included. Accepted ApJ Feb 2017

Imaging the Schwarzschild-radius-scale Structure of M87 with the Event Horizon Telescope using Sparse Modeling [IMA]

We propose a new imaging technique for radio and optical/infrared interferometry. The proposed technique reconstructs the image from the visibility amplitude and closure phase, which are standard data products of short-millimeter very long baseline interferometers such as the Event Horizon Telescope (EHT) and optical/infrared interferometers, by utilizing two regularization functions: the $\ell_1$-norm and total variation (TV) of the brightness distribution. In the proposed method, optimal regularization parameters, which represent the sparseness and effective spatial resolution of the image, are derived from data themselves using cross validation (CV). As an application of this technique, we present simulated observations of M87 with the EHT based on four physically motivated models. We confirm that $\ell_1$+TV regularization can achieve an optimal resolution of $\sim 20-30$% of the diffraction limit $\lambda/D_{\rm max}$, which is the nominal spatial resolution of a radio interferometer. With the proposed technique, the EHT can robustly and reasonably achieve super-resolution sufficient to clearly resolve the black hole shadow. These results make it promising for the EHT to provide an unprecedented view of the event-horizon-scale structure in the vicinity of the super-massive black hole in M87 and also the Galactic center Sgr A*.
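
Schematically, the reconstruction described above amounts to solving a regularized inverse problem of the following form, where A maps the image I to model visibilities and the regularization weights (selected via cross validation in the paper) are written here as Lambda_l and Lambda_t. Note this is a simplification: the paper actually fits visibility amplitudes and closure phases rather than full complex visibilities:

```latex
% Sparse-modeling objective: data fidelity plus l1 (sparsity) and
% total-variation regularizers, with non-negativity on the image.
\min_{I \,\ge\, 0}\;
\frac{1}{2}\,\lVert V - A I \rVert_2^{2}
\;+\; \Lambda_{\ell}\,\lVert I \rVert_1
\;+\; \Lambda_{t}\,\lVert I \rVert_{\rm TV}
```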

Read this paper on arXiv…

K. Akiyama, K. Kuramochi, S. Ikeda, et al.
Mon, 27 Feb 17

Comments: 16 pages, 6 figures, accepted for publication in ApJ

Optimization Study for the Experimental Configuration of CMB-S4 [IMA]

The CMB Stage 4 (CMB-S4) experiment is a next-generation, ground-based experiment that will measure the cosmic microwave background (CMB) polarization to unprecedented accuracy, probing the signature of inflation, the nature of cosmic neutrinos, relativistic thermal relics in the early universe, and the evolution of the universe. To advance the progress towards designing the instrument for CMB-S4, we have established a framework to optimize the instrumental configuration to maximize its scientific output. In this paper, we report our first results from this framework, using simplified instrumental and cost models. We have primarily studied two classes of instrumental configurations: arrays of large aperture telescopes with diameters ranging from 2-10 m, and hybrid arrays that combine small-aperture telescopes (0.5 m diameter) with large-aperture telescopes. We explore performance as a function of the telescope aperture size, the distribution of the detectors into different microwave frequencies, the survey strategy and survey area, the low-frequency noise performance, and the balance between small and large aperture telescopes for the hybrid configurations. We also examine the impact from the uncertainties of the instrumental model. There are several areas that deserve further improvement. In our forecasting framework, we adopt a simple two-component foregrounds model with spatially varying power-law spectral indices. We estimate delensing performance statistically and ignore possible non-idealities. Instrumental systematics, which are not accounted for in our study, may influence the design. Further study of the instrumental and cost models will be one of the main areas of study by the whole CMB-S4 community. We hope that our framework will be useful for estimating the influence of these improvements in the future, and we will incorporate them in order to improve the optimization further.

Read this paper on arXiv…

D. Barron, Y. Chinone, A. Kusaka, et al.
Mon, 27 Feb 17

Comments: 42 pages, 23 figures

Performance of a continuously rotating half-wave plate on the POLARBEAR telescope [IMA]

A continuously rotating half-wave plate (CRHWP) is a promising tool to improve the sensitivity to large angular scales in cosmic microwave background (CMB) polarization measurements. With a CRHWP, single detectors can measure all three of the Stokes parameters, $I$, $Q$ and $U$, thereby avoiding the set of systematic errors that can be introduced by mismatches in the properties of orthogonal detector pairs. We focus on the implementation of CRHWPs in large aperture telescopes (i.e. the primary mirror is larger than the current maximum half-wave plate diameter of $\sim$0.5 m), where the CRHWP can be placed between the primary mirror and focal plane. In this configuration, one needs to address the intensity to polarization ($I{\rightarrow}P$) leakage of the optics, which becomes a source of 1/f noise and also causes differential gain systematics that arise from CMB temperature fluctuations. In this paper, we present the performance of a CRHWP installed in the POLARBEAR experiment, which employs a Gregorian telescope with a 2.5 m primary illumination pattern. The CRHWP is placed near the prime focus between the primary and secondary mirrors. We find that the $I{\rightarrow}P$ leakage is larger than the expectation from the physical properties of our primary mirror, resulting in a 1/f knee of 100 mHz. The excess leakage could be due to imperfections in the detector system, i.e. detector non-linearity in the responsivity and time-constant. We demonstrate, however, that by subtracting the leakage correlated with the intensity signal, the 1/f noise knee frequency is reduced to 32 mHz ($\ell \sim$39 for our scan strategy), which is sufficient to probe the primordial B-mode signal. We also discuss methods for further noise subtraction in future projects where the precise temperature control of instrumental components and the leakage reduction will play a key role.
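
The single-detector Stokes recovery described above works by lock-in demodulation at four times the HWP rotation frequency. A minimal sketch for an ideal HWP (all numbers synthetic; the leakage, non-linearity, and noise effects discussed in the paper are ignored):

```python
import numpy as np

# An ideal HWP spinning at frequency f modulates linear polarization at 4f,
# so a single detector timestream encodes I, Q and U simultaneously.
f_hwp = 2.0                       # HWP rotation frequency, Hz
f_samp = 200.0                    # detector sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / f_samp)
chi = 2 * np.pi * f_hwp * t       # HWP rotation angle

I, Q, U = 1.0, 0.03, -0.01        # input Stokes parameters (arbitrary units)
d = I + Q * np.cos(4 * chi) + U * np.sin(4 * chi)   # ideal detector signal

# Lock-in demodulation: project onto the 4f quadratures and average.
Q_hat = 2.0 * np.mean(d * np.cos(4 * chi))
U_hat = 2.0 * np.mean(d * np.sin(4 * chi))
print(Q_hat, U_hat)               # recovers ~0.03 and ~-0.01
```

Any instrumental signal that leaks intensity into the 4f band (the I->P leakage of the abstract) contaminates Q_hat and U_hat in exactly this projection, which is why it sets the 1/f knee.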

Read this paper on arXiv…

S. Takakura, M. Aguilar, Y. Akiba, et al.
Fri, 24 Feb 17

Comments: 27 pages, 5 figures, 3 tables, to be submitted to JCAP

San Pedro Meeting on Wide Field Variability Surveys: Some Concluding Comments [IMA]

This is a written version of the closing talk at the 22nd Los Alamos Stellar Pulsation Conference on wide field variability surveys. It comments on some of the issues which arise from the meeting. These include the need for attention to photometric standardization (especially in the infrared) and the somewhat controversial problem of statistical bias in the use of parallaxes (and other methods of distance determination). Some major advances in the use of pulsating variables to study Galactic structure are mentioned. The paper includes a clarification of apparently conflicting results from classical Cepheids and RR Lyrae stars in the inner Galaxy and bulge. The importance of understanding non-periodic phenomena in variable stars, particularly AGB variables and RCB stars, is stressed, especially for its relevance to mass loss, in which pulsation may play only a minor role.

Read this paper on arXiv…

M. Feast
Fri, 24 Feb 17

Comments: Conference on wide field variability surveys: a 21st-century perspective; 8 pages, in press

The EBEX Balloon-Borne Experiment – Gondola, Attitude Control, and Control Software [IMA]

The E and B Experiment (EBEX) was a long-duration balloon-borne instrument designed to measure the polarization of the cosmic microwave background (CMB) radiation. EBEX was the first balloon-borne instrument to implement a kilo-pixel array of transition edge sensor (TES) bolometric detectors and the first CMB experiment to use the digital version of the frequency domain multiplexing system for readout of the TES array. The scan strategy relied on 40 s peak-to-peak constant velocity azimuthal scans. We discuss the unique demands on the design and operation of the payload that resulted from these new technologies and the scan strategy. We describe the solutions implemented including the development of a power system designed to provide a total of at least 2.3 kW, a cooling system to dissipate 590 W consumed by the detectors’ readout system, software to manage and handle the data of the kilo-pixel array, and specialized attitude reconstruction software. We present flight performance data showing faultless management of the TES array, adequate powering and cooling of the readout electronics, and constraint of attitude reconstruction errors such that the spurious B-modes they induced were less than 10% of CMB B-mode power spectrum with $r=0.05$.

Read this paper on arXiv…

EBEX Collaboration, A. Aboobaker, P. Ade, et al.
Fri, 24 Feb 17

Comments: 37 pages, 16 figures, submitted to ApJ Supp

PURIFYing real radio interferometric observations [IMA]

Next-generation radio interferometers, such as the Square Kilometre Array (SKA), will revolutionise our understanding of the universe through their unprecedented sensitivity and resolution. However, standard methods in radio interferometry produce reconstructed interferometric images that are limited in quality, and these methods do not scale to big data. In this work we apply and evaluate alternative interferometric reconstruction methods that make use of state-of-the-art sparse image reconstruction algorithms motivated by compressive sensing, which have been implemented in the PURIFY software package. In particular, we implement and apply the proximal alternating direction method of multipliers (P-ADMM) algorithm presented in a recent article. We apply PURIFY to real interferometric observations. For all observations PURIFY outperforms the standard CLEAN, and in some cases PURIFY improves the dynamic range by over an order of magnitude. The latest version of PURIFY, which includes the developments presented in this work, is made publicly available.

Read this paper on arXiv…

L. Pratley, J. McEwen, M. d'Avezac, et al.
Thu, 23 Feb 17

Comments: 1 page, Proceedings of International BASP Frontiers Workshop 2017

TFAW: wavelet-based signal reconstruction to reduce photometric noise in time-domain surveys [IMA]

There have been many efforts to correct systematic effects in astronomical light curves to improve the detection and characterization of planetary transits and astrophysical variability in general. Algorithms like the Trend Filtering Algorithm (TFA) use simultaneously-observed stars to measure and remove systematic effects, and binning is used to reduce high-frequency random noise. We present TFAW, a modified version of TFA which reduces noise in variable-star light curves without modifying their intrinsic characteristics. We modified TFA’s iterative signal reconstruction by adding a Stationary Wavelet Transform filter which characterizes the noise- and trend-free signal and the underlying noise contribution at each iteration. The algorithm performs an adaptive noise estimation through the wavelet transform which reduces correlated and uncorrelated noise while preserving signals typical of astrophysical changes. We carried out tests over simulated sinusoidal and transit-like signals to assess the effectiveness of the method, and applied TFAW to real light curves from the Evryscope and TFRM. We also studied TFAW’s application to simulated multiperiodic signals. The TFAW improvement in RMS of simulated and real light curves ranges from 0.025 to 0.05 magnitudes compared to TFA. The signal-detection frequency power spectra remain almost unchanged for high SNR light curves, confirming that TFAW does not introduce new correlated noise sources. The signal detection efficiency of the power-spectrum peaks improves by a factor ~1.5 for low SNR light curves, allowing the recovery of smaller transiting planets than previous algorithms permit. TFAW is also able to improve the characterization of multiperiodic signals. We present two newly-discovered variable stars from Evryscope and TFRM. TFAW is a generic algorithm which is applicable to any kind of ground-based or space-based time-domain survey.
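
The wavelet filtering step can be illustrated with a deliberately simplified one-level undecimated Haar transform with soft thresholding; TFAW itself uses a full Stationary Wavelet Transform with adaptive, per-iteration noise estimation, and all data below are synthetic:

```python
import numpy as np

def haar_swt_denoise(x: np.ndarray, threshold: float) -> np.ndarray:
    """One-level undecimated (shift-invariant) Haar wavelet denoising via
    soft thresholding of the detail coefficients.  A toy stand-in for the
    multi-level SWT filter used inside TFAW."""
    s = np.sqrt(2.0)
    xr = np.roll(x, -1)
    approx = (x + xr) / s             # low-pass (smooth) coefficients
    detail = (x - xr) / s             # high-pass (noise-bearing) coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # Average the two redundant inverse transforms (shift-invariant reconstruction).
    rec_a = (approx + detail) / s
    rec_b = np.roll(approx - detail, 1) / s
    return (rec_a + rec_b) / 2.0

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 3 * t)                   # underlying astrophysical signal
noisy = clean + rng.normal(0.0, 0.2, t.size)        # with white photometric noise
denoised = haar_swt_denoise(noisy, threshold=0.3)
print(np.std(noisy - clean), np.std(denoised - clean))
```

With threshold 0 the transform reconstructs the input exactly; thresholding suppresses only the high-frequency coefficients, which is how a slowly varying signal survives while noise is reduced.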

Read this paper on arXiv…

D. Ser, O. Fors, J. Nunez, et. al.
Thu, 23 Feb 17

Comments: Submitted to Astronomy & Astrophysics. 10 pages, 12 figures

New Results from the Solar Maximum Mission Bent Crystal Spectrometer [IMA]

The Bent Crystal Spectrometer (BCS) onboard the NASA Solar Maximum Mission was part of the X-ray Polychromator, which observed numerous flares and bright active regions from February to November 1980, when operation was suspended as a result of the failure of the spacecraft fine pointing system. Observations resumed following the Space Shuttle SMM Repair Mission in April 1984 and continued until November 1989. BCS spectra have been widely used in the past to obtain temperatures, emission measures, and turbulent and bulk flows during flares, as well as element abundances. Instrumental details including calibration factors not previously published are given here, and the in-orbit performance of the BCS is evaluated. Some significant changes during the mission are described, and recommendations for future instrumentation are made. Using improved estimates for the instrument parameters and operational limits, it is now possible to obtain de-convolved, calibrated spectra that show finer detail than before, providing the means for improved interpretation of the physics of the emitting plasmas. The results indicate how historical, archived data can be re-used to obtain enhanced and new, scientifically valuable results.

Read this paper on arXiv…

C. Rapley, J. Sylwester and K. Phillips
Thu, 23 Feb 17

Comments: Figures in B&W but will appear in color when published. Accepted for publication

Testing of General Relativity with Geodetic VLBI [IMA]

The geodetic VLBI technique is capable of measuring the Sun’s gravitational deflection of light from distant radio sources over the whole sky. This light deflection is equivalent to the conventional gravitational delay used for the reduction of geodetic VLBI data. While numerous tests based on a global set of VLBI data have shown that the parameter ‘gamma’ of the post-Newtonian approximation is equal to unity with a precision of about 0.02 percent, more detailed analysis reveals some systematic deviations depending on the angular elongation from the Sun. In this paper a limited set of VLBI observations near the Sun was adjusted to obtain an estimate of the parameter ‘gamma’ free of the elongation-angle impact. The parameter ‘gamma’ is still found to be close to unity with a precision of 0.06 percent, although two subsets of VLBI data measured at short and long baselines produce some statistical inconsistency.

Read this paper on arXiv…

O. Titov
Thu, 23 Feb 17

Comments: Proceedings of EVN-2016 Meeting

The Brazilian Science Data Center (BSDC) [IMA]

Astrophysics and Space Science are becoming increasingly characterised by what is now known as “big data”, with the bottlenecks for progress partly shifting from data acquisition to “data mining”. The truth is that the amount and rate of data accumulation in many fields already surpasses the local capabilities for its processing and exploitation, and the efficient conversion of scientific data into knowledge is everywhere a challenge. The result is that, to a large extent, isolated data archives risk being progressively reduced to “data graveyards”, where the information stored is not reused for scientific work. Responsible and efficient use of these large datasets means democratising access and extracting the most science possible from them, which in turn means improving data accessibility and integration. Improving data processing capabilities is another important issue specific to researchers and computer scientists of each field. The project presented here aims to exploit the enormous potential opened up by information technology in our age to advance a model for a science data center in astronomy which expands data accessibility and integration to the largest possible extent and with the greatest efficiency for scientific and educational use. Greater access to data means more people producing and benefiting from information, whereas larger integration of related data from different origins means a greater research potential and increased scientific impact. The BSDC project is concerned, primarily, with providing tools and solutions for the Brazilian astronomical community. It nevertheless capitalizes on extensive international experience, and is developed in cooperation with the ASI Science Data Center (ASDC) of the Italian Space Agency, granting it an essential ingredient of internationalisation. The BSDC is Virtual Observatory-compliant.

Read this paper on arXiv…

U. Almeida, P. Giommi, C. Brandt, et. al.
Thu, 23 Feb 17

Comments: 7 pages, 1 figure, Proceedings of IWARA 2016

Gaia eclipsing binary and multiple systems. Supervised classification and self-organizing maps [IMA]

Large surveys producing tera- and petabyte-scale databases require machine-learning and knowledge discovery methods to deal with the overwhelming quantity of data and the difficulties of extracting concise, meaningful information with reliable assessment of its uncertainty. This study investigates the potential of a few machine-learning methods for the automated analysis of eclipsing binaries in the data of such surveys. We aim to aid the extraction of samples of eclipsing binaries from such databases and to provide basic information about the objects. We estimate class labels according to two classification systems, one based on the light curve morphology (EA/EB/EW classes) and the other based on the physical characteristics of the binary system (system morphology classes; detached through overcontact systems). Furthermore, we explore low-dimensional surfaces along which the light curves of eclipsing binaries are concentrated, for use in the characterization of the binary systems and in the exploration of biases of the full unknown Gaia data with respect to the training sets. We explore the performance of principal component analysis (PCA), linear discriminant analysis (LDA), random forest classification and self-organizing maps (SOM). We pre-process the photometric time series by combining a double Gaussian profile fit and a smoothing spline, in order to de-noise and interpolate the observed light curves. We achieve further denoising and select the most important variability elements from the light curves using PCA. We perform supervised classification using random forest and LDA based on the PC decomposition, while SOM gives a continuous 2-dimensional manifold of the light curves arranged by a few important features. We estimate the uncertainty of the supervised methods due to the specific finite training set using ensembles of models constructed on randomized training sets.

Read this paper on arXiv…

M. Suveges, F. Barblan, I. Lecoeur-Taibi, et. al.
Wed, 22 Feb 17

Comments: 20 pages, 22 figures. Accepted for publication in A&A

The SkyMapper Transient Survey [IMA]

The SkyMapper 1.3 m telescope at Siding Spring Observatory has now begun regular operations. Alongside the Southern Sky Survey, a comprehensive digital survey of the entire southern sky, SkyMapper will carry out a search for supernovae and other transients. The search strategy, covering a total footprint area of ~2000 deg$^2$ with a cadence of $\leq 5$ days, is optimised for discovery and follow-up of low-redshift type Ia supernovae to constrain cosmic expansion and peculiar velocities. We describe the search operations and infrastructure, including a parallelised software pipeline to discover variable objects in difference imaging; simulations of the performance of the survey over its lifetime; public access to discovered transients; and some first results from the Science Verification data.

Read this paper on arXiv…

R. Scalzo, F. Yuan, M. Childress, et. al.
Tue, 21 Feb 17

Comments: 13 pages, 11 figures; submitted to PASA

Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time-stepping [IMA]

An important ingredient in numerical modelling of high-temperature magnetised astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics (MHD) typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms considerably reduces the simulation time step, which must scale with the square of the grid resolution ($\Delta x$) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer et al. 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over first-order super-time-stepping schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than $10^4$ processors.
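
The quadratic time-step constraint that motivates STS/RKL schemes can be seen in a bare-bones forward-Euler diffusion solver (a Python sketch with an arbitrary grid size and diffusivity, not the RKL scheme of the paper):

```python
import math

def diffuse_explicit(u, d_coef, dx, dt, steps):
    # Forward-Euler update for du/dt = D * d2u/dx2 on a periodic grid.
    # Stability requires dt <= dx^2 / (2 D): halving dx quadruples the
    # number of sub-steps, which super-time-stepping schemes amortise.
    n = len(u)
    r = d_coef * dt / dx ** 2
    for _ in range(steps):
        u = [u[i] + r * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
             for i in range(n)]
    return u

n, d_coef = 64, 1.0
dx = 1.0 / n
dt = 0.5 * dx ** 2 / d_coef   # largest stable explicit time step
t_end = 0.01
steps = round(t_end / dt)     # many sub-steps even for a short integration
u0 = [math.sin(2 * math.pi * i / n) for i in range(n)]
u = diffuse_explicit(u0, d_coef, dx, dt, steps)

# The sine mode decays analytically as exp(-D k^2 t) with k = 2*pi.
decay = math.exp(-d_coef * (2 * math.pi) ** 2 * t_end)
err = max(abs(ui - decay * u0i) for ui, u0i in zip(u, u0))
```

Even on this modest 64-point grid, reaching t = 0.01 already takes ~80 sub-steps at the stability limit, and the count grows as the square of the resolution.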

Read this paper on arXiv…

B. Vaidya, D. Prasad, A. Mignone, et. al.
Tue, 21 Feb 17

Comments: 14 pages, 9 figures, submitted to MNRAS

UCAC5: New Proper Motions using Gaia DR1 [IMA]

New astrometric reductions of the US Naval Observatory CCD Astrograph Catalog (UCAC) all-sky observations were performed from first principles using the TGAS stars in the 8 to 11 magnitude range as reference star catalog. Significant improvements in the astrometric solutions were obtained and the UCAC5 catalog of mean positions at a mean epoch near 2001 was generated. By combining UCAC5 with Gaia DR1 data new proper motions on the Gaia coordinate system for over 107 million stars were obtained with typical accuracies of 1 to 2 mas/yr (R = 11 to 15 mag), and about 5 mas/yr at 16th mag. Proper motions of most TGAS stars are improved over their Gaia data and the precision level of TGAS proper motions is extended to many millions more, fainter stars. External comparisons were made using stellar cluster fields and extragalactic sources. The TGAS data allow us to derive the limiting precision of the UCAC x,y data, which is significantly better than 1/100 pixel.
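
The quoted proper-motion accuracy follows from simple two-epoch error propagation. A sketch with assumed, illustrative position errors (the ~20 mas UCAC5 and ~2 mas Gaia DR1 values below are placeholders, not numbers from the paper) over the ~14-year epoch difference:

```python
import math

def pm_error(sigma_early_mas, sigma_late_mas, baseline_yr):
    # Two-epoch proper motion: pm = (pos_late - pos_early) / baseline,
    # so position errors add in quadrature and divide by the epoch span.
    return math.hypot(sigma_early_mas, sigma_late_mas) / baseline_yr

# Assumed errors: ~20 mas for a UCAC5 position at epoch ~2001,
# ~2 mas for a Gaia DR1 position at epoch ~2015.
sigma_pm = pm_error(20.0, 2.0, 2015.0 - 2001.0)   # mas/yr
```

With these assumptions the result lands in the 1-2 mas/yr range quoted in the abstract, showing why the long epoch baseline, rather than the individual catalog precisions, dominates the proper-motion accuracy.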

Read this paper on arXiv…

N. Zacharias, C. Finch and J. Frouard
Tue, 21 Feb 17

Comments: 15 pages, 15 figures, 3 tables, accepted by AJ

Performance of an Algorithm for Estimation of Flux, Background and Location on One-Dimensional Signals [IMA]

Optimal estimation of signal amplitude, background level, and photocentre location is crucial to the combined extraction of astrometric and photometric information from focal plane images, and in particular from the one-dimensional measurements performed by Gaia on intermediate to faint magnitude stars. Our goal is to define a convenient maximum likelihood framework, suited to efficient iterative implementation and to assessment of noise level, bias, and correlation among variables. The analytical model is investigated numerically and verified by simulation over a range of magnitude and background values. The estimates are unbiased, with a well-understood correlation between amplitude and background, and with a much lower correlation of either of them with location, further alleviated in case of signal symmetry. Two versions of the algorithm are implemented and tested against each other, respectively, for independent and combined parameter estimation. Both are effective and provide consistent results, but the latter is more efficient because it takes into account the flux-background estimate correlation.

Read this paper on arXiv…

M. Gai, D. Busonero and R. Cancelliere
Tue, 21 Feb 17

Comments: 13 pages; 13 figures; to be published on PASP

An update to the EVEREST K2 pipeline: Short cadence, saturated stars, and Kepler-like photometry down to Kp = 15 [IMA]

We present an update to the EVEREST K2 pipeline that addresses various limitations in the previous version and improves the photometric precision of the de-trended light curves. We develop a fast regularization scheme for third order pixel level decorrelation (PLD) and adapt the algorithm to include the PLD vectors of neighboring stars to enhance the predictive power of the model and minimize overfitting, particularly for faint stars. We also modify PLD to work for saturated stars and improve its performance on extremely variable stars. On average, EVEREST 2.0 light curves have 10-20% higher photometric precision than those in the previous version, yielding the highest precision light curves at all Kp magnitudes of any publicly available K2 catalog. For most K2 campaigns, we recover the original Kepler precision to at least Kp = 14, and to at least Kp = 15 for campaigns 1, 5, and 6. We also de-trend all short cadence targets observed by K2, obtaining even higher photometric precision for these stars. All light curves for campaigns 0-8 are available online in the EVEREST catalog, which will be continuously updated with future campaigns. EVEREST 2.0 is open source and is coded in a general framework that can be applied to other photometric surveys, including Kepler and the upcoming TESS mission.

Read this paper on arXiv…

R. Luger, E. Kruse, D. Foreman-Mackey, et. al.
Tue, 21 Feb 17

Comments: 16 pages, 21 figures. Submitted to AJ. Source code and documentation available at this https URL

AstroGrid-PL [IMA]

We summarise the achievements of the AstroGrid-PL project, which aimed to provide an infrastructure of grid computing, distributed storage and Virtual Observatory services to the Polish astronomical community. It was developed from 2011 to 2015 as a domain grid component within the larger PLGrid Plus project for scientific computing in Poland.

Read this paper on arXiv…

G. Stachowski, T. Kundera, P. Ciecielag, et. al.
Mon, 20 Feb 17

Comments: 4 pages, 2 figures, in the Proceedings of the 37th Meeting of the Polish Astronomical Society

EOVSA Implementation of a Spectral Kurtosis Correlator for Transient Detection and Classification [IMA]

We describe in general terms the practical use in astronomy of a higher-order statistical quantity called Spectral Kurtosis (SK), and describe the first implementation of SK-enabled firmware in the F-engine (Fourier transform engine) of a digital FX correlator for the Expanded Owens Valley Solar Array (EOVSA). The development of the theory for SK is summarized, leading to an expression for the generalized SK that is applicable both to SK spectrometers and to those not specifically designed for SK. We also give the means for computing both the SK estimator and the thresholds for its application as a discriminator of RFI contamination. Tests of the performance of EOVSA as an SK spectrometer are shown to agree precisely with theoretical expectations, and the methods for configuring the correlator for correct SK operation are described.
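
The SK estimator itself is compact: for M accumulated power estimates with sums S1 = Σp and S2 = Σp², the Nita & Gary form is SK = ((M+1)/(M-1))·(M·S2/S1² − 1), which has unit expectation for Gaussian noise (exponentially distributed FFT power) and is pulled away from 1 by non-Gaussian contamination. The toy signals below are illustrative, not EOVSA data:

```python
import random

def sk_estimator(powers):
    # Generalized spectral kurtosis estimator for M accumulated power
    # estimates: SK = ((M+1)/(M-1)) * (M * S2 / S1^2 - 1).
    m = len(powers)
    s1 = sum(powers)
    s2 = sum(p * p for p in powers)
    return (m + 1) / (m - 1) * (m * s2 / s1 ** 2 - 1)

random.seed(0)
m = 6000
# Pure Gaussian noise: FFT power is exponentially distributed -> SK ~ 1.
noise = [random.expovariate(1.0) for _ in range(m)]
# A steady, nearly constant contaminating signal (CW-like RFI) has a
# much smaller variance-to-mean^2 ratio and drives SK well below 1.
rfi = [1.0 + 0.05 * random.expovariate(1.0) for _ in range(m)]

sk_noise = sk_estimator(noise)
sk_rfi = sk_estimator(rfi)
```

Thresholding SK around unity (with bounds set by M) is what lets the firmware flag contaminated channels in real time.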

Read this paper on arXiv…

G. Nita, J. Hickish, D. MacMahon, et. al.
Mon, 20 Feb 17

Comments: 16 pages, 4 figures

Spectral performance of SKALA antennas I: Mitigating spectral artefacts in SKA1-LOW 21-cm cosmology experiments [IMA]

This paper is the first in a series of papers describing the impact of antenna instrumental artefacts on the 21-cm cosmology experiments to be carried out by the SKA1-LOW telescope, i.e., the Cosmic Dawn (CD) and the Epoch of Reionization (EoR). The smoothness of the passband response of the current log-periodic antenna being developed for SKA1-LOW is analyzed using numerical electromagnetic simulations. The frequency ripples are characterized using low-order polynomials defined locally, in order to study the impact of passband smoothness on the instrument calibration and CD/EoR science. A solution is offered to correct a fast ripple found at 60 MHz during a test campaign at the SKA site at the Murchison Radio-astronomy Observatory, Western Australia, in September 2015, with a minor impact on the telescope’s performance and design. A comparison with the Hydrogen Epoch of Reionization Array antenna is also shown, demonstrating the potential use of the SKA1-LOW antenna for the Delay Spectrum technique to detect the EoR.

Read this paper on arXiv…

E. Acedo, C. Trott, R. Wayth, et. al.
Mon, 20 Feb 17

Comments: 12 pages, 19 figures, submitted to MNRAS

Sports stars: analyzing the performance of astronomers at visualization-based discovery [IMA]

In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e. a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced HI astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualisation and analysis packages.

Read this paper on arXiv…

C. Fluke, L. Parrington, S. Hegarty, et. al.
Fri, 17 Feb 17

Comments: 16 pages, 7 figures; accepted for publication in PASP Special Issue: Techniques and Methods for Astrophysical Data Visualization

A new infrared Fabry-Pérot-based radial-velocity-reference module for the SPIRou radial-velocity spectrograph [IMA]

The field of exoplanet research is moving towards the detection and characterization of habitable planets. These exo-Earths can be easily found around low-mass stars by using either photometric transit or radial-velocity (RV) techniques. In the latter case the gain is twofold because the signal induced by a planet of a given mass is higher due to the more favourable planet-star mass ratio and because the habitable zone lies closer to the star. However, late-type stars emit mainly in the infrared (IR) wavelength range, which calls for IR instruments. SPIRou is a stable RV IR spectrograph addressing these ambitious scientific objectives. As with any other spectrograph, calibration and drift monitoring are fundamental to achieving high precision. Our goal was to build, test and finally operate a Fabry-Pérot-based RV-reference module able to provide the needed spectral information over the full wavelength range of SPIRou. We adapted the existing HARPS Fabry-Pérot calibrator for operation in the IR domain. After manufacturing and assembly, we characterized the FP RV-module in the laboratory. We measured the finesse, transmittance, and spectral flux of the system. The measured finesse value of $F=12.8$ corresponds perfectly to the theoretical value. The total transmittance at peak is of the order of 0.5%, mainly limited by fibre connectors and interfaces. Nevertheless, the provided flux is in line with the requirements set by the SPIRou instrument. Once installed on SPIRou, we will test the full spectral characteristics and stability of the RV-reference module. The goal will be to prove that the position and shape of all lines are stable to better than 0.3 m s$^{-1}$ between two calibration sequences (typically 24 hours), such that the RV-reference module can be used to monitor instrumental drifts.
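
The measured finesse can be tied back to mirror reflectivity through the standard reflective-finesse relation F = π√R/(1−R). The reflectivity used below is an assumed, illustrative value (the abstract does not state it); R ≈ 0.78 happens to reproduce a finesse close to the measured F = 12.8:

```python
import math

def finesse(r):
    # Reflective finesse of a Fabry-Perot cavity: F = pi * sqrt(R) / (1 - R),
    # where R is the (identical) mirror reflectivity.
    return math.pi * math.sqrt(r) / (1 - r)

# Illustrative, assumed reflectivity of ~78%.
f = finesse(0.78)
```

This kind of back-of-the-envelope check is how a measured finesse is compared against the "theoretical value" mentioned in the abstract, given a coating specification.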

Read this paper on arXiv…

F. Cersullo, F. Wildi, B. Chazelas, et. al.
Fri, 17 Feb 17

Comments: 13 pages, 21 figures

The w-effect in interferometric imaging: from a fast sparse measurement operator to super-resolution [IMA]

Modern radio telescopes, such as the Square Kilometre Array (SKA), will probe the radio sky over large fields-of-view, which results in large w-modulations of the sky image. This effect complicates the relationship between the measured visibilities and the image under scrutiny. In algorithmic terms, it gives rise to massive memory and computational time requirements. Yet, it can be a blessing in terms of reconstruction quality of the sky image. In recent years, several works have shown that large w-modulations promote the spread spectrum effect. Within the compressive sensing framework, this effect increases the incoherence between the sensing basis and the sparsity basis of the signal to be recovered, leading to better estimation of the sky image. In this article, we revisit the w-projection approach using convex optimisation in realistic settings, where the measurement operator couples the w-terms in Fourier and the de-gridding kernels. We provide sparse, thus fast, models of the Fourier part of the measurement operator through adaptive sparsification procedures. Consequently, memory requirements and computational cost are significantly alleviated, at the expense of introducing errors on the radio-interferometric data model. We present a first investigation of the impact of the sparse variants of the measurement operator on the image reconstruction quality. We finally analyse the interesting super-resolution potential associated with the spread spectrum effect of the w-modulation, and showcase it through simulations. Our C++ code is available online on GitHub.
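
The trade-off the paper explores, a sparser measurement-operator model in exchange for a controlled error on the data model, can be sketched with a simple per-row energy-thresholding rule (a hypothetical sparsification criterion for illustration; the paper's adaptive procedures and de-gridding kernels are more involved, and the Gaussian rows below are invented):

```python
import math
import random

def sparsify_rows(rows, energy_kept):
    # Per row, keep the largest-magnitude entries until a fixed fraction
    # of the row's energy is retained; zero the rest (a sparse model).
    out = []
    for row in rows:
        order = sorted(range(len(row)), key=lambda j: -abs(row[j]))
        total = sum(v * v for v in row)
        kept, acc = set(), 0.0
        for j in order:
            kept.add(j)
            acc += row[j] ** 2
            if acc >= energy_kept * total:
                break
        out.append([v if j in kept else 0.0 for j, v in enumerate(row)])
    return out

random.seed(3)
n_vis, n_grid, sigma = 32, 64, 1.5
# Dense "de-gridding"-like rows: a Gaussian kernel at a random position.
centres = [random.uniform(5, n_grid - 6) for _ in range(n_vis)]
dense = [[math.exp(-((j - c) ** 2) / (2 * sigma ** 2)) for j in range(n_grid)]
         for c in centres]
sparse = sparsify_rows(dense, energy_kept=0.99)

x = [random.uniform(-1, 1) for _ in range(n_grid)]
matvec = lambda a: [sum(r[j] * x[j] for j in range(n_grid)) for r in a]
y, ys = matvec(dense), matvec(sparse)

# Fill factor of the sparse model and relative data-model error.
nnz = sum(v != 0.0 for r in sparse for v in r) / (n_vis * n_grid)
rel_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(y, ys))
                    / sum(a * a for a in y))
```

Keeping 99% of the per-row energy leaves only a small fraction of entries non-zero, so memory and matrix-vector cost drop sharply while the simulated "visibilities" change only at the few-percent level.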

Read this paper on arXiv…

A. Dabbech, L. Wolz, L. Pratley, et. al.
Fri, 17 Feb 17

Comments: submitted to MNRAS

The Short Term Stability of a Simulated Differential Astrometric Reference Frame in the Gaia era [IMA]

We use methods of differential astrometry to construct a small field inertial reference frame stable at the micro-arcsecond level. Such a high level of astrometric precision can be expected with the end-of-mission standard errors to be achieved with the Gaia space satellite using global astrometry. We harness Gaia measurements of field angles and look at the influence of the number of reference stars and the star’s magnitude as well as astrometric systematics on the total error budget with the help of Gaia-like simulations around the Ecliptic Pole in a differential astrometric scenario. We find that the systematic errors are modeled and reliably estimated to the $\mu$as level even in fields with a modest number of 37 stars with G $<$13 mag over a 0.24 sq.degs. field of view for short time scales of the order of a day with high-cadence observations such as those around the North Ecliptic Pole during the EPSL scanning mode of Gaia for a perfect instrument. The inclusion of the geometric instrument model over such short time scales accounting for large-scale calibrations requires fainter stars down to G = 14 mag without diminishing the accuracy of the reference frame. We discuss several future perspectives of utilizing this methodology over different and longer timescales.

Read this paper on arXiv…

U. Abbas, B. Bucciarelli, M. Lattanzi, et. al.
Fri, 17 Feb 17

Comments: 14 pages, 7 figures, accepted by Publications of the Astronomical Society of the Pacific

CHIME FRB: An application of FFT beamforming for a radio telescope [IMA]

We have developed FFT beamforming techniques for the CHIME radio telescope, to search for and localize the astrophysical signals from Fast Radio Bursts (FRBs) over a large instantaneous field-of-view (FOV) while maintaining the full angular resolution of CHIME. We implement a hybrid beamforming pipeline in a GPU correlator, synthesizing 256 FFT-formed beams in the North-South direction times four formed beams along East-West via exact phasing, tiling a sky area of ~250 square degrees. A zero-padding approximation is employed to improve chromatic beam alignment across the wide bandwidth of 400 to 800 MHz. We up-channelize the data in order to achieve fine spectral resolution of $\Delta\nu$=24 kHz and time cadence of 0.983 ms, desirable for detecting transient and dispersed signals such as those from FRBs.
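
The core of FFT beamforming is that a DFT across the antenna axis of a regular array simultaneously forms one beam per antenna. A toy one-dimensional sketch (a 16-element linear array with idealised plane-wave input; CHIME's hybrid 256 x 4 pipeline, zero-padding and up-channelization are not modelled):

```python
import cmath
import math

def fft_beams(voltages):
    # DFT across the antenna axis: beam k collects signals whose
    # antenna-to-antenna phase gradient is 2*pi*k/N. In production this
    # is an FFT, forming all N beams in O(N log N).
    n = len(voltages)
    return [sum(v * cmath.exp(-2j * math.pi * k * a / n)
                for a, v in enumerate(voltages))
            for k in range(n)]

# A plane wave arriving with a phase step of 2*pi*k0/N per antenna
# lands entirely in beam k0.
n, k0 = 16, 5
wave = [cmath.exp(2j * math.pi * k0 * a / n) for a in range(n)]
beams = fft_beams(wave)
powers = [abs(b) ** 2 for b in beams]
brightest = powers.index(max(powers))
```

All of the wave's power (N² for a unit-amplitude input) appears in the matched beam, with the other beams nulled, which is why the formed beams tile the field of view at the array's full resolution.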

Read this paper on arXiv…

C. Ng, K. Vanderlinde, A. Paradise, et. al.
Fri, 17 Feb 17

Comments: 4 pages, 3 figures, submitted to the XXXII International Union of Radio Science General Assembly & Scientific Symposium (URSI GASS) 2017

Charge-induced force-noise on free-falling test masses: results from LISA Pathfinder [IMA]

We report on electrostatic measurements made on board the European Space Agency mission LISA Pathfinder. Detailed measurements of the charge-induced electrostatic forces exerted on free-falling test masses (TMs) inside the capacitive gravitational reference sensor are the first made in a relevant environment for a space-based gravitational wave detector. Employing a combination of charge control and electric-field compensation, we show that the level of charge-induced acceleration noise on a single TM can be maintained at a level close to 1.0 fm/s^2/sqrt(Hz) across the 0.1-100 mHz frequency band that is crucial to an observatory such as LISA. Using dedicated measurements that detect these effects in the differential acceleration between the two test masses, we resolve the stochastic nature of the TM charge build up due to interplanetary cosmic rays and the TM charge-to-force coupling through stray electric fields in the sensor. All our measurements are in good agreement with predictions based on a relatively simple electrostatic model of the LISA Pathfinder instrument.

Read this paper on arXiv…

M. Armano, H. Audley, G. Auger, et. al.
Thu, 16 Feb 17

Comments: 9 Pages, 3 figures

Effects of transients in LIGO suspensions on searches for gravitational waves [IMA]

This paper presents an analysis of the transient behavior of the Advanced LIGO suspensions used to seismically isolate the optics. We have characterized the transients in the longitudinal motion of the quadruple suspensions during Advanced LIGO’s first observing run. Propagation of transients between stages is consistent with modelled transfer functions, such that transient motion originating at the top of the suspension chain is significantly reduced in amplitude at the test mass. We find that there are transients seen by the longitudinal motion monitors of quadruple suspensions, but they are not significantly correlated with transient motion above the noise floor in the gravitational wave strain data, and therefore do not present a dominant source of background noise in the searches for transient gravitational wave signals.

Read this paper on arXiv…

M. Walker, T. Abbott, S. Aston, et. al.
Thu, 16 Feb 17

Comments: N/A

Learn from every mistake! Hierarchical information combination in astronomy [IMA]

Throughout the processing and analysis of survey data, a ubiquitous issue nowadays is that we are spoilt for choice when we need to select a methodology for some of its steps. The alternative methods usually fail and excel in different data regions, and have various advantages and drawbacks, so a combination that unites the strengths of all while suppressing the weaknesses is desirable. We propose to use a two-level hierarchy of learners. Its first level consists of training and applying the possible base methods on the first part of a known set. At the second level, we feed the output probability distributions from all base methods to a second learner trained on the remaining known objects. Using classification of variable stars and photometric redshift estimation as examples, we show that the hierarchical combination is capable of achieving general improvement over averaging-type combination methods, correcting systematics present in all base methods, is easy to train and apply, and thus, it is a promising tool in the astronomical “Big Data” era.
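
The two-level hierarchy can be sketched with toy ingredients: each "base method" emits a class probability that is informative in a different region of feature space, and a second-level logistic learner is trained on those outputs. Everything below (the synthetic labels, the single-feature base methods, the gradient-descent meta learner) is invented for illustration and is not the paper's actual classifier stack:

```python
import math
import random

def base_prob(x, axis):
    # Toy base "method": a probability tied to one feature only, so each
    # base learner excels in a different region of feature space.
    return x[axis]

def train_meta(probs, labels, lr=5.0, epochs=1000):
    # Second-level learner: logistic regression on base-method outputs,
    # fitted by full-batch gradient descent.
    w, b, n = [0.0] * len(probs[0]), 0.0, len(probs)
    for _ in range(epochs):
        gw, gb = [0.0] * len(w), 0.0
        for p, y in zip(probs, labels):
            z = sum(wi * pi for wi, pi in zip(w, p)) + b
            err = 1 / (1 + math.exp(-z)) - y
            for i, pi in enumerate(p):
                gw[i] += err * pi
            gb += err
        w = [wi - lr * g / n for wi, g in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def meta_predict(w, b, p):
    return 1 if sum(wi * pi for wi, pi in zip(w, p)) + b > 0 else 0

random.seed(2)
make = lambda m: [(random.random(), random.random()) for _ in range(m)]
label = lambda x: 1 if x[0] + x[1] > 1 else 0   # true class boundary

train, test = make(400), make(400)
meta_in = [[base_prob(x, 0), base_prob(x, 1)] for x in train]
w, b = train_meta(meta_in, [label(x) for x in train])

acc = lambda f: sum(f(x) == label(x) for x in test) / len(test)
acc_base0 = acc(lambda x: 1 if base_prob(x, 0) > 0.5 else 0)
acc_base1 = acc(lambda x: 1 if base_prob(x, 1) > 0.5 else 0)
acc_meta = acc(lambda x: meta_predict(w, b, [base_prob(x, 0),
                                             base_prob(x, 1)]))
```

Each base method alone is right only about three quarters of the time, while the second-level learner, seeing both outputs, recovers the true boundary and beats either base, which is the behaviour the hierarchical combination aims for.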

Read this paper on arXiv…

M. Suveges, S. Fotopoulou, J. Coupon, et. al.
Thu, 16 Feb 17

Comments: 6 pages, 3 figures. To appear in the conference proceedings of the IAU Symposium 325 AstroInformatics (2016 October 20-24, Sorrento, Italy)

The Multi-site All-Sky CAmeRA: Finding transiting exoplanets around bright ($m_V < 8$) stars [IMA]

This paper describes the design, operations, and performance of the Multi-site All-Sky CAmeRA (MASCARA). Its primary goal is to find new exoplanets transiting bright stars, $4 < m_V < 8$, by monitoring the full sky. MASCARA consists of one northern station on La Palma, Canary Islands (fully operational since February 2015), one southern station at La Silla Observatory, Chile (operational from early 2017), and a data centre at Leiden Observatory in the Netherlands. Both MASCARA stations are equipped with five interline CCD cameras using wide-field lenses (24 mm focal length) with fixed pointings, which together provide coverage of the local sky down to airmass 3. The interline CCD cameras allow for back-to-back exposures, taken at fixed sidereal times with exposure times of 6.4 sidereal seconds. The exposures are short enough that the motion of stars across the CCD does not exceed one pixel during an integration. Astrometry and photometry are performed on-site, after which the resulting light curves are transferred to Leiden for further analysis. The final MASCARA archive will contain light curves for ${\sim}70,000$ stars down to $m_V=8.4$, with a precision of $1.5\%$ per 5 minutes at $m_V=8$.

Read this paper on arXiv…

G. Talens, J. Spronck, A. Lesage, et. al.
Wed, 15 Feb 17

Comments: 9 pages, 12 figures, Accepted for publication in A&A. See also this http URL

Overall properties of the Gaia DR1 reference frame [IMA]

We compare quasar positions of the auxiliary quasar solution with ICRF2 sources using different samples and evaluate the influence on the {\it Gaia} DR1 reference frame owing to the Galactic aberration effect over the J2000.0-J2015.0 period. Then we estimate the global rotation between the TGAS and {\it Tycho}-2 proper motion systems to investigate the properties of the {\it Gaia} DR1 reference frame. Finally, a Galactic kinematics analysis using the K-M giant proper motions is performed to understand the properties of the {\it Gaia} DR1 reference frame. The positional comparison between the auxiliary quasar solution and ICRF2 shows negligible orientation and validates the declination bias of $\sim -0.1$ mas in {\it Gaia} quasar positions with respect to ICRF2. The Galactic aberration effect is thought to cause an offset of $\sim 0.01$ mas in the $Z$-axis direction of the {\it Gaia} DR1 reference frame. The global rotation between the TGAS and {\it Tycho}-2 proper motion systems, obtained from different samples, is much smaller than the claimed value of $0.24$ mas/yr. For the Galactic kinematics analysis of the TGAS K-M giants, we find possible non-zero Galactic rotation components beyond the classical Oort constants: the rigid part $\omega_{Y_G} = -0.38 \pm 0.15$ mas/yr and the differential part $\omega^\prime_{Y_G} = -0.29 \pm 0.19$ mas/yr around the $Y_G$ axis of Galactic coordinates, which indicates possible residual rotation in the {\it Gaia} DR1 reference frame or problems in the current Galactic kinematical model.

Read this paper on arXiv…

N. Liu, Z. Zhu, J. Liu, et. al.
Wed, 15 Feb 17

Comments: 6 pages, 1 figure. Accepted for publication in A&A

Phantom: A smoothed particle hydrodynamics and magnetohydrodynamics code for astrophysics [IMA]

We present Phantom, a fast, parallel, modular and low-memory smoothed particle hydrodynamics and magnetohydrodynamics code developed over the last decade for astrophysical applications in three dimensions. The code has been developed with a focus on stellar, galactic, planetary and high energy astrophysics and has already been used widely for studies of accretion discs and turbulence, from the birth of planets to how black holes accrete. Here we describe and test the core algorithms as well as modules for magnetohydrodynamics, self-gravity, sink particles, H$_2$ chemistry, dust-gas mixtures, physical viscosity, and external forces (including numerous galactic potentials), as well as implementations of Lense-Thirring precession, Poynting-Robertson drag and stochastic turbulent driving. Phantom is hereby made publicly available.
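As context for the SPH method at Phantom's core, the density at a particle is a kernel-weighted sum over its neighbours. The sketch below is the generic textbook density summation with the cubic-spline (M4) kernel, not Phantom's actual (Fortran, tree-accelerated) implementation:

```python
import math


def w_cubic(r, h):
    """Standard 3D cubic-spline (M4) SPH kernel with compact support 2h."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)   # 3D normalisation constant
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    elif q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0


def density(positions, masses, h, i):
    """SPH density at particle i: rho_i = sum_j m_j W(|r_i - r_j|, h).

    Brute-force O(N) sum for clarity; production codes such as Phantom
    use a tree to find neighbours within the kernel support.
    """
    xi, yi, zi = positions[i]
    rho = 0.0
    for (x, y, z), m in zip(positions, masses):
        r = math.sqrt((x - xi) ** 2 + (y - yi) ** 2 + (z - zi) ** 2)
        rho += m * w_cubic(r, h)
    return rho
```

For a single unit-mass particle with $h=1$, only the self-contribution remains and the density is $W(0,1) = 1/\pi$.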

Read this paper on arXiv…

D. Price, J. Wurster, C. Nixon, et. al.
Wed, 15 Feb 17

Comments: 77 pages, 51 figures, 335 equations, submitted to PASA. Code available from this https URL

Crossmatching variable objects with the Gaia data [IMA]

Tens of millions of new variable objects are expected to be identified in over a billion time series from the Gaia mission. Crossmatching known variable sources with those from Gaia is crucial to incorporate current knowledge, understand how these objects appear in the Gaia data, train supervised classifiers to recognise known classes, and validate the results of the Variability Processing and Analysis Coordination Unit (CU7) within the Gaia Data Analysis and Processing Consortium (DPAC). The method employed by CU7 to crossmatch variables for the first Gaia data release includes a binary classifier to take into account positional uncertainties, proper motion, targeted variability signals, and artefacts present in the early calibration of the Gaia data. Crossmatching with a classifier makes it possible to automate all those decisions which are typically made during visual inspection. The classifier can be trained with objects characterized by a variety of attributes to ensure similarity in multiple dimensions (astrometry, photometry, time-series features), with no need for a priori transformations to compare different photometric bands, or for predictive models of the motion of objects to compare positions. Other advantages as well as some disadvantages of the method are discussed. Implementation steps from the training to the assessment of the crossmatch classifier and selection of results are described.
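The idea of replacing visual-inspection decisions with a classifier score can be illustrated with a toy stand-in that combines positional and photometric similarity into one number. This is not the actual CU7 classifier (which is supervised and trained on many more attributes); the score function, tolerance scales, and threshold below are all hypothetical:

```python
import math


def match_score(sep_arcsec, dmag, sigma_pos=0.5, sigma_mag=0.5):
    """Toy crossmatch score in (0, 1]: a Gaussian-weighted combination of
    angular separation (arcsec) and magnitude difference (mag).

    sigma_pos and sigma_mag are hypothetical tolerance scales, standing
    in for the learned decision boundary of a real trained classifier.
    """
    return math.exp(-0.5 * (sep_arcsec / sigma_pos) ** 2
                    - 0.5 * (dmag / sigma_mag) ** 2)


def is_match(sep_arcsec, dmag, threshold=0.1):
    """Accept the pair as a crossmatch if the score clears a threshold."""
    return match_score(sep_arcsec, dmag) > threshold
```

A close pair with similar magnitudes clears the threshold, while a 5-arcsec separation drives the score to essentially zero; a real classifier learns such boundaries from labelled examples instead of fixing them by hand.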

Read this paper on arXiv…

L. Rimoldini, K. Nienartowicz, M. Suveges, et. al.
Wed, 15 Feb 17

Comments: 4 pages, 1 figure, in Astronomical Data Analysis Software and Systems XXVI, Astronomical Society of the Pacific Conference Series

On-sky performance analysis of the vector Apodizing Phase Plate coronagraph on MagAO/Clio2 [IMA]

We report on the performance of a vector apodizing phase plate coronagraph that operates over a wavelength range of $2-5 \mu$m and is installed in MagAO/Clio2 at the 6.5 m Magellan Clay telescope at Las Campanas Observatory, Chile. The coronagraph manipulates the phase in the pupil to produce three beams yielding two coronagraphic point-spread functions (PSFs) and one faint leakage PSF. The phase pattern is imposed through the inherently achromatic geometric phase, enabled by liquid crystal technology and polarization techniques. The coronagraphic optic is manufactured using a direct-write technique for precise control of the liquid crystal pattern, and multitwist retarders for achromatization. By adding a linear phase ramp to the coronagraphic phase pattern, two separated coronagraphic PSFs are created with a single pupil-plane optic, which makes it robust and easy to install in existing telescopes. The two coronagraphic PSFs contain a 180$^\circ$ dark hole on each side of a star, and these complementary copies of the star are used to correct the seeing halo close to the star. To characterize the coronagraph, we collected a dataset of a bright ($m_L=0-1$) nearby star with $\sim$1.5 hr of observing time. By rotating and optimally scaling one PSF and subtracting it from the other PSF, we see a contrast improvement of 1.46 magnitudes at $3.5 \lambda/D$. With regular angular differential imaging at 3.9 $\mu$m, the MagAO vector apodizing phase plate coronagraph delivers a $5\sigma\ \Delta$ mag contrast of 8.3 ($=10^{-3.3}$) at 2 $\lambda/D$ and 12.2 ($=10^{-4.8}$) at $3.5 \lambda/D$.
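The quoted flux ratios follow directly from the standard magnitude-to-contrast conversion; a simple check (not part of the paper's pipeline):

```python
def dmag_to_contrast(dmag):
    """Convert a magnitude difference to a flux contrast.

    Standard astronomical magnitude relation: contrast = 10**(-dmag / 2.5).
    """
    return 10.0 ** (-dmag / 2.5)


# Delta mag = 8.3  ->  10**(-3.32) ~ 4.8e-4  (quoted as 10^-3.3)
# Delta mag = 12.2 ->  10**(-4.88) ~ 1.3e-5  (quoted as 10^-4.8)
```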

Read this paper on arXiv…

G. Otten, F. Snik, M. Kenworthy, et. al.
Wed, 15 Feb 17

Comments: Published in ApJ. 8 figures, 1 table. Received 2016 June 17; revised 2016 November 3; accepted 2016 November 28; published 2017 January 12

Thermal, Structural, and Optical Analysis of a Balloon-Based Imaging System [IMA]

The Subarcsecond Telescope And BaLloon Experiment, STABLE, is the fine stage of a guidance system for a high-altitude ballooning platform designed to demonstrate subarcsecond pointing stability over one minute using relatively dim guide stars in the visible spectrum. The STABLE system uses an attitude rate sensor and the motion of the guide star on a detector to control a Fast Steering Mirror in order to stabilize the image. The characteristics of the thermal-optical-mechanical elements in the system directly affect the quality of the point spread function of the guide star on the detector, and so a series of thermal, structural, and optical models were built to simulate system performance and ultimately inform the final pointing stability predictions. This paper describes the modeling techniques employed in each of these subsystems. The results from those models are discussed in detail, highlighting the development of the worst-case cold and hot cases, the optical metrics generated from the finite element model, and the expected STABLE residual wavefront error and decenter. Finally, the paper concludes with the predicted sensitivities in the STABLE system, which show that thermal deadbanding, structural preloading and self-deflection under different loading conditions, and the speed of individual optical elements were particularly important to the resulting STABLE optical performance.

Read this paper on arXiv…

M. Borden, D. Lewis, H. Ochoa, et. al.
Wed, 15 Feb 17

Comments: 42 pages, 39 figures

LSSGalPy: Interactive Visualization of the Large-scale Environment Around Galaxies [IMA]

New tools are needed to handle the growth of data in astrophysics delivered by recent and upcoming surveys. We aim to build open-source, light, flexible, and interactive software designed to visualize extensive three-dimensional (3D) tabular data. Entirely written in the Python language, we have developed interactive tools to browse the positions of galaxies in the universe and to visualize their relation to its large-scale structure (LSS). Motivated by a previous study, we created two codes using Mollweide projection and wedge diagram visualizations, where survey galaxies can be overplotted on the LSS of the universe. These are interactive representations where the visualizations can be controlled by widgets. We have released these open-source codes that have been designed to be easily re-used and customized by the scientific community to fulfill their needs. The codes are adaptable to other kinds of 3D tabular data and are robust enough to handle several millions of objects.
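A Mollweide all-sky view in Python needs its input coordinates in a specific form: matplotlib's built-in "mollweide" projection expects longitude in radians wrapped to $[-\pi, \pi]$ and latitude in $[-\pi/2, \pi/2]$. A minimal sketch of that coordinate preparation (the function name is illustrative, not LSSGalPy's API):

```python
import math


def ra_dec_to_mollweide(ra_deg, dec_deg):
    """Map (RA, Dec) in degrees to the radian coordinates expected by
    matplotlib's 'mollweide' projection: longitude wrapped to [-pi, pi],
    latitude in [-pi/2, pi/2].
    """
    lon = math.radians(((ra_deg + 180.0) % 360.0) - 180.0)
    lat = math.radians(dec_deg)
    return lon, lat


# Typical use with matplotlib (assuming it is installed):
#   ax = plt.subplot(projection="mollweide")
#   ax.scatter(*zip(*[ra_dec_to_mollweide(ra, dec) for ra, dec in coords]))
```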

Read this paper on arXiv…

M. Argudo-Fernandez, S. Puertas, J. Ruiz, et. al.
Wed, 15 Feb 17

Comments: 7 pages, 2 figures; accepted for publication in PASP Special Focus Issue: Techniques and Methods for Astrophysical Data Visualization