Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection [IMA]

http://arxiv.org/abs/1701.00458


We introduce Deep-HiTS, a rotation-invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature-engineering approach using random forests (RF). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2,000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately 1/5. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next-generation instruments such as the Large Synoptic Survey Telescope (LSST). We have made all our code and data available to the community to allow further developments and comparisons at https://github.com/guille-c/Deep-HiTS.
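To make the rotation-invariance idea concrete, here is a minimal sketch (not the authors' architecture) of a CNN classifier that applies a shared convolutional backbone to 0°, 90°, 180°, and 270° rotations of each candidate stamp and pools the resulting features before deciding artifact vs. real. The stamp size, channel count, layer widths, and pooling choice are illustrative assumptions only.

```python
# Hypothetical sketch of a rotation-invariant CNN for transient candidate
# classification (artifact vs. real). Stamp size, channel count, and layer
# widths are illustrative assumptions, not the Deep-HiTS architecture.
import torch
import torch.nn as nn

class RotationInvariantCNN(nn.Module):
    def __init__(self, in_channels=4, n_classes=2):
        super().__init__()
        # Shared convolutional backbone applied to every rotated copy.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # -> (N, 64, 1, 1)
            nn.Flatten(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (N, C, H, W) candidate stamps
        feats = []
        for k in range(4):                      # 0, 90, 180, 270 degree rotations
            feats.append(self.backbone(torch.rot90(x, k, dims=[2, 3])))
        # Average features over rotations so the output does not depend on
        # the orientation of the input stamp.
        pooled = torch.stack(feats, dim=0).mean(dim=0)
        return self.head(pooled)                # logits: artifact vs. real

if __name__ == "__main__":
    model = RotationInvariantCNN()
    stamps = torch.randn(8, 4, 21, 21)          # batch of 21x21 multi-band stamps
    print(model(stamps).shape)                  # torch.Size([8, 2])
```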

Read this paper on arXiv…

G. Cabrera-Vives, I. Reyes, F. Forster, et al.
Tue, 3 Jan 17
4/55

Comments: N/A


Astronomical image reconstruction with convolutional neural networks [CL]

http://arxiv.org/abs/1612.04526


State-of-the-art methods in astronomical image reconstruction rely on solving a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic, or at least superlinear, complexity with respect to the number of pixels in the image. In this work we investigate the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive task is the training step, while the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state-of-the-art methods, in addition to being interpretable.
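As a rough illustration of the trade-off described above, the sketch below trains a small fully convolutional network to map a degraded image to a restored one: training is the expensive step, while prediction is a single forward pass whose cost grows linearly with the number of pixels. The architecture and loss are assumptions, not the network described in the paper.

```python
# Hypothetical sketch of CNN-based image reconstruction: a small fully
# convolutional network maps a degraded observation to a restored image.
# Training is the expensive step; prediction has linear per-pixel cost.
# The architecture and loss are illustrative assumptions only.
import torch
import torch.nn as nn

reconstructor = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),   # residual correction
)

def train_step(model, degraded, clean, optimizer):
    """One supervised step on (degraded, clean) image pairs."""
    optimizer.zero_grad()
    restored = degraded + model(degraded)         # learn the residual
    loss = nn.functional.mse_loss(restored, clean)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    opt = torch.optim.Adam(reconstructor.parameters(), lr=1e-3)
    degraded = torch.randn(4, 1, 64, 64)          # simulated blurred/noisy images
    clean = torch.randn(4, 1, 64, 64)             # corresponding ground truth
    print(train_step(reconstructor, degraded, clean, opt))
```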

Read this paper on arXiv…

R. Flamary
Thu, 15 Dec 16
33/59

Comments: N/A

Constraint matrix factorization for space variant PSFs field restoration [CL]

http://arxiv.org/abs/1608.08104


Context: in large-scale spatial surveys, the Point Spread Function (PSF) varies across the instrument field of view (FOV). Local measurements of the PSF are given by images of isolated stars. Yet, these estimates may not be directly usable for post-processing because of observational noise and, potentially, aliasing. Aims: given a set of aliased and noisy star images from a telescope, we want to estimate well-resolved and noise-free PSFs at the observed star positions, in particular by exploiting the spatial correlation of the PSFs across the FOV. Contributions: we introduce RCA (Resolved Components Analysis), a noise-robust dimension reduction and super-resolution method based on matrix factorization. We propose an original way of using the spatial correlation of the PSFs in the restoration process through sparsity. The introduced formalism can be applied to data sets correlated with respect to any Euclidean parametric space. Results: we tested our method on simulated monochromatic PSFs of the Euclid telescope (launch planned for 2020). The proposed method outperforms existing PSF restoration and dimension reduction methods. We show that a coupled sparsity constraint on individual PSFs and their spatial distribution yields a significant improvement on both the restored PSF shapes and the PSF subspace identification in the presence of aliasing. Perspectives: RCA can be naturally extended to account for the wavelength dependency of the PSFs.
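The sketch below illustrates only the core matrix-factorization idea on which a method like RCA builds: a stack of noisy star images is factorized into a few eigen-PSFs and sparse per-star coefficients via alternating updates with soft thresholding. The super-resolution, positivity, and spatial constraints that distinguish RCA are omitted, and all names and parameters are illustrative assumptions.

```python
# Hypothetical sketch of a sparsity-constrained matrix factorization of a
# stack of noisy star images Y (pixels x stars) into eigen-PSFs S and
# coefficients A, so that Y ~= S @ A. Alternating least squares with soft
# thresholding stands in for RCA's more elaborate constraints.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def factorize(Y, n_components=5, lam=0.1, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n_pix, n_stars = Y.shape
    S = rng.standard_normal((n_pix, n_components))      # eigen-PSFs
    A = rng.standard_normal((n_components, n_stars))    # per-star coefficients
    for _ in range(n_iter):
        # Least-squares update of the eigen-PSFs.
        S = Y @ np.linalg.pinv(A)
        # Least-squares update of the coefficients, then promote sparsity.
        A = soft_threshold(np.linalg.pinv(S) @ Y, lam)
    return S, A

if __name__ == "__main__":
    Y = np.random.rand(41 * 41, 200)   # 200 noisy 41x41 star stamps, flattened
    S, A = factorize(Y)
    psf_estimates = S @ A              # denoised PSF estimate at each star position
    print(psf_estimates.shape)         # (1681, 200)
```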

Read this paper on arXiv…

F. Mboula, J. Starck, K. Okumura, et al.
Tue, 30 Aug 16
69/78

Comments: 33 pages

Star-galaxy Classification Using Deep Convolutional Neural Networks [IMA]

http://arxiv.org/abs/1608.04369


Most existing star-galaxy classifiers use the reduced summary information from catalogs, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks allow a machine to automatically learn the features directly from data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep convolutional neural networks (ConvNets) directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), because deep neural networks require very little manual feature engineering.
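As a toy illustration of classifying directly on pixel values and then checking probability calibration, the sketch below defines a small ConvNet over multi-band cutouts and bins its predicted galaxy probabilities against the empirical galaxy fraction. The architecture, cutout size, and binning are assumptions, not the framework used in the paper.

```python
# Hypothetical sketch of a pixel-level star/galaxy ConvNet plus a simple
# reliability check of its predicted probabilities. Architecture, cutout
# size, and binning are illustrative assumptions only.
import numpy as np
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 2),                     # star vs. galaxy logits
)

def reliability(probs, labels, n_bins=10):
    """Mean predicted probability vs. empirical galaxy fraction per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            print(f"[{lo:.1f},{hi:.1f}) pred={probs[mask].mean():.2f} "
                  f"obs={labels[mask].mean():.2f}")

if __name__ == "__main__":
    cutouts = torch.randn(256, 5, 44, 44)   # e.g. ugriz pixel cutouts (toy data)
    labels = np.random.randint(0, 2, 256)   # 1 = galaxy, 0 = star (toy labels)
    with torch.no_grad():
        p_gal = torch.softmax(classifier(cutouts), dim=1)[:, 1].numpy()
    reliability(p_gal, labels)
```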

Read this paper on arXiv…

E. Kim and R. Brunner
Tue, 16 Aug 16
33/57

Comments: 13 pages, 13 figures. Submitted to MNRAS. Code available at this https URL

WAHRSIS: A Low-cost, High-resolution Whole Sky Imager With Near-Infrared Capabilities [IMA]

http://arxiv.org/abs/1605.06595


Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of the effects of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high, and their flexibility is limited. Therefore, we built a new daytime Whole Sky Imager (WSI) called Wide Angle High-Resolution Sky Imaging System. The strengths of our new design are its simplicity, low manufacturing cost and high resolution. Our imager captures the entire hemisphere in a single high-resolution picture via a digital camera using a fish-eye lens. The camera was modified to capture light across the visible as well as the near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.
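The geometric calibration step can be pictured with a simple idealized model: under an equidistant fish-eye projection, the radial distance of a pixel from the optical centre is proportional to the zenith angle. The sketch below maps a pixel to sky angles under that assumption; the focal constant, image centre, and azimuth zero point are placeholders, and a real calibration such as the one in the paper fits them from reference positions, which is not shown here.

```python
# Hypothetical sketch of the geometric side of whole-sky-imager calibration:
# mapping pixel positions to sky angles under an ideal equidistant fish-eye
# model, r = f * theta. All constants are illustrative assumptions.
import numpy as np

def pixel_to_sky(x, y, cx, cy, f_pix, max_zenith=np.pi / 2):
    """Convert pixel (x, y) to (zenith, azimuth) angles in radians."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)             # radial distance from the optical centre
    zenith = r / f_pix               # equidistant projection: r = f * theta
    azimuth = np.arctan2(dy, dx) % (2 * np.pi)
    valid = zenith <= max_zenith     # discard pixels outside the hemisphere
    return np.where(valid, zenith, np.nan), azimuth

if __name__ == "__main__":
    # Toy example: a sensor with the zenith at the image centre.
    z, az = pixel_to_sky(x=2200.0, y=1500.0, cx=1500.0, cy=1500.0, f_pix=900.0)
    print(np.degrees(z), np.degrees(az))   # ~44.6 deg zenith, 0 deg azimuth
```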

Read this paper on arXiv…

S. Dev, F. Savoy, Y. Lee, et al.
Tue, 24 May 16
5/73

Comments: Proc. IS&T/SPIE Infrared Imaging Systems, 2014

A Selection of Giant Radio Sources from NVSS [GA]

http://arxiv.org/abs/1603.06895


Results of the application of pattern recognition techniques to the problem of identifying Giant Radio Sources (GRS) from the data in the NVSS catalog are presented and issues affecting the process are explored. Decision-tree pattern recognition software was applied to training set source pairs developed from known NVSS large angular size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRS for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20 arc minutes and a minimum component area of 1.87 square arc minutes at the 1.4 mJy level. The importance of comparing resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 +/- 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher ranked sources were visually screened and a table of over sixteen hundred candidates, including morphological annotation, is presented. These systems include doubles and triples, Wide-Angle Tail (WAT) and Narrow-Angle Tail (NAT), S- or Z-shaped systems, and core-jets and resolved cores. While some resolved lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.
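A rough analogue of the pipeline described above, building per-pair features, training a decision tree, and ranking candidate pairs by GRS probability, is sketched below with scikit-learn on toy data; the paper's own decision-tree software and feature set are not reproduced, and all feature names are illustrative assumptions. The quoted probability of correctly ranking a random (GRS, non-GRS) pair corresponds to the ROC AUC of such a classifier.

```python
# Hypothetical sketch of training a decision tree on NVSS-like source-pair
# features and ranking candidate pairs. Toy features and labels only; not
# the paper's classifier software or feature set.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_pairs = 5000
# Toy pair features: e.g. angular separation, component areas, flux ratio, ...
X = rng.random((n_pairs, 4))
y = (rng.random(n_pairs) < 0.01).astype(int)     # GRS pairs are rare

tree = DecisionTreeClassifier(max_depth=6, class_weight="balanced", random_state=0)
tree.fit(X, y)

scores = tree.predict_proba(X)[:, 1]             # GRS probability per pair
print("training AUC:", roc_auc_score(y, scores))
# Higher-ranked pairs would then go to visual screening, as in the paper.
ranked = np.argsort(scores)[::-1][:20]
print("top-ranked pair indices:", ranked)
```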

Read this paper on arXiv…

D. Proctor
Wed, 23 Mar 16
34/73

Comments: 20 pages of text, 6 figures, 22 pages of tables, total 55 pages. The stub for Table 6 is followed by the complete machine-readable file. To be published in The Astrophysical Journal Supplement

Computational Imaging for VLBI Image Reconstruction [IMA]

http://arxiv.org/abs/1512.01413


Very long baseline interferometry (VLBI) is a technique for imaging celestial radio emissions by simultaneously observing a source from telescopes distributed across Earth. The challenges in reconstructing images from fine angular resolution VLBI data are immense. The data is extremely sparse and noisy, thus requiring statistical image models such as those designed in the computer vision community. In this paper we present a novel Bayesian approach for VLBI image reconstruction. While other methods require careful tuning and parameter selection for different types of images, our method is robust and produces good results under different settings such as low SNR or extended emissions. The success of our method is demonstrated on realistic synthetic experiments as well as publicly available real data. We present this problem in a way that is accessible to members of the computer vision community, and provide a dataset website (vlbiimaging.csail.mit.edu) to allow for controlled comparisons across algorithms. This dataset can foster development of new methods by making VLBI easily approachable to computer vision researchers.
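The sketch below illustrates only the structure of the inverse problem: recovering an image from sparse, noisy samples of its Fourier transform (visibilities), here via L2-regularized least squares (a MAP estimate under a Gaussian prior) on a tiny toy image. The paper's patch-based Bayesian prior and robustness to different source types are not reproduced; every quantity here is an illustrative assumption.

```python
# Hypothetical sketch of the interferometric inverse problem: recover an
# image from sparse, noisy Fourier samples (visibilities) with a simple
# L2-regularised least-squares (MAP with a Gaussian prior). Toy sizes only.
import numpy as np

def forward_matrix(n, uv):
    """Rows of the DFT at the sampled (u, v) spatial frequencies."""
    yy, xx = np.mgrid[0:n, 0:n]
    coords = np.stack([xx.ravel(), yy.ravel()], axis=1) / n   # (n*n, 2)
    return np.exp(-2j * np.pi * uv @ coords.T)                # (m, n*n)

n, m = 16, 60                                    # tiny image, few baselines
rng = np.random.default_rng(1)
truth = np.zeros((n, n)); truth[6:10, 6:10] = 1.0             # toy source
uv = rng.uniform(-n / 2, n / 2, size=(m, 2))                  # sparse uv coverage
A = forward_matrix(n, uv)
vis = A @ truth.ravel() + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# MAP estimate with a Gaussian prior: minimise ||A x - vis||^2 + lam ||x||^2.
lam = 1.0
x = np.linalg.solve(A.conj().T @ A + lam * np.eye(n * n), A.conj().T @ vis)
image = x.real.reshape(n, n)
print("peak of reconstruction:", image.max())
```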

Read this paper on arXiv…

K. Bouman, M. Johnson, D. Zoran, et al.
Mon, 7 Dec 15
17/46

Comments: 10 pages, project website: this http URL