Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection [IMA]

http://arxiv.org/abs/1701.00458


We introduce Deep-HiTS, a rotation invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RF). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2,000 allowed false transient candidates per night, we are able to reduce the misclassified real transients by approximately 1/5. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope (LSST). We have made all our code and data available to the community to allow further developments and comparisons at https://github.com/guille-c/Deep-HiTS.
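The rotation invariance exploited here can be illustrated with a minimal, hedged sketch: average a classifier's scores over the four 90-degree rotations of each candidate stamp. The `model` object and its `predict` method below are placeholders for any trained binary classifier, not the authors' code.

```python
import numpy as np

def rotation_averaged_score(model, stamp):
    """Average a classifier's P(real) over the four 90-degree rotations
    of a candidate difference-image stamp (hypothetical model.predict)."""
    rotations = np.stack([np.rot90(stamp, k) for k in range(4)])
    return float(np.mean(model.predict(rotations)))

def classify_candidates(model, stamps, threshold=0.5):
    """Label each stamp as a real transient (1) or an artifact (0)."""
    scores = np.array([rotation_averaged_score(model, s) for s in stamps])
    return (scores >= threshold).astype(int), scores
```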

Read this paper on arXiv…

G. Cabrera-Vives, I. Reyes, F. Forster, et al.
Tue, 3 Jan 17
4/55

Comments: N/A

Astronomical image reconstruction with convolutional neural networks [CL]

http://arxiv.org/abs/1612.04526


State of the art methods in astronomical image reconstruction rely on the resolution of a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic or at least superlinear complexity w.r.t. the number of pixels in the image. We investigate in this work the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive task is the training step, but the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state of the art methods, in addition to being interpretable.

Read this paper on arXiv…

R. Flamary
Thu, 15 Dec 16
33/59

Comments: N/A

Constraint matrix factorization for space variant PSFs field restoration [CL]

http://arxiv.org/abs/1608.08104


Context: in large-scale spatial surveys, the Point Spread Function (PSF) varies across the instrument field of view (FOV). Local measurements of the PSFs are given by isolated star images. Yet, these estimates may not be directly usable for post-processing because of the observational noise and, potentially, aliasing. Aims: given a set of aliased and noisy star images from a telescope, we want to estimate well-resolved and noise-free PSFs at the observed star positions, in particular by exploiting the spatial correlation of the PSFs across the FOV. Contributions: we introduce RCA (Resolved Components Analysis), a noise-robust dimension reduction and super-resolution method based on matrix factorization. We propose an original way of using the PSFs' spatial correlation in the restoration process through sparsity. The introduced formalism can be applied to data sets correlated with respect to any Euclidean parametric space. Results: we tested our method on simulated monochromatic PSFs of the Euclid telescope (launch planned for 2020). The proposed method outperforms existing PSF restoration and dimension reduction methods. We show that a coupled sparsity constraint on individual PSFs and their spatial distribution yields a significant improvement of both the restored PSF shapes and the PSF subspace identification in the presence of aliasing. Perspectives: RCA can be naturally extended to account for the wavelength dependency of the PSFs.

Read this paper on arXiv…

F. Mboula, J. Starck, K. Okumura, et al.
Tue, 30 Aug 16
69/78

Comments: 33 pages

Star-galaxy Classification Using Deep Convolutional Neural Networks [IMA]

http://arxiv.org/abs/1608.04369


Most existing star-galaxy classifiers use the reduced summary information from catalogs, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks allow a machine to automatically learn the features directly from data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep convolutional neural networks (ConvNets) directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), because deep neural networks require very little manual feature engineering.

Read this paper on arXiv…

E. Kim and R. Brunner
Tue, 16 Aug 16
33/57

Comments: 13 pages, 13 figures. Submitted to MNRAS. Code available at this https URL

WAHRSIS: A Low-cost, High-resolution Whole Sky Imager With Near-Infrared Capabilities [IMA]

http://arxiv.org/abs/1605.06595


Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of the effects of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high, and their flexibility is limited. Therefore, we built a new daytime Whole Sky Imager (WSI) called Wide Angle High-Resolution Sky Imaging System. The strengths of our new design are its simplicity, low manufacturing cost and high resolution. Our imager captures the entire hemisphere in a single high-resolution picture via a digital camera using a fish-eye lens. The camera was modified to capture light across the visible as well as the near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.

Read this paper on arXiv…

S. Dev, F. Savoy, Y. Lee, et al.
Tue, 24 May 16
5/73

Comments: Proc. IS&T/SPIE Infrared Imaging Systems, 2014

A Selection of Giant Radio Sources from NVSS [GA]

http://arxiv.org/abs/1603.06895


Results of the application of pattern recognition techniques to the problem of identifying Giant Radio Sources (GRS) from the data in the NVSS catalog are presented and issues affecting the process are explored. Decision-tree pattern recognition software was applied to training set source pairs developed from known NVSS large angular size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRS for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20 arc minutes and a minimum component area of 1.87 square arc minutes at the 1.4 mJy level. The importance of comparing resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 +/- 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher ranked sources were visually screened and a table of over sixteen hundred candidates, including morphological annotation, is presented. These systems include doubles and triples, Wide-Angle Tail (WAT) and Narrow-Angle Tail (NAT), S- or Z-shaped systems, and core-jets and resolved cores. While some resolved lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.

Read this paper on arXiv…

D. Proctor
Wed, 23 Mar 16
34/73

Comments: 20 pages of text, 6 figures, 22 pages of tables, 55 pages total. The stub for Table 6 is followed by the complete machine readable file. To be published in The Astrophysical Journal Supplement

Computational Imaging for VLBI Image Reconstruction [IMA]

http://arxiv.org/abs/1512.01413


Very long baseline interferometry (VLBI) is a technique for imaging celestial radio emissions by simultaneously observing a source from telescopes distributed across Earth. The challenges in reconstructing images from fine angular resolution VLBI data are immense. The data is extremely sparse and noisy, thus requiring statistical image models such as those designed in the computer vision community. In this paper we present a novel Bayesian approach for VLBI image reconstruction. While other methods require careful tuning and parameter selection for different types of images, our method is robust and produces good results under different settings such as low SNR or extended emissions. The success of our method is demonstrated on realistic synthetic experiments as well as publicly available real data. We present this problem in a way that is accessible to members of the computer vision community, and provide a dataset website (vlbiimaging.csail.mit.edu) to allow for controlled comparisons across algorithms. This dataset can foster development of new methods by making VLBI easily approachable to computer vision researchers.

Read this paper on arXiv…

K. Bouman, M. Johnson, D. Zoran, et al.
Mon, 7 Dec 15
17/46

Comments: 10 pages, project website: this http URL

Distributed image reconstruction for very large arrays in radio astronomy [IMA]

http://arxiv.org/abs/1507.00501


Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large number of receptors (up to millions) theoretically allows unprecedentedly high imaging resolution. At the same time, the ultra-massive amount of samples makes the data transfer and computational loads (correlation and calibration) orders of magnitude too high to allow any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies which select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.

Read this paper on arXiv…

A. Ferrari, D. Mary, R. Flamary, et al.
Fri, 3 Jul 15
36/50

Comments: Sensor Array and Multichannel Signal Processing Workshop (SAM), 2014 IEEE 8th, Jun 2014, Coruna, Spain. 2014

Machine learning based data mining for Milky Way filamentary structures reconstruction [IMA]

http://arxiv.org/abs/1505.06621


We present an innovative method called FilExSeC (Filaments Extraction, Selection and Classification), a data mining tool developed to investigate the possibility of refining and optimizing the shape reconstruction of filamentary structures detected with a consolidated method based on flux derivative analysis, through the column-density maps computed from Herschel infrared Galactic Plane Survey (Hi-GAL) observations of the Galactic plane. The methodology is based on a feature extraction module followed by a machine learning model (Random Forest) dedicated to selecting features and classifying the pixels of the input images. From tests on both simulations and real observations the method appears reliable and robust with respect to the variability of shape and distribution of filaments. In the case of highly defined filament structures, the presented method is able to bridge the gaps among the detected fragments, thus improving their shape reconstruction. From a preliminary “a posteriori” analysis of derived filament physical parameters, the method appears potentially able to add a sufficient contribution to complete and refine the filament reconstruction.
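The core of the pipeline (per-pixel features fed to a Random Forest classifier) can be sketched with scikit-learn. The features below (intensity, local mean, gradient magnitude) are illustrative stand-ins, not the features used by FilExSeC.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(image):
    """Toy per-pixel features: intensity, local mean, gradient magnitude."""
    local_mean = ndimage.uniform_filter(image, size=5)
    gy, gx = np.gradient(image)
    return np.stack([image, local_mean, np.hypot(gx, gy)], axis=-1).reshape(-1, 3)

def train_pixel_classifier(images, masks):
    """Fit a Random Forest on labelled pixels (1 = filament, 0 = background)."""
    X = np.vstack([pixel_features(im) for im in images])
    y = np.concatenate([m.ravel() for m in masks])
    return RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)

def classify_pixels(clf, image):
    """Return a binary filament map for a new column-density image."""
    return clf.predict(pixel_features(image)).reshape(image.shape)
```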

Read this paper on arXiv…

G. Riccio, S. Cavuoti, E. Schisano, et al.
Tue, 26 May 15
1/67

Comments: Accepted by peer reviewed WIRN 2015 Conference, to appear on Smart Innovation, Systems and Technology, Springer, ISSN 2190-3018, 9 pages, 4 figures

A Sparse Gaussian Process Framework for Photometric Redshift Estimation [IMA]

http://arxiv.org/abs/1505.05489


Accurate photometric redshifts are a lynchpin for many future experiments to pin down the cosmological model and for studies of galaxy evolution. In this study, a novel sparse regression framework for photometric redshift estimation is presented. Data from a simulated survey was used to train and test the proposed models. We show that approaches which include careful data preparation and model design offer a significant improvement over several competing machine learning algorithms. Standard implementations of most regression algorithms take the minimization of the sum of squared errors as their objective. For redshift inference, however, this induces a bias in the posterior mean of the output distribution, which can be problematic. In this paper we optimize to directly target minimizing $\Delta z = (z_\textrm{s} - z_\textrm{p})/(1+z_\textrm{s})$ and address the bias problem via a distribution-based weighting scheme, incorporated as part of the optimization objective. The results are compared with other machine learning algorithms in the field such as Artificial Neural Networks (ANN), Gaussian Processes (GPs) and sparse GPs. The proposed framework reaches a mean absolute $\Delta z = 0.002(1+z_\textrm{s})$, with a maximum absolute error of 0.0432, over the redshift range of $0.2 \le z_\textrm{s} \le 2$, a factor of three improvement over standard ANNs used in the literature. We also investigate how the relative size of the training set affects the photometric redshift accuracy. We find that a training set of $>$30 per cent of the total sample size provides little additional constraint on the photometric redshifts, and note that our GP formalism strongly outperforms ANNs in the sparse data regime.
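The quantity being optimized, $\Delta z = (z_\textrm{s} - z_\textrm{p})/(1+z_\textrm{s})$, and the summary statistics quoted above are straightforward to compute; a small sketch with hypothetical spectroscopic and photometric redshift arrays:

```python
import numpy as np

def delta_z(z_spec, z_phot):
    """Normalized photo-z residual (z_s - z_p) / (1 + z_s)."""
    return (z_spec - z_phot) / (1.0 + z_spec)

def photo_z_metrics(z_spec, z_phot):
    """Mean/max absolute normalized residual and overall bias."""
    dz = delta_z(z_spec, z_phot)
    return {"mean_abs_dz": np.mean(np.abs(dz)),
            "max_abs_dz": np.max(np.abs(dz)),
            "bias": np.mean(dz)}
```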

Read this paper on arXiv…

I. Almosallam, S. Lindsay, M. Jarvis, et al.
Thu, 21 May 15
15/59

Comments: N/A

Meta learning of bounds on the Bayes classifier error [CL]

http://arxiv.org/abs/1504.07116


Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.

Read this paper on arXiv…

K. Moon, V. Delouille and A. Hero
Tue, 28 Apr 15
36/70

Comments: 6 pages, 3 figures

A spectral optical flow method for determining velocities from digital imagery [CL]

http://arxiv.org/abs/1504.04660


We present a method for determining surface flows from solar images based upon optical flow techniques. We apply the method to sets of images obtained by a variety of solar imagers to assess its performance. The opflow3d procedure is shown to extract accurate velocity estimates when provided perfect test data and quickly generates results consistent with completely distinct methods when applied on global scales. We also validate it in detail by comparing it to an established method when applied to high-resolution datasets and find that it provides comparable results without the need to tune, filter or otherwise preprocess the images before its application.

Read this paper on arXiv…

N. Hurlburt and S. Jaffey
Tue, 21 Apr 15
57/69

Comments: 12 pages, 5 figures. Submitted to Earth Science Informatics

Image patch analysis of sunspots and active regions. II. Clustering via dictionary learning [SSA]

http://arxiv.org/abs/1504.02762


Separating active regions that are quiet from potentially eruptive ones is a key issue in Space Weather applications. Traditional classification schemes such as Mount Wilson and McIntosh have been effective in relating an active region’s large scale magnetic configuration to its ability to produce eruptive events. However, their qualitative nature prevents, for example, systematic studies of an active region’s evolution. We introduce a new clustering of active regions that is based on the local geometry observed in Line of Sight magnetogram and continuum images. We use a reduced-dimension representation of an active region that is obtained by factoring (i.e. applying dictionary learning to) the corresponding data matrix composed of local image patches. Two factorizations can be compared via the definition of appropriate metrics on the resulting factors. The distances obtained from these metrics are then used to cluster the active regions. We find that these metrics result in natural clusterings of active regions. The clusterings are related to large scale descriptors of an active region such as its size, its local magnetic field distribution, and its complexity as measured by the Mount Wilson classification scheme. We also find that including data focused on the neutral line of an active region can result in an increased correspondence between the Mount Wilson classifications and our clustering results. We provide some recommendations for which metrics and matrix factorization techniques to use to study small, large, complex, or simple active regions.

Read this paper on arXiv…

K. Moon, V. Delouille, J. Li, et al.
Mon, 13 Apr 15
16/54

Comments: 31 pages, 15 figures

Linearly Supporting Feature Extraction For Automated Estimation Of Stellar Atmospheric Parameters [SSA]

http://arxiv.org/abs/1504.02164


We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters $T_{eff}$, log$~g$, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method Least Absolute Shrinkage and Selection Operator (LARS)$_{bs}$; third, estimate the atmospheric parameters $T_{eff}$, log$~g$, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz’s NEWODF models. On real spectra, we extracted 23 features to estimate $T_{eff}$, 62 features for log$~g$, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log$~T_{eff}$ (83 K for $T_{eff}$), 0.2345 dex for log$~g$, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log$~T_{eff}$ (32 K for $T_{eff}$), 0.0337 dex for log$~g$, and 0.0268 dex for [Fe/H].
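A rough sketch of the three-step scheme (decompose, select features, fit a linear model) is given below, assuming PyWavelets and scikit-learn; it substitutes a plain discrete wavelet decomposition and the standard Lasso selector for the paper's wavelet packet transform and LARS-based procedure.

```python
import numpy as np
import pywt
from sklearn.linear_model import Lasso, LinearRegression

def wavelet_features(flux, wavelet="db4", level=4):
    """Step 1: decompose a spectrum and concatenate all coefficients."""
    return np.concatenate(pywt.wavedec(flux, wavelet, level=level))

def fit_parameter(spectra, values, alpha=1e-3):
    """Steps 2-3: select coefficients with Lasso, then fit a linear model.
    `values` holds one atmospheric parameter per spectrum (e.g. log Teff)."""
    X = np.vstack([wavelet_features(s) for s in spectra])
    support = np.flatnonzero(Lasso(alpha=alpha, max_iter=10000).fit(X, values).coef_)
    model = LinearRegression().fit(X[:, support], values)
    return support, model

def predict_parameter(support, model, spectrum):
    """Estimate the parameter for a new spectrum from the selected features."""
    return model.predict(wavelet_features(spectrum)[support][None, :])[0]
```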

Read this paper on arXiv…

X. Li, Y. Lu, G. Comte, et al.
Fri, 10 Apr 15
68/68

Comments: 21 pages, 7 figures, 8 tables, The Astrophysical Journal Supplement Series (accepted for publication)

Rotation-invariant convolutional neural networks for galaxy morphology prediction [IMA]

http://arxiv.org/abs/1503.07077


Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey (SDSS) have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time-consuming and does not scale to large ($\gtrsim10^4$) numbers of images.
Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images.
We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project.
For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy ($> 99\%$) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts’ workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the LSST.

Read this paper on arXiv…

S. Dieleman, K. Willett and J. Dambre
Wed, 25 Mar 15
30/38

Comments: Accepted for publication in MNRAS. 20 pages, 14 figures

Towards radio astronomical imaging using an arbitrary basis [IMA]

http://arxiv.org/abs/1503.04338


The new generation of radio telescopes, such as the Square Kilometer Array (SKA), requires dramatic advances in computer hardware and software in order to process the large amounts of produced data efficiently. In this document, we explore a new approach to wide-field imaging. By generalizing the image reconstruction, which is performed by an inverse Fourier transform, to arbitrary transformations, we gain enormous new possibilities. In particular, we outline an approach that might allow obtaining a sky image of size P times Q in (optimal) O(PQ) time. This could be a step in the direction of real-time, wide-field sky imaging for future telescopes.

Read this paper on arXiv…

M. Petschow
Tue, 17 Mar 15
8/79

Comments: N/A

DESAT: an SSW tool for SDO/AIA image de-saturation [IMA]

http://arxiv.org/abs/1503.02302


Saturation affects a significant fraction of the images recorded by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory. This paper describes a computational method and a technological pipeline for the de-saturation of such images, based on several mathematical ingredients such as Expectation Maximization, image correlation and interpolation. An analysis of the computational properties and demands of the pipeline, together with an assessment of its reliability, is performed against a set of data recorded during the February 25, 2014 flaring event.

Read this paper on arXiv…

R. Schwartz, G. Torre, A. Massone, et. al.
Tue, 10 Mar 15
3/77

Comments: N/A

Montblanc: GPU accelerated Radio Interferometer Measurement Equations in support of Bayesian Inference for Radio Observations [CL]

http://arxiv.org/abs/1501.07719


We present Montblanc, a GPU implementation of the Radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. Chi-squared values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model.
As most of the elements of the RIME and chi-squared calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple chi-squared values. Only modified model parameters are transferred to the GPU between each iteration.
We implemented Montblanc as a Python package based upon NVIDIA’s CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc’s RIME implementation is performant: on an NVIDIA K40, it is approximately 250 times faster than MeqTrees on a dual hexacore Intel E5-2620v2 CPU. Compared to the OSKAR simulator’s GPU-implemented RIME components, it is 7.7 and 12 times faster on the same K40 for single and double precision floating point respectively. However, OSKAR’s RIME implementation is more general than Montblanc’s BIRO-tailored RIME.
Theoretical analysis of Montblanc’s dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in L1 and L2 cache.
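The chi-squared driving the BIRO sampler is a weighted sum of squared residuals between observed and model complex visibilities; a minimal NumPy sketch of that likelihood term (array shapes are illustrative, not Montblanc's internal layout):

```python
import numpy as np

def visibility_chi_squared(vis_obs, vis_model, sigma):
    """Chi-squared between observed and model complex visibilities,
    given per-visibility noise standard deviations `sigma`."""
    residual = vis_obs - vis_model
    return np.sum((residual.real**2 + residual.imag**2) / sigma**2)

def log_likelihood(vis_obs, vis_model, sigma):
    """Gaussian log-likelihood (up to an additive constant) of a sky model."""
    return -0.5 * visibility_chi_squared(vis_obs, vis_model, sigma)
```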

Read this paper on arXiv…

S. Perkins, P. Marais, J. Zwart, et al.
Mon, 2 Feb 15
21/49

Comments: Submitted to Astronomy and Computing (this http URL). The code is available online at this https URL. 26 pages long, with 13 figures, 6 tables and 3 algorithms

Non-parametric PSF estimation from celestial transit solar images using blind deconvolution [CL]

http://arxiv.org/abs/1412.6279


Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. Optics are never perfect and the non-ideal path through the telescope is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Other sources of noise (read-out, photon) also contaminate the image acquisition process. The problem of estimating both the PSF filter and a denoised image is called blind deconvolution and is ill-posed.
Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, it does not assume a parametric model of the PSF and can thus be applied to any telescope.
Methods: Our scheme uses a wavelet analysis image prior model and weak assumptions on the PSF filter’s response. We use the observations from a celestial body transit where such an object can be assumed to be a black disk. Such constraints limit the interchangeability between the filter and the image in the blind deconvolution problem.
Results: Our method is applied on synthetic and experimental data. We compute the PSF for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA with the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a quality similar to that of parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.
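The acquisition model underlying the scheme, an ideal image convolved with the PSF plus photon and read-out noise, can be written in a few lines. This is only a hedged sketch of the forward model; the blind deconvolution solver itself (the regularized inverse problem) is not shown.

```python
import numpy as np
from scipy.signal import fftconvolve

def observe(ideal_image, psf, read_noise_sigma=1.0, rng=None):
    """Forward model: PSF convolution plus photon (Poisson) and
    read-out (Gaussian) noise."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = fftconvolve(ideal_image, psf, mode="same")
    blurred = np.clip(blurred, 0, None)            # Poisson mean must be non-negative
    noisy = rng.poisson(blurred).astype(float)     # photon noise
    noisy += rng.normal(0.0, read_noise_sigma, size=noisy.shape)  # read-out noise
    return noisy
```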

Read this paper on arXiv…

A. Gonzalez, V. Delouille and L. Jacques
Mon, 19 Jan 15
4/50

Comments: 19 pages, 23 figures

Spectral classification using convolutional neural networks [CL]

http://arxiv.org/abs/1412.8341


There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60,000 spectra yielded a success rate of nearly 95%. This thesis conclusively demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.

Read this paper on arXiv…

P. Hala
Tue, 30 Dec 14
81/83

Comments: 71 pages, 50 figures, Master’s thesis, Masaryk University

High-level numerical simulations of noise in CCD and CMOS photosensors: review and tutorial [IMA]

http://arxiv.org/abs/1412.4031


In many applications, such as development and testing of image processing algorithms, it is often necessary to simulate images containing realistic noise from solid-state photosensors. A high-level model of CCD and CMOS photosensors based on a literature review is formulated in this paper. The model includes photo-response non-uniformity, photon shot noise, dark current Fixed Pattern Noise, dark current shot noise, offset Fixed Pattern Noise, source follower noise, sense node reset noise, and quantisation noise. The model also includes voltage-to-voltage, voltage-to-electrons, and analogue-to-digital converter non-linearities. The formulated model can be used to create synthetic images for testing and validation of image processing algorithms in the presence of realistic image noise. An example of the simulated CMOS photosensor and a comparison with a custom-made CMOS hardware sensor are presented. Procedures for characterisation of both light and dark noise are described. Experimental results that confirm the validity of the numerical model are provided. The paper addresses the issue of the lack of comprehensive high-level photosensor models that enable engineers to simulate realistic effects of noise on the images obtained from solid-state photosensors.
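A stripped-down version of such a signal chain (photon shot noise, dark current with its shot noise, read noise, full-well clipping and quantisation) is sketched below. It is a toy illustration under simplifying assumptions, not the full model of the paper: PRNU, fixed pattern noise, source follower noise and the converter non-linearities are omitted.

```python
import numpy as np

def simulate_sensor(photon_flux, exposure_s=0.01, dark_current_e_per_s=5.0,
                    read_noise_e=3.0, full_well_e=20000, adc_bits=12, rng=None):
    """Toy CCD/CMOS chain: expected photo-electron rate -> digital numbers."""
    rng = np.random.default_rng() if rng is None else rng
    # Photon shot noise: Poisson around the expected photo-electron count.
    electrons = rng.poisson(photon_flux * exposure_s).astype(float)
    # Dark current and its shot noise.
    electrons += rng.poisson(dark_current_e_per_s * exposure_s, size=electrons.shape)
    # Read-out noise lumped into a single Gaussian term.
    electrons += rng.normal(0.0, read_noise_e, size=electrons.shape)
    # Full-well clipping and quantisation by an ideal linear ADC.
    electrons = np.clip(electrons, 0, full_well_e)
    gain = (2**adc_bits - 1) / full_well_e          # DN per electron
    return np.floor(electrons * gain).astype(np.uint16)
```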

Read this paper on arXiv…

M. Konnik and J. Welsh
Mon, 15 Dec 14
45/53

Comments: N/A

Super-resolution method using sparse regularization for point-spread function recovery [CL]

http://arxiv.org/abs/1410.7679


In large-scale spatial surveys, such as the forthcoming ESA Euclid mission, images may be undersampled due to the optical sensor sizes. Therefore, one may consider using a super-resolution (SR) method to recover aliased frequencies prior to further analysis. This is particularly relevant for point-source images, which provide direct measurements of the instrument point-spread function (PSF). We introduce SPRITE, SParse Recovery of InsTrumental rEsponse, an SR algorithm using a sparse analysis prior. We show that such a prior provides significant improvements over existing methods, especially on low SNR PSFs.

Read this paper on arXiv…

F. Mboula, J. Starck, S. Ronayette, et al.
Wed, 29 Oct 14
60/81

Comments: N/A

Combining human and machine learning for morphological analysis of galaxy images [IMA]

http://arxiv.org/abs/1409.7935


The increasing importance of digital sky surveys collecting many millions of galaxy images has reinforced the need for robust methods that can perform morphological analysis of large galaxy image databases. Citizen science initiatives such as Galaxy Zoo showed that large datasets of galaxy images can be analyzed effectively by non-scientist volunteers, but since databases generated by robotic telescopes grow much faster than the processing power of any group of citizen scientists, it is clear that computer analysis is required. Here we propose to use citizen science data for training machine learning systems, and show experimental results demonstrating that machine learning systems can be trained with citizen science data. Our findings show that the performance of machine learning depends on the quality of the data, which can be improved by using samples that have a high degree of agreement between the citizen scientists. The source code of the method is publicly available.

Read this paper on arXiv…

E. Kuminski, J. George, J. Wallin, et al.
Tue, 30 Sep 14
6/81

Comments: PASP, accepted

Machine Learning Classification of SDSS Transient Survey Images [IMA]

http://arxiv.org/abs/1407.4118


We show that multiple machine learning algorithms can match human performance in classifying transient imaging data from the SDSS supernova survey into real objects and artefacts. This is the first step in any transient science pipeline and is currently still done by humans, but future surveys such as LSST will necessitate fully machine-enabled solutions. Using features trained from eigenimage analysis (PCA) of single-epoch g, r, i-difference images we can reach a completeness (recall) of 95%, while only incorrectly classifying 18% of artefacts as real objects, corresponding to a precision (purity) of 85%. In general the k-nearest neighbour and the SkyNet artificial neural net algorithms performed most robustly compared to other methods such as naive Bayes and kernel SVM. Our results show that PCA-based machine learning can match human success levels and can naturally be extended by including multiple epochs of data, transient colours and host galaxy information which should allow for significant further improvements, especially at low signal to noise.
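The pipeline described above (eigenimage/PCA features from difference-image stamps, then a k-nearest-neighbour classifier scored by completeness and purity) can be reproduced in outline with scikit-learn. The arrays `stamps` (N×H×W difference-image cutouts) and `labels` (1 = real, 0 = artefact) are hypothetical inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

def train_and_score(stamps, labels, n_components=20, k=5):
    """PCA (eigenimage) features + kNN, scored by completeness and purity."""
    X = stamps.reshape(len(stamps), -1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    pca = PCA(n_components=n_components).fit(X_tr)
    knn = KNeighborsClassifier(n_neighbors=k).fit(pca.transform(X_tr), y_tr)
    y_pred = knn.predict(pca.transform(X_te))
    return {"completeness": recall_score(y_te, y_pred),   # recall on real objects
            "purity": precision_score(y_te, y_pred)}      # precision on real objects
```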

Read this paper on arXiv…

L. Buisson, N. Sivanandam, B. Bassett, et al.
Thu, 17 Jul 14
5/66

Comments: 11 pages, 8 figures

PAINTER: a spatio-spectral image reconstruction algorithm for optical interferometry [IMA]

http://arxiv.org/abs/1407.1885


Astronomical optical interferometers sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid perturbations caused by atmospheric turbulence, the phases of the complex Fourier samples (visibilities) cannot be directly exploited. Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic optical interferometric instruments are now paving the way to multiwavelength imaging. This paper is devoted to the derivation of a spatio-spectral (3D) image reconstruction algorithm, coined PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also differential phases, which helps to better constrain the polychromatic reconstruction. Simulations on synthetic data illustrate the efficiency of the algorithm and in particular the relevance of injecting a differential phases model in the reconstruction.

Read this paper on arXiv…

A. Schutz, A. Ferrari, D. Mary, et al.
Wed, 9 Jul 14
64/74

Comments: 12 pages, 10 figures

Towards building a Crowd-Sourced Sky Map [CL]

http://arxiv.org/abs/1406.1528


We describe a system that builds a high dynamic-range and wide-angle image of the night sky by combining a large set of input images. The method makes use of pixel-rank information in the individual input images to improve a “consensus” pixel rank in the combined image. Because it only makes use of ranks and the complexity of the algorithm is linear in the number of images, the method is useful for large sets of uncalibrated images that might have undergone unknown non-linear tone mapping transformations for visualization or aesthetic reasons. We apply the method to images of the night sky (of unknown provenance) discovered on the Web. The method permits discovery of astronomical objects or features that are not visible in any of the input images taken individually. More importantly, however, it permits scientific exploitation of a huge source of astronomical images that would not be available to astronomical research without our automatic system.
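The central idea, replacing pixel values by their rank within each image so that unknown monotonic tone mappings cancel before combining, can be sketched in a few lines, assuming the input images have already been registered to a common pixel grid (the paper's full system also handles alignment and an iterative consensus, which are omitted here).

```python
import numpy as np
from scipy.stats import rankdata

def consensus_rank_image(aligned_images):
    """Combine registered images of the same field via per-image pixel ranks.

    Ranks are invariant to any monotonic tone mapping applied to an
    individual image, so averaging the normalised ranks gives a
    tone-mapping-free consensus image.
    """
    shape = aligned_images[0].shape
    rank_sum = np.zeros(shape, dtype=float)
    for img in aligned_images:
        ranks = rankdata(img.ravel()).reshape(shape)   # ranks 1 .. n_pixels
        rank_sum += ranks / ranks.size                 # normalise to (0, 1]
    return rank_sum / len(aligned_images)
```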

Read this paper on arXiv…

D. Lang, D. Hogg and B. Scholkopf
Mon, 9 Jun 14
19/40

Comments: Appeared at AI-STATS 2014

Sparsity averaging for radio-interferometric imaging [IMA]

http://arxiv.org/abs/1402.2335


We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.

Read this paper on arXiv…

R. Carrillo, J. McEwen and Y. Wiaux
Wed, 12 Feb 14
34/67

Reconstruction of Complex-Valued Fractional Brownian Motion Fields Based on Compressive Sampling and Its Application to PSF Interpolation in Weak Lensing Survey [CL]

http://arxiv.org/abs/1311.0124


A new reconstruction method for complex-valued fractional Brownian motion (CV-fBm) fields based on Compressive Sampling (CS) is proposed. The decay of the Fourier coefficient magnitudes of fBm signals/fields indicates that fBms are compressible. Therefore, a small number of samples is sufficient for a CS based method to reconstruct the full field. The effectiveness of the proposed method is shown by simulating, randomly sampling, and reconstructing CV-fBm fields. Performance evaluation shows advantages of the proposed method over boxcar filtering and thin plate methods. It is also found that the reconstruction performance depends on both the fBm’s Hurst parameter and the number of samples, which is in fact consistent with the CS reconstruction theory. In contrast to other fBm or fractal interpolation methods, the proposed CS based method does not require knowledge of the fractal parameters in the reconstruction process; the inherent sparsity is sufficient for the CS to do the reconstruction. The potential applicability of the proposed method in weak gravitational lensing surveys, particularly for interpolating a non-smooth PSF (Point Spread Function) distribution representing distortion by a turbulent field, is also discussed.
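The compressive-sampling argument above (the field is compressible in the Fourier domain, so it can be recovered from a few random spatial samples) can be illustrated with a basic iterative soft-thresholding loop. This is a hedged toy sketch of the generic recovery idea, not the reconstruction algorithm of the paper.

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Shrink complex coefficient magnitudes by tau."""
    mag = np.abs(coeffs)
    return coeffs * np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0)

def cs_reconstruct(samples, mask, shape, tau=0.01, n_iter=200):
    """Recover a Fourier-compressible complex field from sparse samples.

    `mask` is a boolean array of shape `shape`; `samples` holds the field
    values at the True entries of `mask`. Each iteration enforces the
    measured pixels, then shrinks small Fourier coefficients.
    """
    x = np.zeros(shape, dtype=complex)
    for _ in range(n_iter):
        x[mask] = samples                          # data consistency
        coeffs = np.fft.fft2(x)
        coeffs = soft_threshold(coeffs, tau * np.abs(coeffs).max())
        x = np.fft.ifft2(coeffs)
    x[mask] = samples
    return x
```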

Read this paper on arXiv…

Tue, 5 Nov 13
29/73

A Parallel Compressive Imaging Architecture for One-Shot Acquisition [CL]

http://arxiv.org/abs/1311.0646


A limitation of many compressive imaging architectures lies in the sequential nature of the sensing process, which leads to long sensing times. In this paper we present a novel architecture that uses fewer detectors than the number of reconstructed pixels and is able to acquire the image in a single acquisition. This paves the way for the development of video architectures that acquire several frames per second. We specifically address the diffraction problem, showing that the deconvolution normally used to recover diffraction blur can be replaced by convolution of the sensing matrix, and how measurements taken with a 0/1 physical sensing matrix can be converted to a -1/1 compressive sensing matrix without any extra acquisitions. Simulations of our architecture show that the image quality is comparable to that of a classic Compressive Imaging camera, whereas the proposed architecture avoids long acquisition times due to sequential sensing. This one-shot procedure also makes it possible to employ a fixed sensing matrix instead of a complex device such as a Digital Micro Mirror array or Spatial Light Modulator. It also enables imaging at bandwidths where these are not efficient.

Read this paper on arXiv…

Tue, 5 Nov 13
68/73

Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets


The amount of collected data in many scientific fields is increasing, all of them requiring a common task: extracting knowledge from massive, multi-parametric data sets as rapidly and efficiently as possible. This is especially true in astronomy, where synoptic sky surveys are enabling new research frontiers in time domain astronomy and posing several new object classification challenges in multi-dimensional spaces; given the high number of parameters available for each object, feature selection is quickly becoming a crucial task in analyzing astronomical data sets. Using data sets extracted from the ongoing Catalina Real-Time Transient Survey (CRTS) and the Kepler Mission, we illustrate a variety of feature selection strategies used to identify the subsets that give the most information, and the results achieved by applying these techniques to three major astronomical problems.

Read this paper on arXiv…

Date added: Wed, 9 Oct 13

Singular Value Decomposition of Images from Scanned Photographic Plates


We want to approximate the m×n image A from scanned astronomical photographic plates (from the Sofia Sky Archive Data Center) using far fewer entries than in the original matrix. By truncating to rank k we remove the redundant information or noise, and the truncation acts as a Wiener filter when k < m or k < n. With this approximation, a compression ratio of more than 98% is obtained for an astronomical plate image without loss of image detail. The SVD of images from scanned photographic plates (SPP) and its possible use for image compression are considered.
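The rank-k truncation described above takes only a few NumPy lines; a minimal sketch, with the compression ratio estimated from the storage needed by the truncated factors:

```python
import numpy as np

def svd_approximation(A, k):
    """Best rank-k approximation of the m x n image A (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    m, n = A.shape
    stored = k * (m + n + 1)                 # entries kept in U_k, s_k, V_k
    compression = 1.0 - stored / (m * n)     # > 0.98 for small enough k
    return A_k, compression
```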

Read this paper on arXiv…

Date added: Tue, 8 Oct 13