Radio interferometry

Motivated by my work on LensClean, especially at Jodrell Bank, I became very interested in radio interferometry in general, and in its technical (that is, mathematical) details in particular. Working at JIVE was a great opportunity to learn a lot about VLBI and to do some theoretical and practical work in this field.

Subjects of interest include:

  1. noise properties of CLEAN

    In this project I simulated CLEAN deconvolutions of realistic observations of extended sources in order to investigate how the errors (statistical and systematic) are distributed in Fourier space. I confirm the expectation that the errors are largest in gaps in the uv coverage (and outside it) and that they grow rapidly with increasing size of the gaps.
    Quantitatively, I find that the statistical flux error made when integrating over extended sources can be estimated from the noise in the dirty map and the number of independent beams over the area of the source. This estimate is quite accurate for natural weighting but becomes too pessimistic for uniform weighting. Interestingly, the uncertainties of direct model fits of extended sources are very similar to the errors of integrating over a CLEAN map. Since the former can be calculated analytically, we also have a handle on the latter.
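    As an illustration, here is a minimal Python sketch (all numbers purely hypothetical) of this error estimate: the map noise is scaled with the square root of the number of independent beams covering the source.

      import numpy as np

      def integrated_flux_error(sigma_map, source_area, beam_area):
          # sigma_map:   rms noise in the map (Jy/beam)
          # source_area: area integrated over (e.g. arcsec^2)
          # beam_area:   effective area of the synthesised beam (same units)
          n_beams = source_area / beam_area
          return sigma_map * np.sqrt(n_beams)

      # Example: 50 uJy/beam noise, 10 arcsec^2 source, 0.5 arcsec^2 beam
      print(integrated_flux_error(50e-6, 10.0, 0.5))   # ~2.2e-4 Jy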

  2. bandwidth smearing

    Many radio telescopes have serious limitations on the number of frequency channels; this holds in particular for the old correlator of the VLA and thus for all (non-E) VLA observations in the archive. It means that the usable field of view is limited by bandwidth smearing. A radial deconvolution can correct for this if the bandpass functions of all telescopes are (i) accurately known and (ii) all equal. Unfortunately these assumptions are not valid. I developed a method to solve this problem by fitting the a priori unknown bandpass functions of all telescopes to the observed data. These functions are in turn used to correct for the bandwidth smearing in the process of source-model fitting. In an iterative process we can thus solve for both the source structure and the bandpasses and in this way improve the image quality significantly.
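    A heavily simplified Python sketch of the alternating structure of this scheme is given below. It fits per-channel complex bandpasses and a constant point-source flux to toy data by alternating least squares; the actual bandwidth-smearing correction and a realistic source model are omitted, so all quantities here are illustrative assumptions rather than the real implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      n_tel, n_chan = 5, 16

      # Toy truth: a complex bandpass per telescope, constant point source
      bp_true = ((1.0 + 0.3 * rng.standard_normal((n_tel, n_chan)))
                 * np.exp(2j * np.pi * rng.random((n_tel, 1))))
      flux_true = 2.0

      # Visibilities V[i, j, chan] = b_i conj(b_j) * S, plus noise
      vis = bp_true[:, None, :] * np.conj(bp_true[None, :, :]) * flux_true
      vis += 0.02 * (rng.standard_normal(vis.shape)
                     + 1j * rng.standard_normal(vis.shape))

      bp = np.ones((n_tel, n_chan), dtype=complex)  # start from flat bandpasses
      flux = 1.0
      mask = ~np.eye(n_tel, dtype=bool)             # ignore autocorrelations
      for _ in range(50):
          # (i) bandpass step: per telescope, linear LSQ given all the others
          for k in range(n_tel):
              num = np.sum(vis[k][mask[k]] * bp[mask[k]], axis=0)
              den = flux * np.sum(np.abs(bp[mask[k]]) ** 2, axis=0)
              bp[k] = num / den
          # (ii) source step: LSQ flux given the current bandpass estimates
          g = (bp[:, None, :] * np.conj(bp[None, :, :]))[mask]
          flux = np.real(np.sum(np.conj(g) * vis[mask])
                         / np.sum(np.abs(g) ** 2))

      print(flux)   # converges, up to the usual gain/flux degeneracy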

  3. low-frequency VLBI

    I conducted the first serious low-frequency VLBI experiment to observe the gravitational lens B0218+357. The main aim was to image the structures in the Einstein ring with high resolution (for lens-modelling purposes) and to study propagation effects. The analysis is not finished yet, because the (self-)calibration is made particularly difficult by the structure of the lensed source, which does not show compact components at low frequencies. In addition, the same data were used as input for the following project:

  4. VLBI wide-field mapping

    Using the data from my project described above, a group including Emil Lenc mapped, for the first time, sources over the whole primary beam of the VLBA telescopes at 90 cm wavelength. An important result is a robust estimate of the density of calibrator sources for long-baseline LOFAR observations. In addition we showed that the ionosphere can (at least under good conditions) be calibrated even for very long baselines.

    As an extension of this project, in which we only imaged small areas around sources known from low-resolution surveys, I developed a new method to image the complete primary beam with an efficiency that is increased by several orders of magnitude. This makes it possible for the first time to extract the information for large fields of view from the data, which was far too expensive with standard methods. Similar methods are essential for the success of the international LOFAR and eventually the SKA.

  5. Fringe-fitting techniques for low-frequency interferometry (LOFAR)

    The long baselines of LOFAR require new techniques to correct for clock offsets, ionospheric delays and Faraday rotation between the stations. I started generalising the VLBI technique of fringe-fitting to the needs of LOFAR. The main challenge is that we have to fit coherently over many subbands, spanning large frequency ranges (e.g. 30-80 MHz). Over this range we have to distinguish explicitly between non-dispersive delays and rates (clock offsets, errors in station or source positions), whose phases scale linearly with frequency, and dispersive effects due to the ionosphere, whose phases scale with the inverse of the frequency (see the sketch at the end of this item). Concerning the latter, it is even necessary to go one order further and include Faraday rotation directly in the fringe-fitting.
    Early versions of these algorithms have already been used to find the first international LOFAR fringes, to confirm fringes to Tautenburg, and to find the first fringes between and to three German stations. This work is essential for the further development of long baselines in LOFAR.
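    To show how the different terms separate, here is a minimal Python sketch with purely hypothetical numbers: a non-dispersive delay produces a phase proportional to the frequency, while the dispersive ionospheric term produces a phase proportional to its inverse, so both can be distinguished by a fit across the band. Real fringe-fitting has to work coherently with wrapped, noisy phases, and additionally with rates and Faraday rotation, all of which this sketch ignores.

      import numpy as np

      # Subband frequencies across the LOFAR LBA range (assumed layout)
      nu = np.linspace(30e6, 80e6, 244)        # Hz
      nu0 = 55e6                               # reference frequency

      # Hypothetical "true" parameters of one station w.r.t. a reference
      tau_true = 50e-9       # non-dispersive delay (clock etc.), seconds
      kappa_true = 2.0e8     # dispersive coefficient: phase = kappa / nu
      phi0_true = 0.7        # constant phase offset, radians

      rng = np.random.default_rng(2)
      phase = (phi0_true + 2 * np.pi * tau_true * (nu - nu0)
               + kappa_true / nu + 0.05 * rng.standard_normal(nu.size))

      # Linear least squares for [phi0, tau, kappa]: the two delay-like
      # terms separate because they scale as nu and 1/nu respectively
      A = np.column_stack([np.ones_like(nu),
                           2 * np.pi * (nu - nu0), 1.0 / nu])
      phi0, tau, kappa = np.linalg.lstsq(A, phase, rcond=None)[0]
      print(tau * 1e9, kappa)                  # close to 50 ns and 2e8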

  6. imaging of time-variable sources, wide-band imaging and multi-frequency synthesis

    As part of the preparations for using LOFAR for lens searches, I developed a difference-imaging technique for radio interferometry. Simply producing images for the different epochs and subtracting them afterwards is not a good approach, because the differences will often be dominated by deconvolution artifacts. Mathematically speaking, the constant part of the emission contaminates the variable part in the deconvolution if the uv coverage is not exactly the same in all epochs.
    Similarly, the imaging of the constant part is seriously affected by the presence of variable components, a well-known problem in interferometry. My new multi-channel CLEAN approach solves both problems at the same time, by effectively applying the CLEAN regularisation (a sparseness constraint) to independent sum and difference output channels directly (see the sketch at the end of this item). In this way the artifacts can be reduced dramatically.
    The same approach is also useful to image wide-band data for sources with spectral-index variation over the field. In this case the output channels can be polynomial coefficients of an expansion in wavelength. Such methods are desperately needed for the new wide-band arrays EVLA, e-MERLIN, ATCA-CABB and others. My approach is slightly different from classical ideas and should use the available information more effectively.
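    The following toy one-dimensional Python sketch shows the core idea for two epochs: the CLEAN peak search is done jointly on both residuals, the components are accumulated in separate sum and difference channels, and the appropriate dirty beam of each epoch is subtracted from its residual. This is a deliberately simplified stand-in for the real algorithm; in the wide-band case the two channels would be replaced by the polynomial coefficients mentioned above, with the same joint search.

      import numpy as np

      def multichannel_clean(dirty1, dirty2, beam1, beam2,
                             gain=0.1, n_iter=500):
          # Joint 1-D CLEAN of two epochs with different dirty beams.
          # Model channels: sum S = (I1+I2)/2, difference D = (I2-I1)/2,
          # so the epoch images are I1 = S - D and I2 = S + D.
          n = dirty1.size
          r1, r2 = dirty1.copy(), dirty2.copy()
          S, D = np.zeros(n), np.zeros(n)
          half = beam1.size // 2    # beams centred, odd length, peak = 1
          for _ in range(n_iter):
              p = int(np.argmax(r1**2 + r2**2))   # joint peak search
              s_hat = 0.5 * (r1[p] + r2[p])       # per-pixel LSQ estimates
              d_hat = 0.5 * (r2[p] - r1[p])
              S[p] += gain * s_hat
              D[p] += gain * d_hat
              # subtract the shifted dirty beams from both residuals
              lo, hi = max(0, p - half), min(n, p + half + 1)
              b1 = beam1[half - (p - lo): half + (hi - p)]
              b2 = beam2[half - (p - lo): half + (hi - p)]
              r1[lo:hi] -= gain * (s_hat - d_hat) * b1
              r2[lo:hi] -= gain * (s_hat + d_hat) * b2
          return S, D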

  7. new deconvolution methods in interferometry

    Even 35 years after its invention, Högbom's "CLEAN" is still one of the most successful deconvolution algorithms (if not the most successful one) in radio interferometry. In spite of this, we should not forget that we are still lacking a satisfactory analysis of CLEAN's properties. With CLEAN we have a simple method that solves the deconvolution problem, but we do not know in which way exactly. New approaches of sparse reconstruction that have emerged over the last few years in signal processing offer a promising alternative. The solution they produce is very similar to that of CLEAN, but they define exactly which problem they are solving. This opens the route to substantial improvements and important generalisations of radio deconvolution.
    I am currently implementing, extending and testing these and other approaches for highly improved deconvolution methods in interferometry. First preliminary results are very promising.
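    As a toy illustration of what an explicitly defined problem looks like, the following one-dimensional Python sketch minimises a least-squares term plus an l1 (sparseness) penalty by iterative soft thresholding (ISTA). The real interferometric case works on visibilities rather than an explicit matrix, so this is only meant to show the principle, not the actual implementation.

      import numpy as np

      def ista_deconvolve(dirty, beam, lam=0.01, n_iter=500):
          # Minimise ||dirty - beam * x||^2 / 2 + lam * ||x||_1
          # (* = convolution) by iterative soft thresholding (ISTA).
          n = dirty.size
          half = beam.size // 2
          # convolution written as an explicit matrix for clarity only
          A = np.zeros((n, n))
          for i in range(n):
              lo, hi = max(0, i - half), min(n, i + half + 1)
              A[lo:hi, i] = beam[half - (i - lo): half + (hi - i)]
          step = 1.0 / np.linalg.norm(A, 2)**2   # 1 / Lipschitz constant
          x = np.zeros(n)
          for _ in range(n_iter):
              x = x - step * (A.T @ (A @ x - dirty))     # gradient step
              x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
          return x

      # Toy test: Gaussian dirty beam, two point sources
      beam = np.exp(-0.5 * (np.arange(-15, 16) / 3.0)**2)
      x_true = np.zeros(128); x_true[50] = 1.0; x_true[90] = 0.4
      dirty = np.convolve(x_true, beam, mode="same")
      print(np.flatnonzero(ista_deconvolve(dirty, beam) > 0.05))
      # nonzero components cluster around the true positions 50 and 90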

  8. more exotic subjects like image coherence in gravitational lenses
There is more to come on this page!




