My work in the field of gravitational lenses covers several areas:

- Basic theory
- Classical lens modelling
- Lensing theory and model degeneracies
- Lens modelling with extended sources: LensClean
- Microlensing
- The golden lens B0218+357
- Lensed water and extreme VLBI structure: MG0414+0534
- A lensed LBG: The 8 o'clock arc
- RXS J1131-1231
- Lensing as a tool for other fields
- Future lens surveys
- More exotic questions in lensing

In order to fit a lens model to observed image positions, one has to find the
observables that would correspond to a given lens+source model. In lensing
this involves the highly non-trivial inversion of the lens equation. With a
given lens model, it is usually easy to find a *source position* given
an *image position.* The inverse process, finding *all* image
positions for a given source position, on the other hand, is a real nightmare.
Instead of comparing the predictions from the model with the measurements in
the image plane, one can approximate the comparison by projecting it into the
source plane. The idea is to back-project (easy!) the observed image positions
into the source plane. For a correct model, the source positions of all images
should coincide. In reality, one will have deviations, which can be minimised
to find the best lens model.
The deviations can be approximately projected back to the image plane by using
the magnification matrices, but the accuracy of this approach is not always
sufficient.
Intermediate approaches can also be used, in which the comparison is made in
the image plane, but the inversion of the lens equation is still avoided.
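
As a toy illustration of the source-plane approach (a sketch only: the SIS model, the image positions and the grid search are hypothetical examples, not one of the systems discussed below):

```python
import numpy as np

def deflection_sis(theta, theta_E):
    """Deflection angle of a singular isothermal sphere (SIS)."""
    r = np.linalg.norm(theta, axis=-1, keepdims=True)
    return theta_E * theta / r

def source_scatter(theta_E, images):
    """Back-project all observed images into the source plane; for a perfect
    model the implied source positions coincide, so their scatter is the misfit."""
    beta = images - deflection_sis(images, theta_E)   # beta = theta - alpha(theta)
    return np.sum((beta - beta.mean(axis=0))**2)

# hypothetical observed image positions (arcsec) of a double system
images = np.array([[1.2, 0.1], [-0.8, -0.05]])

# fit the single free lens parameter by a simple grid search
grid = np.linspace(0.1, 2.0, 1901)
best = grid[np.argmin([source_scatter(t, images) for t in grid])]
print(f"best-fit Einstein radius: {best:.3f} arcsec")
```

In a real application the source-plane residuals would additionally be weighted, e.g. with the magnification matrices, to approximate the comparison in the image plane as described above.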

I developed my own software to perform different kinds of classical lens modelling using very general lens models. The goodness of fit can be measured in the source plane or in the image plane, with several different algorithms. Results of my work in this field have been published for several lens systems:

- HE 1104-1805

First models, published in the context of the detection of the lens galaxy, and later with estimates of the model accuracy, used to interpret the first time-delay estimate. For the latter publication I also analysed the light curves to determine the time delay. The models were then used to find the most probable redshift of the lens, which was not known at that time, so that a determination of the Hubble constant was not possible.

- RX J0911.4+0551

For this lens I produced the first models, which were presented in an invited talk and in a paper about the lens. RXJ0911+0551 is the rare case of a quad in which the radial distances of the images from the lens centre are not very similar. Generally, four images are produced by a small perturbation of an Einstein ring, which makes the radii very similar. Having different distances is advantageous, because it provides constraints on the radial mass profile.

- HE 2149-2745

I was co-author of a paper that claimed a detection of the lensing galaxy in this system. After subtraction of the QSO images, our data showed strong residuals almost exactly halfway between them. New data from the CASTLE survey provide a very different position, quite close to the fainter of the two QSO images. Our publication may be interesting for lens modellers anyway, because it is to my knowledge the first time that the possibility of *up to eight* (or nine if the model is not singular) images of one source with simple elliptical models plus external shear was mentioned.

- H1517+656

This BL Lac is not lensed itself, but there is evidence that its host galaxy is in turn acting as a lens that distorts background galaxies into striking arcs. If the interpretation is correct, the mass of the galaxy is extremely high. We have a paper about the determination of the redshift of this object which includes a short section on lensing. The interpretation as lensed arcs is disputed by others, but so far none of the alternatives has really been proven.

- B0218+357

This is still my favourite lens system. I started with classical modelling of the system, only to learn that the constraints provided by the two bright images are not sufficient to determine the position of the lensing galaxy with any accuracy. The galaxy is not detected at radio wavelengths. Optical observations, on the other hand, are difficult because of the very small image separation in the system. Later we succeeded in measuring the optical position using 36 orbits with HST to produce what was then the deepest optical image ever taken. The analysis was difficult and is presented in a publication.

The difficulties with classical modelling of this system were the main motivation to start my work on LensClean.

Global VLBI observations of this system were able to detect and resolve both images of the jet. These provided additional constraints for classical models and were used to study scatter broadening in the lensing galaxy. This brings us to the subject of using lenses as a tool to study propagation effects. More VLBI observations were conducted in this context.

More information about my work on this system can be found below.

A well-known degeneracy is the so-called *mass-sheet degeneracy.*
If the density of a mass model is scaled with *1-k*, while at the same
time a homogeneous mass-sheet with density *k* is added, the
observables will not change if the size of the source is also scaled with
*1-k*. The additional constant mass density amplifies the effect of the
scaled mass distribution, so that the total effect is the same.
The only observables which are affected are the time-delays between
images. That means that the mass-sheet degeneracy is a serious problem for the
determination of the Hubble constant.
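
The invariance is easy to verify numerically, here with a toy point-mass lens (all numbers are arbitrary examples):

```python
import numpy as np

def alpha_point(theta, theta_E=1.0):
    """Deflection of a point-mass lens: alpha = theta_E^2 * theta/|theta|^2."""
    return theta_E**2 * theta / np.dot(theta, theta)

k = 0.3                                   # arbitrary sheet density
theta = np.array([1.4, 0.5])              # an arbitrary image position

# original model:
beta = theta - alpha_point(theta)

# density scaled with (1-k) plus a homogeneous sheet of density k:
beta_new = theta - ((1 - k) * alpha_point(theta) + k * theta)

# the same image position now corresponds to a source scaled with (1-k):
print(beta_new, (1 - k) * beta)           # equal up to rounding
```

The algebra behind this is one line: beta_new = theta - (1-k) alpha - k theta = (1-k)(theta - alpha) = (1-k) beta, for any lens model.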

Another important degeneracy is the one of the radial mass profile. If the
images are located at similar distances from the lens centre, it is often
possible to fit the observables with models of very different mass
profiles. This again affects the time-delays, which become smaller for
shallower profiles and larger for steeper ones.
The supervisor of my PhD work, Sjur Refsdal, showed that this degeneracy is
basically the same as the mass-sheet degeneracy. Scaling the mass profile with
a factor *1-k* with the addition of a constant density makes the
profile shallower, while it keeps the image positions constant, if the source
is also scaled. We presented this idea in a conference poster in 1999.

Later I extended this work to include the effects of ellipticity and external
shear for quadruply lensed systems. In my publication about the
subject I studied a very general family of lens models in which the
potential follows a power-law *r^b* in the radial direction
and can have an arbitrary azimuthal shape. I found that when the external
shear is kept fixed, the time-delays (or the Hubble constant, if the
time-delays are measured) scale with *2-b*.

In the same paper I introduced the concept of a “critical
shear”. A shear of this value (and direction) has the effect that (when
the ellipticity is fitted accordingly) all time-delays exactly vanish. This is
still true if the shear is then varied orthogonally relative to this critical
shear. The critical shear also has a nice geometric interpretation: it is
given by the ellipticity of the ‘roundest’ ellipse going through all
four images.
This has a direct significant consequence for the determination of the Hubble
constant from time-delays. Systems which are very ‘round’ have a
small critical shear, which means that unknown contributions to the real shear
have a large effect on the time-delays and the Hubble constant. More
asymmetric systems are much more robust in this respect.

Much more general than modelling with a few compact components is the use of
extended sources, for which the number of subcomponents and thus of
constraints can be very large. The disadvantage is that modelling such systems
is a very complex task. The basic difficulty is that the true (unlensed)
source structure is not known a priori but must be fitted simultaneously with
the lens. In the case of radio observations, it is not a good idea to first
create maps of the lensed source and then use these to model the lens, because
the artifacts created by the deconvolution will affect the lens modelling
results. A better approach is to combine the two pieces and try to solve the
complete inverse problem. This has been tried before with the development of
LensClean. I had data available for the lens system B0218+357, for which the original LensClean algorithm proved to
be insufficient to determine the position of the lens galaxy with good
accuracy.
The idea of LensClean is very simple. In the standard Clean
algorithm, a radio source is decomposed into a collection of point-like
components, which can be placed arbitrarily. In the lensed situation, it has
to be taken into account that only certain combinations of multiply lensed
components, with their corresponding magnification ratios, are
allowed. LensClean builds a model by decomposing the *source plane*
into point-like components, so that a consistent model of the source is found
for a given lens model. In an outer loop, the lens model can then in turn be
varied to produce a simultaneous fit of lens and source.
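
In a toy 1-D setting the structure of the algorithm can be sketched as follows (a schematic of my own, not the published code: point-mass lens, delta-function beam, and arbitrary grids and gain):

```python
import numpy as np

T_GRID = np.linspace(-3, 3, 601)              # image-plane grid (toy, 1-D)

def beta_of(theta, tE):
    """Lens equation of a 1-D point-mass lens."""
    return theta - tE**2 / theta

def images_of(beta, tE):
    """Analytic inversion: the two image positions for source position beta."""
    d = np.hypot(beta, 2 * tE)
    return (beta + d) / 2, (beta - d) / 2

def add_images(vec, beta, flux, tE):
    """Add `flux` at every image of source position `beta` (nearest pixel),
    weighted with the 1-D magnification dtheta/dbeta."""
    for th in images_of(beta, tE):
        i = np.argmin(np.abs(T_GRID - th))
        mu = 1.0 / (1.0 + (tE / th)**2)
        vec[i] += flux * abs(mu)

# fake noiseless "observation": one source component lensed with tE = 1
data = np.zeros_like(T_GRID)
add_images(data, beta=0.4, flux=1.0, tE=1.0)

def lensclean_residual(tE, niter=50, gain=0.2):
    """Inner LensClean loop: repeatedly subtract consistent image sets."""
    resid = data.copy()
    for _ in range(niter):
        i = np.argmax(np.abs(resid))          # peak of the residual
        beta = beta_of(T_GRID[i], tE)         # back-projection (the easy direction)
        comp = np.zeros_like(resid)
        add_images(comp, beta, gain * resid[i], tE)
        resid -= comp                         # subtract all images at once
    return np.sum(resid**2)

# outer loop: the lens model that allows the cleanest fit is preferred
tE_grid = np.linspace(0.8, 1.2, 41)
best = tE_grid[np.argmin([lensclean_residual(t) for t in tE_grid])]
print(f"preferred Einstein radius: {best:.2f}")
```

The outer grid search stands in for the full optimisation over lens-model parameters; only for (nearly) the correct model can all residual peaks be removed by consistent image sets.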

In order to extract all available information from the radio data, I developed a new version of LensClean and applied it to the case of B0218+357. One of the new concepts introduced is the correction for bias effects in LensClean. The standard algorithm preferentially cleaned regions with higher image multiplicity, because the residuals decrease faster there. This is corrected in my unbiased LensClean, which helped a lot to obtain good results.

LensClean relies on a good method to invert the lens equation, which means
finding all image positions for a given source position and lens model. This
has to be done so many times that a reliable (failure rate less than one in
100 million) and fast method is essential. For this purpose I developed
‘LenTil’, a tiling algorithm that can find *all* images
even for complicated models extremely reliably. The basic idea is simple, but
the implementation became pretty complicated, as explained in my PhD thesis.
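
The tiling idea can be illustrated with a toy reconstruction (my own sketch, not the actual LenTil implementation; the 2-D point-mass model and all tolerances are illustrative): triangulate the image plane, map each triangle through the lens equation, keep those whose source-plane image contains the source position, and subdivide.

```python
import numpy as np

TE = 1.0                                       # Einstein radius of a toy point-mass lens

def beta_of(theta):
    """Lens equation beta = theta - TE^2 theta/|theta|^2 (2-D point mass)."""
    theta = np.asarray(theta, dtype=float)
    r2 = np.maximum(theta[..., 0]**2 + theta[..., 1]**2, 1e-12)  # guard the centre
    return theta - TE**2 * theta / r2[..., None]

def cross2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def contains(tri, p):
    """Sign test: is point p inside the (arbitrarily oriented) triangle?"""
    a, b, c = tri
    s = [cross2(b - a, p - a), cross2(c - b, p - b), cross2(a - c, p - c)]
    return all(x >= 0 for x in s) or all(x <= 0 for x in s)

def candidate(t, beta):
    """Keep a tile if its (linearly mapped, slightly enlarged) source-plane
    image contains the source position; the enlargement guards against the
    curvature of the mapping."""
    mt = beta_of(np.array(t))
    c = mt.mean(axis=0)
    return contains(c + 2.0 * (mt - c), beta)

def find_images(beta, lo=-2.5, hi=2.5, n=80, depth=12):
    """Approximate all image positions of source position beta by tiling."""
    xs = np.linspace(lo, hi, n + 1)
    tris = []
    for i in range(n):                          # two triangles per grid cell
        for j in range(n):
            q = [np.array([xs[i], xs[j]]), np.array([xs[i+1], xs[j]]),
                 np.array([xs[i+1], xs[j+1]]), np.array([xs[i], xs[j+1]])]
            tris += [[q[0], q[1], q[2]], [q[0], q[2], q[3]]]
    for _ in range(depth):                      # keep and subdivide candidates
        keep = [t for t in tris if candidate(t, beta)]
        tris = []
        for t in keep:                          # midpoint subdivision into four
            m = [(t[k] + t[(k + 1) % 3]) / 2 for k in range(3)]
            tris += [[t[0], m[0], m[2]], [m[0], t[1], m[1]],
                     [m[2], m[1], t[2]], [m[0], m[1], m[2]]]
    cents = [np.mean(t, axis=0) for t in tris]  # discard spurious survivors
    return [c for c in cents if np.linalg.norm(beta_of(c) - beta) < 1e-3]

# example: a source at beta = (0.3, 0) has two images on the symmetry axis
imgs = find_images(np.array([0.3, 0.0]))
```

The real difficulty, hinted at above, lies in making such a scheme fail-safe near critical curves and for complicated models, which this sketch does not attempt.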

One might be sceptical about determining the lens position so indirectly using LensClean. It took many tests to convince myself that the result is reliable. Finally it was possible to measure the position directly with a very deep HST exposure. The analysis of the maps confirms my results, even though the accuracy of the optical measurement does not reach that of my LensClean result. Higher-resolution observations were later made with the VLA plus Pie Town at 15 GHz. Their analysis is still in progress.

The best lens model is the most important result of my LensClean work. On the
way to this goal, LensClean also produces the optimal model of the (unlensed)
source plane. I developed a new method to take into account the resolution of
the instrument combined with the lens and ‘convolve’ the best
model with what I have defined as the ‘Clean beam in the source
plane’. This approach is superior to other ideas, but still not
optimal. I am working on methods in which the regularisation is incorporated
directly into the mapping process and not applied afterwards.

The question of frequency-dependent flux ratios of the bright images was (with
contributions from me) investigated by
Mittal et al. (2006) and
Mittal, Porcas &
Wucknitz (2007). We found that the proposed structural changes of the
source with frequency (together with magnification gradients) cannot be
responsible for this effect. Instead we found a plausible explanation in
free-free absorption in the ISM of the lensing galaxy.

The same lens was also target for a 90cm VLBI experiment that led to the very
first VLBI map of an Einstein ring.
We still do not understand the significant differences between the structures
at 90cm and those known from higher frequencies (e.g. 2cm).

The data from this experiment also served as basis for the first wide-field
VLBI project at low frequencies. Our results (published in Lenc et al., 2008) provide important
input for future low-frequency work, in particular with LOFAR.

The 1.7 GHz VLBI data are being analysed by my student Filomena Volino; the
8.4 GHz data I am analysing myself. Preliminary maps are featured in a recent
EVN newsletter.

In a case study, I used HST
spectra to determine the difference of the spectra of both images in the lens
HE0512-3329. I found that both differential extinction and microlensing
produce important differences. By studying the continuum separately from the
emission lines, I was able to disentangle the two effects for the first
time, setting a new standard for future projects in this field. Unfortunately
many groups still study differential extinction while neglecting
microlensing, or vice versa. At least in the case of HE0512-3329 I showed
that this is a very questionable approach, because both effects can be about
equally strong.

Another system in whose analysis I am involved is MS0451.6-0305, in which an
ensemble of background sources is magnified by a cluster of galaxies in the
foreground.
The lens spreads the emission from several components of a merging system over
such a wide area that resolved studies can be performed with current radio
telescopes. A more detailed study was carried out later and a new paper was submitted.

An e-MERLIN legacy project (PI: Neal Jackson) to study all known radio lenses
was approved recently. My role in that project is the development of imaging
methods for wide-band observations and the final lens modelling.

For transverse motion of the lens, an additional gravitational redshift is introduced. I explain this effect in the much simplified picture of an elastic collision of photon and lens. The very complicated calculations that others used to describe this effect are in fact not necessary.
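
The core of the argument fits into two lines of special relativity (a sketch of my own, with n_in and n_out the photon directions before and after the deflection, and v the lens velocity):

```latex
% In the frame of the lens the collision is elastic: the photon energy,
% and hence its frequency, is unchanged there. Doppler-shifting in and out:
\nu_{\rm lens} = \gamma\,\nu_{\rm in}\left(1 - \frac{\vec v\cdot\vec n_{\rm in}}{c}\right),
\qquad
\nu_{\rm out} = \frac{\nu_{\rm lens}}{\gamma\left(1 - \frac{\vec v\cdot\vec n_{\rm out}}{c}\right)}
% To first order in v/c:
\frac{\Delta\nu}{\nu} \approx \frac{\vec v\cdot(\vec n_{\rm out}-\vec n_{\rm in})}{c}
% Since |n_out - n_in| equals the deflection angle, only the transverse
% velocity component along the deflection contributes, at order (v/c) alpha.
```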

In a recent publication, my results on the effect of radial motion were disputed. The situation discussed in that paper, however, is not equivalent to the one we had discussed, because the two are described in different reference frames. Once the proper translation is applied, the new publication does in fact agree with our earlier results.

This document last modified Wed Jul 15 11:08:07 UTC 2009