dofibers objects
objects
List of object spectra to be processed. Previously processed spectra are
ignored unless the redo flag is set or the update flag is set and
dependent calibration data has changed. Extracted spectra are ignored.
apref =
Aperture reference spectrum. This spectrum is used to define the basic
extraction apertures and is typically a flat field spectrum.
flat = (optional)
Flat field spectrum. If specified the one dimensional flat field spectra
are extracted and used to make flat field calibrations. If a separate
throughput file or image is not specified the flat field is also used
for computing a fiber throughput correction.
throughput = (optional)
Throughput file or image. If an image is specified, typically a blank
sky observation, the total flux through
each fiber is used to correct for fiber throughput. If a file consisting
of lines with the aperture number and relative throughput is specified
then the fiber throughput will be corrected by those values. If neither
is specified but a flat field image is given it is used to compute the
throughput.
arcs1 = (at least one if dispersion correcting)
List of primary arc spectra. These spectra are used to define the dispersion
functions for each fiber apart from a possible zero point correction made
with secondary shift spectra or arc calibration fibers in the object spectra.
One fiber from the first spectrum is used to mark lines and set the dispersion
function interactively and dispersion functions for all other fibers and
arc spectra are derived from it.
arcs2 = (optional)
List of optional shift arc spectra. Features in these secondary observations
are used to supply a wavelength zero point shift through the observing
sequence. One type of observation is dome lamps containing characteristic
emission lines.
arctable = (optional) (refspectra)
Table defining arc spectra to be assigned to object
spectra (see refspectra). If not specified an assignment based
on a header parameter, params.sort, such as the observation time is made.
readnoise = 0. (apsum)
Read out noise in photons. This parameter defines the minimum noise
sigma. It is defined in terms of photons (or electrons) and scales
to the data values through the gain parameter. An image header keyword
(case insensitive) may be specified to get the value from the image.
gain = 1. (apsum)
Detector gain or conversion factor between photons/electrons and
data values. It is specified as the number of photons per data value.
An image header keyword (case insensitive) may be specified to get the value
from the image.
datamax = INDEF (apsum.saturation)
The maximum data value which is not a cosmic ray.
When cleaning cosmic rays and/or using variance weighted extraction
very strong cosmic rays (pixel values much larger than the data) can
cause these operations to behave poorly. If a value other than INDEF
is specified then all data pixels in excess of this value will be
excluded and the algorithms will yield improved results.
This applies only to the object spectra and not the flat field or
arc spectra. For more
on this see the discussion of the saturation parameter in the
apextract package.
fibers = 97 (apfind)
Number of fibers. This number is used during the automatic definition of
the apertures from the aperture reference spectrum. It is best if this
reflects the actual number of fibers which may be found in the aperture
reference image. The interactive
review of the aperture assignments allows verification and adjustments
to the automatic aperture definitions.
width = 12. (apedit)
Approximate base full width of the fiber profiles. This parameter is used
for the profile centering algorithm.
minsep = 8. (apfind)
Minimum separation between fibers. Weaker spectra or noise within this
distance of a stronger spectrum are rejected.
maxsep = 15. (apfind)
Maximum separation between adjacent fibers. This parameter
is used to identify missing fibers. If two adjacent spectra exceed this
separation then it is assumed that a fiber is missing and the aperture
identification assignments will be adjusted accordingly.
apidtable = (apfind)
Aperture identification table containing the fiber number, beam number
defining object (1), sky (0), and arc (2) fibers, and a spectrum title.
objaps = , skyaps = , arcaps =
List of object, sky, and arc aperture numbers. These are used to
identify arc apertures for wavelength calibration and object and sky
apertures for sky subtraction. Note sky apertures may be identified as
both object and sky if one wants to subtract the mean sky from the
individual sky spectra. Typically the different spectrum types are
identified by their beam numbers and the default, null string,
lists select all apertures.
objbeams = 0,1 , skybeams = 0 , arcbeams = 2
List of object, sky, and arc beam numbers. The convention is that sky
fibers are given a beam number of 0, object fibers a beam number of 1, and
arc fibers a beam number of 2. The beam numbers are typically set in the
apidtable. Unassigned or broken fibers may be given a beam number of
-1 in the aperture identification table since apertures with negative beam
numbers are not extracted. Note it is valid to identify sky fibers as both
object and sky.
scattered = no (apscatter)
Smooth and subtract scattered light from the object and flat field
images? This operation consists of fitting independent smooth functions
across the dispersion using data outside the fiber apertures and then
smoothing the individual fits along the dispersion. The fitting for the
initial flat field, or for the aperture reference image if no flat field
is given, is done interactively to allow setting the fitting parameters.
All subsequent subtractions use the same fitting parameters.
fitflat = yes (flat1d)
Fit the composite flat field spectrum by a smooth function and divide each
flat field spectrum by this function? This operation removes the average
spectral signature of the flat field lamp from the sensitivity correction to
avoid modifying the object fluxes.
clean = yes (apsum)
Detect and correct for bad pixels during extraction? This is the same
as the clean option in the apextract package. If yes this also
implies variance weighted extraction and requires reasonably good values
for the readout noise and gain. In addition the datamax parameters
can be useful.
dispcor = yes
Dispersion correct spectra? Depending on the params.linearize
parameter this may either resample the spectra or insert a dispersion
function in the image header.
savearcs = yes
Save any simultaneous arc apertures? If no then the arc apertures will
be deleted after use.
skysubtract = yes
Subtract sky from the object spectra? If yes the sky spectra are combined
and subtracted from the object spectra as defined by the object and sky
aperture/beam parameters.
skyedit = yes
Overplot all the sky spectra and allow contaminated sky spectra to be
deleted?
saveskys = yes
Save the combined sky spectrum? If no then the sky spectrum will be
deleted after sky subtraction is completed.
splot = no
Plot the final spectra with the task splot?
redo = no
Redo operations previously done? If no then previously processed spectra
in the objects list will not be processed (unless they need to be updated).
update = yes
Update processing of previously processed spectra if aperture, flat
field, or dispersion reference definitions are changed?
batch = no
Process spectra as a background or batch job provided there are no interactive
options (skyedit and splot) selected.
listonly = no
List processing steps but don't process?
params = (pset)
Name of parameter set containing additional processing parameters. The
default is parameter set params. The parameter set may be examined
and modified in the usual ways (typically with "epar params" or ":e params"
from the parameter editor). Note that using a different parameter file
is not allowed. The parameters are described below.
Package parameters are those which generally apply to all tasks in the
package; they apply to dofibers as well.
dispaxis = 2
Default dispersion axis. The dispersion axis is 1 for dispersion
running along image lines and 2 for dispersion running along image
columns. If the image header parameter DISPAXIS is defined it has
precedence over this parameter. The default value defers to the
package parameter of the same name.
observatory = observatory
Observatory at which the spectra were obtained if not specified in the
image header by the keyword OBSERVAT. See observatory for more
details.
interp = poly5 (nearest|linear|poly3|poly5|spline3|sinc)
Spectrum interpolation type used when spectra are resampled. The choices are:
nearest - nearest neighbor
linear - linear
poly3 - 3rd order polynomial
poly5 - 5th order polynomial
spline3 - cubic spline
sinc - sinc function
verbose = no
Print verbose information available with various tasks.
logfile = logfile , plotfile =
Text and plot log files. If a filename is not specified then no log is
kept. The plot file contains IRAF graphics metacode which may be examined
in various ways such as with gkimosaic.
records =
Dummy parameter to be ignored.
version = SPECRED: ...
Version of the package.
The following parameters are part of the params parameter set and define various algorithm parameters for dofibers.
order = decreasing (apfind)
When assigning aperture identifications order the spectra "increasing"
or "decreasing" with increasing pixel position (left-to-right or
right-to-left in a cross-section plot of the image).
extras = no (apsum)
Include extra information in the output spectra? When cleaning or using
variance weighting the cleaned and weighted spectra are recorded in the
first 2D plane of a 3D image, the raw, simple sum spectra are recorded in
the second plane, and the estimated sigmas are recorded in the third plane.
t_function = spline3 , t_order = 3 (aptrace)
Default trace fitting function and order. The fitting function types are
"chebyshev" polynomial, "legendre" polynomial, "spline1" linear spline, and
"spline3" cubic spline. The order refers to the number of
terms in the polynomial functions or the number of spline pieces in the spline
functions.
t_niterate = 1, t_low = 3., t_high = 3. (aptrace)
Default number of rejection iterations and rejection sigma thresholds.
apscat1 = (apscatter)
Fitting parameters across the dispersion. This references an additional
set of parameters for the ICFIT package. The default is the "apscat1"
parameter set.
apscat2 = (apscatter)
Fitting parameters along the dispersion. This references an additional
set of parameters for the ICFIT package. The default is the "apscat2"
parameter set.
weights = none (apsum) (none|variance)
Type of extraction weighting. With "variance" weighting the extraction is
weighted by the variance based on the data values and a poisson/ccd model
using the gain and readnoise parameters. Note that if the clean option is
selected variance weighting is used regardless of this parameter.
pfit = fit1d (apsum) (fit1d|fit2d)
Profile fitting algorithm for cleaning and variance weighted extractions.
The default is generally appropriate for multifiber data but users
may try the other algorithm. See approfiles for further information.
lsigma = 3., usigma = 3. (apsum)
Lower and upper rejection thresholds, given as a number of times the
estimated sigma of a pixel, for cleaning.
nsubaps = 1 (apsum)
During extraction it is possible to equally divide the apertures into
this number of subapertures.
f_function = spline3 , f_order = 10 (fit1d)
Function and order used to fit the composite one dimensional flat field
spectrum. The functions are "legendre", "chebyshev", "spline1", and
"spline3". The spline functions are linear and cubic splines with the
order specifying the number of pieces.
match = 10. (identify)
The maximum difference for a match between the dispersion function prediction
value and a wavelength in the coordinate list.
fwidth = 4. (identify)
Approximate full base width (in pixels) of arc lines.
cradius = 10. (reidentify)
Radius from previous position to reidentify arc line.
i_function = spline3 , i_order = 3 (identify)
The default function and order to be fit to the arc wavelengths as a
function of the pixel coordinate. The functions choices are "chebyshev",
"legendre", "spline1", or "spline3".
i_niterate = 2, i_low = 3.0, i_high = 3.0 (identify)
Number of rejection iterations and sigma thresholds for rejecting arc
lines from the dispersion function fits.
refit = yes (reidentify)
Refit the dispersion function? If yes and there is more than 1 line
and a dispersion function was defined in the arc reference then a new
dispersion function of the same type as in the reference image is fit
using the new pixel positions. Otherwise only a zero point shift is
determined for the revised fitted coordinates without changing the
form of the dispersion function.
addfeatures = no (reidentify)
Add new features from a line list during each reidentification?
This option can be used to compensate for lost features from the
reference solution. Care should be exercised that misidentified features
are not introduced.
select = interp (refspectra)
Selection method for assigning reference (arc) spectra to the object
spectra (see refspectra). The selection methods are:
following
Select the nearest following spectrum in the reference list based on the
sorting parameter. If there is no following spectrum use the nearest preceding
spectrum.
interp
Interpolate between the preceding and following spectra in the reference
list based on the sorting parameter. If there is no preceding and following
spectrum use the nearest spectrum. The interpolation is weighted by the
relative distances of the sorting parameter.
match
Match each input spectrum with the reference spectrum list in order.
This overrides the reference aperture check.
nearest
Select the nearest spectrum in the reference list based on the sorting
parameter.
preceding
Select the nearest preceding spectrum in the reference list based on the
sorting parameter. If there is no preceding spectrum use the nearest following
spectrum.
sort = jd , group = ljd (refspectra)
Image header keywords to be used as the sorting parameter for selection
based on order and to group spectra.
A null string, "", or the word "none" may be used to disable the sorting
or grouping parameters.
The sorting parameter
must be numeric but otherwise may be anything. The grouping parameter
may be a string or number and must simply be the same for all spectra within
the same group (say a single night).
Common sorting parameters are times or positions.
In dofibers the Julian date (JD) and the local Julian day number (LJD)
at the middle of the exposure are automatically computed from the universal
time at the beginning of the exposure and the exposure time. Also the
parameter UTMIDDLE is computed.
time = no, timewrap = 17. (refspectra)
Is the sorting parameter a 24 hour time? If so then the time origin
for the sorting is specified by the timewrap parameter. This time
should precede the first observation and follow the last observation
in a 24 hour cycle.
log = no (dispcor)
Use linear logarithmic wavelength coordinates? Linear logarithmic
wavelength coordinates have wavelength intervals which are constant
in the logarithm of the wavelength.
flux = yes (dispcor)
Conserve the total flux during interpolation? If no the output
spectrum is interpolated from the input spectrum at each output
wavelength coordinate. If yes the input spectrum is integrated
over the extent of each output pixel. This is slower than
simple interpolation.
reject = none (scombine) (none|minmax|avsigclip)
Type of rejection operation performed on the pixels which overlap at each
dispersion coordinate. The algorithms are discussed in the
help for scombine. The rejection choices are:
none - No rejection
minmax - Reject the low and high pixels
avsigclip - Reject pixels using an averaged sigma clipping algorithm
scale = none (none|mode|median|mean)
Multiplicative scaling to be applied to each spectrum. The choices are none
or scale by the mode, median, or mean. This should not be necessary if the
flat field and throughput corrections have been properly made.
The environment parameter imtype is used to determine the extension of the images to be processed and created. This allows use with any supported image extension. For STF images the extension has to be exact; for example "d1h".
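For example (a minimal sketch; the "imh" extension is only illustrative and should be whatever format your data actually use), the image type may be set in the CL with:
cl> reset imtype = "imh"
cl> flpr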
The dofibers reduction task is specialized for scattered light subtraction, extraction, flat fielding, fiber throughput correction, wavelength calibration, and sky subtraction of multifiber spectra. It is a command language script which collects and combines the functions and parameters of many general purpose tasks to provide a single complete data reduction path. The task provides a degree of guidance, automation, and record keeping necessary when dealing with the large amount of data generated by multifiber instruments. Variants of this task are doargus, dofoe, dohydra, and do3fiber.
The general organization of the task is to do the interactive setup steps first using representative calibration data and then perform the majority of the reductions automatically, possibly as a background process, with reference to the setup data. In addition, the task determines which setup and processing operations have been completed in previous executions of the task and, contingent on the redo and update options, skips or repeats some or all of the steps.
The description is divided into a quick usage outline followed by details of the parameters and algorithms. The usage outline is provided as a checklist and a refresher for those familiar with this task and the component tasks. It presents only the default or recommended usage. Since dofibers combines many separate, general purpose tasks, the description given here refers to these tasks and leaves some of the details to their help documentation.
Usage Outline
[1]
The images are first processed with ccdproc for overscan,
bias, and dark corrections.
The dofibers task will abort if the image header keyword CCDPROC,
which is added by ccdproc, is missing. If the data were processed outside
of the IRAF ccdred package then a dummy CCDPROC keyword should be
added to the image headers; say with hedit.
[2]
Set the dofibers parameters with eparam. Specify the object
images to be processed, the flat field image as the aperture reference and
the flat field, and one or more arc images. A throughput file or image,
such as a blank sky observation, may also be specified. If there are many
object or arc spectra per setup you might want to prepare "@ files".
Specify the aperture identification file for the configuration
if one has been created.
You might wish to verify the geometry parameters,
separations, dispersion direction, etc., which may
change with different detector setups. The processing parameters are set
for complete reductions but for quicklook you might not use the clean
option or dispersion calibration and sky subtraction.
The parameters are set for a particular configuration and different configurations may use different flat fields, arcs, and aperture identification tables.
[3]
Run the task. This may be repeated multiple times with different
observations and the task will generally only do the setup steps
once and only process new images. Queries presented during the
execution for various interactive operations may be answered with
"yes", "no", "YES", or "NO". The lower case responses apply just
to that query while the upper case responses apply to all further
such queries during the execution and no further queries of that
type will be made.
[4]
The apertures are defined using the specified aperture reference image.
The spectra are found automatically and apertures assigned based on
task parameters and the aperture identification table. Unassigned
fibers may have a negative beam number and will be ignored in subsequent
processing. The resize option sets the aperture size to the widths of
the profiles at a fixed fraction of the peak height. The interactive
review of the apertures is recommended. If the identifications are off
by a shift the 'o' key is used. To exit the aperture review type 'q'.
[5]
The fiber positions at a series of points along the dispersion are measured
and a function is fit to these positions. This may be done interactively to
adjust the fitting parameters. Not all fibers need be examined and the "NO"
response will quit the interactive fitting. To exit the interactive
fitting type 'q'.
[6]
If scattered light subtraction is to be done the flat field image is
used to define the scattered light fitting parameters interactively.
If one is not specified then the aperture reference image is used for
this purpose.
There are two queries for the interactive fitting. A graph of the data between the defined reference apertures separated by a specified buffer distance is first shown. The function order and type may be adjusted. After quitting with 'q' the user has the option of changing the buffer value and returning to the fitting, changing the image line or column to check if the fit parameters are satisfactory at other points, or quitting and accepting the fit parameters. After fitting all points across the dispersion another graph showing the scattered light from the individual fits is shown and the smoothing parameters along the dispersion may be adjusted. Upon quitting with 'q' you have the option of checking other cuts parallel to the dispersion or quitting and finishing the scattered light function smoothing and subtraction.
If there is a throughput image then this is corrected for scattered light noninteractively using the previous fitting parameters.
[7]
If flat fielding is to be done the flat field spectra are extracted. The
average spectrum over all fibers is determined and a function is fit
interactively (exit with 'q'). This function is generally of sufficiently
high order that the overall shape is well fit. This function is then used
to normalize the individual flat field spectra. If a throughput image, a
sky flat, is specified then the total sky counts through each fiber are
used to correct the total flat field counts. Alternatively, a separately
derived throughput file can be used for specifying throughput corrections.
If neither type of throughput is used the flat field also provides the
throughput correction. The final response spectra are normalized to a unit
mean over all fibers. The relative average throughput for each fiber is
recorded in the log and possibly printed to the terminal.
[8]
If dispersion correction is selected the first arc in the arc list is
extracted. The middle fiber is used to identify the arc lines and define
the dispersion function using the task identify. Identify a few arc
lines with 'm' and use the 'l' line list identification command to
automatically add additional lines and fit the dispersion function. Check
the quality of the dispersion function fit with 'f'. When satisfied exit
with 'q'.
[9]
The remaining fibers are automatically reidentified. You have the option
to review the line identifications and dispersion function for each fiber
and interactively add or delete arc lines and change fitting parameters.
This can be done selectively, such as when the reported RMS increases
significantly.
[10]
If the spectra are to be resampled to a linear dispersion system
(which will be the same for all spectra) default dispersion parameters
are printed and you are allowed to adjust these as desired.
[11]
The object spectra are now automatically scattered light subtracted,
extracted, flat fielded, and dispersion corrected.
[12]
When sky subtracting, the individual sky spectra may be reviewed and some
spectra eliminated using the 'd' key. The last deleted spectrum may be
recovered with the 'e' key. After exiting the review with 'q' you are
asked for the combining option. The type of combining is dictated by the
number of sky fibers.
[13]
The option to examine the final spectra with splot may be given.
To exit type 'q'.
[14]
If scattered light is subtracted from the input data a copy of the
original image is made by appending "noscat" to the image name.
If the data are reprocessed with the redo flag the original
image will be used again to allow modification of the scattered
light parameters.
The final spectra will have the same name as the original 2D images with a ".ms" extension added. The flat field and arc spectra will also have part of the aperture identification table name added to allow different configurations to use the same 2D flat field and arcs but with different aperture definitions.
Spectra and Data Files
The basic input consists of multifiber object and calibration spectra stored as IRAF images. The type of image format is defined by the environment parameter imtype. Only images with that extension will be processed and created. The raw CCD images must be processed to remove overscan, bias, and dark count effects. This is generally done using the ccdred package. The dofibers task will abort if the image header keyword CCDPROC, which is added by ccdproc, is missing. If the data were processed outside of the IRAF ccdred package then a dummy CCDPROC keyword should be added to the image headers; say with hedit. Flat fielding is generally not done at this stage but as part of dofibers. If flat fielding is done as part of the basic CCD processing then a flattened flat field, blank sky observation, or throughput file should still be created for applying fiber throughput corrections.
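For example, a dummy keyword might be added with the hedit task roughly as follows (the image list "obj*.imh" and the keyword value are only illustrative):
cl> hedit obj*.imh CCDPROC "Processed outside ccdred" add+ verify- show+ update+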
The task dofibers uses several types of calibration spectra. These are flat fields, blank sky flat fields, comparison lamp spectra, auxiliary mercury line (from the dome lights) or sky line spectra, and simultaneous arc spectra taken during the object observation. The flat field, throughput image or file, auxiliary emission line spectra, and simultaneous comparison fibers are optional. If a flat field is used then the sky flat or throughput file is optional assuming the flat field has the same fiber illumination. It is legal to specify only a throughput image or file and leave the flat field blank in order to simply apply a throughput correction. Because only the total counts through each fiber are used from a throughput image, sky flat exposures need not be of high signal per pixel.
There are three types of arc calibration methods. One is to take arc
calibration exposures through all fibers periodically and apply the
dispersion function derived from one or interpolated between pairs to the
object fibers. This is the most common method. Another method is to
use only one or two all-fiber arcs to define the shape of the dispersion
function and track zero point wavelength shifts with simultaneous arc
fibers taken during the object exposure. The simultaneous arcs may or may
not be available at the instrument but dofibers can use this type of
observation. The arc fibers are identified by their beam or aperture
numbers. A related and mutually exclusive method is to use auxiliary
line spectra such as lines in the dome lights or sky lines to monitor
shifts relative to a few actual arc exposures. The main reason to do this
is if taking arc exposures through all fibers is inconvenient.
The assignment of arc or auxiliary line calibration exposures to object
exposures is generally done by selecting the nearest in time and
interpolating. There are other options possible which are described under
the task refspectra. The most general option is to define a table
giving the object image name and the one or two arc spectra to be assigned
to that object. That file is called an arc assignment table and it
is one of the optional setup files which can be used with dofibers.
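As a sketch of such a table (consult refspectra for the exact format; the image names here are hypothetical), each line lists an object image followed by one or two arc images:
cl> type arcassign
obj021  arc020  arc022
obj023  arc022  arc024
obj025  arc024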
The first step in the processing is identifying the spectra in the images.
The aperture identification file contains information about the fiber
assignments. The identification file is not mandatory, sequential numbering
will be used, but it is highly recommended for keeping track of the objects
assigned to the fibers. The aperture identification file contains lines
consisting of an aperture number, a beam number, and an object
identification. These must be in the same order as the fibers in the
image. The aperture number may be any unique number but it is recommended
that the fiber number be used. The beam number is used to flag object,
sky, arc, or other types of spectra. The default beam numbers used by the
task are 0 for sky, 1 for object, and 2 for arc. The object
identifications are optional but it is good practice to include them so
that the data will contain the object information independent of other
records. Figure 1 shows an example identification file called M33Sch2.
An alternative to using an aperture identification file is to give no
file, the "" empty string, and to explicitly give a range of
aperture numbers for the skys and possibly for the sky subtraction
object list in the parameters objaps, skyaps, arcaps, objbeams,
skybeams, and arcbeams. This is reasonable if the fibers always
have a fixed type. As an example the CTIO Argus instrument always
alternates object and sky fibers so the object apertures can be given
as 1x2 and the sky fibers as 2x2; i.e. objects are the odd numbered
apertures and skys are the even numbered apertures.
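Following that example, such a selection might be made with assignments like the following sketch (using the range strings quoted above):
cl> dofibers.apidtable = ""
cl> dofibers.objaps = "1x2"
cl> dofibers.skyaps = "2x2"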
The final reduced spectra are recorded in two or three dimensional IRAF
images. The images have the same name as the original images with an added
".ms" extension. Each line in the reduced image is a one dimensional
spectrum with associated aperture, wavelength, and identification
information. When the extras parameter is set the lines in the
third dimension contain additional information (see
apsum for further details). These spectral formats are accepted by the
one dimensional spectroscopy tools such as the plotting tasks splot
and specplot. The special task scopy may be used to extract
specific apertures or to change format to individual one dimensional
images.
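For example (the image name demoobj.ms is hypothetical), a reduced image might be examined and a single fiber copied to its own image roughly as follows:
cl> splot demoobj.ms
cl> scopy demoobj.ms demoobj5 apertures=5 format=onedspec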
Package Parameters
The specred package parameters are those which affect all the
tasks in the package.
The dispersion axis parameter defines the image axis along which the
dispersion runs. This is used if the image header doesn't define the
dispersion axis with the DISPAXIS keyword.
The observatory parameter is used if there is no
OBSERVAT keyword in the image header (see observatory for more
details). The spectrum interpolation type might be changed to "sinc" but
with the cautions given in onedspec.package.
The other parameters define the standard I/O functions.
The verbose parameter selects whether to print everything which goes
into the log file on the terminal. It is useful for monitoring
what the dofibers task does. The log and plot files are useful for
keeping a record of the processing. A log file is highly recommended.
A plot file provides a record of apertures, traces, and extracted spectra
but can become quite large.
The plotfile is most conveniently viewed and printed with gkimosaic.
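A typical setup of these package parameters might look like the following sketch (the log and plot file names are only examples):
cl> specred.verbose = yes
cl> specred.logfile = "logfile"
cl> specred.plotfile = "plotfile"
cl> gkimosaic plotfile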
Processing Parameters
The list of objects and arcs can be @ files if desired. The aperture
reference spectrum is usually the same as the flat field spectrum though it
could be any exposure with enough signal to accurately define the positions
and trace the spectra. The first list of arcs contains the standard Th-Ar or
HeNeAr comparison arc spectra (they must all be of the same type). The
second list of arcs contains the auxiliary emission line exposures mentioned
previously.
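For instance (file and image names hypothetical), @ lists for many object and arc exposures might be prepared with the files task and then used directly:
cl> files obj*.imh > objects.lis
cl> files arc*.imh > arcs.lis
cl> dofibers @objects.lis apref=Flat flat=Flat arcs1=@arcs.lis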
The detector read out noise and gain are used for cleaning and variance
(optimal) extraction. This is specified either explicitly or by reference
to an image header keyword.
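For example (RDNOISE and GAIN are only typical keyword names; use whatever keywords your headers actually contain), the parameters may be set either to numbers or to keyword names:
cl> dofibers.readnoise = "RDNOISE"
cl> dofibers.gain = "GAIN"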
The dispersion axis defines the wavelength direction of spectra in
the image if not defined in the image header by the keyword DISPAXIS. The
width and separation parameters define the dimensions (in pixels) of the
spectra (fiber profile) across the dispersion. The width parameter
primarily affects the centering. The maximum separation parameter is
important if missing spectra from the aperture identification file are to
be correctly skipped. The number of fibers is either the actual number
of fibers or the number in the aperture identification file. An attempt
is made to account for unassigned or missing fibers. As a recommendation
the actual number of fibers should be specified.
The task needs to know which fibers are object, sky if sky subtraction is
to be done, and simultaneous arcs if used. One could explicitly give the
aperture numbers but the recommended way, provided an aperture
identification file is used, is to select the apertures based on the beam
numbers. The default values are recommended beam numbers. Sky
subtracted sky spectra are useful for evaluating the sky subtraction.
Since only the spectra identified as objects are sky subtracted one can
exclude fibers from the sky subtraction. For example, if the
objbeams parameter is set to 1 then only those fibers with a beam of
1 will be sky subtracted. All other fibers will remain in the extracted
spectra but will not be sky subtracted.
The next set of parameters select the processing steps and options. The
scattered light option allows fitting and subtracting a scattered light
surface from the input object and flat field. If there is significant
scattered light which is not subtracted the fiber throughput correction
will not be accurate. The
flat fitting option allows fitting and removing the overall shape of the
flat field spectra while preserving the pixel-to-pixel response
corrections. This is useful for maintaining the approximate object count
levels and not introducing the reciprocal of the flat field spectrum into
the object spectra. The clean option invokes a profile fitting and
deviant point rejection algorithm as well as a variance weighting of points
in the aperture. These options require knowing the effective (i.e.
accounting for any image combining) read out noise and gain. For a
discussion of cleaning and variance weighted extraction see
apvariance and approfiles.
The dispersion correction option selects whether to extract arc spectra,
determine a dispersion function, assign them to the object spectra, and,
possibly, resample the spectra to a linear (or log-linear) wavelength
scale. If simultaneous arc fibers are defined there is an option to delete
them from the final spectra when they are no longer needed.
The sky subtraction option selects whether to combine the sky fiber spectra
and subtract this sky from the object fiber spectra. Dispersion
correction and sky subtraction are independent operations. This means
that if dispersion correction is not done then the sky subtraction will be
done with respect to pixel coordinates. This might be desirable in some
quick look cases though it is incorrect for final reductions.
The sky subtraction option has two additional options. The individual sky
spectra may be examined and contaminated spectra deleted interactively
before combining. This can be a useful feature in crowded regions. The
final combined sky spectrum may be saved for later inspection in an image
with the spectrum name prefixed by sky.
After a spectrum has been processed it is possible to examine the results
interactively using the splot tasks. This option has a query which
may be turned off with "YES" or "NO" if there are multiple spectra to be
processed.
Generally once a spectrum has been processed it will not be reprocessed if
specified as an input spectrum. However, changes to the underlying
calibration data can cause such spectra to be reprocessed if the
update flag is set. The changes which will cause an update are a new
aperture identification file, a new reference image, new flat fields, and a
new arc reference. If all input spectra are to be processed regardless of
previous processing the redo flag may be used. Note that
reprocessing clobbers the previously processed output spectra.
The batch processing option allows object spectra to be processed as
a background or batch job. This will only occur if sky spectra editing and
splot review (interactive operations) are turned off, either when the
task is run or by responding with "NO" to the queries during processing.
The listonly option prints a summary of the processing steps which
will be performed on the input spectra without actually doing anything.
This is useful for verifying which spectra will be affected if the input
list contains previously processed spectra. The listing does not include
any arc spectra which may be extracted to dispersion calibrate an object
spectrum.
The last parameter (excluding the task mode parameter) points to another
parameter set for the algorithm parameters. As dofibers is written this
parameter need not have any value; the parameter set params is always
used. The algorithm parameters are discussed further in the next section.
Algorithms and Algorithm Parameters
This section summarizes the various algorithms used by the dofibers
task and the parameters which control and modify the algorithms. The
algorithm parameters available to the user are collected in the parameter
set params. These parameters are taken from the various general
purpose tasks used by the dofibers processing task. Additional
information about these parameters and algorithms may be found in the help
for the actual task executed. These tasks are identified in the parameter
section listing in parentheses. The aim of this parameter set organization
is to collect all the algorithm parameters in one place separate from the
processing parameters and include only those which are relevant for
multifiber data. The parameter values can be changed from the
defaults by using the parameter editor, "epar params", or simply typing
params. The parameter editor can also be entered when editing the
dofibers parameters by typing ":e params" or simply ":e" if positioned at the
params parameter.
Extraction
The identification of the spectra in the two dimensional images and their
scattered light subtraction and extraction to one dimensional spectra
in multispec format is accomplished
using the tasks from the apextract package. The first parameters
through nsubaps control the extractions.
The dispersion line is that used for finding the spectra, for plotting in
the aperture editor, and as the starting point for tracing. The default
value of INDEF selects the middle of the image. The aperture
finding, adjusting, editing, and tracing operations also allow summing a
number of dispersion lines to improve the signal. The number of lines is
set by the nsum parameter.
The order parameter defines whether the order of the aperture
identifications in the aperture identification file (or the default
sequential numbers if no file is used) is in the same sense as the image
coordinates (increasing) or the opposite sense (decreasing). If the
aperture identifications turn out to be opposite to what is desired when
viewed in the aperture editing graph then simply change this parameter.
The basic data output by the spectral extraction routines are the one
dimensional spectra. Additional information may be output when the
extras option is selected and the cleaning or variance weighting
options are also selected. In this case a three dimensional image is
produced with the first element of the third dimension being the cleaned
and/or weighted spectra, the second element being the uncleaned and
unweighted spectra, and the third element being an estimate of the sigma
of each pixel in the extracted spectrum. Currently the sigma data is not
used by any other tasks and is only for reference.
The initial step of finding the fiber spectra in the aperture reference
image consists of identifying the peaks in a cut across the dispersion,
eliminating those which are closer to each other than the minsep
distance, and then keeping the strongest peaks up to the specified number of fibers. The
centers of the profiles are determined using the center1d algorithm
which uses the width parameter.
Apertures are then assigned to each spectrum. The initial edges of the
aperture relative to the center are defined by the lower and
upper parameters. The trickiest part of assigning the apertures is
relating the aperture identification from the aperture identification file
to automatically selected fiber profiles. The first aperture id in the
file is assigned to the first spectrum found using the order parameter to
select the assignment direction. The numbering proceeds in this way except
that if a gap greater than a multiple of the maxsep parameter is
encountered then assignments in the file are skipped under the assumption
that a fiber is missing (broken). If unassigned fibers are still
visible in a flat field, either by design or by scattered light, the
unassigned fibers can be included in the number of fibers to find and
then the unassigned (negative beam number) apertures are excluded from
any extraction. For more on the finding and
assignment algorithms see apfind.
The initial apertures are the same for all spectra but they can each be
automatically resized. The automatic resizing sets the aperture limits
at a fraction of the peak relative to the interfiber minimum.
The default ylevel is to resize the apertures to 5% of the peak.
See the description for the task apresize for further details.
The user is given the opportunity to graphically review and adjust the
aperture definitions. This is recommended. As mentioned previously, the
correct identification of the fibers is tricky and it is fundamentally
important that this be done correctly; otherwise the spectrum
identifications will not be for the objects they say. An important command in
this regard is the 'o' key which allows reordering the identifications
based on the aperture identification file. This is required if the first
fiber is actually missing since the initial assignment begins assigning the
first spectrum found with the first entry in the aperture file. The
aperture editor is a very powerful tool and is described in detail as
apedit.
The next set of parameters control the tracing and function fitting of the
aperture reference positions along the dispersion direction. The position
of a spectrum across the dispersion is determined by the centering
algorithm (see center1d) at a series of evenly spaced steps, given by
the parameter t_step, along the dispersion. The step size should be
fine enough to follow position changes but it is not necessary to measure
every point. The fitted points may jump around a little bit due to noise
and cosmic rays even when summing a number of lines. Thus, a smooth
function is fit. The function type, order, and iterative rejection of
deviant points is controlled by the other trace parameters. For more
discussion consult the help pages for aptrace and icfit. The
default is to fit a cubic spline of three pieces with a single iteration of
3 sigma rejection.
The actual extraction of the spectra by summing across the aperture at each
point along the dispersion is controlled by the next set of parameters.
The default extraction simply sums the pixels using partial pixels at the
ends. The options allow selection of a weighted sum based on a Poisson
variance model using the readnoise and gain detector
parameters. Note that if the clean option is selected the variance
weighted extraction is used regardless of the weights parameter. The
sigma thresholds for cleaning are also set in the params parameters.
For more on the variance weighted extraction and cleaning see
apvariance and approfiles as well as apsum.
The last parameter, nsubaps, is used only in special cases when it is
desired to subdivide the fiber profiles into subapertures prior to
dispersion correction. After dispersion correction the subapertures are
then added together. The purpose of this is to correct for wavelength
shifts across a fiber.
Scattered Light Subtraction
Scattered light may be subtracted from the input two dimensional image as
the first step. This is done using the algorithm described in
apscatter. This can be important if there is significant scattered
light since the flat field/throughput correction will otherwise be
incorrect. The algorithm consists of fitting a function to the data
outside the defined apertures by a specified buffer at each line or
column across the dispersion. The function fitting parameters are the same
at each line. Because the fitted functions are independent at each line or
column a second set of one dimensional functions are fit parallel to the
dispersion using the evaluated fit values from the cross-dispersion step.
This produces a smooth scattered light surface which is finally subtracted
from the input image. Again the function fitting parameters are the
same at each line or column though they may be different than the parameters
used to fit across the dispersion.
The first time the task is run with a particular flat field (or aperture
reference image if no flat field is used) the scattered light fitting
parameters are set interactively using that image. The interactive step
selects a particular line or column upon which the fitting is done
interactively with the icfit commands. A query is first issued
which allows skipping this interactive stage. Note that the interactive
fitting is only for defining the fitting functions and orders. When
the graphical icfit fitting is exited (with 'q') there is a second prompt
allowing you to change the buffer distance (in the first cross-dispersion
stage) from the apertures, change the line/column, or finally quit.
The initial fitting parameters and the final set parameters are recorded
in the apscat1 and apscat2 hidden parameter sets. These
parameters are then used automatically for every subsequent image
which is scattered light corrected.
The scattered light subtraction modifies the input 2D images. To preserve
the original data a copy of the original image is made with the same
root name and the word "noscat" appended. The scattered light subtracted
images will have the header keyword "APSCATTE" which is how the task
avoids repeating the scattered light subtraction during any reprocessing.
However if the redo option is selected the scattered light subtraction
will also be redone by first restoring the "noscat" images to the original
input names.
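As a quick check (image names hypothetical), the keyword and the preserved copy may be examined with:
cl> hselect demoobj "$I,APSCATTE" yes
cl> imheader demoobjnoscat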
Flat Field and Fiber Throughput Corrections
Flat field corrections may be made during the basic CCD processing; i.e.
direct division by the two dimensional flat field observation. In that
case do not specify a flat field spectrum; use the null string "". The
dofibers task provides an alternative flat field response correction
based on division of the extracted object spectra by the extracted flat field
spectra. A discussion of the theory and merits of flat fielding directly
versus using the extracted spectra will not be made here. The
dofibers flat fielding algorithm is the recommended method for
flat fielding since it works well and is not subject to the many problems
involved in two dimensional flat fielding.
In addition to correcting for pixel-to-pixel response the flat field step
also corrects for differences in the fiber throughput. Thus, even if the
pixel-to-pixel flat field corrections have been made in some other way it
is desirable to use a sky or dome flat observation for determining a fiber
throughput correction. Alternatively, a separately derived throughput
file may be specified. This file consists of the aperture numbers
(the same as used for the aperture reference) and relative throughput
numbers.
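A throughput file is plain text with one line per fiber giving the aperture number and its relative throughput; a short hypothetical example:
cl> type throughput.dat
1  1.000
2  0.985
3  1.021
4  0.997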
The first step is extraction of the flat field spectrum, if specified,
using the reference apertures. Only one flat field is allowed so if
multiple flat fields are required the data must be reduced in groups.
After extraction one or more corrections are applied. If the fitflat
option is selected (the default) the extracted flat field spectra are
averaged together and a smooth function is fit. The default fitting
function and order are given by the parameters f_function and
f_order. If the parameter f_interactive is "yes" then the
fitting is done interactively using the fit1d task which uses the
icfit interactive fitting commands.
The fitted function is divided into the individual flat field spectra to
remove the basic shape of the spectrum while maintaining the relative
individual pixel responses and any fiber to fiber differences. This step
avoids introducing the flat field spectrum shape into the object spectra
and closely preserves the object counts.
If a throughput image is available (an observation of blank sky
usually at twilight) it is extracted. If no flat field is used the average
signal through each fiber is computed and this becomes the response
normalization function. Note that a dome flat may be used in place of a
sky observation as the throughput image to produce throughput-only
corrections. If a flat field is specified then each sky spectrum is
divided by the appropriate flat field spectrum. The total counts through
each fiber are multiplied into the flat field spectrum thus making the sky
throughput of each fiber the same. This correction is important if the
illumination of the fibers differs between the flat field source and the
sky. Since only the total counts are required the sky or dome flat field
spectra need not be particularly strong though care must be taken to avoid
objects.
Instead of a sky flat or other throughput image a separately derived
throughput file may be used. It may be used with or without a
flat field.
The final step is to normalize the flat field spectra by the mean counts of
all the fibers. This normalization step is simply to preserve the average
counts of the extracted object and arc spectra after division by the
response spectra. The final relative throughput values are recorded in the
log and possibly printed on the terminal.
These flat field response steps and algorithm are available as a separate
task called msresp1d.
Dispersion Correction
Dispersion corrections are applied to the extracted spectra if the
dispcor parameter is set. This can be a complicated process which
the dofibers task tries to simplify for you. There are three basic
steps involved; determining the dispersion functions relating pixel
position to wavelength, assigning the appropriate dispersion function to a
particular observation, and resampling the spectra to evenly spaced pixels
in wavelength.
The comparison arc spectra are used to define dispersion functions for the
fibers using the tasks identify and reidentify. The
interactive identify task is only used on the central fiber of the
first arc spectrum to define the basic reference dispersion solution from
which all other fibers and arc spectra are automatically derived using
reidentify.
The set of arc dispersion function parameters are from identify and
reidentify. The parameters define a line list for use in
automatically assigning wavelengths to arc lines, a parameter controlling
the width of the centering window (which should match the base line
widths), the dispersion function type and order, parameters to exclude bad
lines from function fits, and parameters defining whether to refit the
dispersion function, as opposed to simply determining a zero point shift,
and the addition of new lines from the line list when reidentifying
additional arc spectra. The defaults should generally be adequate and the
dispersion function fitting parameters may be altered interactively. One
should consult the help for the two tasks for additional details of these
parameters and the operation of identify.
Generally, taking a number of comparison arc lamp exposures interspersed
with the program spectra is sufficient to accurately dispersion calibrate
multifiber spectra. However, there are some other calibration options
which may be of interest. These options apply additional calibration data
consisting either of auxiliary line spectra, such as from dome lights or
night sky lines, or simultaneous arc lamp spectra taken through a few
fibers during the object exposure. These options add complexity to the
dispersion calibration process.
When only arc comparison lamp spectra are used, dispersion functions are
determined independently for each fiber of each arc image and then assigned
to the matching fibers in the program object observations. The assignment
consists of selecting one or two arc images to calibrate each object
image. When two bracketing arc spectra are used the dispersion functions
are linearly interpolated (usually based on the time of the observations).
If taking comparison exposures is time-consuming, possibly requiring
reconfiguration to illuminate the fibers, and the spectrograph is
expected to be fairly stable apart from small shifts, there are two
mutually exclusive methods for monitoring
shifts in the dispersion zero point from the basic arc lamp spectra other
than taking many arc lamp exposures. One is to use some fibers to take a
simultaneous arc spectrum while observing the program objects. The fibers
are identified by aperture or beam numbers. The second method is to use
auxiliary line spectra, such as mercury lines from the dome lights.
These spectra are specified with an auxiliary shift arc list, arc2.
When using auxiliary line spectra for monitoring zero point shifts one of
these spectra is plotted interactively by identify with the
reference dispersion function from the reference arc spectrum. The user
marks one or more lines which will be used to compute zero point wavelength
shifts in the dispersion functions automatically. The actual wavelengths
of the lines need not be known. In this case accept the wavelength based
on the reference dispersion function. As other observations of the same
features are made the changes in the positions of the features will be
tracked as zero point wavelength changes such that wavelengths of the
features remain constant.
When using auxiliary line spectra the only arc lamp spectrum used is the
initial arc reference spectrum (the first image in the arcs1 list).
The master dispersion functions are then shifted based on the spectra in
the arcs2 list (which must all be of the same type). The dispersion
function assignments made by refspectra using either the arc
assignment file or header keywords are done in the same way as
described for the arc lamp images except using the auxiliary spectra.
If simultaneous arcs are used the arc lines are reidentified to determine a
zero point shift relative to the comparison lamp spectra selected, by
refspectra, of the same fiber. A linear function of aperture
position on the image across the dispersion versus the zero point shifts
from the arc fibers is determined and applied to the dispersion functions
from the assigned calibration arcs for the non-arc fibers. Note that if
there are two comparison lamp spectra (before and after the object
exposure) then there will be two shifts applied to two dispersion functions
which are then combined using the weights based on the header parameters
(usually the observation time).
The arc assignments may be done either explicitly with an arc assignment
table (parameter arctable) or based on a header parameter. The task
used is refspectra and the user should consult this task if the
default behavior is not what is desired. The default is to interpolate
linearly between the nearest arcs based on the Julian date (corrected to
the middle of the exposure). The Julian date and a local Julian day number
(the day number at local noon) are computed automatically by the task
setjd and recorded in the image headers under the keywords JD and
LJD. In addition the universal time at the middle of the exposure, keyword
UTMIDDLE, is computed by the task setairmass and this may also be used
for ordering the arc and object observations.
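These keywords may be inspected with hselect; for example (image list hypothetical):
cl> hselect demo*.imh "$I,JD,LJD,UTMIDDLE" yes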
The last step of dispersion correction (resampling the spectrum to evenly
spaced pixels in wavelength) is optional and relatively straightforward.
If the linearize parameter is no then the spectra are not resampled
and the nonlinear dispersion information is recorded in the image header.
Other IRAF tasks (the coordinate description is specific to IRAF) will use
this information whenever wavelengths are needed. If linearizing is
selected a linear dispersion relation, either linear in the wavelength or
the log of the wavelength, is defined once and applied to every extracted
spectrum. The resampling algorithm parameters allow selecting the
interpolation function type, whether to conserve flux per pixel by
integrating across the extent of the final pixel, and whether to linearize
to equal linear or logarithmic intervals. The latter may be appropriate
for radial velocity studies. The default is to use a fifth order
polynomial for interpolation, to conserve flux, and to not use logarithmic
wavelength bins. These parameters are described fully in the help for the
task dispcor which performs the correction. The interpolation
function options and the nonlinear dispersion coordinate system is
described in the help topic onedspec.package.
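As a sketch, the default resampling behavior described above corresponds to parameter settings like the following (with linearize set to yes so that the spectra are actually resampled):
cl> specred.interp = "poly5"
cl> params.linearize = yes
cl> params.log = no
cl> params.flux = yes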
Sky Subtraction
Sky subtraction is selected with the skysubtract processing option.
The sky spectra are selected by their aperture and beam numbers and
combined into a single master sky spectrum
which is then subtracted from each object spectrum. If the skyedit
option is selected the sky spectra are plotted using the task
specplot. By default they are superposed to allow identifying
spectra with unusually high signal due to object contamination. To
eliminate a sky spectrum from consideration point at it with the cursor and
type 'd'. The last deleted spectrum may be undeleted with 'e'. This
allows recovery of incorrect or accidental deletions.
The sky combining algorithm parameters define how the individual sky fiber
spectra, after interactive editing, are combined before subtraction from
the object fibers. The goals of combining are to reduce noise, eliminate
cosmic-rays, and eliminate fibers with inadvertent objects. The common
methods for doing this are to use a median and/or a special sigma clipping
algorithm (see scombine for details). The scale
parameter determines whether the individual skys are first scaled to a
common mode. The scaling should be used if the throughput is uncertain,
but in that case you probably did the wrong thing in the throughput
correction. If the sky subtraction is done interactively, i.e. with the
skyedit option selected, then after selecting the spectra to be
combined a query is made for the combining algorithm. This allows
modifying the default algorithm based on the number of sky spectra
selected since the "avsigclip" rejection algorithm requires at least
three spectra.
The combined sky spectrum is subtracted from only those spectra specified
by the object aperture and beam numbers. Other spectra, such as comparison
arc spectra, are retained unchanged. One may include the sky spectra as
object spectra to produce residual sky spectra for analysis. The combined
master sky spectra may be saved if the saveskys parameter is set.
The saved sky is given the name of the object spectrum with the prefix
"sky".
SEE ALSO
apedit,
apfind,
approfiles,
aprecenter,
apresize,
apsum,
aptrace,
apvariance,
ccdred,
center1d,
doargus,
dohydra,
dofoe,
do3fiber,
dispcor,
fit1d,
icfit,
identify,
msresp1d,
observatory,
onedspec.package,
refspectra,
reidentify,
scombine,
setairmass,
setjd,
specplot,
splot
Figure 1: Example Aperture Identification File
cl> type m33sch2
1 1 143
2 1 254
3 0 sky
4 -1 Broken
5 2 arc
.
.
.
44 1 s92
45 -1 Unassigned
46 2 arc
47 0 sky
48 1 phil2
Note the identification of the sky fibers with beam number 0, the object
fibers with 1, and the arc fibers with 2.
The broken and unassigned fiber entries, given beam
number -1, are optional but recommended to give the automatic spectrum
finding operation the best chance to make the correct identifications. The
identification file will vary for each plugboard setup. Additional
information about the aperture identification file may be found in the
description of the task apfind.
EXAMPLES
1. The following example uses artificial data and may be executed
at the terminal (with IRAF V2.10). This is also the sequence performed
by the test procedure "demos dohydra" from the hydra package.
sp> hydra
hy> demos mkhydra
Creating image demoobj ...
Creating image demoflat ...
Creating image demoarc ...
hy> bye
sp> type demoapid
===> demoapid <===
36 1
37 0
38 1
39 1
41 0
42 1
43 1
44 0
45 1
46 -1
47 0
48 1
sp> specred.verbose = yes
sp> dofibers demoobj apref=demoflat flat=demoflat arcs1=demoarc
>>> fib=12 apid=demoapid width=4. minsep=5. maxsep=7. clean- splot+
Set reference apertures for demoflat
Resize apertures for demoflat? (yes):
Edit apertures for demoflat? (yes):
REVISIONS
DOFIBERS V2.10.3
The usual output WCS format is "equispec". The image format type to be
processed is selected with the imtype environment parameter. The
dispersion axis parameter is now a package parameter. Images will only
be processed if they have the CCDPROC keyword. A datamax parameter
has been added to help improve cosmic ray rejection. A scattered
light subtraction processing option has been added.