IRAF help page for package noao.imred.argus, program doargus


doargus -- Argus data reduction task


USAGE

doargus objects


SUMMARY

The doargus reduction task is specialized for scattered light subtraction, extraction, flat fielding, fiber throughput correction, wavelength calibration, and sky subtraction of Argus fiber spectra. It is a command language script which collects and combines the functions and parameters of many general purpose tasks to provide a single complete data reduction path. The task provides a degree of guidance, automation, and record keeping necessary when dealing with the large amount of data generated by this multifiber instrument.


PARAMETERS

objects

List of object spectra to be processed. Previously processed spectra are ignored unless the redo flag is set or the update flag is set and dependent calibration data has changed. Extracted spectra are ignored.

apref =

Aperture reference spectrum. This spectrum is used to define the basic extraction apertures and is typically a flat field spectrum.

flat = (optional)

Flat field spectrum. If specified the one dimensional flat field spectra are extracted and used to make flat field calibrations. If a separate throughput file or image is not specified the flat field is also used for computing a fiber throughput correction.

throughput = (optional)

Throughput file or image. If an image is specified, typically a blank sky observation, the total flux through each fiber is used to correct for fiber throughput. If a file consisting of lines with the aperture number and relative throughput is specified then the fiber throughput will be corrected by those values. If neither is specified but a flat field image is given it is used to compute the throughput.
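
For illustration, a throughput file is simply a text file with one line per fiber giving the aperture number and its relative throughput, as described above. The file name and the values shown here are hypothetical:

    cl> type night1.thru
    1 0.98
    2 1.03
    3 0.95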

arcs1 = (at least one if dispersion correcting)

List of primary arc spectra. These spectra are used to define the dispersion functions for each fiber apart from a possible zero point correction made with secondary shift spectra or arc calibration fibers in the object spectra. One fiber from the first spectrum is used to mark lines and set the dispersion function interactively and dispersion functions for all other fibers and arc spectra are derived from it.

arcs2 = (optional)

List of optional shift arc spectra. Features in these secondary observations are used to supply a wavelength zero point shift through the observing sequence. One type of observation is dome lamps containing characteristic emission lines.

arctable = (optional) (refspectra)

Table defining arc spectra to be assigned to object spectra (see refspectra). If not specified an assignment based on a header parameter, params.sort, such as the observation time is made.

readnoise = 0. (apsum)

Read out noise in photons. This parameter defines the minimum noise sigma. It is defined in terms of photons (or electrons) and scales to the data values through the gain parameter. An image header keyword (case insensitive) may be specified to get the value from the image.

gain = 1. (apsum)

Detector gain or conversion factor between photons/electrons and data values. It is specified as the number of photons per data value. An image header keyword (case insensitive) may be specified to get the value from the image.

datamax = INDEF (apsum.saturation)

The maximum data value which is not a cosmic ray. When cleaning cosmic rays and/or using variance weighted extraction very strong cosmic rays (pixel values much larger than the data) can cause these operations to behave poorly. If a value other than INDEF is specified then all data pixels in excess of this value will be excluded and the algorithms will yield improved results. This applies only to the object spectra and not the flat field or arc spectra. For more on this see the discussion of the saturation parameter in the apextract package.
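
As an illustration, the noise parameters may be given either as numeric values or as image header keywords. The keyword names RDNOISE and GAIN and the datamax value below are hypothetical and should be matched to the actual detector headers:

    cl> doargus demoobj apref=demoflat flat=demoflat arcs1=demoarc \
    >>> readnoise=RDNOISE gain=GAIN datamax=60000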

fibers = 48 (apfind)

Number of fibers. This number is used during the automatic definition of the apertures from the aperture reference spectrum. It is best if this reflects the actual number of fibers which may be found in the aperture reference image. Note that Argus fibers which are unassigned will still contain enough light for identification and the aperture identification file will be used to eliminate the unassigned fibers. The interactive review of the aperture assignments allows verification and adjustments to the automatic aperture definitions.

width = 6. (apedit)

Approximate base full width of the fiber profiles. This parameter is used for the profile centering algorithm.

minsep = 8. (apfind)

Minimum separation between fibers. Weaker spectra or noise within this distance of a stronger spectrum are rejected.

maxsep = 10. (apfind)

Maximum separation between adjacent fibers. This parameter is used to identify missing fibers. If two adjacent spectra exceed this separation then it is assumed that a fiber is missing and the aperture identification assignments will be adjusted accordingly.

apidtable = (apfind)

Aperture identification table containing the fiber number, beam number defining object and sky fibers, and a spectrum title.

objaps = , skyaps = 2x2

List of object and sky aperture numbers. These are used to identify object and sky apertures for sky subtraction. Note sky apertures may be identified as both object and sky if one wants to subtract the mean sky from the individual sky spectra. Because the fibers typically alternate sky and object the default is to define the sky apertures by their aperture numbers and select both object and sky fibers for sky subtraction.

objbeams = , skybeams =

List of object and sky beam numbers. The beam numbers are typically the same as the aperture numbers unless set in the apidtable.

scattered = no (apscatter)

Smooth and subtract scattered light from the object and flat field images? This operation consists of fitting independent smooth functions across the dispersion using data outside the fiber apertures and then smoothing the individual fits along the dispersion. The initial flat field, or if none is given the aperture reference image, is fit interactively to allow setting the fitting parameters. All subsequent subtractions use the same fitting parameters.

fitflat = yes (flat1d)

Fit the composite flat field spectrum by a smooth function and divide each flat field spectrum by this function? This operation removes the average spectral signature of the flat field lamp from the sensitivity correction to avoid modifying the object fluxes.

clean = yes (apsum)

Detect and correct for bad pixels during extraction? This is the same as the clean option in the apextract package. If yes this also implies variance weighted extraction and requires reasonably good values for the readout noise and gain. In addition the datamax parameter can be useful.

dispcor = yes

Dispersion correct spectra? Depending on the params.linearize parameter this may either resample the spectra or insert a dispersion function in the image header.

skysubtract = yes

Subtract sky from the object spectra? If yes the sky spectra are combined and subtracted from the object spectra as defined by the object and sky aperture/beam parameters.

skyedit = yes

Overplot all the sky spectra and allow contaminated sky spectra to be deleted?

saveskys = yes

Save the combined sky spectrum? If no then the sky spectrum will be deleted after sky subtraction is completed.

splot = no

Plot the final spectra with the task splot?

redo = no

Redo operations previously done? If no then previously processed spectra in the objects list will not be processed (unless they need to be updated).

update = yes

Update processing of previously processed spectra if aperture, flat field, or dispersion reference definitions are changed?

batch = no

Process spectra as a background or batch job provided there are no interactive options (skyedit and splot) selected.

listonly = no

List processing steps but don't process?

params = (pset)

Name of parameter set containing additional processing parameters. The default is parameter set params. The parameter set may be examined and modified in the usual ways (typically with "epar params" or ":e params" from the parameter editor). Note that using a different parameter file is not allowed. The parameters are described below.


-- PACKAGE PARAMETERS

Package parameters are those which generally apply to all tasks in the package, including doargus.

dispaxis = 2

Default dispersion axis. The dispersion axis is 1 for dispersion running along image lines and 2 for dispersion running along image columns. If the image header parameter DISPAXIS is defined it has precedence over this parameter. The default value defers to the package parameter of the same name.
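
If DISPAXIS is missing from the headers it may be added with hedit. This is only a sketch; the image names are those of the demo data in the EXAMPLES section and the value 2 is illustrative:

    cl> hedit demoobj,demoflat,demoarc DISPAXIS 2 add+ verify-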

observatory = observatory

Observatory at which the spectra were obtained if not specified in the image header by the keyword OBSERVAT. For Argus data the image headers identify the observatory as "kpno" so this parameter is not used. For data from other observatories this parameter may be used as described in observatory.

interp = poly5 (nearest|linear|poly3|poly5|spline3|sinc)

Spectrum interpolation type used when spectra are resampled. The choices are:

	nearest - nearest neighbor
	 linear - linear
	  poly3 - 3rd order polynomial
	  poly5 - 5th order polynomial
	spline3 - cubic spline
	   sinc - sinc function

database = database

Database (directory) used for storing aperture and dispersion information.

verbose = no

Print verbose information available with various tasks.

logfile = logfile , plotfile =

Text and plot log files. If a filename is not specified then no log is kept. The plot file contains IRAF graphics metacode which may be examined in various ways such as with gkimosaic.

records =

Dummy parameter to be ignored.

version = ARGUS: ...

Version of the package.


PARAMS PARAMETERS

The following parameters are part of the params parameter set and define various algorithm parameters for doargus.


-- GENERAL PARAMETERS --

line = INDEF, nsum = 10

The dispersion line (line or column perpendicular to the dispersion axis) and number of adjacent lines (half before and half after unless at the end of the image) used in finding, recentering, resizing, editing, and tracing operations. A line of INDEF selects the middle of the image along the dispersion axis.

order = decreasing (apfind)

When assigning aperture identifications order the spectra "increasing" or "decreasing" with increasing pixel position (left-to-right or right-to-left in a cross-section plot of the image).

extras = no (apsum)

Include extra information in the output spectra? When cleaning or using variance weighting the cleaned and weighted spectra are recorded in the first 2D plane of a 3D image, the raw, simple sum spectra are recorded in the second plane, and the estimated sigmas are recorded in the third plane.


-- DEFAULT APERTURE LIMITS --

lower = -3., upper = 3. (apdefault)

Default lower and upper aperture limits relative to the aperture center. These limits are used when the apertures are first found and may be resized automatically or interactively.


-- AUTOMATIC APERTURE RESIZING PARAMETERS --

ylevel = 0.05 (apresize)

Data level at which to set aperture limits during automatic resizing. It is a fraction of the peak relative to a local background.


-- TRACE PARAMETERS --

t_step = 10 (aptrace)

Step along the dispersion axis between determination of the spectrum positions. Note the nsum parameter is also used to enhance the signal-to-noise at each step.

t_function = spline3 , t_order = 3 (aptrace)

Default trace fitting function and order. The fitting function types are "chebyshev" polynomial, "legendre" polynomial, "spline1" linear spline, and "spline3" cubic spline. The order refers to the number of terms in the polynomial functions or the number of spline pieces in the spline functions.

t_niterate = 1, t_low = 3., t_high = 3. (aptrace)

Default number of rejection iterations and rejection sigma thresholds.


-- SCATTERED LIGHT PARAMETERS --

buffer = 1. (apscatter)

Buffer distance from the aperture edges to be excluded in selecting the scattered light pixels to be used.

apscat1 = (apscatter)

Fitting parameters across the dispersion. This references an additional set of parameters for the ICFIT package. The default is the "apscat1" parameter set.

apscat2 = (apscatter)

Fitting parameters along the dispersion. This references an additional set of parameters for the ICFIT package. The default is the "apscat2" parameter set.


-- APERTURE EXTRACTION PARAMETERS --

weights = none (apsum)

Type of extraction weighting. Note that if the clean parameter is set then the weights used are "variance" regardless of the weights specified by this parameter. The choices are:

none

The pixels are summed without weights except for partial pixels at the ends.

variance

The extraction is weighted by the variance based on the data values and a poisson/ccd model using the gain and readnoise parameters.

pfit = fit1d (apsum) (fit1d|fit2d)

Profile fitting algorithm for cleaning and variance weighted extractions. The default is generally appropriate for Argus data but users may try the other algorithm. See approfiles for further information.

lsigma = 3., usigma = 3. (apsum)

Lower and upper rejection thresholds, given as a number of times the estimated sigma of a pixel, for cleaning.

nsubaps = 1 (apsum)

During extraction it is possible to equally divide the apertures into this number of subapertures.


-- FLAT FIELD FUNCTION FITTING PARAMETERS --

f_interactive = yes (fit1d)

Fit the composite one dimensional flat field spectrum interactively? This is used if fitflat is set and a two dimensional flat field spectrum is specified.

f_function = spline3 , f_order = 10 (fit1d)

Function and order used to fit the composite one dimensional flat field spectrum. The functions are "legendre", "chebyshev", "spline1", and "spline3". The spline functions are linear and cubic splines with the order specifying the number of pieces.


-- ARC DISPERSION FUNCTION PARAMETERS --

coordlist = linelists$ctiohenear.dat (identify)

Arc line list consisting of an ordered list of wavelengths. Some standard line lists are available in the directory "linelists$".

match = 10. (identify)

The maximum difference for a match between the dispersion function prediction value and a wavelength in the coordinate list.

fwidth = 4. (identify)

Approximate full base width (in pixels) of arc lines.

cradius = 10. (reidentify)

Radius from previous position to reidentify arc line.

i_function = chebyshev , i_order = 3 (identify)

The default function and order to be fit to the arc wavelengths as a function of the pixel coordinate. The functions choices are "chebyshev", "legendre", "spline1", or "spline3".

i_niterate = 2, i_low = 3.0, i_high = 3.0 (identify)

Number of rejection iterations and sigma thresholds for rejecting arc lines from the dispersion function fits.

refit = yes (reidentify)

Refit the dispersion function? If yes and there is more than 1 line and a dispersion function was defined in the arc reference then a new dispersion function of the same type as in the reference image is fit using the new pixel positions. Otherwise only a zero point shift is determined for the revised fitted coordinates without changing the form of the dispersion function.

addfeatures = no (reidentify)

Add new features from a line list during each reidentification? This option can be used to compensate for lost features from the reference solution. Care should be exercised that misidentified features are not introduced.


-- AUTOMATIC ARC ASSIGNMENT PARAMETERS --

select = interp (refspectra)

Selection method for assigning wavelength calibration spectra. Note that an arc assignment table may be used to override the selection method and explicitly assign arc spectra to object spectra. The automatic selection methods are:

average

Average two reference spectra without regard to any sort parameter. If only one reference spectrum is specified then it is assigned with a warning. If more than two reference spectra are specified then only the first two are used and a warning is given. This option is used to assign two reference spectra, with equal weights, independent of any sorting parameter.

following

Select the nearest following spectrum in the reference list based on the sorting parameter. If there is no following spectrum use the nearest preceding spectrum.

interp

Interpolate between the preceding and following spectra in the reference list based on the sorting parameter. If there is no preceding and following spectrum use the nearest spectrum. The interpolation is weighted by the relative distances of the sorting parameter.

match

Match each input spectrum with the reference spectrum list in order. This overrides the reference aperture check.

nearest

Select the nearest spectrum in the reference list based on the sorting parameter.

preceding

Select the nearest preceding spectrum in the reference list based on the sorting parameter. If there is no preceding spectrum use the nearest following spectrum.

sort = jd , group = ljd (refspectra)

Image header keywords to be used as the sorting parameter for selection based on order and to group spectra. A null string, "", or the word "none" may be used to disable the sorting or grouping parameters. The sorting parameter must be numeric but otherwise may be anything. The grouping parameter may be a string or number and must simply be the same for all spectra within the same group (say a single night). Common sorting parameters are times or positions. In doargus the Julian date (JD) and the local Julian day number (LJD) at the middle of the exposure are automatically computed from the universal time at the beginning of the exposure and the exposure time. Also the parameter UTMIDDLE is computed.

time = no, timewrap = 17. (refspectra)

Is the sorting parameter a 24 hour time? If so then the time origin for the sorting is specified by the timewrap parameter. This time should precede the first observation and follow the last observation in a 24 hour cycle.


-- DISPERSION CORRECTION PARAMETERS --

linearize = yes (dispcor)

Interpolate the spectra to a linear dispersion sampling? If yes the spectra will be interpolated to a linear or log-linear sampling. If no the nonlinear dispersion function(s) from the dispersion function database are assigned to the input image world coordinate system and the spectral data are not interpolated.

log = no (dispcor)

Use linear logarithmic wavelength coordinates? Linear logarithmic wavelength coordinates have wavelength intervals which are constant in the logarithm of the wavelength.

flux = yes (dispcor)

Conserve the total flux during interpolation? If no the output spectrum is interpolated from the input spectrum at each output wavelength coordinate. If yes the input spectrum is integrated over the extent of each output pixel. This is slower than simple interpolation.


-- SKY SUBTRACTION PARAMETERS --

combine = average (scombine) (average|median)

Option for combining sky pixels at the same dispersion coordinate after any rejection operation. The options are to compute the "average" or "median" of the pixels. The median uses the average of the two central values when the number of pixels is even.

reject = none (scombine) (none|minmax|avsigclip)

Type of rejection operation performed on the pixels which overlap at each dispersion coordinate. The algorithms are discussed in the help for scombine. The rejection choices are:

      none - No rejection
    minmax - Reject the low and high pixels
 avsigclip - Reject pixels using an averaged sigma clipping algorithm

scale = none (none|mode|median|mean)

Multiplicative scaling to be applied to each spectrum. The choices are none or scale by the mode, median, or mean. This should not be necessary if the flat field and throughput corrections have been properly made.


ENVIRONMENT PARAMETERS

The environment parameter imtype is used to determine the extension of the images to be processed and created. This allows use with any supported image extension. For STF images the extension has to be exact; for example "d1h".
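
For example, the current image type may be examined and changed with the CL environment commands. The extension "imh" below is only illustrative:

    cl> show imtype
    cl> reset imtype = "imh"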


DESCRIPTION

The doargus reduction task is specialized for scattered light subtraction, extraction, flat fielding, fiber throughput correction, wavelength calibration, and sky subtraction of Argus fiber spectra. It is a command language script which collects and combines the functions and parameters of many general purpose tasks to provide a single, complete data reduction path. The task provides a degree of guidance, automation, and record keeping necessary when dealing with the large amount of data generated by these multifiber instruments.

The general organization of the task is to do the interactive setup steps first using representative calibration data and then perform the majority of the reductions automatically, and possibly as a background process, with reference to the setup data. In addition, the task determines which setup and processing operations have been completed in previous executions of the task and, contingent on the redo and update options, skips or repeats some or all of the steps.

The description is divided into a quick usage outline followed by details of the parameters and algorithms. The usage outline is provided as a checklist and a refresher for those familiar with this task and the component tasks. It presents only the default or recommended usage for Argus since there are many variations possible. Because doargus combines many separate, general purpose tasks the description given here refers to these tasks and leaves some of the details to their help documentation.

Usage Outline

[1]

The images are first processed with ccdproc for overscan, bias, and dark corrections. The doargus task will abort if the image header keyword CCDPROC, which is added by ccdproc, is missing. If the data were processed outside of the IRAF ccdred package then a dummy CCDPROC keyword should be added to the image headers, say with hedit as in the example below.
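
A minimal sketch of adding the dummy keyword, assuming the raw object images match the hypothetical template "obj*.imh":

    cl> hedit obj*.imh CCDPROC "processed outside ccdred" add+ verify-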

[2]

Set the doargus parameters with eparam. Specify the object images to be processed, the flat field image as the aperture reference and the flat field, and one or more arc images. A throughput file or image, such as a blank sky observation, may also be specified. If there are many object or arc spectra per setup you might want to prepare "@ files". Prepare and specify an aperture identification file if desired. You might wish to verify the geometry parameters, separations, dispersion direction, etc., which may change with different detector setups. The processing parameters are set for complete reductions but for quicklook you might not use the clean option or dispersion calibration and sky subtraction.

The parameters are set for a particular Argus configuration and different configurations may use different flat fields, arcs, and aperture identification tables.
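
For example, "@ files" listing the objects and arcs for a night might be prepared as follows. The file and image names are hypothetical:

    cl> files obj1*.imh > objects.lis
    cl> files arc1*.imh > arcs.lis
    cl> doargus @objects.lis apref=flat001 flat=flat001 arcs1=@arcs.lis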

[3]

Run the task. This may be repeated multiple times with different observations and the task will generally only do the setup steps once and only process new images. Queries presented during the execution for various interactive operations may be answered with "yes", "no", "YES", or "NO". The lower case responses apply just to that query while the upper case responses apply to all further such queries during the execution and no further queries of that type will be made.

[4]

The apertures are defined using the specified aperture reference image. The spectra are found automatically and apertures assigned based on task parameters and the aperture identification table. Unassigned fibers may have a negative beam number and will be ignored in subsequent processing. The resize option sets the aperture size to the widths of the profiles at a fixed fraction of the peak height. The interactive review of the apertures is recommended. If the identifications are off by a shift the 'o' key is used. To exit the aperture review type 'q'.

[5]

The fiber positions at a series of points along the dispersion are measured and a function is fit to these positions. This may be done interactively to adjust the fitting parameters. Not all fibers need be examined and the "NO" response will quit the interactive fitting. To exit the interactive fitting type 'q'.

[6]

If scattered light subtraction is to be done the flat field image is used to define the scattered light fitting parameters interactively. If one is not specified then the aperture reference image is used for this purpose.

There are two queries for the interactive fitting. A graph of the data between the defined reference apertures separated by a specified buffer distance is first shown. The function order and type may be adjusted. After quitting with 'q' the user has the option of changing the buffer value and returning to the fitting, changing the image line or column to check if the fit parameters are satisfactory at other points, or quitting and accepting the fit parameters. After fitting all points across the dispersion another graph showing the scattered light from the individual fits is shown and the smoothing parameters along the dispersion may be adjusted. Upon quitting with 'q' you have the option of checking other cuts parallel to the dispersion or quitting and finishing the scattered light function smoothing and subtraction.

If there is a throughput image then this is corrected for scattered light noninteractively using the previous fitting parameters.

[7]

If flat fielding is to be done the flat field spectra are extracted. The average spectrum over all fibers is determined and a function is fit interactively (exit with 'q'). This function is generally of sufficiently high order that the overall shape is well fit. This function is then used to normalize the individual flat field spectra. If a throughput image, a sky flat, is specified then the total sky counts through each fiber are used to correct the total flat field counts. Alternatively, a separately derived throughput file can be used for specifying throughput corrections. If neither type of throughput is used the flat field also provides the throughput correction. The final response spectra are normalized to a unit mean over all fibers. The relative average throughput for each fiber is recorded in the log and possibly printed to the terminal.

[8]

If dispersion correction is selected the first arc in the arc list is extracted. The middle fiber is used to identify the arc lines and define the dispersion function using the task identify. Identify a few arc lines with 'm' and use the 'l' line list identification command to automatically add additional lines and fit the dispersion function. Check the quality of the dispersion function fit with 'f'. When satisfied exit with 'q'.

[9]

The remaining fibers are automatically reidentified. You have the option to review the line identifications and dispersion function for each fiber and interactively add or delete arc lines and change fitting parameters. This can be done selectively, such as when the reported RMS increases significantly.

[10]

If the spectra are to be resampled to a linear dispersion system (which will be the same for all spectra) default dispersion parameters are printed and you are allowed to adjust these as desired.

[11]

The object spectra are now automatically scattered light subtracted, extracted, flat fielded, and dispersion corrected.

[12]

When sky subtracting, the individual sky spectra may be reviewed and some spectra eliminated using the 'd' key. The last deleted spectrum may be recovered with the 'e' key. After exiting the review with 'q' you are asked for the combining option. The type of combining is dictated by the number of sky fibers.

[13]

The option to examine the final spectra with splot may be given. To exit type 'q'.

[14]

If scattered light is subtracted from the input data a copy of the original image is made by appending "noscat" to the image name. If the data are reprocessed with the redo flag the original image will be used again to allow modification of the scattered light parameters.

The final spectra will have the same name as the original 2D images with a ".ms" extension added. The flat field and arc spectra may also have part of the aperture identification table name added, if used, to allow different configurations to use the same 2D flat field and arcs but with different aperture definitions.

Spectra and Data Files

The basic input consists of Argus object and calibration spectra stored as IRAF images. The type of image format is defined by the environment parameter imtype. Only images with that extension will be processed and created. The raw CCD images must be processed to remove overscan, bias, and dark count effects. This is generally done using the ccdred package. The doargus task will abort if the image header keyword CCDPROC, which is added by ccdproc, is missing. If the data were processed outside of the IRAF ccdred package then a dummy CCDPROC keyword should be added to the image headers, say with hedit. Flat fielding is generally not done at this stage but as part of doargus. If flat fielding is done as part of the basic CCD processing then a flattened flat field, blank sky observation, or throughput file should still be created for applying fiber throughput corrections.

The task doargus uses several types of calibration spectra. These are flat fields, blank sky flat fields, comparison lamp spectra, and auxiliary mercury line (from the dome lights) or sky line spectra. The flat field, throughput image or file, and auxiliary emission line spectra are optional. If a flat field is used then the sky flat or throughput file is optional assuming the flat field has the same fiber illumination. It is legal to specify only a throughput image or file and leave the flat field blank in order to simply apply a throughput correction. Because only the total counts through each fiber are used from a throughput image, sky flat exposures need not be of high signal per pixel.

There are two types of dispersion calibration methods. One is to take arc calibration exposures through all fibers periodically and apply the dispersion function derived from one or interpolated between pairs to the object fibers. This is the usual method with Argus. A second (uncommon) method is to use auxiliary line spectra such as lines in the dome lights or sky lines to monitor shifts relative to a few actual arc exposures. The main reason to do this is if taking arc exposures through all fibers is inconvenient.

The assignment of arc or auxiliary line calibration exposures to object exposures is generally done by selecting the nearest in time and interpolating. There are other options possible which are described under the task refspectra. The most general option is to define a table giving the object image name and the one or two arc spectra to be assigned to that object. That file is called an arc assignment table and it is one of the optional setup files which can be used with doargus.
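
An arc assignment table is a text file with one line per object giving the object image name followed by the one or two arc images to assign to it. The image names below are hypothetical:

    cl> type arcassign
    obj0012  arc0011
    obj0014  arc0011  arc0016
    obj0015  arc0016

The table is given to doargus with the arctable parameter.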

The first step in the processing is identifying the spectra in the images. The default method is to use the fact that object and sky fibers alternate and assign sequential numbers to the fibers so that the sky fibers are the even aperture numbers and the object fibers are the odd aperture numbers. In this case the beam numbers are not used (and are the same as the aperture numbers) and there are no object identifications associated with the spectra.

A very useful, optional, setup file is an aperture identification file. The file contains information about the fiber assignments including object titles. It must be prepared by the user for each configuration. The aperture identification file contains lines consisting of an aperture number, a beam number, and an object identification. These must be in the same order as the fibers in the image. The aperture number may be any unique number but it is recommended that the normal sequential fiber numbers be used. The beam number may be used to flag object or sky spectra or simply be the same as the aperture number. The object identifications are optional but it is good practice to include them so that the data will contain the object information independent of other records. Figure 1 shows an example for a configuration called LMC123.

    Figure 1: Example Aperture Identification File
    cl> type LMC123
    1 1 143
    2 0 sky
    3 1 121
       .
       .
       .
    47 1 s92
    48 0 sky

Note the identification of the sky fibers with beam number 0 and the object fibers with 1. Any broken fibers should be included and identified by a different beam number, say beam number -1, to give the automatic spectrum finding operation the best chance to make the correct identifications. Naturally the identification file will vary for each configuration. Additional information about the aperture identification file may be found in the description of the task apfind.

The final reduced spectra are recorded in two or three dimensional IRAF images. The images have the same name as the original images with an added ".ms" extension. Each line in the reduced image is a one dimensional spectrum with associated aperture, wavelength, and identification information. When the extras parameter is set the lines in the third dimension contain additional information (see apsum for further details). These spectral formats are accepted by the one dimensional spectroscopy tools such as the plotting tasks splot and specplot. The special task scopy may be used to extract specific apertures or to change format to individual one dimensional images.
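
As a sketch, a single aperture might be copied from a reduced image to a one dimensional image with scopy. The image names and aperture number are hypothetical:

    cl> scopy demoobj.ms demoobj1 apertures="13" format="onedspec"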

Package Parameters

The argus package parameters set parameters affecting all the tasks in the package. The dispersion axis parameter defines the image axis along which the dispersion runs. This is used if the image header doesn't define the dispersion axis with the DISPAXIS keyword. The observatory parameter is only required for data taken with fiber instruments other than Argus. The spectrum interpolation type might be changed to "sinc" but with the cautions given in onedspec.package. The other parameters define the standard I/O functions. The verbose parameter selects whether to print everything which goes into the log file on the terminal. It is useful for monitoring what the doargus task does. The log and plot files are useful for keeping a record of the processing. A log file is highly recommended. A plot file provides a record of apertures, traces, and extracted spectra but can become quite large. The plotfile is most conveniently viewed and printed with gkimosaic.
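
For example, assuming the plot file was given the name "plotfile" in the package parameters, its contents may be reviewed with:

    cl> gkimosaic plotfile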

Processing Parameters

The list of objects and arcs can be @ files if desired. The aperture reference spectrum is usually the same as the flat field spectrum though it could be any exposure with enough signal to accurately define the positions and trace the spectra. The first list of arcs is the standard Th-Ar or HeNeAr comparison arc spectra (they must all be of the same type). The second list of arcs is the auxiliary emission line exposures mentioned previously.

The detector read out noise and gain are used for cleaning and variance (optimal) extraction. The dispersion axis defines the wavelength direction of spectra in the image if not defined in the image header by the keyword DISPAXIS. The width and separation parameters define the dimensions (in pixels) of the spectra (fiber profile) across the dispersion. The width parameter primarily affects the centering. The maximum separation parameter is important if missing spectra are to be correctly skipped. The number of fibers can be left at the default and the task will try to account for unassigned or missing fibers. However, this may lead to occasional incorrect identifications so it is recommended that only the true number of fibers be specified. The aperture identification file was described earlier.

The task needs to know which fibers are object and which are sky if sky subtraction is to be done. One could explicitly give the aperture numbers but the recommended way is to use the default of selecting every second fiber as sky. If no list of aperture or beam numbers is given then all apertures or beam numbers are selected. Sky subtracted sky spectra are useful for evaluating the sky subtraction. Since only the spectra identified as objects are sky subtracted one can exclude fibers from the sky subtraction. For example, to eliminate the sky spectra from the final results the objaps parameter could be set to "1x2". All other fibers will remain in the extracted spectra but will not be sky subtracted.
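
As a sketch, restricting the sky subtraction to the odd (object) fibers while keeping the default sky selection could be done by setting the task parameters directly. The values follow the "1x2" and "2x2" ranges discussed above:

    cl> doargus.objaps = "1x2"
    cl> doargus.skyaps = "2x2"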

The next set of parameters select the processing steps and options. The scattered light option allows fitting and subtracting a scattered light surface from the input object and flat field. If there is significant scattered light which is not subtracted the fiber throughput correction will not be accurate. The flat fitting option allows fitting and removing the overall shape of the flat field spectra while preserving the pixel-to-pixel response corrections. This is useful for maintaining the approximate object count levels and not introducing the reciprocal of the flat field spectrum into the object spectra. The clean option invokes a profile fitting and deviant point rejection algorithm as well as a variance weighting of points in the aperture. These options require knowing the effective (i.e. accounting for any image combining) read out noise and gain. For a discussion of cleaning and variance weighted extraction see apvariance and approfiles.

The dispersion correction option selects whether to extract arc spectra, determine a dispersion function, assign them to the object spectra, and, possibly, resample the spectra to a linear (or log-linear) wavelength scale. If simultaneous arc fibers are defined there is an option to delete them from the final spectra when they are no longer needed.

The sky subtraction option selects whether to combine the sky fiber spectra and subtract this sky from the object fiber spectra. It is also possible to subtract the sky and object fibers by pairs. Dispersion correction and sky subtraction are independent operations. This means that if dispersion correction is not done then the sky subtraction will be done with respect to pixel coordinates. This might be desirable in some quick look cases though it is incorrect for final reductions.

The sky subtraction option has two additional options. The individual sky spectra may be examined and contaminated spectra deleted interactively before combining. This can be a useful feature in crowded regions. The final combined sky spectrum (or individual skys if subtracting by pairs) may be saved for later inspection in an image with the spectrum name prefixed by sky.

After a spectrum has been processed it is possible to examine the results interactively using the splot task. This option has a query which may be turned off with "YES" or "NO" if there are multiple spectra to be processed.

Generally once a spectrum has been processed it will not be reprocessed if specified as an input spectrum. However, changes to the underlying calibration data can cause such spectra to be reprocessed if the update flag is set. The changes which will cause an update are a new aperture identification file, a new reference image, new flat fields, and a new arc reference. If all input spectra are to be processed regardless of previous processing the redo flag may be used. Note that reprocessing clobbers the previously processed output spectra.

The batch processing option allows object spectra to be processed as a background or batch job. This will only occur if sky spectra editing and splot review (interactive operations) are turned off, either when the task is run or by responding with "NO" to the queries during processing.

The listonly option prints a summary of the processing steps which will be performed on the input spectra without actually doing anything. This is useful for verifying which spectra will be affected if the input list contains previously processed spectra. The listing does not include any arc spectra which may be extracted to dispersion calibrate an object spectrum.

The last parameter (excluding the task mode parameter) points to another parameter set for the algorithm parameters. The way doargus works this may not have any value and the parameter set params is always used. The algorithm parameters are discussed further in the next section.

Algorithms and Algorithm Parameters

This section summarizes the various algorithms used by the doargus task and the parameters which control and modify the algorithms. The algorithm parameters available to the user are collected in the parameter set params. These parameters are taken from the various general purpose tasks used by the doargus processing task. Additional information about these parameters and algorithms may be found in the help for the actual task executed. These tasks are identified in the parameter section listing in parentheses. The aim of this parameter set organization is to collect all the algorithm parameters in one place separate from the processing parameters and include only those which are relevant for Argus. The parameter values can be changed from the defaults by using the parameter editor,

	cl> epar params
or simply typing params. The parameter editor can also be entered when editing the doargus parameters by typing :e params or simply :e if positioned at the params parameter.

Extraction

The identification of the spectra in the two dimensional images and their scattered light subtraction and extraction to one dimensional spectra in multispec format is accomplished using the tasks from the apextract package. The parameters from the beginning of the parameter set through nsubaps control the extractions.

The dispersion line is that used for finding the spectra, for plotting in the aperture editor, and as the starting point for tracing. The default value of INDEF selects the middle of the image. The aperture finding, adjusting, editing, and tracing operations also allow summing a number of dispersion lines to improve the signal. The number of lines is set by the nsum parameter.

The order parameter defines whether the order of the aperture identifications in the aperture identification file (or the default sequential numbers if no file is used) is in the same sense as the image coordinates (increasing) or the opposite sense (decreasing). If the aperture identifications turn out to be opposite to what is desired when viewed in the aperture editing graph then simply change this parameter.

The basic data output by the spectral extraction routines are the one dimensional spectra. Additional information may be output when the extras option is selected and the cleaning or variance weighting options are also selected. In this case a three dimensional image is produced with the first element of the third dimension being the cleaned and/or weighted spectra, the second element being the uncleaned and unweighted spectra, and the third element being an estimate of the sigma of each pixel in the extracted spectrum. Currently the sigma data is not used by any other tasks and is only for reference.

The initial step of finding the fiber spectra in the aperture reference image consists of identifying the peaks in a cut across the dispersion, eliminating those which are closer to each other than the minsep distance, and then keeping the strongest peaks up to the number given by the fibers parameter. The centers of the profiles are determined using the center1d algorithm which uses the width parameter.

Apertures are then assigned to each spectrum. The initial edges of the aperture relative to the center are defined by the lower and upper parameters. The initial apertures are the same for all spectra but they can each be automatically resized. The automatic resizing sets the aperture limits at a fraction of the peak relative to the interfiber minimum. The default ylevel is to resize the apertures to 5% of the peak. See the description for the task apresize for further details.

The user is given the opportunity to graphically review and adjust the aperture definitions. This is recommended and it is fundamentally important that the correct aperture/beam numbers be associated with the proper fibers; otherwise the spectrum identifications will not be for the objects they say. An important command in this regard is the 'o' key which allows reordering the identifications. This is required if the first fiber is actually missing since the initial assignment begins with the first spectrum found. The aperture editor is a very powerful tool and is described in detail in the help for apedit.

The next set of parameters control the tracing and function fitting of the aperture reference positions along the dispersion direction. The position of a spectrum across the dispersion is determined by the centering algorithm (see center1d) at a series of evenly spaced steps, given by the parameter t_step, along the dispersion. The step size should be fine enough to follow position changes but it is not necessary to measure every point. The fitted points may jump around a little bit due to noise and cosmic rays even when summing a number of lines. Thus, a smooth function is fit. The function type, order, and iterative rejection of deviant points is controlled by the other trace parameters. For more discussion consult the help pages for aptrace and icfit. The default is to fit a cubic spline of three pieces with a single iteration of 3 sigma rejection.

The actual extraction of the spectra by summing across the aperture at each point along the dispersion is controlled by the next set of parameters. The default extraction simply sums the pixels using partial pixels at the ends. The options allow selection of a weighted sum based on a Poisson variance model using the readnoise and gain detector parameters. Note that if the clean option is selected the variance weighted extraction is used regardless of the weights parameter. The sigma thresholds for cleaning are also set in the params parameters. For more on the variance weighted extraction and cleaning see apvariance and approfiles as well as apsum.

The last parameter, nsubaps, is used only in special cases when it is desired to subdivide the fiber profiles into subapertures prior to dispersion correction. After dispersion correction the subapertures are then added together. The purpose of this is to correct for wavelength shifts across a fiber.

Scattered Light Subtraction

Scattered light may be subtracted from the input two dimensional image as the first step. This is done using the algorithm described in apscatter. This can be important if there is significant scattered light since the flat field/throughput correction will otherwise be incorrect. The algorithm consists of fitting a function to the data outside the defined apertures by a specified buffer at each line or column across the dispersion. The function fitting parameters are the same at each line. Because the fitted functions are independent at each line or column a second set of one dimensional functions are fit parallel to the dispersion using the evaluated fit values from the cross-dispersion step. This produces a smooth scattered light surface which is finally subtracted from the input image. Again the function fitting parameters are the same at each line or column though they may be different than the parameters used to fit across the dispersion.

The first time the task is run with a particular flat field (or aperture reference image if no flat field is used) the scattered light fitting parameters are set interactively using that image. The interactive step selects a particular line or column upon which the fitting is done interactively with the icfit commands. A query is first issued which allows skipping this interactive stage. Note that the interactive fitting is only for defining the fitting functions and orders. When the graphical icfit fitting is exited (with 'q') there is a second prompt allowing you to change the buffer distance (in the first cross-dispersion stage) from the apertures, change the line/column, or finally quit.

The initial fitting parameters and the final set parameters are recorded in the apscat1 and apscat2 hidden parameter sets. These parameters are then used automatically for every subsequent image which is scattered light corrected.

The scattered light subtraction modifies the input 2D images. To preserve the original data a copy of the original image is made with the same root name and the word "noscat" appended. The scattered light subtracted images will have the header keyword "APSCATTE" which is how the task avoids repeating the scattered light subtraction during any reprocessing. However if the redo option is selected the scattered light subtraction will also be redone by first restoring the "noscat" images to the original input names.

Flat Field and Fiber Throughput Corrections

Flat field corrections may be made during the basic CCD processing; i.e. direct division by the two dimensional flat field observation. In that case do not specify a flat field spectrum; use the null string "". The doargus task provides an alternative flat field response correction based on division of the extracted object spectra by the extracted flat field spectra. A discussion of the theory and merits of flat fielding directly versus using the extracted spectra will not be made here. The doargus flat fielding algorithm is the recommended method for flat fielding since it works well and is not subject to the many problems involved in two dimensional flat fielding.

In addition to correcting for pixel-to-pixel response the flat field step also corrects for differences in the fiber throughput. Thus, even if the pixel-to-pixel flat field corrections have been made in some other way it is desirable to use a sky or dome flat observation for determining a fiber throughput correction. Alternatively, a separately derived throughput file may be specified. This file consists of the aperture numbers (the same as used for the aperture reference) and relative throughput numbers.

The first step is extraction of the flat field spectrum, if specified, using the reference apertures. Only one flat field is allowed so if multiple flat fields are required the data must be reduced in groups. After extraction one or more corrections are applied. If the fitflat option is selected (the default) the extracted flat field spectra are averaged together and a smooth function is fit. The default fitting function and order are given by the parameters f_function and f_order. If the parameter f_interactive is "yes" then the fitting is done interactively using the fit1d task which uses the icfit interactive fitting commands.

The fitted function is divided into the individual flat field spectra to remove the basic shape of the spectrum while maintaining the relative individual pixel responses and any fiber to fiber differences. This step avoids introducing the flat field spectrum shape into the object spectra and closely preserves the object counts.

If a throughput image is available (an observation of blank sky usually at twilight) it is extracted. If no flat field is used the average signal through each fiber is computed and this becomes the response normalization function. Note that a dome flat may be used in place of a sky in the sky flat field parameter for producing throughput only corrections. If a flat field is specified then each sky spectrum is divided by the appropriate flat field spectrum. The total counts through each fiber are multiplied into the flat field spectrum thus making the sky throughput of each fiber the same. This correction is important if the illumination of the fibers differs between the flat field source and the sky. Since only the total counts are required the sky or dome flat field spectra need not be particularly strong though care must be taken to avoid objects.

Instead of a sky flat or other throughput image a separately derived throughput file may be used. It may be used with or without a flat field.

The final step is to normalize the flat field spectra by the mean counts of all the fibers. This normalization step is simply to preserve the average counts of the extracted object and arc spectra after division by the response spectra. The final relative throughput values are recorded in the log and possibly printed on the terminal.

These flat field response steps and algorithm are available as a separate task called msresp1d.

Dispersion Correction

Dispersion corrections are applied to the extracted spectra if the dispcor parameter is set. This can be a complicated process which the doargus task tries to simplify for you. There are three basic steps involved: determining the dispersion functions relating pixel position to wavelength, assigning the appropriate dispersion function to a particular observation, and resampling the spectra to evenly spaced pixels in wavelength.

The comparison arc spectra are used to define dispersion functions for the fibers using the tasks identify and reidentify. The interactive identify task is only used on the central fiber of the first arc spectrum to define the basic reference dispersion solution from which all other fibers and arc spectra are automatically derived using reidentify.

The set of arc dispersion function parameters are from identify and reidentify. The parameters define a line list for use in automatically assigning wavelengths to arc lines, a parameter controlling the width of the centering window (which should match the base line widths), the dispersion function type and order, parameters to exclude bad lines from function fits, and parameters defining whether to refit the dispersion function, as opposed to simply determining a zero point shift, and the addition of new lines from the line list when reidentifying additional arc spectra. The defaults should generally be adequate and the dispersion function fitting parameters may be altered interactively. One should consult the help for the two tasks for additional details of these parameters and the operation of identify.

Generally, taking a number of comparison arc lamp exposures interspersed with the program spectra is sufficient to accurately dispersion calibrate Argus spectra. Dispersion functions are determined independently for each fiber of each arc image and then assigned to the matching fibers in the program object observations. The assignment consists of selecting one or two arc images to calibrate each object image.

However, there is another calibration option which may be of interest. This option uses auxiliary line spectra, such as from dome lights or night sky lines, to monitor wavelength zero point shifts which are added to the basic dispersion function derived from a single reference arc. Initially one of the auxiliary fiber spectra is plotted interactively by identify with the reference dispersion function for the appropriate fiber. The user marks one or more lines which will be used to compute zero point wavelength shifts in the dispersion functions automatically. In this case it is auxiliary arc images which are assigned to particular objects.

The arc or auxiliary line image assignments may be done either explicitly with an arc assignment table (parameter arctable) or based on a header parameter. The task used is refspectra and the user should consult this task if the default behavior is not what is desired. The default is to interpolate linearly between the nearest arcs based on the Julian date (corrected to the middle of the exposure). The Julian date and a local Julian day number (the day number at local noon) are computed automatically by the task setjd and recorded in the image headers under the keywords JD and LJD. In addition the universal time at the middle of the exposure, keyword UTMIDDLE, is computed by the task setairmass and this may also be used for ordering the arc and object observations.
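
The computed keywords may be checked with hselect. The image name below is hypothetical:

    cl> hselect demoobj.ms $I,JD,LJD,UTMIDDLE yes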

The last step of dispersion correction (resampling the spectrum to evenly spaced pixels in wavelength) is optional and relatively straightforward. If the linearize parameter is no then the spectra are not resampled and the nonlinear dispersion information is recorded in the image header. Other IRAF tasks (the coordinate description is specific to IRAF) will use this information whenever wavelengths are needed. If linearizing is selected a linear dispersion relation, either linear in the wavelength or the log of the wavelength, is defined once and applied to every extracted spectrum. The resampling algorithm parameters allow selecting the interpolation function type, whether to conserve flux per pixel by integrating across the extent of the final pixel, and whether to linearize to equal linear or logarithmic intervals. The latter may be appropriate for radial velocity studies. The default is to use a fifth order polynomial for interpolation, to conserve flux, and to not use logarithmic wavelength bins. These parameters are described fully in the help for the task dispcor which performs the correction. The interpolation function options and the nonlinear dispersion coordinate system are described in the help topic onedspec.package.

Sky Subtraction

Sky subtraction is selected with the skysubtract processing option. The sky spectra are selected by their aperture and beam numbers. If the skyedit option is selected the sky spectra are plotted using the task specplot. By default they are superposed to allow identifying spectra with unusually high signal due to object contamination. To eliminate a sky spectrum from consideration point at it with the cursor and type 'd'. The last deleted spectrum may be undeleted with 'e'. This allows recovery of incorrect or accidental deletions.

If the combining option is "none" then the sky and object fibers are paired, one sky is subtracted from each object, and the saved skys are the individual sky fiber spectra.

However, the usual case is to combine the individual skys into a single master sky spectrum which is then subtracted from each object spectrum. The sky combining algorithm parameters define how the individual sky fiber spectra, after interactive editing, are combined before subtraction from the object fibers. The goals of combining are to reduce noise, eliminate cosmic-rays, and eliminate fibers with inadvertent objects. The common methods for doing this are to use a median and/or a special sigma clipping algorithm (see scombine for details). The scale parameter determines whether the individual skys are first scaled to a common mode. The scaling should be used if the throughput is uncertain, but in that case you probably did the wrong thing in the throughput correction. If the sky subtraction is done interactively, i.e. with the skyedit option selected, then after selecting the spectra to be combined a query is made for the combining algorithm. This allows modifying the default algorithm based on the number of sky spectra selected since the "avsigclip" rejection algorithm requires at least three spectra.
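
As a sketch, the non-interactive combining defaults may be set through the params pset; the parameter names (combine, reject, scale) follow the related doX tasks and should be checked with "epar params":

ar> params.combine = "average"     # how the sky fiber spectra are combined
ar> params.reject = "avsigclip"    # rejection algorithm; requires at least three skys
ar> params.scale = "none"          # do not rescale the skys to a common mode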

The combined sky spectrum is subtracted from only those spectra specified by the object aperture and beam numbers. Other spectra are retained unchanged. One may include the sky spectra as object spectra to produce residual sky spectra for analysis. The combined master sky spectra may be saved if the saveskys parameter is set. The saved sky is given the name of the object spectrum with the prefix "sky".


EXAMPLES

1. The following example uses artificial data and may be executed at the terminal (with IRAF V2.10). This is also the sequence performed by the test procedure "demos qtest".

ar> demos mkqdata
Creating image demoobj ...
Creating image demoflat ...
Creating image demoarc ...
ar> argus.verbose = yes
ar> doargus demoobj apref=demoflat flat=demoflat arcs1=demoarc \
>>> fib=13 width=4. minsep=5. maxsep=7. clean- splot+
Set reference apertures for demoflat
Resize apertures for demoflat?  (yes):
Edit apertures for demoflat?  (yes):

Fit curve to aperture 1 of demoflat interactively  (yes):

Fit curve to aperture 2 of demoflat interactively  (yes): N
Create response function demoflatnorm.ms
Extract flat field demoflat
Fit and ratio flat field demoflat

Create the normalized response demoflatnorm.ms
demoflatnorm.ms -> demoflatnorm.ms  using bzero: 0.  and bscale: 1.000001
    mean: 1.000001  median: 1.110622  mode: 1.331709
    upper: INDEF  lower: INDEF
Average aperture response:
1.  1.136281
2.  1.208727
3.  0.4720535
4.  1.308195
5.  1.344551
6.  1.330406
7.  0.7136359
8.  1.218975
9.  0.7845755
10.  0.9705642
11.  1.02654
12.  0.3745525
13.  1.110934
Extract arc reference image demoarc
Determine dispersion solution for demoarc




REIDENTIFY: NOAO/IRAF V2.10BETA valdes@puppis Tue 16:01:07 11-Feb-92
  Reference image = d....ms.imh, New image = d....ms, Refit = yes
     Image Data Found    Fit Pix Shift User Shift  Z Shift     RMS
d....ms - Ap 7  29/29  29/29   9.53E-4    0.00409  2.07E-7   0.273
Fit dispersion function interactively? (no|yes|NO|YES) (yes): n
d....ms - Ap 5  29/29  29/29   -0.0125    -0.0784  -1.2E-5   0.315
Fit dispersion function interactively? (no|yes|NO|YES) (no): y
d....ms - Ap 5  29/29  29/29   -0.0125    -0.0784  -1.2E-5   0.315
d....ms - Ap 4  29/29  29/29   -0.0016    -0.0118  -2.7E-6   0.284
Fit dispersion function interactively? (no|yes|NO|YES) (yes): N
d....ms - Ap 4  29/29  29/29   -0.0016    -0.0118  -2.7E-6   0.284
d....ms - Ap 3  29/29  29/29  -0.00112   -0.00865  -1.8E-6   0.282
d....ms - Ap 2  29/29  29/29  -0.00429    -0.0282  -4.9E-6   0.288
d....ms - Ap 1  29/29  28/29   0.00174    0.00883  6.63E-7   0.228
d....ms - Ap 9  29/29  29/29  -0.00601    -0.0387  -6.5E-6   0.268
d....ms - Ap 10 29/29  29/29  -9.26E-4   -0.00751  -1.7E-6   0.297
d....ms - Ap 11 29/29  29/29   0.00215     0.0114  1.05E-6   0.263
d....ms - Ap 12 29/29  29/29  -0.00222    -0.0154  -2.8E-6   0.293
d....ms - Ap 13 29/29  29/29   -0.0138    -0.0865  -1.4E-5    0.29
d....ms - Ap 14 29/29  29/29  -0.00584    -0.0378  -6.8E-6   0.281
Dispersion correct demoarc
demoarc.ms: w1 = 5785.8..., w2 = 7351.6..., dw = 6.14..., nw = 256
  Change wavelength coordinate assignments? (yes|no|NO): n
Extract object spectrum demoobj
Assign arc spectra for demoobj
[demoobj] refspec1='demoarc'
Dispersion correct demoobj
demoobj.ms.imh: w1 = 5785.833, w2 =  7351.63, dw = 6.140378, nw = 256
Sky subtract demoobj:  skybeams=0
Edit the sky spectra? (yes):

Sky rejection option (none|minmax|avsigclip) (avsigclip):
demoobj.ms.imh:
Splot spectrum? (no|yes|NO|YES) (yes):
Image line/aperture to plot (1:) (1):



REVISIONS

DOARGUS V2.10.3

The usual output WCS format is "equispec". The image format type to be processed is selected with the imtype environment parameter. The dispersion axis parameter is now a package parameter. Images will only be processed if they have the CCDPROC keyword. A datamax parameter has been added to help improve cosmic ray rejection. A scattered light subtraction processing option has been added.
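
As a sketch of how these might be set (the parameter names dispaxis and datamax follow the revision note above, and the values are arbitrary examples to be replaced with ones appropriate to the data):

cl> reset imtype = "imh"       # image format type to be processed
ar> argus.dispaxis = 2         # dispersion axis package parameter (example value)
ar> doargus.datamax = 32000.   # example saturation level used for cosmic ray rejection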


SEE ALSO

apedit, apfind, approfiles, aprecenter, apresize, apsum, aptrace, apvariance, ccdred, center1d, dispcor, fit1d, icfit, identify, msresp1d, observatory, onedspec.package, refspectra, reidentify, scombine, setairmass, setjd, specplot, splot

