Hello again
Now that we are prepared for actual processing, I would like to write about the basic steps of data reduction in IRAF which should be applied to all kinds of data (photometric, spectroscopic, ...). These steps focus on removing the detector 'fingerprint'.
***NOTE***
I was working with data from the ESO archive, obtained with the ESO VLT, specifically with its UVES spectrograph. For this reason, all the examples will be for echelle spectra from ESO. However, the differences for other kinds of data are not very significant in this part of the processing.
***
First of all, let's assume that we have downloaded a dataset containing the following files:
Bias1.fits
Bias2.fits
Bias3.fits
Bias4.fits
Bias5.fits
Flat1.fits
Flat2.fits
Flat3.fits
Flat4.fits
Flat5.fits
Orderdef.fits
Wave.fits
Object.fits
Bias1-5 are bias frames, Flat1-5 are flat-field frames, Orderdef is a spectrum of a lamp without any spectral features, Wave is a spectrum of a ThAr lamp (or any other calibration lamp), and Object is our scientific frame.
Header adjusting
***ONLY FOR ESO DATA***
All ESO data contain ESO hierarchical keywords in their headers, which means that a large part of the keywords begins with HIERARCH ESO. This is a problem, because IRAF is not able to read these keywords.
A simple way to rewrite all these hierarchical keywords is to use the esohdr task in the esowfi package. However, this package is not a default part of IRAF and you would normally have to install it separately; it is, though, included in the Ureka distribution.
We are going to work with the esohdr task, so let's check its parameters:
vocl> esowfi
esohdr esosetinst
esowfi> epar esohdr
_______________________________________________________
PACKAGE = esowfi
TASK = esohdr
input = * List of raw ESO MEF files
(queryty= yes) Query for observation type?
(namps = 1) Number of amps per CCD
(ntransi= 2) Number of bad transient columns
(wcsdb = esodb$wcs.db) WCS database
(redo = no) Redo?
obstype = Observation type
(fd1 = )
(fd2 = )
(mode = ql)
_______________________________________________________
You want to apply it to all your data. If the queryty parameter is set to yes, the task will ask you to enter the observation type (bias/zero/flat/object/comp) manually for each FITS file. We do not want to do this for every single image, because ESO data contain some unknown observation types which we do not want to classify. We can change this parameter to 'no' with epar, or we can run the task like this:
esowfi> esohdr * queryty-
This will take a while, depending on the number of images. To check that the work was done correctly, run:
esowfi> imhead name_of_one_image.fits long+
and there should be no hierarchical ESO keywords left.
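If you prefer to double-check outside IRAF as well, the small Python/astropy sketch below (just an illustration; I use Object.fits from the dataset above as an example) lists any hierarchical ESO keywords that are still present:
_______________________________________________________
from astropy.io import fits

hdr = fits.getheader("Object.fits", ext=0)

# astropy drops the literal 'HIERARCH ' prefix, so the hierarchical
# keywords show up as keywords starting with 'ESO '
leftover = [card.keyword for card in hdr.cards if card.keyword.startswith("ESO ")]
print(len(leftover), "hierarchical ESO keywords still present")
for key in leftover[:10]:
    print("  ", key)
_______________________________________________________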
A little more preparation
Now it is time to set the instrument (as we did in the previous part) and also to add some keywords to the headers.
ccdred> setinstrument specphot   # specphot in my case
ccdred> setjd
ccdred> setairmass
Do not forget to check the parameters of the setjd and setairmass tasks! The first few parameters are header-dependent and you should check whether they match the keywords actually present in your headers. For example, I had to edit the epoch parameter in setjd and the equinox parameter in setairmass:
ccdred> setjd * epoch=targepoc
ccdred> setairmass * equinox=targequi
After executing each of these two tasks you will be asked for the observatory identification. If you do not know the ID of the observatory your data come from, type '?'.
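If you are not sure which keywords in your headers hold the epoch, equinox and the other quantities setjd and setairmass need, a quick look with Python/astropy can help. This is only a minimal sketch; the keyword names below are simply the ones I checked for my UVES data, and yours may differ:
_______________________________________________________
from astropy.io import fits

hdr = fits.getheader("Object.fits", ext=0)
for key in ("DATE-OBS", "EXPTIME", "RA", "DEC",
            "EPOCH", "EQUINOX", "TARGEPOC", "TARGEQUI"):
    print(f"{key:9s} = {hdr.get(key, '<not present>')}")
_______________________________________________________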
Removing overscan and bias
What the hell are overscan and bias!? - a little bit of theory
Overscan is a part (a few rows or columns) of your CCD image located on one of the edges. This part of the image was not illuminated during the observation, so it contains only the electronic offset and noise of the chip.
Bias frames are individual frames obtained with zero integration time, and they likewise contain only the offset and noise of the chip.
***Fuuu that was a lot of theory!!***
Combining BIAS frames
Several bias frames are taken within one observation (usually 5-15). We need to combine all of them to create a master bias frame. This is done with the zerocombine task.
cl> noao
noao> imred
imred> ccdred
ccdred> epar zerocombine
_______________________________________________________
PACKAGE = ccdred
TASK = zerocombine
input = List of zero level images to combine
(output = Zero) Output zero level name
(combine= average) Type of combine operation
(reject = minmax) Type of rejection
(ccdtype= zero) CCD image type to combine
...
(rdnoise= outron) ccdclip: CCD readout noise (electrons)
(gain = outgain) ccdclip: CCD gain (electrons/DN)
...
(mode = ql)
_______________________________________________________
These are the most important parameters of this task. As long as the ccdtype parameter is set to zero and the current directory contains data from only one dataset, you can set the input to '*'. It is practical to name the output Zero.fits. rdnoise and gain are parameters which refer to keywords in the header. You should check these keywords in your headers and enter their names here (in ESO data they are called outron and outgain, respectively). Now you can run the task:
ccdred> zerocombine * out=Zero ccdtype=zero rdnoise=outron gain=outgain
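If you want an independent sanity check of the master bias, a rough numpy/astropy sketch like this one mimics what zerocombine does. It ignores data sections and the exact rejection algorithm, so treat it purely as an illustration, not a replacement for the IRAF task:
_______________________________________________________
import glob
import numpy as np
from astropy.io import fits

stack = np.array([fits.getdata(f) for f in sorted(glob.glob("Bias[1-5].fits"))],
                 dtype=np.float64)

# zerocombine averages with minmax rejection; a plain median is a
# similarly robust stand-in for this quick check
master_bias = np.median(stack, axis=0)
print("mean bias level:", round(master_bias.mean(), 2), "ADU")
fits.writeto("Zero_check.fits", master_bias.astype(np.float32), overwrite=True)
_______________________________________________________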
Determining the overscan
The simplest way to determine the position of the overscan is to look for it in the header; the most reliable way is to look at a flat frame.
Some data contain the full coordinates of the overscan in their headers. If that is the case for your data, just find the specific keyword and, if you are happy with that value, you can proceed to the next step.
Some data give not the full coordinates but only the border pixels in the X or Y direction. Again, if you are happy with these values, you can proceed to the next step.
Finally, you may have data with no overscan position given in the header at all. In this case, or if you want to verify the values from the cases above, you can inspect one or more flat frames from your auxiliary observations.
First, you need to plot the image:
***WARNING***
Brace yourself! This is the first time you will open the graphical interface of IRAF all by yourself.
***
ccdred> implot flat1
By default, this task plots the middle line of the given image (in this case flat1.fits). If the orders in your flat (if you are wondering what the word 'order' means, keep reading) are oriented along the columns, this plot is exactly what you need. However, if the orders in your data are oriented along the lines, you will need to switch the plot from lines to columns by typing 'c' (for columns) in the newly opened window (you can switch back by typing 'l' for lines). What you want to see is a cut of the image across the orders, which looks something like this:
***NOTE***
This applies only to echelle spectra - the plot for long-slit spectra should be a single smooth curve!
***
***What are the orders?***
Basically, an order is every single peak in this plot.
In echelle spectroscopy, the spectrum is too long to be recorded as a single line on the CCD chip. For this reason, the whole spectrum is divided into smaller parts which are recorded on the chip as parallel strips. Each strip is called an order.
Another related term is aperture (you will use and hate this term a lot), which is basically the model IRAF uses to describe a specific order.
***
As I have outlined above, there are shortcut keys to simplify the use of this graphical interface. For the full list, type '?' and the list will open in the IRAF terminal.
Back to the overscan determination: we want to look more closely at both edges of this image. This is done by typing 'e' at the opposite corners (lower-left and upper-right) of the new viewport.
After zooming in on both edges of the graph, you should see something like this on at least one edge.
You can see that the first 25 pixels have very similar values close to zero. This part of the image is our overscan.
To confirm this conclusion, you can average all the overscan columns and plot them by typing ':a N' and then ':c N/2', where N is the number of overscan columns (in this case ':a 25' and ':c 12', i.e. average 25 columns around column 12). This applies only if the overscan is on the left, starting at the first pixel; otherwise type ':a N' and ':c L-(N/2)', where L is the number of lines in the image.
Now you should see a similar plot, which should resemble a plot of a bias frame.
The last step is to write down the coordinates of this region. They have to be written in this format:
[ColumnA:ColumnB,LineA:LineB]
where A and B are the boundary pixels. In my case it is [1:25,*] (the overscan spans all lines, so we can use '*').
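The same inspection can be scripted outside IRAF if you prefer. A minimal astropy/matplotlib sketch is below; it assumes, as in my data, that the overscan occupies the first columns, and remember that numpy arrays are 0-indexed while IRAF/FITS sections are 1-indexed and written column-first:
_______________________________________________________
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

data = fits.getdata("Flat1.fits").astype(float)

profile = data.mean(axis=0)    # average along the lines -> one value per column
plt.plot(profile)
plt.xlabel("column (0-indexed)")
plt.ylabel("mean counts")
plt.show()

# in my data the first 25 columns sit at the bias level -> section [1:25,*]
print("mean of the first 25 columns:", round(profile[:25].mean(), 2))
_______________________________________________________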
***NOTE***
There is one optional step you can do for later quality control. More information is at the end of this post.
***
***HOLD ON***
Before you frantically scroll down for more information, you will want to read the part of 'Working with flats' between the '*' markers. Trust me, you will need it!
***
First run of CCDPROC
We are ready for the first reduction of our scientific data. However, before we proceed, I recommend making a backup copy of your data if you have not done so yet. The changes we are about to make are irreversible.
As always, have a look at the task's parameters:
cl> noao
noao> imred
imred> ccdred
ccdred> epar ccdproc
_______________________________________________________
PACKAGE = ccdred
TASK = ccdproc
images = List of CCD images to correct
(output = ) List of output CCD images
(ccdtype= object) CCD image type to correct
(max_cac= 0) Maximum image caching memory (in Mbytes)
(noproc = no) List processing steps only?
(fixpix = yes) Fix bad CCD lines and columns?
(oversca= yes) Apply overscan strip correction?
(trim = yes) Trim the image?
(zerocor= yes) Apply zero level correction?
(darkcor= yes) Apply dark count correction?
(flatcor= yes) Apply flat field correction?
(illumco= no) Apply illumination correction?
(fringec= no) Apply fringe correction?
(readcor= no) Convert zero level image to readout correction?
(scancor= no) Convert flat field image to scan correction?
(readaxi= line) Read out axis (column|line)
(fixfile= ) File describing the bad lines and columns
(biassec= ) Overscan strip image section
(trimsec= ) Trim data section
(zero = ) Zero level calibration image
(dark = ) Dark count calibration image
(flat = ) Flat field images
(illum = ) Illumination correction images
(fringe = ) Fringe correction images
(minrepl= 1.) Minimum flat field value
(scantyp= shortscan) Scan type (shortscan|longscan)
(nscan = 1) Number of short scan lines
(interac= no) Fit overscan interactively?
(functio= legendre) Fitting function
(order = 1) Number of polynomial terms or spline pieces
(sample = *) Sample points to fit
(naverag= 1) Number of sample points to combine
(niterat= 1) Number of rejection iterations
(low_rej= 3.) Low sigma rejection factor
(high_re= 3.) High sigma rejection factor
(grow = 0.) Rejection growing radius
(mode = ql)
_______________________________________________________
For this first run, the task should be applied to all data types except the bias frames. There are two ways to do it. You can make a list in a text editor, with the name of one image per line, and call it, for example, 'all_but_bias_frames'. In this case, the ccdtype parameter should not have any value.
(output = ) List of output CCD images
(ccdtype= ) CCD image type to correct
The other way is to run the task several times with the input list set to '*' and the ccdtype parameter set to a different value each time (run it for object, flat, unknown and dark, if you have any).
There are also other parameters we need to change. This time, we want to apply the overscan correction and the zero-level correction and to trim the image.
(oversca= yes) Apply overscan strip correction?
(trim = yes) Trim the image?
(zerocor= yes) Apply zero level correction?
(darkcor= ) Apply dark count correction?
(flatcor= ) Apply flat field correction?
(illumco= no) Apply illumination correction?
(fringec= no) Apply fringe correction?
We also need to specify the overscan region, the trim section and the combined bias frame. The new term here is trim section, which means the part of the image that contains the scientific data. During the overscan determination we found a similar region on both edges of the image. The overscan is present on both edges, but we only need one of them; the other should be removed because it does not contain any useful data. Similarly, we can remove badly illuminated columns at the edge.
(biassec= [1:25,*]) Overscan strip image section
(trimsec= [26:1475,*] ) Trim data section
(zero = Zero) Zero level calibration image
Finally, we can run the task:
ccdred> ccdproc @all_but_bias_frames ccdtype='' fixpix- oversca+ trim+ zerocor+ darkcor- flatcor- illumco- fringec- biassec=[1:25,*] trimsec=[26:1475,*] zero=Zero
or alternatively
ccdred> ccdproc * ccdtype=object fixpix- oversca+ trim+ zerocor+ darkcor- flatcor- illumco- fringec- biassec=[1:25,*] trimsec=[26:1475,*] zero=Zero
The task will ask you whether you want to plot each processed frame. You can check the first few frames and then answer 'NO'.
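One more optional check, again outside IRAF: ccdproc records each correction it has applied as a header keyword, so after this first run the processed frames should carry entries similar to the ones below (a minimal astropy sketch; check your own headers for the exact keyword names):
_______________________________________________________
from astropy.io import fits

hdr = fits.getheader("Object.fits", ext=0)
for key in ("CCDPROC", "OVERSCAN", "TRIM", "ZEROCOR", "FLATCOR"):
    print(f"{key:8s}: {hdr.get(key, '<not applied / not present>')}")
_______________________________________________________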
Working with flats
*
Here comes the most interactive part of the data reduction, which is also the most crucial. You should take special care with these steps.
As with the bias frames, there are several flat frames that you want to combine into one master frame. This is done with the flatcombine task:
ccdred> epar flatcombine
_______________________________________________________
PACKAGE = ccdred
TASK = flatcombine
input = List of flat field images to combine
(output = ) Output flat field root name
(combine= average) Type of combine operation
(reject = crreject) Type of rejection
(ccdtype= flat) CCD image type to combine
...
(rdnoise= rdnoise) ccdclip: CCD readout noise (electrons)
(gain = gain) ccdclip: CCD gain (electrons/DN)
...
_______________________________________________________
These are the most important parameters you will want to check. The list of input images can be set to '*' as long as ccdtype is set to 'flat'. The combined flat can be named anything, but it is practical to call it Flat.fits. The rdnoise and gain parameters were discussed above, but be aware that they are present in this task as well. For my data, the command looks like this:
ccdred> flatcombine * out=Flat ccdtype=flat gain=outgain rdnoise=outron
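As with the bias frames, you can cross-check the combination outside IRAF. The sketch below is only a rough stand-in for reject=crreject (which rejects high pixels such as cosmic-ray hits), using astropy's sigma_clip one-sidedly; it is an illustration, not a replacement for flatcombine:
_______________________________________________________
import glob
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clip

stack = np.array([fits.getdata(f) for f in sorted(glob.glob("Flat[1-5].fits"))],
                 dtype=np.float64)

# clip only high outliers (cosmic rays) before averaging, which is
# roughly what crreject is meant to do
clipped = sigma_clip(stack, sigma_upper=3.0, sigma_lower=1000.0, axis=0)
master_flat = clipped.mean(axis=0).filled(np.nan)
fits.writeto("Flat_check.fits", master_flat.astype(np.float32), overwrite=True)
_______________________________________________________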
The next step is to normalize our master flat. The orders of the spectrum are aligned either along the lines or along the columns. Flat frames are taken in order to determine how the sensitivity of the chip varies along the orders and to remove this variation from the scientific observation. By normalizing the flat we create a frame containing this function, which can then be divided out. However, the function is different for every order. For this reason, we need to find all orders in the flat frame, describe them with apertures and fit this function for each of them (as you can see, this is the step where we start hating the term 'aperture').
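To make the idea more concrete, here is a tiny synthetic numpy example of what 'normalizing' a single order means. This is not what apflatten does internally, just the principle: fit a smooth function along the order and divide by it:
_______________________________________________________
import numpy as np
from numpy.polynomial import Polynomial

# fake one order of a master flat: a smooth blaze-like profile times
# small pixel-to-pixel sensitivity variations (synthetic data)
x = np.arange(2000)
blaze = 30000.0 * np.exp(-0.5 * ((x - 1000.0) / 600.0) ** 2)
sensitivity = 1.0 + 0.02 * np.random.default_rng(1).standard_normal(x.size)
counts = blaze * sensitivity

# fit a smooth function along the order and divide by it; what remains
# scatters around 1 and is the flat we actually want to apply
smooth = Polynomial.fit(x, counts, deg=7)(x)
normalized = counts / smooth
print("mean:", round(normalized.mean(), 3), "scatter:", round(normalized.std(), 3))
_______________________________________________________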
All these procedures can be done within one task - apflatten. To give a better picture of what we are doing, I will describe this step in terms of the individual tasks that apflatten calls.
There is one special auxiliary frame we will need to do the job correctly. When you list your data with the ccdlist task, this frame is referred to as 'LAMP,ORDERDEF' in ESO data. When you open this frame in ds9 it should look like this,
with distinct and narrow orders. This frame will be used to define the orders, which will then be applied to the Flat frame.
Before we start defining the apertures, we need to know their width. Just implot this frame, zoom in on the base of several of the highest orders, place the cursor on one side of the base and press the space bar. In the lower-left corner you will see the coordinates of the pixel under the cursor. Do the same for the other side of the order and subtract the two values; their difference is the order width. Do this for several orders and average the values.
In addition, we want to find the sample regions for background subtraction. We will enter them into the apdefault task now, but we will need them again in a later step. The background regions lie between the orders. We know the width of the apertures and we want to know the distance between them, which we get by subtracting the coordinates of the right base of one order and the left base of the neighbouring order.
We want to update these values in the apdefault task:
imred> echelle
echelle> epar apdefault
_______________________________________________________
PACKAGE = echelle
TASK = apdefault
(lower = -6.) Lower aperture limit relative to center
(upper = 6.) Upper aperture limit relative to center
(apidtab= ) Aperture ID table
(b_sampl= -15:-10,10:15) Background sample regions
...
_______________________________________________________
where we set the lower and upper parameters to minus and plus half of the width we found in the plot of the Flat frame.
The sample region is defined in the form -upper:-lower,lower:upper, where upper is the upper limit of the region and lower is its lower limit (negative values lie to the left of the order). We get the lower limit by adding a few pixels to the aperture limit, and the upper limit by adding five or ten pixels to the lower limit (depending on the distance between the orders).
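For a concrete picture of the arithmetic, here is a tiny helper of my own (an illustration, not an IRAF tool) that turns a measured order width and order-to-order gap into the lower/upper and background-sample values discussed here; with a width of 12 px and a gap of about 30 px it reproduces the -6/6 and -15:-10,10:15 from my apdefault listing above:
_______________________________________________________
def apdefault_values(width, gap, pad=4):
    """Translate a measured order width and gap into apdefault-style values."""
    half = width / 2.0
    lower, upper = -half, half
    b_in = half + pad                  # start the background a few px past the aperture
    b_out = min(b_in + 5, gap / 2.0)   # and stop before the neighbouring order
    b_sample = f"-{b_out:.0f}:-{b_in:.0f},{b_in:.0f}:{b_out:.0f}"
    return lower, upper, b_sample

print(apdefault_values(width=12, gap=30))   # (-6.0, 6.0, '-15:-10,10:15')
_______________________________________________________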
Now we have everything we need and can proceed to the actual work: finding the orders and defining the apertures (if you are running apflatten, this will be its first step). We are going to run the aptrace task.
echelle> epar aptrace
_______________________________________________________
PACKAGE = echelle
TASK = aptrace
input = List of input images to trace
(apertur= ) Apertures
(referen= ) List of reference images
(interac= yes) Run task interactively?
(find = yes) Find apertures?
(recente= no) Recenter apertures?
(resize = no) Resize apertures?
(edit = no) Edit apertures?
(trace = yes) Trace apertures?
(fittrac= yes) Fit the traced points interactively?
(line = INDEF) Starting dispersion line
(nsum = 10) Number of dispersion lines to sum
(step = 10) Tracing step
(nlost = 3) Number of consecutive times profile is lost before quitting
(functio= legendre) Trace fitting function
(order = 2) Trace fitting function order
(sample = *) Trace sample regions
(naverag= 1) Trace average or median
(niterat= 0) Trace rejection iterations
(low_rej= 3.) Trace lower rejection sigma
(high_re= 3.) Trace upper rejection sigma
(grow = 0.) Trace rejection growing radius
(mode = ql)
_______________________________________________________
The input image will be our ORDERDEF frame. Since we want to find the apertures, we want to run this task interactively, edit the apertures, trace them and fit the traced points. Also, we want to decrease the number of dispersion lines to sum (nsum), decrease the tracing step (step) and increase the number of consecutive times the profile may be lost (nlost). These changes will make the tracing slower, but the result will be more accurate. The command should then look like this:
echelle> aptrace orderdef interac+ find+ recenter- resize- edit+ trace+ fittrac+ nsum=5 step=5 nlost=10
(Use even smaller nsum and step, or a larger nlost, if needed.) After executing this task, you will be asked for the number of apertures to be found automatically, and then the graphical interface will open. It will look exactly like an implot of the orderdef frame.
If we want to change the automatically found apertures, we can delete one by typing 'd' with the cursor above it. A new aperture is defined by typing 'n' (for new aperture) with the cursor above its centre. After the first keystroke you may be asked about the direction of numbering (increasing|decreasing); it is up to you which one you choose - I number from left to right. For a better view, you can zoom the image; unlike in the implot interface, you first have to press 'w' to enter window mode before zooming with the two 'e' keystrokes.
When you have defined all apertures and you are happy with them, type 'q'. The program will ask whether you want to trace the apertures and whether you want to fit the curves interactively. Confirm the answer shown in square brackets by pressing Enter. For a few seconds (or minutes) it will do some magic and then ask again whether you want to fit the first trace. Again, confirm by pressing Enter.
An entirely new, never-seen-before graph will be shown.
Here you can see the actual trace describing the order (the first graph is for the first aperture, i.e. the first order). If the orders in your orderdef.fits frame are straight lines, this graph should also be a straight line. If it is not, I am really sorry, but you have done something wrong. You can decrease the nsum and step parameters or redefine the apertures. And this is the reason why I HATE apertures.
In this graph you can see that there are two points that do not lie on the line. You can delete them by pressing 'd' with the cursor above them. Also, the dashed line representing the fit does not exactly match the points at the edge. You can try to improve this by changing the fitting parameters. The fitting order is changed with ':o n', where n is the order of the fit (do not set it too high - if you have a nice straight line, 5th order is more than enough!). You can also increase the number of rejection iterations with ':niterate n' (again, do not set it too high - here n=2 is more than enough!). The updated fit is shown after pressing 'f' (for fit).
When you are happy with your fit, type 'q'. You will be asked whether you want to fit the next aperture. Answer yes, and you can again adjust the fitting parameters. After the last aperture you will be asked whether you want to write the apertures into the database. Answer 'yes'! (This database is located in a sub-directory of your working directory and you should NOT delete it.)
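If you are curious what this interactive fit does under the hood, the little numpy sketch below (synthetic data, purely illustrative) mirrors the idea: fit a low-order Legendre polynomial to the traced centres and reject outliers for a couple of iterations, which is what ':o 5' and ':niterate 2' control:
_______________________________________________________
import numpy as np
from numpy.polynomial import legendre

x = np.linspace(-1.0, 1.0, 200)          # dispersion axis rescaled to [-1, 1]
centre = 850.0 + 2.0 * x                 # synthetic, nearly straight order trace
centre += np.random.default_rng(2).normal(0.0, 0.05, x.size)
centre[[40, 120]] += 3.0                 # two bad points, like in the plot above

keep = np.ones(x.size, dtype=bool)
for _ in range(2):                                         # niterate = 2
    coef = legendre.legfit(x[keep], centre[keep], deg=5)   # fitting order 5
    resid = centre - legendre.legval(x, coef)
    keep = np.abs(resid) < 3.0 * resid[keep].std()         # 3-sigma rejection

print("rejected trace points:", np.where(~keep)[0])
_______________________________________________________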
*
After this, we want to apply the apertures we found to our Flat frame. Now we are going to use the apflatten task.
echelle> epar apflatten
_______________________________________________________
input = List of images to flatten
output = List of output flatten images
(apertur= ) Apertures
(referen= ) List of reference images
(interac= yes) Run task interactively?
(find = yes) Find apertures?
(recente= yes) Recenter apertures?
(resize = yes) Resize apertures?
(edit = yes) Edit apertures?
(trace = yes) Trace apertures?
(fittrac= yes) Fit traced points interactively?
(flatten= yes) Flatten spectra?
(fitspec= yes) Fit normalization spectra interactively?
...
(readnoi= 0.) Read out noise sigma (photons)
(gain = 1.) Photon gain (photons/data number)
...
_______________________________________________________
Here we use the Flat frame as input and ORDERDEF as the reference image. We will be fitting the normalization function, so we want to run the task interactively. We have already defined the apertures, so we want to take them from the database and we do not want to change or trace them again. We want to flatten the spectra and fit the normalization spectra. Note that the readnoi and gain parameters are present here as well.
echelle> apflatten Flat out=nFlat referen=orderdef.fits interac+ find+ recenter- resize- edit- trace- fittrac- flatten+ fitspec+ readnoi=outron gain=outgain
You will be asked whether you want to write the apertures for Flat to the database, whether you want to flatten the apertures, whether you want to fit the spectra, and also whether you want to fit the spectrum for the first aperture. Answer yes to all questions. If you run this task several times in one directory with the same output name, you will also be asked whether you want to clobber the existing output. Answer yes, otherwise the output will not be updated.
You will get a graph like this. You want to fit the dashed line to the spectrum. As in the case of tracing, you can change the order and the number of iterations, though here you can set the order to much higher values (like 50 or even 100). When you are happy with the fit, type 'q'. Check all apertures; if you want, you can delete some points or change the fitting order. After the last aperture there will be no further questions.
Now the normalization of the master flat should be done. You can check it in two ways. First, implot nFlat.fits.
There should be only very small variations around 1 (roughly +/- 0.1-0.15).
The other way is to display the frame in ds9.
If you get anything similar to this, you must be really happy (it took me 4 months to get something like this :D ).
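The same check can also be done numerically. Here is a minimal astropy/numpy sketch; apflatten normally leaves the pixels outside the apertures at exactly 1, which is why they are excluded below - adjust this if your output differs:
_______________________________________________________
import numpy as np
from astropy.io import fits

nflat = fits.getdata("nFlat.fits").astype(float)
inside = nflat[nflat != 1.0]    # ignore the pixels outside the apertures
print("median inside the orders:", round(np.median(inside), 3))
print("typical deviation from 1:", round(np.std(inside), 3))
_______________________________________________________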
Normalization of scientific spectra
Finally, we are getting to the end of this post. The last step is to apply the normalized flat to the spectra of the observed object. (Actually, there are other steps like dark or illumination correction. My observations did not require them and I also did not have the auxiliary observations for these corrections. If I ever work on them, I will let you know!)
This is again done with the ccdproc task. We already know the crucial parameters, so I will just write the final commands:
ccdred> ccdproc object ccdtype=object fixpix- oversca+ trim+ zerocor+ darkcor- flatcor+ flat=nFlat illumco- fringec- biassec=[1:25,*] trimsec=[26:1475,*] zero=Zero
ccdred> ccdproc wave.fits ccdtype=unknown fixpix- oversca+ trim+ zerocor+ darkcor- flatcor+ flat=nFlat illumco- fringec- biassec=[1:25,*] trimsec=[26:1475,*] zero=Zero
You can see that this is just the previous ccdproc command with a few new parameters. The ccdproc task is intelligent enough not to perform the same operation twice on the same frame. I wrote it this way to point out that the only change is from flatcor- to flatcor+, which means that we are now applying the flat correction; for this we also define the reference flat frame (flat=nFlat). This time it should be run on only two frames - once on the scientific observation and once on the LAMP,WAVE frame (again, this is the ESO name for this frame; it is the calibration spectrum of the ThAr lamp).
If you are asked any question, answer yes.
Optional step that you cannot wait to read
Before you apply any changes to your scientific data, you can try to extract a reference spectrum.
As you have already read (whether you read the whole post or skipped most of it), you know how to trace the apertures (if you have no idea what I am talking about, you skipped the part of 'Working with flats' between the '*' markers).
Now you are going to do exactly the same tracing on the ORDERDEF frame (and after this, you will not need to repeat it before the normalization of the Flat).
Once this is done, run the apall task. For now, you do not need to know exactly what it does (it will be discussed in the next post); it is sufficient to know that it extracts the actual spectra.
Run it this way:
echelle> apall object out=refspec format=echelle referen=orderdef interac+ find+ recenter- resize- edit- trace- fittrac- extract+ extras- review-
If you want to see your spectrum, you can use the splot task (working with splot will be discussed later).
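If you cannot wait for the splot post, a quick look at the extracted reference spectrum is also possible with matplotlib. This is only a minimal sketch; with format=echelle and extras- the output is usually a 2-D array with one row per order, and apall may append '.ec' to the output name, so adjust the file name and indexing to what you actually got:
_______________________________________________________
import matplotlib.pyplot as plt
from astropy.io import fits

spec = fits.getdata("refspec.ec.fits")            # or refspec.fits, depending on your setup
print("shape:", spec.shape)                       # usually (orders, pixels)
plt.plot(spec[0] if spec.ndim > 1 else spec)      # first order (or the single spectrum)
plt.xlabel("pixel")
plt.ylabel("counts")
plt.show()
_______________________________________________________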
Hopefully, you have managed to do all the reduction steps I have described. In the next post I will discuss the extraction of the processed spectra in more detail, as well as their calibration.
Again, if you have any questions, feel free to ask; I will be glad to answer. Also, if you have found any mistake in my processing, let me know what I am doing wrong.
Enjoy your processing,
Jakub