[Comm2011] flatfielding/illumination correction

Edwin A. Valentyn valentyn at astro.rug.nl
Sat May 21 16:58:56 CEST 2011


Dear all,

The first inspections and analyses of our flatfields and 'quick'
illumination correction evaluation recipes _indicate_ the following:

- in u everything is dominated by the poor output of the lamp and by
massive straylight gradients, amplified by the angular sensitivity of
the u interference filter, leading to very complex results

for the other bands (g, r, i, z, B, V):

- the illumination of the lamps on the screen appears quite flat, better 
than 5% in general (surprisingly)

- the gradients in the detected straylight on the sky are substantial, 
worse in the interference filters, and can amount to up to 15% over the 
fov; in g the pattern is circularly symmetric, while in r, i, z more 
structure is detected, which cannot be modelled with a simple 
axisymmetric low-order 2-d polynomial

With these notions in mind, the set of divided dome/sky flats is 
revealing. Note also the complex behaviour of the vignetting of the 
crosses in B and V!

These notions imply that in principle it is justified to construct and 
apply our masterflats in the way we planned, i.e. a combined dome+sky 
flat using the small-scale information from the dome and the 
large-scale information from the sky, normalised per chip. We then have 
to compute the appropriate illumination correction to compensate for 
the straylight gradients affecting the sky flats.
(Indeed, if we were to use only dome flats and ignore sky flats, we 
would probably get a 5%-accuracy result, which is why some parties 
prefer dome flats.)
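
For concreteness, a minimal sketch in Python of that combination step 
for one chip. The Gaussian smoothing and the smoothing scale are 
illustrative assumptions on my part, not our actual recipe:

import numpy as np
from scipy.ndimage import gaussian_filter

def make_masterflat(domeflat, skyflat, sigma=64.0):
    # pixel-to-pixel (small-scale) response taken from the dome flat
    small_scale = domeflat / gaussian_filter(domeflat, sigma)
    # illumination (large-scale) pattern taken from the sky flat
    large_scale = gaussian_filter(skyflat, sigma)
    master = small_scale * large_scale
    return master / np.median(master)  # normalise per chip

The scale sigma is what separates "small" from "large" here; any filter 
that splits the two regimes would do.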

The main conclusion is that we can apply our software in the way we 
planned, with the notion that higher-order polynomials are required for 
the IC fitting.

But we want to do better and characterize the gradients in the 
straylight in the sky flats.
The only way to do this unambiguously is to reduce an observation with 
a set of stars placed on each chip. This is planned in COM2, but I made 
one on SA110 in g during our last night of COM1b.

We should start that reduction now straightaway, with our standard 
pipeline, i.e.:
- make the combined dome flat + sky flat, normalized (Masterdomeflat), 
in the standard way
- for the field with SA110 in the centre, compute the set of 32 
zeropoints in the standard way (another standard field is also allowed 
if we don't detect enough stars on each chip)
- apply these 32 zeropoints to each of the 32 offset observations of 
SA110 in the standard way
- run SExtractor on these and make the catalogues in the standard way, 
but note that here we need not only the x-y position of the stars on 
the chip but also the x-y position in the overall focal plane; if 
that's not already there, John could help
- from here on we make a small interactive analysis:
- associate the catalogues with the reference catalogues and produce a 
table of the form below (see the sketch after the table):
star1 - ccd1 - x,y_focalplane - delta_m
star1 - ccd2 - x,y_focalplane - delta_m
...
star2 - ...
star3 - ...
etc.
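
A minimal sketch of how such a table could be assembled from the 
SExtractor output. The per-chip focal-plane origins and the matching 
radius are hypothetical placeholders, and the naive nearest-neighbour 
match merely stands in for our standard association step:

import numpy as np

# hypothetical focal-plane origin (x0, y0) per chip, in pixels;
# the real layout comes from the instrument description
CHIP_ORIGIN = {ccd: (4200.0 * (ccd % 8), 4300.0 * (ccd // 8))
               for ccd in range(32)}

def associate(cat, refcat, ccd, radius=1.0 / 3600.0):
    # cat: dict of numpy arrays (ra, dec, x, y, mag), one SExtractor
    #      catalogue already calibrated with the chip zeropoint
    # refcat: list of dicts (id, ra, dec, mag) of reference stars
    rows = []
    x0, y0 = CHIP_ORIGIN[ccd]
    for star in refcat:
        # nearest-neighbour match in degrees (ignores cos(dec))
        d = np.hypot(cat['ra'] - star['ra'], cat['dec'] - star['dec'])
        i = int(np.argmin(d))
        if d[i] < radius:
            rows.append((star['id'], ccd,
                         x0 + cat['x'][i],   # chip x -> focal plane
                         y0 + cat['y'][i],   # chip y -> focal plane
                         cat['mag'][i] - star['mag']))  # delta_m
    return rows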

Then we can fit a higher-order 2-d polynomial and we are done.
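
The fit itself is a plain least-squares problem; a minimal sketch, with 
the polynomial degree as an illustrative choice:

import numpy as np

def fit_illumination(x, y, dm, degree=4):
    # scale focal-plane coordinates to ~[-1, 1] for numerical stability
    s = max(np.max(np.abs(x)), np.max(np.abs(y)))
    xs, ys = x / s, y / s
    # all monomials x^i * y^j with i + j <= degree (non-axisymmetric)
    terms = [(i, j) for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.column_stack([xs**i * ys**j for i, j in terms])
    coeff, *_ = np.linalg.lstsq(A, dm, rcond=None)
    def model(xp, yp):
        xn, yn = xp / s, yp / s
        return sum(c * xn**i * yn**j for c, (i, j) in zip(coeff, terms))
    return model

# the flux correction map is then 10**(0.4 * model(x, y)), up to the
# sign convention chosen for delta_m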

So it's not so difficult; it essentially amounts to running our 
standard pipeline, as we planned.

One problem is that SA110 does not have many stars on all chips, which 
are required for the delta_m values and for the astrometry used in the 
association.
So let's first focus on getting that right.
In the worst case there are two fall-backs, which require more work:
- ask Steffen to run the same OB with longer integrations
- make our own secondary standards based on these observations, but 
that seems a lot of work

- alternatively, if we want to push for a throughput evaluation, we 
could do a reduction with the dome flats only to get a zeropoint result 
of probably 5% accuracy, but without internal verification that our 
interpretations are correct

Koen: about ESO's desire for zeropoints:
Formally this was planned not as the outcome of OCAM1 but of OCAM2. 
Also, in OCAM1 we were faced with continuously changing det/ampl 
settings. In OCAM1 we would inspect anomalies, which we did find in the 
straylight gradients.
Any update on the throughputs of the instrument will have to wait until 
the careful analysis is done; otherwise it is pretty meaningless. Note 
also that we will find a complex map of throughput over the fov, and we 
have to decide which number makes sense (worst? average?); I expect 
these to lie about 20% apart.
Of course we are working hard to make a first assessment now, but we 
will most likely need the 32-chip measurements for the other filters, 
as we have always planned for OCAM2.

Please note that we have to do a lot of bootstrapping (I'm glad we now 
know how), and our experience is that with 32 CCDs (and two FIERAs) 
there are always 1 or 2 that behave strangely; we have to gain a lot of 
experience to judge when to ignore this and when to pay further 
attention to these anomalies.
I leave it up to you how you want to communicate this to ESO.

In the coming week this reduction will be our focal point, all other 
things being well under control (the 3 test astrometric fields are 
fine, and the PR images are done).

kind regards

Edwin

-- 
-------------------------------------------------------------------------
Prof. Dr. Edwin A. Valentijn       University of Groningen
Coordinator Target                 www   : www.rug.nl/target
Head OmegaCEN                      www   : www.astro.rug.nl/omegacen
Kapteyn Astronomical Institute     tel   : +31 (0)50 3634011/4036 (secr)
P.O.Box 800                        mobile: +31 (0)6 48276416
NL-9700 AV Groningen               e-mail: valentyn at astro.rug.nl
The Netherlands                    www   : www.astro.rug.nl/~valentyn
-------------------------------------------------------------------------








