
Limitations, Extra Observations, and the First Stages of Analysis

The complex fabrication process for CCDs produces non-uniformities in pixel response on small and large scales, some surprising optical effects, and defects which stamp each CCD as an individual. For the most part, these features are readily removed by a few auxiliary observations and operational procedures, yielding clean CCD images characterised very closely by four parameters: pixel size, quantum efficiency, saturation level and readout noise.

Fig 1.5(a) is a raw CCD frame, after subtraction of a mean bias level. Fig 1.5(d) is the same frame after the first stages of data analysis, in which the auxiliary observations have been applied to rectify the imperfections. These first stages clean the image from its raw (Fig 1.5(a)) to its laundered (Fig 1.5(d)) form, and a summary of these initial steps provides the basis for a description of the shortcomings of image 1.5(a):

Subtraction of bias and dark-count frames is unlikely to make any noticeable improvement, but together they represent a zero level which must be removed before multiplicative operations can be performed. The bias is a DC level, preset electronically to ensure that only positive numbers result from the digitizing process. The bias frame may be modelled as (A + F(x, y)); F, the pixel-to-pixel structure of the frame, is time-invariant, but experience has shown that A, the overall level, may vary on time scales of less than one hour. Determining F is simply a matter of reading out the CCD many times without opening the shutter, i.e. recording many exposures of zero seconds. Adding these together and normalizing defines F(x, y) with minimal uncertainty due to readout noise. A, the level of the bias frame for each exposure, is best determined by the commonly-used overscan procedure: clocking out a number of pixels on the chip from which the charge signal has already been extracted and measured. The result is an oversize array with a strip of signal-free pixels from which A can be measured for the particular exposure. The bias level measured for Fig 1.5 was flat at the 1% level all the way down the chip in the y direction, and fell off by about 2% towards the edge of the overscan in x.
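
As an illustration only - a minimal sketch in Python with numpy, not the reduction software of any particular observatory - the two steps might look like this; the overscan strip is assumed, hypothetically, to occupy the last 32 columns of the array:

    import numpy as np

    def master_bias(zero_frames):
        # Mean of many zero-second exposures; removing the overall mean
        # leaves F(x, y), the time-invariant pixel-to-pixel structure.
        mean = np.mean(np.stack(zero_frames).astype(float), axis=0)
        return mean - mean.mean()

    def debias(raw, F, overscan_cols=32):
        # A, the overall level, comes from this exposure's own
        # signal-free overscan strip; F comes from master_bias().
        raw = raw.astype(float)
        A = np.median(raw[:, -overscan_cols:])
        return raw - A - F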

Strictly speaking, a dark-count frame should also be subtracted from each raw frame at the start of the reduction process. In practice, this is rarely necessary. If the dark count is significant (e.g. long exposures, very narrow-band filters, very few photons from the sky background), then the chip dark-count response must be measured. This requires long integrations with the shutter closed. Addition of many such integrations (bias removed as above) yields a master dark-count frame, relatively free from readout noise. This frame can then be scaled according to exposure time and subtracted from the data frame. Dark-count frames show significant structure, usually having a `warm' corner near the readout point.
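
A sketch of the scaling step, reusing debias() from above and assuming (hypothetically) that all the closed-shutter integrations share one exposure time:

    def master_dark(dark_frames, F, dark_exposure_s):
        # Combine long closed-shutter integrations (each debiased)
        # into a dark-count rate frame: counts per pixel per second.
        darks = [debias(d, F) for d in dark_frames]
        return np.mean(np.stack(darks), axis=0) / dark_exposure_s

    def subtract_dark(frame, dark_rate, exposure_s):
        # Scale the master dark to the frame's own exposure time.
        return frame - dark_rate * exposure_s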

It is flat-fielding which has the most dramatic effect on image quality. There are four features of Fig 1.5(a) with which it deals:

Large-scale `warp' or sensitivity variation across the chip.

The large-scale sensitivity variations are certainly produced by variation in the substrate thickness, which may occur in the thinning process. This may be deduced from the dependence on wavelength; very little large-scale structure is apparent in the blue, while it increases to perhaps 10% overall in the far red, where the photoelectrons are generated much deeper in the substrate (Section 1.1.1).

Pixel-to-Pixel (short-scale) sensitivity variations.

Pixel-to-pixel gain variations may be as little as 1% in some CCDs, as much as 10% in others. They are presumably due primarily to minor variations in electrode structure between the pixels.

`Black holes'.

`Black holes' are due to dust or grease on the surface of the chip. The response is typically down by 10% in these areas. They disappear completely on a background of around 25,000 electrons, suggesting that the accuracy of flat-fielding is better than 1% for these areas.

`Walnut-grain' produced by interference fringes.

The fringe pattern of Fig 1.5(a) is perhaps its most obvious feature. It is a property of thinned chips, for which the distance between the upper surface of the substrate and the electrode structure is only about 10 wavelengths. Narrow lines in the optical spectrum of the incident light - night-sky lines in particular - produce strong interference patterns, with amplitudes which may reach 5% of the background signal. Even when illuminated with broad-band white light, some thinned CCDs manage to produce self-fringing in the form of Newton's rings. When narrow-band filters are used, thinned chips can produce complex sets of overlapping fringe patterns from a white-light source, and as a result their use in narrow-band photometry of extended continuum objects is limited; for emission-line objects (nebulae) they are all but useless. In the case of Fig 1.5, the fringe pattern has been dealt with separately by subtracting a fringe frame, formed by taking the median in each pixel of a stack of object frames.
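
Forming the median fringe frame is straightforward; a minimal sketch, assuming the contributing object frames are already de-biased and flat-fielded and that no bright source falls on the same pixel in most of them:

    def fringe_frame(object_frames):
        # Pixel-by-pixel median over the stack: any given pixel is
        # free of sources in most frames, so the median rejects stars
        # and cosmic rays and keeps only the common fringe pattern.
        return np.median(np.stack(object_frames), axis=0)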

Fig 1.5: (a) the raw frame (mean bias subtracted); (b) the frame after flat-fielding; (c) the fringe frame; (d) the final cleaned frame after de-fringing.

All four of these phenomena may be considered as gain variations, and all may be removed by pixel-to-pixel gain calibration. The process is known as flat-fielding: the array is divided by a calibrating array. Flat fields are such calibrating arrays: CCD images, obtained through the appropriate filter, of a uniformly illuminated background, usually the dimly-lit interior of the telescope dome, the twilight sky or the dark sky. The last two are best; colour-matching is otherwise a problem. The strong dependence of the large-scale response on wavelength means that its removal requires flat-field illumination of the same spectral content, and the sky is the obvious choice. A second reason for choosing the sky is the removal of the fringe pattern, which is usually dominated by interference from the strongest sky lines (implying that the pattern is strongest in broad-band red and far-red). It follows that the optimum flat fields are obtained on the dark sky, close in time and position to the observation.

But a sacrifice of observing time is required, and stars provide a second difficulty: it is almost impossible to find sky patches in which lengthy flat-field exposures will not be `spoilt' by stars in the frame. One possibility which has been explored with some success is to defocus the telescope drastically. The relative strength of the fringe pattern may also differ in the twilight sky, so that although the structure of the pattern may be the same, the amplitude is not. This difficulty may be met by extracting an array describing the fringe pattern from the sky flat-field frames, scaling this to the amplitude of the fringes in the image, and subtracting. Even this will be inadequate if the fringe pattern arises from sky emission features whose relative strengths have varied significantly.

The fringe frame in Fig 1.5(c) has been through the same de-biasing and flat-fielding steps as the object frame, and was scaled to the same average count-rate as the object frame before subtraction. The median process for forming the fringe frame only works because the contributing object frames have no bright sources in them. For work at R, fringing is a problem mainly in the bottom left corner of the chip; in V it is negligible all over.
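
A minimal sketch of the two operations described above; normalising the flat to unit mean (so that the count level of the frame is preserved) is a bookkeeping assumption, and the fringe frame is scaled to the frame's average count level, as was done for Fig 1.5:

    def flatfield(frame, flat):
        # Divide by the calibrating array, normalised to unit mean.
        return frame / (flat / flat.mean())

    def defringe(frame, fringe):
        # Scale the fringe frame to the same average count level as
        # the object frame before subtracting.
        return frame - fringe * (frame.mean() / fringe.mean())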

The only defects in the chip that cannot be removed by careful flat-fielding are the two bad columns and the corrupt regions in the first five or so rows. The latter could be removed by measuring the dark current very accurately, but there are usually no useful data so close to the edge anyway.

Patching or interpolation is the final element in the cleaning of CCD images. It is an admission of defeat, and is an illegal fix of the data; there should be justifiable reasons for its use. Large-scale surface photometry may well be one such reason. Patching may be useful for `removing' the occasional cosmic ray or defective column. An example of the latter is column 276 in Fig 1.5(a), which is bright for nearly a third of its length. This is probably due to a single defective pixel about two-thirds of the distance up the chip from the readout register (bottom row): the pixel adds a huge and spurious amount of noise to each signal clocked through it. These data are lost; there is no resurrecting them. Some observers are not in favour of interpolating over bad columns at all, as the `recovered' data are usually bad: if there is no extended object spanning the bad columns, nothing is gained, and if there is, you cannot be sure that you haven't distorted the structure in some way. The smaller bad column in Fig 1.5(b) and (c), column 147, appears after flat-fielding because the flat-field cannot cope with this sort of non-linear effect. It has disappeared after de-fringing, as the effect is present to the same extent in both object and fringe frames.
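
Where patching is judged defensible, the simplest scheme is linear interpolation from the neighbouring columns; a sketch, with all the caveats above (the original data are not recovered):

    def patch_column(frame, col):
        # Replace a defective column by the mean of its neighbours.
        patched = frame.astype(float)
        patched[:, col] = 0.5 * (patched[:, col - 1] + patched[:, col + 1])
        return patched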

Finally, low-light-level effects may produce severe problems in some applications, such as spectroscopy or narrow-band imaging, where background photons are at a premium. One problem in this regard - charge-transfer inefficiency - has already been mentioned. A second is caused by electron traps, substrate deficiencies which result in dark columns down which charge transfer is severely inhibited unless a threshold number of electrons is present. Both charge-transfer inefficiency and electron trapping can be minimised by careful exploration of driving waveforms and chip operating temperatures. If these do not succeed, a pre-flash may be required: pre-illumination by a small amount of light to provide the threshold. The procedure is of course undesirable: it adds shot noise, and it has to be subtracted precisely (as for bias and dark count) in the early stages of the analysis.

Further analysis of clean CCD frames depends on the purpose of observation; for the most part it can be carried out with standard techniques wrapped up in user-friendly software packages (section 2.7). There remain some aspects of analysis peculiar to CCDs which are worth mentioning here without full discussion:

  1. In broad-band photometry, there is a difficulty in establishing a U-band calibration which is consistent with standard scales. The difficulty arises from the form of the cutoff at short wavelengths, and from the difference in response curves between CCDs and photomultipliers. Similarly, the long-wavelength cutoff of the I band is defined by the detector, and CCDs again differ from photomultipliers.

  2. The sampling of the image may not be optimal. For cameras with a small scale (e.g. INT Prime), undersampling is severe in conditions of good seeing. This may not affect the analysis of single observations. However, intercomparison of observations will usually require resampling, rebinning, or convolution/deconvolution (a minimal rebinning sketch follows this list), and these processes may be heavily constrained by the sampling problem. Conversely, a large scale (e.g. INT Cass) gives a very small field, and the images are oversampled in all but the best seeing.
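
The rebinning sketch promised in item 2: block-averaging n x n pixels into one to reduce an oversampled frame. This is a minimal illustration rather than a recommended resampling scheme:

    def rebin(frame, n=2):
        # Block-average n x n pixels; trailing rows and columns that
        # do not fill a complete block are trimmed.
        ny, nx = (s - s % n for s in frame.shape)
        blocks = frame[:ny, :nx].astype(float).reshape(ny // n, n, nx // n, n)
        return blocks.mean(axis=(1, 3))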


