Field of the Invention 

[0001] The present invention relates to an image processing method and apparatus for improving the quality of an acquired image. 

 Background 

[0002] It is well known to apply filters to images to improve their characteristics. 

[0003] US 7,072,525, Covell discloses an adaptive filter for filtering a target version of a visual image that is produced by processing an original version of the visual image, the characteristics of the adaptive filter being determined in accordance with one or more characteristics of the original version of the visual image. The orientation and/or strength of filtering of the adaptive filter are adjusted based on local properties of the original image, which can enable the adaptive filter to avoid introducing blurring across true edges in the image. 

[0004] US 6,823,086, Dolazza discloses a system for adaptively filtering an image so as to reduce a noise component associated with the image. The system includes an image analyzer for determining image parameters related to the image. The system also includes a spatial filter, having an adjustable kernel responsive to the image parameters, for filtering the image sequence. The image analyzer manipulates the filter kernel as a function of the image parameters so that the system produces a filtered image, adaptable in real time, as a function of the unfiltered image, external rules, predetermined constraints, or combinations thereof. The spatial filter includes a time-invariant section and an adaptable section. The time-invariant section applies a plurality of filters to the image, each of the filters having a distinct frequency response, so as to produce a plurality of distinct filtered outputs. The adaptable section scales each of the plurality of distinct filtered outputs with a corresponding distinct weighting value to produce a plurality of scaled filtered outputs, and combines the plurality of scaled filtered outputs to produce a composite filtered output. 

[0005] In Covell and Dolazza, several 2-D low pass filters, each with a distinct frequency response, are applied to the image and the outputs are weighted in order to produce a composite filtered output. 

[0006] As such, the complexity of US 7,072,525 and US 6,823,086 is high. Also, these patents require an image analyzer or another image in order to decide on the behavior of the adaptive filters, i.e. at least one pass over the original image and the target image is necessary. 

[0007] US 6,335,990 (Chen et al.) discloses filtering in the spatial and temporal domain in a single step with filtering coefficients that can be varied depending upon the complexity of the video and the motion between the adjacent frames. The filter comprises: an IIR filter, a threshold unit, and a coefficient register. The IIR filter and threshold unit are coupled to receive video data. The IIR filter is also coupled to the coefficient register and the threshold unit. The IIR filter receives coefficients, a, from the coefficient register and uses them to filter the video data received. The IIR filter filters the data in the vertical, horizontal and temporal dimensions in a single step. The filtered data output by the IIR filter is sent to the threshold unit. The threshold unit compares the absolute value of the difference between the filtered data and the raw video data to a threshold value from the coefficient register, and then outputs either the raw video data or the filtered data. 

[0008] Chen uses an IIR filter and a threshold unit and outputs either the raw video data or the filtered data. As such, the IIR filter operates on its previous outputs and the pixel values. 

[0009] Referring to Figure 1, US 2004/0213478 (Chesnokov) discloses an image processing method comprising the step of processing an input signal to generate an adjusted output signal, wherein the intensity values I(x,y) for different positions (x,y) of an image are adjusted to generate an adjusted intensity value I'(x,y) in accordance with: Iout = Σ(i=0..N) ( αi·LPFΩi[Pi(F(I))]·Qi(F(I)) + (1 − αi)·I ),
<img class="EMIRef" id="463683444-ib0001" />

where Pi(γ) is an orthogonal basis of functions of γ defined in the range 0 < γ < 1; Qi(.) are anti-derivatives of Pi(.): Qi(F(I)) = ∫₀^F(I) Pi(η) dη, or an approximation thereto; LPFΩ[.] is an operator of low-pass spatial filtering; Ωi is the cut-off frequency of the low-pass filter; F(.) is a weighting function; and where 0 < αi < 1. 

[0010] The output of the weighting function F(.) is monotonically decreasing with higher values of the pixels. There is a feedback from the output of the filtered sequence and the method can receive information other than from the image. For example, an amplification factor can be added to the linear or the logarithmic multiplication block and can be computed from a preview using an integral image. As such, in Chesnokov, significant processing steps are applied to the input signal, making the method quite complex, and the output image is a weighted sum of the original and the processed image. 

 Disclosure of the Invention 

[0011] According to the present invention there is provided a method of processing an image according to claim 1. 

[0012] The present invention provides a one-pass image processing technique that uses an IIR filter to improve the quality of pictures, using only one image and with efficient use of processor resources. 

[0013] A first embodiment of the present invention provides for the automatic correction of uneven luminance in the foreground/background of an image. This implementation improves quality especially where the background is more illuminated or darker than the foreground. 

[0014] Preferred implementations of the first embodiment provide an estimate of the average of the red, green and blue channels while another recursive filter filters a term that has a component inversely proportional with the values of each color plane pixel value or the intensity value. Its output is multiplied with one or more correction terms dependent on the color channel(s) and preferably limited by two thresholds. The enhanced pixel value is obtained by using a linear or logarithmic model. 

[0015] Using this embodiment, a color boost is obtained in addition to the automatic correction of uneven luminance in the foreground/background. 

[0016] In the first embodiment, the average values of each color channel are not used for comparison purposes and they can be replaced by sliding averaging windows ending on the pixel being processed. In any case, these average values are used to determine correction terms which in turn are used to avoid over-amplification of red or blue channels. 

[0017] Unlike the prior art, the coefficients of the IIR filter are fixed, rather than employing adaptive filters. As such, the present method requires only one pass through an image and the output of one filter does not have to be used as an input to another filter. 

 Brief Description of the Drawings 

[0018] An embodiment of the invention will now be described by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a block diagram of the prior art image enhancement system; and
Figure 2 is a block diagram of an image enhancement system according to an embodiment of the present invention.

 Description of the Preferred Embodiment 

[0019] Referring now to Figure 2 , an acquired image G is supplied for filtering according to the present invention. While the embodiment is described in terms of processing an image in RGB space, the invention can be applied to luminance channels only or other color spaces. 

[0020] Only one input image, G, is used and a running average on each color channel is computed, step 20, as each pixel value is read. Therefore for each pixel G(i,j,k) of each plane k = 1...3, we compute: R̄ = β·R̄ + (1 − β)·G(i,j,1)
<img class="EMIRef" id="463683444-ib0002" />
Ḡ = β·Ḡ + (1 − β)·G(i,j,2)<img class="EMIRef" id="463683444-ib0003" />
B̄ = β·B̄ + (1 − β)·G(i,j,3),<img class="EMIRef" id="463683444-ib0004" />

where β is a coefficient between 0 and 1. 

[0021] Another variant is to compute, on each color channel, the sum of the 2N+1 pixel values around the pixel G(i,j,k) and divide by 2N+1. 
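The running average of step 20 can be sketched as follows. This is an illustrative sketch, not the patented implementation: the function name, the representation of G as rows of (R, G, B) tuples, and the initialisation of the averages to the first pixel's values are all assumptions not specified in the text.

```python
def channel_running_averages(G, beta=0.95):
    """Recursive (IIR) running average of each colour channel,
    updated as each pixel value is read in raster order.

    G    : list of rows, each row a list of (R, G, B) tuples
    beta : coefficient between 0 and 1
    Returns the final (R_avg, G_avg, B_avg).
    """
    # Assumption: the averages start at the first pixel's values.
    avg = list(G[0][0])
    for row in G:
        for pixel in row:
            for k in range(3):
                avg[k] = beta * avg[k] + (1 - beta) * pixel[k]
    return tuple(avg)

# A uniform image leaves the averages at the pixel value.
flat = [[(100, 150, 200)] * 4 for _ in range(3)]
print(channel_running_averages(flat))  # (100.0, 150.0, 200.0)
```

With beta close to 1 the averages respond slowly, approximating a global channel mean; smaller beta gives the sliding-window behaviour of paragraph [0021].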

[0022] From the moving average values R̄, Ḡ, B̄, correction terms γR, γB are calculated, step 25, as follows: γR = (Ḡ/R̄)·((1 − a)·R̄ + 255·a)/((1 − a)·Ḡ + 255·a) and γB = (Ḡ/B̄)·((1 − a)·B̄ + 255·a)/((1 − a)·Ḡ + 255·a)
<img class="EMIRef" id="463683444-ib0005" />


[0023] Preferably, both correction terms γR and γB are limited to a chosen interval (e.g. between 0.95 and 1.05: if either value is below 0.95 it is set to 0.95; if either value is above 1.05 it is set to 1.05). This prevents over-amplification of the red and blue channels in further processing. 
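Steps 25 and the clamping of paragraph [0023] can be sketched together. The function name and default values are illustrative assumptions; a = 0.125 is taken from the example value given for the filter in paragraph [0025].

```python
def correction_terms(r_avg, g_avg, b_avg, a=0.125, lo=0.95, hi=1.05):
    """Correction terms gamma_R, gamma_B computed from the channel
    averages (step 25), clamped to [lo, hi] to prevent
    over-amplification of the red and blue channels."""
    def gamma(c_avg):
        return (g_avg / c_avg) * ((1 - a) * c_avg + 255 * a) / ((1 - a) * g_avg + 255 * a)

    def clamp(v):
        return min(max(v, lo), hi)

    return clamp(gamma(r_avg)), clamp(gamma(b_avg))

# Equal channel averages give neutral correction terms.
print(correction_terms(128, 128, 128))  # (1.0, 1.0)
```

A red average well below the green average would push γR above 1, so the clamp caps it at 1.05.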

[0024] In parallel with generating the moving average values, the pixels are parsed on rows or columns and for each pixel of a color plane G(i,j,k), a coefficient H(i,j) is calculated as follows: H(i,j) = α·H(i,j−1) + (1 − α)·[(1 − a) + 255·a / max(δ, (G(i,j,1) + G(i,j,2) + G(i,j,3))/3)]
<img class="EMIRef" id="463683444-ib0006" />


[0025] In Figure 2, this processing is broken into step 30: f(G(i,j,k), a, δ) = (1 − a) + 255·a / max(δ, (G(i,j,1) + G(i,j,2) + G(i,j,3))/3)
<img class="EMIRef" id="463683444-ib0007" />

followed by a recursive filter, step 40: H(i,j) = α·H(i,j−1) + (1 − α)·f(G(i,j,k), a, δ)
<img class="EMIRef" id="463683444-ib0008" />

where:
a is a positive value less than 1 (e.g. a = 0.125); and
α is the pole of the corresponding recursive filter (e.g. α can have values between 0.05 and 0.8).

[0026] The comparison with δ is used in order to avoid division by zero and to amplify dark pixels (e.g. δ = 15). The initial value H(1,1) can have values between 1 and 2. 

[0027] Using this filter, darker areas are amplified more than illuminated areas due to the inverse values averaging and, therefore, an automatic correction of uneven luminance in the foreground/background is obtained. 

[0028] It will be seen from the above that the recursive filter, H, doesn't filter the pixel values directly. For example, if a = α = 1/8 and δ = 15, the filter 30/40 filters a sequence of numbers that varies between 1 and 3 depending on the actual pixel value G(i,j,k) and the preceding values of the image. If the filter 40 simply used the pixel values G(i,j,k) as its input, it would generate a simple low-pass filtered image, with no luminance correction. 
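Steps 30 and 40 for a single row can be sketched as below. The function name and the row-of-tuples representation are assumptions; the initial coefficient H(1,1) = 1 is at the low end of the 1 to 2 range given in paragraph [0026].

```python
def luminance_coefficients(row, a=0.125, alpha=0.125, delta=15, h0=1.0):
    """Step 30 (inverting function f) followed by step 40 (one-pole
    recursive filter) over one row of RGB pixels.
    Returns the coefficient H for each pixel position."""
    def f(pixel):
        # Mean of the three colour planes, floored at delta to avoid
        # division by zero and to cap the amplification of dark pixels.
        mean = (pixel[0] + pixel[1] + pixel[2]) / 3.0
        return (1 - a) + 255 * a / max(delta, mean)

    h, out = h0, []
    for pixel in row:
        h = alpha * h + (1 - alpha) * f(pixel)
        out.append(h)
    return out

# With a = alpha = 1/8 and delta = 15, f varies between 1 and 3:
# dark pixels drive H towards 3, bright pixels towards 1, so darker
# regions are amplified more.
dark = luminance_coefficients([(0, 0, 0)] * 50)
bright = luminance_coefficients([(255, 255, 255)] * 50)
```

This reproduces the 1-to-3 range noted in paragraph [0028] for the example parameter values.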

[0029] In one implementation of the embodiment, the modified pixel values G1(i,j,k) are given by a linear combination, step 50, of the filter parameters H and the correction terms γR, γB: G1(i,j,1) = G(i,j,1)·H(i,j)·γR
<img class="EMIRef" id="463683444-ib0009" />
G1(i,j,2) = G(i,j,2)·H(i,j)<img class="EMIRef" id="463683444-ib0010" />
G1(i,j,3) = G(i,j,3)·H(i,j)·γB.<img class="EMIRef" id="463683444-ib0011" />


[0030] A more complex alternative to the linear model is a logarithmic model. In such an implementation, the output pixel G1(i,j,k) corresponding to the enhanced color plane (R/G/B color planes) is as follows: G1(i,j,1) = D − D·(1 − G(i,j,1)/D)^(ε·H(i,j)·γR),
<img class="EMIRef" id="463683444-ib0012" />
G1(i,j,2) = D − D·(1 − G(i,j,2)/D)^(ε·H(i,j)),<img class="EMIRef" id="463683444-ib0013" />
G1(i,j,3) = D − D·(1 − G(i,j,3)/D)^(ε·H(i,j)·γB)<img class="EMIRef" id="463683444-ib0014" />

where:
D is the maximum permitted value (e.g. 255 for an 8-bit representation of images); and
ε is a constant whose indicated values are between 1 and 3.

[0031] Examination of the formula above shows that only values smaller than D may be obtained. In this implementation, the degree of color and brightness boost is controlled by varying the pole value (α) and the logarithmic model factor (ε). 
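Step 50 under both models can be sketched for a single pixel. The function name, argument order, and the clipping of the linear model at D are assumptions made for this illustration (the logarithmic model needs no clipping, as paragraph [0031] notes).

```python
def enhance_pixel(pixel, h, gamma_r, gamma_b, eps=2.0, D=255.0, model="log"):
    """Apply the filter coefficient H and the correction terms to one
    (R, G, B) pixel, using either the linear model of paragraph [0029]
    or the logarithmic model of paragraph [0030]."""
    gammas = (gamma_r, 1.0, gamma_b)  # the green channel has no correction term
    out = []
    for value, g in zip(pixel, gammas):
        if model == "linear":
            v = value * h * g
        else:
            # Logarithmic model: output is always strictly below D
            # for inputs below D.
            v = D - D * (1 - value / D) ** (eps * h * g)
        out.append(min(v, D))  # assumption: clip the linear model at D
    return tuple(out)
```

With H = 1 and neutral correction terms the linear model is the identity, while the logarithmic model compresses bright values towards D without ever reaching it.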

[0032] The computations can be adapted for the YCC or other color spaces. For example, when using the YCC color space in the embodiment of Figure 2, there is no need to compute the correction terms γR, γB, and ε = 1 for the Y channel if the logarithmic model is used. The inverting function for the Y channel is therefore: f(Y(i,j), a, δ) = (1 − a) + 255·a / max(δ, Y(i,j)).
<img class="EMIRef" id="463683444-ib0015" />


[0033] The linear model can be applied for the luminance channel and the logarithmic model can be used for the chrominance channels using the H(i,j) coefficient computed on the luminance channel. 

[0034] This approach leads to computational savings and adds the possibility of adjusting the color saturation by using a different positive value for ε (e.g. ε = 0.9) when computing the new chrominance values. The brightness of the enhanced image can be varied by multiplying the Y channel by a positive factor, ε, whose value can be different from the value of ε used for the chrominance channels. 

[0035] In a second embodiment of the invention, the processing structure of Figure 2 can be used to sharpen an image. 

[0036] In this embodiment, the image is preferably provided in YCC format and the processing is performed on the Y channel only. The ratio of the next pixel and the current pixel value is computed and filtered with a one-pole IIR filter (e.g. α = 1/16), step 40. The operations can be performed on successive or individual rows or columns. The initial H coefficient is set to 1 and in case of operating on row i we have: H(i,j) = α·H(i,j−1) + (1 − α)·Y(i,j+1)/max(δ, Y(i,j)),
<img class="EMIRef" id="463683444-ib0016" />

where:
α is the pole of the IIR filter.

[0037] Again, this processing can be broken down into step 30: f(Y(i,j), δ) = Y(i,j+1)/max(δ, Y(i,j))
<img class="EMIRef" id="463683444-ib0017" />

followed by the recursive filter, step 40: H(i,j) = α·H(i,j−1) + (1 − α)·f(Y(i,j), δ)
<img class="EMIRef" id="463683444-ib0018" />


[0038] Again, the comparison with δ is used in order to avoid division by zero (δ is usually set to 1). H(i,j) is a coefficient that corresponds to the current pixel position (i,j) of the original image. The initial coefficient can be set to 1 at the beginning of the first row or at the beginning of each row. In the first case, the coefficient computed at the end of one row is used to compute the coefficient corresponding to the first pixel of the next row. 

[0039] The enhanced pixel value Y1(i,j) is given by the following formula: Y1(i,j) = Y(i,j)·(1 + ε(i,j)·(1 − H(i,j)))
<img class="EMIRef" id="463683444-ib0019" />
where ε(i,j) can be a constant gain factor or a variable gain depending on the H coefficients. Another alternative for ε(i,j) is to use the difference between consecutive pixels or the ratio of successive pixel values. For example, if the difference between successive pixels is small (or the ratio of consecutive pixel values is close to 1) the value of ε(i,j) should be lower, because the pixel might be situated in a smooth area. If the difference is big (or the ratio is much higher or much lower than 1), the pixels might be situated on an edge, therefore the value of ε(i,j) should be close to zero, in order to avoid possible over-shooting or under-shooting problems. For intermediate values, the gain function should vary between 0 and a maximum chosen gain. An example of ε(i,j) according to these requirements has a Rayleigh distribution. 
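The second-embodiment sharpening of one luminance row can be sketched as below, using a constant gain ε for simplicity (the text also allows a variable, e.g. Rayleigh-shaped, gain). The function name, the border handling at the end of the row, and the default parameter values other than α = 1/16 and δ = 1 are assumptions.

```python
def sharpen_row(y, alpha=1.0 / 16, delta=1, eps=0.5):
    """Sharpen one row of luminance values: the next/current pixel
    ratio is filtered into H by a one-pole IIR filter (steps 30/40),
    then each pixel is scaled by 1 + eps * (1 - H)."""
    h, out = 1.0, []  # initial H coefficient set to 1
    for j, value in enumerate(y):
        # Assumption: repeat the last pixel at the row border.
        nxt = y[j + 1] if j + 1 < len(y) else value
        h = alpha * h + (1 - alpha) * nxt / max(delta, value)
        out.append(value * (1 + eps * (1 - h)))
    return out

# A flat row is left unchanged: the ratio stays at 1, so H stays at 1
# and the enhancement term vanishes.
print(sharpen_row([100] * 5))  # [100.0, 100.0, 100.0, 100.0, 100.0]
```

Around an edge the ratio departs from 1, H lags behind it, and the 1 − H term produces the local over- and under-shoot that sharpens the transition.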

[0040] In some implementations, a look up table (LUT) can be used if a variable ε(i,j) is chosen, because the absolute difference between consecutive pixels has limited integer values. 

[0041] This method is highly parallelizable and its complexity is very low. The complexity can be further reduced if LUTs are used and some multiplications are replaced by shifts. 

[0042] Furthermore, this second embodiment can also be applied to images in RGB space. 

[0043] The second embodiment can be applied in sharpening video frames, either by sharpening each individual video frame or only frames identified as slightly blurred. 

[0044] In each embodiment, the pixels can be parsed using any space-filling curve (e.g. Hilbert curves), not only by rows or columns. The corrected image can be thought of as a continuously modified image, pixel by pixel, through a path of a continuously moving point. 

[0045] It will also be seen that the image sharpening processing of the second embodiment can be applied after the luminance correction of the first embodiment to provide a filtered image with characteristics superior to either method implemented independently. Indeed either method can be applied in conjunction with other image processing methods as required, for example following the processing described in PCT Application No. PCT/EP2007/009939 (Ref: FN204PCT). 

