GLAGC: Adaptive Dual-Gamma Function for Image Illumination Perception and Correction in the Wavelet Domain

Low-contrast or uneven illumination in real-world images causes a loss of detail and increases the difficulty of pattern recognition. An automatic image illumination perception and adaptive correction algorithm, termed GLAGC, is proposed in this paper. Based on Retinex theory, the illumination of an image is extracted through the discrete wavelet transform. Two features that characterize the image illuminance are designed. The first is a spatial luminance distribution feature, which drives adaptive gamma correction of local uneven lighting. The second is a global statistical luminance feature. Using a training set containing images with various illuminance conditions, the relationship between the image exposure level and this feature is estimated under the maximum entropy criterion and is used to perform adaptive gamma correction of global low illumination. Moreover, an adaptive adjustment is performed in the high-frequency subbands to preserve edge smoothness. To eliminate low-illumination noise after wavelet reconstruction, an adaptive stabilization factor is derived. Experimental results demonstrate the effectiveness of the proposed algorithm. By comparison, the proposed method yields comparable or better results than state-of-the-art methods in terms of efficiency and quality.


Introduction
Uneven or insufficient illumination causes the contrast of an image to be too low, making it difficult to observe the details of the image. We usually pursue enhancement results in which local variation is obvious while the global variation accords with the original intensity, which is referred to as naturalness preservation. Researchers have proposed many enhancement methods to give these images a more pleasing visual effect or to obtain high visibility.
Pixel modulation schemes, such as the statistics-based histogram equalization (HE) method, directly adjust the pixel intensities of the image to achieve enhancement. This kind of method may cause artifacts and a loss of naturalness. The nonlinear gamma correction approach uses different mapping curves to achieve excellent performance under complex lighting conditions [1], but the parameters need manual design with prior knowledge, and spatial information is not considered [2] when operating on each pixel.
Converting pixel information to other domains can expose more internal information of the image, for example through the discrete Fourier transform, discrete cosine transform (DCT), and discrete wavelet transform (DWT). These solutions achieve enhancement through filtering in the frequency domain and reconstruction in the spatial domain, such as homomorphic filtering, which may result in the loss of potentially useful visual cues [3].
To conduct an analysis from the perspective of the image physical process, Retinex theory is proposed to simulate the relationship between the illumination component and the reflection component of an image [4,5]. A series of methods were derived, such as the single-scale Retinex (SSR) algorithm [6] and multiscale Retinex (MSR) algorithm [7], to enhance the image details. However, the naturalness of the images may be destroyed, and it is unreasonable to treat only the reflectance layer as the enhanced image [8].
As a spatial-frequency analysis tool, the DWT is applied to decompose and enhance image features at different resolutions. It has been utilized by researchers in fields such as image resolution enhancement [9] and image denoising [10].
Existing methods have difficulty balancing brightness correction, naturalness preservation, color restoration, and algorithmic efficiency. A simple but efficient algorithm for image illuminance perception and correction in the wavelet domain is proposed in this paper. The DWT is used to separate the illuminance into the low-frequency subband, which is enhanced by adaptive gamma correction considering both the spatial and statistical characteristics of the image. For naturalness preservation, an adaptive punishment adjustment is applied to the high-frequency subbands. Finally, a stabilization factor is designed for color restoration so that extra-low illumination can be corrected with noise suppression. To the best of our knowledge, no prior work has proposed an adaptive dual-gamma correction method in the wavelet domain.
The rest of the paper is organized as follows: Section 2 provides a brief discussion of related works. Section 3 presents the detailed process of the proposed method. In Section 4, the superiority of the proposed method is supported by experimental results and relevant evaluation with state-of-the-art models. Finally, the conclusions are presented in Section 5.

Related Works
To solve the problems mentioned above, improvements have been proposed in earlier works. There are several variations of the HE method, such as contrast-limited adaptive histogram equalization (CLAHE) [11] and brightness-preserving bi-histogram equalization (BBHE) [12]. In the frequency domain, improved methods have been proposed, such as illuminance normalization based on homomorphic filtering [3], color image enhancement by compressed DCT [13], and the alpha-rooting method based on the quaternion Fourier transform [14]. The following methods are comparable to our work:
• Improved gamma correction: For parameter adjustment, several adaptive methods have been derived, such as adaptive gamma correction based on the cumulative histogram (AGCCH) [15], adaptive gamma correction to enhance the contrast of brightness-distorted images [16], the adaptive gamma correction with weighting distribution (AGCWD) method [17], and a 2-D adaptive gamma correction method [18] that takes the variable brightness map of image spatial information into account but may produce excessive contrast enhancement. In addition, few methods consider both local and global enhancement, and overenhancement sometimes appears in some portions of the image.
• Retinex-based model: Fu et al. [19] proposed a simultaneous illumination and reflectance estimation (SIRE) method to preserve more image details when estimating the reflection intensity. Wang [20] used Retinex theory to construct an image prior model, estimated the model parameters with a hierarchical Bayesian model, and achieved good results. Cheng [21] proposed a nonconvex variational Retinex model to improve the brightness while maintaining the texture and naturalness of an image. These Retinex-based models can achieve pleasing reflection separation through iterations; however, the algorithms are time-consuming, which may limit their practical applications. Low-light image enhancement via well-constructed illumination map estimation (LIME) was proposed by Guo [2]; oversaturation in some portions of an image usually occurs.
• Combining the wavelet transform approach: By introducing the wavelet transform, a nonlinear enhancement function was designed based on the local dispersion of the wavelet coefficients [21]. Zotin [22] proposed an algorithm combining the MSR algorithm with the wavelet transform and achieved a better correction effect in terms of efficiency. A dual-tree complex wavelet transform for low-light image enhancement was proposed in [23]. However, it is unreasonable to utilize only the low-frequency subband for illumination enhancement; according to our experiments, the image edges will appear jagged after transformation.

Algorithm Scheme
Gamma correction [17] is a common method for illumination enhancement and is defined as

I′ = Imax · (I/Imax)^γ

where I′ is the corrected image, Imax is the maximum intensity value of the original image, I is the original image, and γ is the parameter. For different values of γ, the resulting image has different enhancement results, as shown in Figure 1. When γ < 1, low-intensity pixels are increased more than high-intensity pixels. When γ > 1, the opposite effect is generated. When γ = 1, the input and output intensities are equal.
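As a minimal, illustrative sketch (not the paper's implementation), this correction can be written directly in Python with NumPy:

```python
import numpy as np

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    """Classical gamma correction: I' = Imax * (I / Imax) ** gamma."""
    i_max = img.max()
    return i_max * (img / i_max) ** gamma

# gamma < 1 raises low intensities more than high ones
img = np.array([[16.0, 64.0], [128.0, 255.0]])
out = gamma_correct(img, 0.5)
```

With γ = 0.5 the darkest pixel (16) is lifted well above its original value while the maximum (255) maps to itself, matching the behavior described above.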

The limitations of the conventional gamma correction method are obvious: (1) the selection of the parameter requires experience; (2) spatial information such as uneven lighting of the image is not considered; (3) the overall illumination cannot be perceived, and overexposure sometimes occurs.
For this reason, a novel adaptive gamma correction method, called global statistics and local spatial adaptive dual-gamma correction (GLAGC), is proposed in this section. First, the V component of the HSV model of the input image is converted to the logarithmic domain. Through the DWT, the illumination information of the image is obtained from the low-frequency subband LL. The dual-gamma correction γ(Θ[χ,σ]) based on spatial and statistical information is applied to subband LL:

LL′ = IMAX · (LL/IMAX)^γ(Θ[χ,σ])

where IMAX is the maximum pixel value of the LL subband and LL′ is the corrected low-frequency subband. For naturalness preservation, an adaptive punishment adjustment is applied in the LH, HL, and HH subbands. Then, the corrected V component is obtained through the inverse wavelet transform. Finally, the enhanced image is reconstructed by converting it to the RGB color space through color restoration. The process flow of the proposed image enhancement method is shown in Figure 2.


Luminance Extraction in the Wavelet Domain
According to Retinex theory, an image can be expressed as the multiplicative combination of the reflection intensity and the illumination brightness, namely:

S(x, y) = R(x, y) · L(x, y)

where S(x, y) is the pixel information of the image; R(x, y) is the reflection intensity, reflecting surface properties of the object such as color and texture, which correspond to the high-frequency information of the image; and L(x, y) is the environmental illumination, which depends on the external lighting conditions and corresponds to the low-frequency information of the image. Since operations in the logarithmic domain are closer to the visual characteristics perceived by the human eye, the image is converted to the logarithmic domain to obtain the additive combination of reflection intensity and illumination brightness:

s(x, y) = l(x, y) + r(x, y)

where s(x, y) = log(S(x, y)), r(x, y) = log(R(x, y)), and l(x, y) = log(L(x, y)). To obtain the illumination component l(x, y), a center/surround Retinex method such as the SSR algorithm uses the convolution of a Gaussian function with the image s(x, y):

l(x, y) = s(x, y) * G(x, y),  G(x, y) = k · exp(−(x² + y²)/c²)

where * is the convolution operation, G(x, y) is the Gaussian convolution function satisfying ∬G(x, y) dx dy = 1, c is the scale factor, and k is the normalization constant. The MSR algorithm uses multiscale Gaussian functions:

l(x, y) = Σ(n=1..N) ϖn · [s(x, y) * Gn(x, y)]

where Gn(x, y) is the Gaussian function of the n-th scale and the weights ϖn satisfy Σ(n=1..N) ϖn = 1. State-of-the-art methods such as SSR and MSR obtain the illumination feature by Gaussian convolution within a certain perception domain, which is computationally expensive. Moreover, the neighboring pixel information also includes the edges, texture, and other redundant details of the image that do not contribute to the illuminance features. This paper takes a different approach: illumination extraction is conducted in the low-frequency subband of the wavelet domain, while the details of the image are carried by the high-frequency subbands.
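For concreteness, the center/surround estimation above can be sketched as follows, assuming a normalized Gaussian surround and edge padding (both illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def gaussian_kernel(size: int, c: float) -> np.ndarray:
    """Normalized 2-D Gaussian G(x, y) = k * exp(-(x^2 + y^2) / c^2)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / c ** 2)
    return g / g.sum()  # k chosen so the kernel sums to 1

def estimate_illumination(S: np.ndarray, size: int = 5, c: float = 2.0) -> np.ndarray:
    """SSR-style surround: l(x, y) = s(x, y) * G(x, y) in the log domain."""
    s = np.log1p(S)            # log domain; log1p avoids log(0)
    pad = size // 2
    sp = np.pad(s, pad, mode="edge")
    g = gaussian_kernel(size, c)
    l = np.empty_like(s)
    for i in range(s.shape[0]):        # direct (slow) 2-D convolution
        for j in range(s.shape[1]):
            l[i, j] = (sp[i:i + size, j:j + size] * g).sum()
    return l

flat = np.full((8, 8), 100.0)
illum = estimate_illumination(flat)
```

Because the kernel sums to 1, a uniformly lit image yields a constant illumination map equal to its log intensity, as expected from the surround model.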
The DWT [24] of a digital image f(x, y) can be expressed as:

Wφ(j0, m, n) = (1/√(MN)) Σ(x=0..M−1) Σ(y=0..N−1) f(x, y) φj0,m,n(x, y)
Wψ^i(j, m, n) = (1/√(MN)) Σ(x=0..M−1) Σ(y=0..N−1) f(x, y) ψ^i j,m,n(x, y),  i ∈ {H, V, D}

where φ is the scale function; ψ is the wavelet function; (M, N) is the size of the image; j0 is the initial scale; Wφ(j0, m, n) is the low-frequency wavelet coefficient, which is an approximation of f(x, y); the index i identifies the directional wavelets by the values H, V, and D; and Wψ^i(j, m, n) is the high-frequency wavelet coefficient. For scales j ≥ j0, it describes the horizontal, vertical, and diagonal details in the three directions.
The DWT uses low-pass and high-pass filters to decompose the pixel information of the image into 4 subbands, namely, LL, LH, HL, and HH, where LL denotes the low-pass subband corresponding to Wφ(j0, m, n), and LH, HL, and HH denote the vertical, horizontal and diagonal detail subbands corresponding to Wψ^i(j, m, n). From the perspective of the frequency domain, the high-frequency subbands after the wavelet transform contain only detail information, such as the edges of image objects, which ensures that the illumination component of the image is included in the low-frequency subband LL. Therefore, the illumination of the image can be corrected by using only the low-frequency subband. After illuminance correction in the low-frequency subband, the inverse wavelet transform yields the reconstructed image:

O(x, y) = iDWT{W′φ(j0, m, n), W′ψ^i(j, m, n)}

where W′φ(j0, m, n) and W′ψ^i(j, m, n) are the corrected coefficients, iDWT{·} represents the inverse wavelet transform, and O(x, y) denotes the corrected image. Next, the proposed adaptive dual-gamma correction method for the low-frequency subband LL based on the extracted illumination features is described.
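The subband split and perfect reconstruction described above can be sketched with a one-level Haar DWT (the paper does not fix a particular wavelet; Haar is used here purely for illustration, and even image dimensions are assumed):

```python
import numpy as np

def haar_dwt2(f: np.ndarray):
    """One-level 2-D Haar DWT returning the LL, LH, HL, HH subbands."""
    a = (f[0::2, :] + f[1::2, :]) / np.sqrt(2)   # row low-pass
    d = (f[0::2, :] - f[1::2, :]) / np.sqrt(2)   # row high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    LH = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    HL = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    HH = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = (LL + LH) / np.sqrt(2)
    a[:, 1::2] = (LL - LH) / np.sqrt(2)
    d[:, 0::2] = (HL + HH) / np.sqrt(2)
    d[:, 1::2] = (HL - HH) / np.sqrt(2)
    f = np.empty((a.shape[0] * 2, a.shape[1]))
    f[0::2, :] = (a + d) / np.sqrt(2)
    f[1::2, :] = (a - d) / np.sqrt(2)
    return f

f = np.arange(16.0).reshape(4, 4)
rec = haar_idwt2(*haar_dwt2(f))
```

Since the transform is invertible, correcting only the LL subband and then applying the inverse transform modifies the illumination while the detail coefficients carry the edges, which is the premise of the proposed method.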

Local Spatial Adaptive Gamma Correction (LSAGC)
A spatial luminance distribution feature (SLDF) is proposed: SLDF(x, y) obtains the pixel neighborhood information by applying a convolution operation to estimate the local spatial distribution of the image's illumination. Figure 3 illustrates the SLDF(x, y) of an image, its frequency-domain analysis diagram, and a time-consumption comparison between our method and the MSR algorithm. In Figure 3b, the Y-axis denotes the average Fourier log intensity [25] of the image, and the X-axis denotes the frequency. In Figure 3c, the Y-axis denotes the average time consumption of illumination extraction, and the X-axis denotes the image size. Three observations can be made: (1) The frequency components of the illumination extracted by the MSR algorithm are included in the frequency components of the LL subband, which means that the illumination of the image can be extracted in the LL subband alone. (2) As the frequency increases, the amplitude of SLDF(x, y) attenuates faster. This property helps preserve the image details from the perspective of the local illumination characteristics. (3) For common image sizes, the SLDF illumination extraction time is much less than that of the MSR algorithm, and the advantage of the SLDF scheme over the MSR algorithm grows as the image size increases.
The uneven spatial distribution of the image illuminance appears as overexposure or underexposure in certain areas. The proposed local spatial adaptive gamma correction (LSAGC) exponent γ(Θχ) in (14) is built from MSLDF, the average of SLDF(x, y), and σ, the difference between the local brightness SLDF(x, y) and the average intensity MSLDF. When the spatial brightness of the image is evenly distributed, (MSLDF/IMAX) is close to 1, and the γ(Θχ) correction ability becomes weak. When SLDF(x, y) is greater than MSLDF, strong illumination appears, which makes σ < 0; thus, the illumination will be reduced by (14). In contrast, the brightness of dark areas will be increased, so uneven lighting is improved through adaptive correction. Applying γ(Θχ) to LL gives

LLLS = IMAX · (LL/IMAX)^γ(Θχ)

where LLLS indicates the low-frequency subband corrected by LSAGC. Figure 4 illustrates the LSAGC results. An image with uneven illumination is shown in Figure 4a. The LL subband obtained by the wavelet transform is shown in Figure 4b, and SLDF(x, y) is shown in Figure 4c. The image reconstructed by iDWT{LLLS, Wψ^i(j, m, n)} is shown in Figure 4d.
It can be seen from Figure 4d that although the uneven spatial illumination distribution of the image has been corrected, the overall brightness is still low, resulting in unclear details, such as the human face and horse body. Therefore, a further overall luminance correction is required.
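A rough sketch of the LSAGC idea follows. The SLDF is approximated here by a box-filter neighborhood mean, and the exponent σ by the normalized difference (MSLDF − SLDF)/MSLDF; both are assumptions made for illustration, not the paper's exact definitions:

```python
import numpy as np

def lsagc(LL: np.ndarray, kernel: int = 5) -> np.ndarray:
    """Sketch of local spatial adaptive gamma correction (LSAGC).
    ASSUMPTIONS: SLDF ~ box-filter neighborhood mean;
    sigma ~ (M_SLDF - SLDF) / M_SLDF (illustrative, not the paper's formulas)."""
    pad = kernel // 2
    p = np.pad(LL, pad, mode="edge")
    sldf = np.empty_like(LL)
    for i in range(LL.shape[0]):
        for j in range(LL.shape[1]):
            sldf[i, j] = p[i:i + kernel, j:j + kernel].mean()
    i_max = LL.max()
    m_sldf = sldf.mean()
    sigma = (m_sldf - sldf) / m_sldf       # sigma < 0 in bright areas
    gamma = (m_sldf / i_max) ** sigma      # gamma > 1 in bright areas
    return i_max * (LL / i_max) ** gamma

# uneven lighting: dark left half, bright right half
LL = np.full((8, 8), 0.7)
LL[:, :4] = 0.2
LL[0, 7] = 1.0
out = lsagc(LL)
```

The behavior matches the text: where SLDF exceeds its mean the per-pixel gamma exceeds 1 and illumination is reduced, while dark regions are lifted.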

Global Statistics Adaptive Gamma Correction (GSAGC)
Global Statistical Luminance Feature (GSLF)
The information entropy of the image represents the aggregation feature of the grayscale-value distribution, which is defined as

E = −Σi pi · log2(pi)

where pi is the probability of a certain grayscale value. Figure 5a–f show images of different luminance conditions with their grayscale distribution histograms. When the image is properly exposed, the grayscale distribution histogram is uniform and the information entropy is the largest, as shown in Figure 5g. The probability density function (pdf) and cumulative distribution function (cdf) of the image are defined as

pdf(i) = ni/N,  cdf(i) = Σ(k=0..i) pdf(k)

where i is the pixel intensity, ni is the number of pixels with intensity i, and N is the total number of pixels in the image. According to the maximum discrete entropy theorem, the image with the largest entropy has a uniformly distributed grayscale histogram, and its cdf(i) has linear characteristics.
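These statistics are straightforward to compute; a minimal sketch for 8-bit-style grayscale values:

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Information entropy E = -sum_i p_i * log2(p_i) of the grayscale histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                     # skip empty bins: 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

def pdf_cdf(img: np.ndarray, bins: int = 256):
    """pdf(i) = n_i / N and cdf(i) as the cumulative sum of the pdf."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    pdf = hist / hist.sum()
    return pdf, np.cumsum(pdf)

uniform = np.arange(256)             # one pixel per grayscale value
e = image_entropy(uniform)           # uniform histogram -> maximum entropy
pdf, cdf = pdf_cdf(uniform)
```

A perfectly uniform 256-level histogram attains the maximum entropy of 8 bits, and its cdf increases linearly, which is exactly the property the maximum discrete entropy theorem exploits.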


Here, the linear cdf(i) in (20) is converted to the logarithmic domain, where l = log(c × i) and c is a constant. In our research, the cdf(l) of the LL subband of the image with the largest entropy in the logarithmic domain is constructed as an intensity-guided distribution (IGD) function, which plays a guiding role in image illumination correction. The pdf(l) of the subband LL is normalized as

pdfnorm(l) = (pdf(l) − pdfmin)/(pdfmax − pdfmin)

where pdfmax and pdfmin are the maximum and minimum values of the image pdf, respectively.
According to the difference between the cdf(l) of the input image and the IGD(l) of the ideal image with the largest entropy, pdfGW(l) and cdfGW(l) are designed. Figure 6 shows three different images of the same scene, which appear underexposed in Figure 6a, properly exposed in Figure 6d and overexposed in Figure 6g. A comparison of pdfnorm(l) and pdfGW(l) is shown in Figure 6b,e,h, and the relationship among cdf(l), cdfGW(l) and IGD(l) is shown in Figure 6c,f,i. The luminance distribution can be estimated from the difference between cdfGW(l) and cdf(l). For an underexposed image, the area enclosed by cdf(l) and the X-axis is far larger than the area enclosed by cdfGW(l) and the X-axis. For a properly exposed image, the two areas are close. For an overexposed image, the area enclosed by cdf(l) and the X-axis is close to the area enclosed by cdfGW(l) and the X-axis but smaller than that of IGD(l).
For the correction of the overall illumination brightness of an image, a global statistical luminance feature (GSLF) is designed to evaluate the difference between cdf(l) and cdfGW(l). In our research, a global statistics adaptive gamma correction (GSAGC) method is proposed as γ(Θσ), which is applied to subband LL:

LLGS = IMAX · (LL/IMAX)^γ(Θσ)

where LLGS indicates the low-frequency subband corrected by GSAGC. Through a training set containing images with various illuminance conditions, the relationship between γ(Θσ) and the GSLF is estimated.
Training Datasets: We established an image dataset collected from related works [2,15,18,20–22,26] containing different luminance conditions, including underexposure, proper exposure, and uneven exposure.
Loss Function: To judge whether the overall illumination intensity of an image satisfies the maximum entropy criterion, we introduce the information entropy loss function to obtain the global statistics adaptive gamma γ(Θσ), where D is the training dataset, Dα is a reconstruction sample, and N is the number of samples in the training dataset. When the information entropy loss function of the reconstructed images is minimized, the regression curve indicating the relationship between γ(Θσ)1×1×N and GSLF1×1×N is obtained, as shown in Figure 7.
According to the above, the proposed adaptive dual-gamma correction function, GLAGC, takes into account both the γ(Θχ) obtained by LSAGC and the γ(Θσ) obtained by GSAGC.
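As a simplified stand-in for the learned γ(Θσ)–GSLF regression, the maximum entropy criterion can be illustrated by a brute-force search over candidate gammas for a single image (an illustrative sketch under that assumption, not the paper's training procedure):

```python
import numpy as np

def entropy(img: np.ndarray, bins: int = 256) -> float:
    """Histogram entropy of an image with intensities in [0, 1)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_gamma(img: np.ndarray, candidates=None) -> float:
    """Pick the global gamma whose corrected image has maximum entropy
    (a brute-force stand-in for the learned GSLF regression)."""
    if candidates is None:
        candidates = np.linspace(0.1, 3.0, 30)
    scores = [entropy(img ** g) for g in candidates]
    return float(candidates[int(np.argmax(scores))])

# a synthetic underexposed image: intensities skewed toward 0
rng = np.random.default_rng(0)
dark = rng.random((64, 64)) ** 3
g = best_gamma(dark)
```

For an underexposed image the entropy-maximizing gamma falls below 1, i.e., the criterion automatically selects a brightening correction, which is the relationship the regression in Figure 7 captures across the whole training set.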

Smoothness Preservation
Since GLAGC is adopted in the low-frequency subband LL in the wavelet domain, the high-frequency subbands need to be adjusted correspondingly; otherwise, jaggedness will appear at the image edges after the inverse wavelet transform, as shown in Figure 8. Thus, we introduce a smoothness adjustment to the wavelet high-frequency subbands, denoted by

W′ψ^i(j, m, n) = L(Θγ) · Wψ^i(j, m, n)

where Wψ^i is the high-frequency wavelet coefficient and L(Θγ) is the adjustment coefficient. Considering that the high-frequency subbands in the three directions have the same importance, the same punishment coefficient is used for all of them.
According to the inverse discrete wavelet transform, the image reconstructed by the scale coefficients is defined as s1(x, y), the image reconstructed by the wavelet coefficients is denoted by s2(x, y), and the final reconstructed image is ς(x, y) = s1(x, y) + s2(x, y). Figure 9 shows the relationship between the images reconstructed by the scale coefficients and the wavelet coefficients.
According to the correlation between adjacent pixels in the image, when 3 neighboring pixels lie on a straight line, the edge of the object can be considered smooth and not jagged. We define this as the edge smoothness preservation constraint, namely

2ς(x + 1, y) = ς(x, y) + ς(x + 2, y)

Substituting (33) into (36) yields the constraint on the corrected coefficients. The low-frequency coefficient after adaptive gamma correction is defined as W′φ(j0, m, n), and the corresponding reconstructed image s′1(x, y) is defined according to (34). The gradient comparison of any pixel (xi, yi) between s1(x, y) and s′1(x, y) is then formed, and the high-frequency coefficients are adjusted by L(Θγ) to obtain the reconstructed image s′2(x, y). By substituting (39) and (40) into the edge smoothness preservation constraint (37), the punishment coefficient is obtained: L(Θγ) adjusts the high-frequency coefficients adaptively with γ(Θ[χ,σ]) to maintain the smoothness of the image edges.

Color Restoration
The HSV color model is used in our research because it is consistent with the human eye's perception of color. It includes three characteristics: hue (H), saturation (S), and value (V). The V component represents the luminance intensity, and GLAGC is performed on the V component. To restore the color information of the observed image, the output color image in RGB color space can be obtained by a linear transform [21], and the following improved operations are defined: where V(x, y), R(x, y), G(x, y), and B(x, y) are the V, R, G, and B components before correction; V′(x, y), R′(x, y), G′(x, y), and B′(x, y) are the corresponding components after correction; and ζ(x, y) is an adaptive stability factor that plays a role in low-illumination noise suppression, which is defined as: where β is the adjustment coefficient. In general, β = 0.005. To sum up, we describe the algorithm of the proposed GLAGC method in Algorithm 1.
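A minimal sketch of this restoration step follows. The exact form of ζ(x, y) from (43) is not reproduced here; the `beta * (1 - v)` expression below is a hypothetical stand-in that only illustrates the intent, namely a stabilizer that grows in dark regions so the luminance gain cannot amplify noise there:

```python
import numpy as np

def restore_color(rgb, v, v_corr, beta=0.005):
    # rgb: H x W x 3 image in [0, 1]; v, v_corr: V channel before/after GLAGC.
    # Hypothetical stabilizer standing in for the adaptive factor zeta(x, y):
    # larger at dark pixels, bounding the ratio V'/V in near-black regions.
    zeta = beta * (1.0 - v)
    gain = (v_corr + zeta) / (v + zeta)      # per-pixel luminance ratio
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

When v_corr equals v the gain is exactly 1 and the image is returned unchanged; as v approaches 0 the gain stays bounded instead of diverging.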

Algorithm 1. The proposed GLAGC method.
Input: Original image S(x, y). Output: Enhanced image O(x, y).
Step (1): Convert S(x, y) to HSV space to obtain the V component.
Step (2): Decompose the V component by the discrete wavelet transform to extract the illumination (the low-frequency subband).
Step (3): Apply LSAGC to the low-frequency coefficients according to the spatial luminance distribution feature.
Step (4): Apply GSAGC according to the global statistical luminance feature.
Step (5): Adjust the high-frequency coefficients by L(Θγ) to preserve edge smoothness.
Step (6): Reconstruct the corrected V component by the inverse wavelet transform.
Step (7): Restore the color with the adaptive stability factor ζ(x, y) and convert back to RGB space to obtain O(x, y).

Experiments
During the experiments, first, the performances of LSAGC and GSAGC are verified. Next, image naturalness preservation through punishment adjustment and low-illumination noise suppression is illustrated. Then, the GLAGC method is qualitatively compared with several state-of-the-art methods. All the experiments are run in MATLAB R2017b for Windows 7 on a computer equipped with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 8 GB of memory. All the test images are sourced from related work [2,15,18,20-22,26] and benchmarks that have been commonly used for performance verification.
Four state-of-the-art algorithms were used for the comparison experiments, including the variational-based method SIRE [19], the AGCWD method combined with histograms [17], the 2-D adaptive gamma correction method (Sungmok Lee's method) [18], and LIME based on Retinex theory [2]. All the parameters of the competing methods are chosen according to their original articles.
Four evaluation indicators were selected in the experiments: (1) The computational cost of the algorithm. (2) The information entropy, which is used to quantify and evaluate the information richness of the enhanced image. (3) The absolute mean brightness error (AMBE) [27], which is used to evaluate illuminance retention, is defined as

AMBE = |x_m − y_m|,

where x_m and y_m represent the average values of the input image and output image, respectively. (4) The lightness order error (LOE), which is used to evaluate the naturalness of image enhancement [26]:

LOE = (1 / (m n)) Σ_{i=1..m} Σ_{j=1..n} RD(i, j),

where m, n is the image size and RD(i, j) is the relative order difference of pixel (i, j):

RD(i, j) = Σ_{x=1..m} Σ_{y=1..n} U(L(i, j), L(x, y)) ⊕ U(L_e(i, j), L_e(x, y)),

where U(p, q) = 1 if p ≥ q and 0 otherwise, ⊕ is the exclusive-or (XOR) operator, and L(x, y) and L_e(x, y) are the original image and enhanced image, respectively. The smaller the LOE value is, the better the naturalness of the original image is maintained.
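Both indicators can be computed directly from the definitions above. The sketch below (grayscale images in [0, 1] assumed) evaluates LOE on a subsampled grid, since the exact pairwise comparison is quadratic in the pixel count:

```python
import numpy as np

def ambe(orig, enh):
    # AMBE = |x_m - y_m|: absolute difference of mean brightness.
    return abs(orig.mean() - enh.mean())

def loe(orig, enh, step=4):
    # Lightness order error: count, for each sampled pixel, how many
    # other pixels change their relative lightness order after enhancement.
    L = orig[::step, ::step].ravel()
    Le = enh[::step, ::step].ravel()
    U = L[:, None] >= L[None, :]     # order matrix of the input
    Ue = Le[:, None] >= Le[None, :]  # order matrix of the output
    return float((U ^ Ue).sum(axis=1).mean())
```

Any strictly increasing mapping (for example, a global gamma curve) leaves the two order matrices identical, so its LOE is 0; order reversals introduced by local enhancement raise it.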

LSAGC Tests
This section will discuss the spatial distribution characteristics of different images and the influence of the proposed LSAGC function on the image spatial illumination distribution. Figure 10 shows two images with uneven illumination distributions. The area where the lawn is located at the bottom of image (a) is in a weakly exposed state, and images (b) and (c) are the results without and with the LSAGC function, respectively. At the bottom of image (c), with LSAGC, the lawn becomes more obvious, and more detailed textures are also highlighted. Figure 10d shows the normal exposure of the sky in the middle of the image, and the indoor area next to it is weakly exposed. Without LSAGC, the sky area in the middle of the image becomes saturated after enhancement, resulting in the loss of texture and other information. When LSAGC is used, the texture of the sky is not overexposed, and information is not lost.

It can be seen from the above two examples that LSAGC sufficiently considers the spatial characteristics of the illumination distribution and redistributes the uneven spatial illumination to make it more uniform. Histogram analysis is given in Figure 11. It can be seen that in the absence of LSAGC, more high pixel values will lead to overexposure; LSAGC can avoid this situation, and low-value pixels have also been better improved, improving the image illumination quality.


GSAGC Tests
This experiment discusses the adaptive correction effect of the GSAGC method in the proposed algorithm on images with different global illumination values. Figures 12-14 show three sets of images, in which each experimental input sample is five images with different exposures from dark to bright, and we process them with the proposed algorithm, the parameters of which have been trained. The experimental results show that for images with different exposures, the proposed algorithm can automatically perceive the exposure level and generate high-quality images with almost the same exposure. Figure 15 uses the GSLF as the Y-axis for the above three sets of experimental input images and the mean of each image as the X-axis; the size of each circle indicates the AMBE value. The GSLF defined in (26) is a measure of the global statistical illumination characteristics of an image: the larger the value is, the weaker the exposure. As shown in Figure 15, as the brightness of the input image gradually increases, the average value of the image gradually increases. When the GSLF decreases, the exposure level increases, and the AMBE also decreases, indicating that the image brightness has been maintained and has not continued to increase when the image is properly exposed. This experiment shows that the proposed algorithm can enhance low-exposure images while maintaining normal-exposure images.
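The behavior observed in Figure 15 can be illustrated with a simplified global correction. In the paper, gamma is driven by the trained GSLF-exposure relationship; the mean-brightness heuristic below is only a hypothetical stand-in that reproduces the same qualitative behavior, brightening under-exposed images while leaving well-exposed ones untouched:

```python
import numpy as np

def global_adaptive_gamma(v, target_mean=0.5):
    # v: luminance channel in (0, 1]. Choose gamma so that the mean
    # brightness is pushed toward target_mean; cap gamma at 1 so that
    # well-exposed images are never darkened.
    m = float(np.clip(v.mean(), 1e-6, 1 - 1e-6))
    gamma = min(1.0, np.log(target_mean) / np.log(m))
    return v ** gamma, gamma
```

For a dark image (mean 0.1) this yields gamma ≈ 0.30, lifting the exposure; for a bright image (mean ≥ 0.5) gamma saturates at 1 and brightness is maintained, mirroring the shrinking AMBE in Figure 15.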

Naturalness Preservation
This section will explain the impact of the adjustment of the high-frequency coefficients and the impact of the adaptive stabilization factor on image quality. Figure 16 shows the edge smoothness preservation test. When the high-frequency coefficients are not corrected, the edge smoothness of the image object is destroyed. As shown in Figure 16b, when the adaptive adjustment obtained from the edge smoothness preservation constraint is used, the image maintains its edge smoothness after being enhanced, improving the visibility of the images.
Figure 17 illustrates the low-illumination noise suppression test. When the input image is extremely weakly exposed, as Figure 17b shows, considerable noise appears in the low-exposure areas after color restoration. With the adaptive stability factor defined in (43), the image quality is enhanced while the noise is suppressed, as shown in Figure 17c.

Comparative Experiments
We compare images under a series of illuminance scenarios using different algorithms, and the results are shown below. As shown in Figures 18-20, a group of images with uneven illumination distributions is used, called urban, baby, and street.

Figure 18 shows the experiments on the urban image. The AGCWD and SIRE methods cannot significantly enhance the dark areas surrounding the buildings. Moreover, the AGCWD method causes saturation in the upper part of the image; the LIME method can enhance the overall brightness of the image, but it causes overexposure; Lee's method and the proposed method achieve good performances. Figure 19 displays the results of the experiments on the baby image. The AGCWD and LIME methods both overenhance the background areas; moreover, the AGCWD method cannot increase the brightness of the baby's clothes. The result from Lee's method is overnormalized, and similar results are obtained by the LIME method and the proposed method. Figure 20 presents the situation for the street image, for which the best performance is achieved by our method. The AGCWD and SIRE methods cannot enhance the dark areas at the bottom of the image; the overall picture by Lee's method is still dark; the LIME method seems to yield a bright image, but it causes oversaturation in the sky.

The other image samples (composed of the building, goddess, and landscape images) have evenly distributed spatial illumination but different global illumination, as shown in Figures 21-23. Figure 21 reveals the algorithms' performance in complex lighting conditions. For the shadow areas in the building image, our method outperforms all the other methods; the LIME and AGCWD methods cannot restore the colors of the dusk area; Lee's method causes ripple distortions in the sky area. Figure 22 is the comparison for the goddess image. Lee's method results in excessive contrast enhancement. Overenhancement is produced in the face region by the LIME and AGCWD methods; furthermore, the AGCWD method cannot remove the shadows in the background. The SIRE method achieves the best naturalness preservation, but its computational cost is much greater than that of our method, which will be discussed later. Figure 23 shows the landscape image with pleasant visual effects, which is used to test the performance in avoiding overexposure. The mountain in the background becomes blue and loses its original color with Lee's method; oversaturation occurs with the LIME method; the AGCWD and SIRE methods and our method all give good results.

Figure 24 provides more experimental results by GLAGC.

Table 1 shows the entropy, LOE, and AMBE performance of the different algorithms. The proposed algorithm achieves the maximum average information entropy of the enhanced images, which reveals that it can obtain the most abundant image information. In terms of the preservation of naturalness, the proposed method has the lowest LOE after the AGCWD method, whose entropy performance is inferior to ours. Regarding the AMBE, the maximum is achieved by our method for dark illumination scenarios such as the building image, revealing its overall boosting of low illumination, while the minimum is obtained in normal-exposure scenes (landscape), which demonstrates brightness maintenance. Table 2 shows the average computational costs of the different algorithms under the same computational conditions, with an image resolution of 512 × 512. It can be seen that the proposed algorithm can achieve good results in a short amount of time.
In summary, through the comparison experiments, the proposed method shows good performance in low-illumination enhancement, uneven illumination improvement, and illumination maintenance. Lee's method may cause ripple distortion and excessive contrast enhancement; the LIME method can handle various illuminance conditions, but the results are oversaturated in some regions; the AGCWD method formulates the gamma mapping curve according to the histogram of the image without considering spatial information, which degrades its performance on unevenly illuminated images; and the SIRE method is relatively good. Nevertheless, its practicality is limited by its time consumption.

Conclusions
In this article, we propose an adaptive image illumination perception and correction algorithm in the wavelet domain. We use the wavelet transform to obtain the illuminance of an image, and then the novel global statistical illuminance feature and local spatial illuminance feature are proposed as the foundation of illuminance perception. An adaptive dual-gamma correction function is applied accordingly; moreover, edge smoothness is retained by adaptive adjustment of the high-frequency coefficients. In addition, the proposed stabilization factor can suppress low-illumination noise. Comparative experiments verify that the adaptability, naturalness preservation, and efficiency of this algorithm on different images are improved compared with previous state-of-the-art methods. Beyond image enhancement, for a given camera, our algorithm is promising for automatically providing an appropriate gamma factor by learning from only a few captured images.