Filtering methods for underwater images

Rishitha Bandi
8 min read · Apr 21, 2021
A coral reef in the Red Sea, Israel. Image Credit: Matan Yuval, Marine Imaging Lab, University of Haifa

Underwater images suffer from color cast and blurring due to the attenuation of light. The depth of the objects below the surface, the distance between the camera and the objects, and the composition of the water are a few of the factors that degrade the images.

Restoration and enhancement are the two tasks underwater filtering methods perform. Restoration aims to invert known degradation processes acting on the degraded image; physics-based models are used for restoration. Enhancement improves the quality of the image so that it looks better: it adjusts colors and contrast, or improves the performance of computer vision algorithms.

Degradation of underwater images

Light attenuation reduces the intensity of light traveling through the medium by absorption and redirects its propagation by scattering. Color cast and blurring are caused by this attenuation. The longer the distance light travels, the more it is absorbed; hence images that capture objects at greater range are more degraded. Attenuation also depends on the composition of the water, which varies with location.

Information from the scene, gathered with devices such as polarized filters or estimated by algorithms, helps in estimating this degradation.

Water environments are mainly classified into two categories: coastal waters and oceanic waters. In oceanic waters, red is the most attenuated color, whereas in coastal waters blue is the most attenuated. The attenuation of light depends on the water composition and geographical location. Optical properties that depend on the water composition fall into two categories, inherent and apparent; apparent properties also depend on the ambient light.

The intensity of the reflected light diminishes exponentially as it propagates through water, modulated by the beam attenuation coefficient and the distance traveled. The average cosine of the scattering angle indicates how far, on average, the scattering direction deviates from the original propagation. A larger scattering coefficient or a longer range traveled by the light results in a larger spread of the beam. The extent of the degradation determines the strength of the cast and blurring.
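The exponential falloff described above follows the Beer-Lambert law. A minimal sketch, where the per-channel attenuation coefficients are illustrative assumptions rather than measured values:

```python
import numpy as np

# Illustrative per-channel attenuation coefficients in 1/m (assumed values;
# real coefficients depend on the Jerlov water type and the location).
BETA = np.array([0.40, 0.07, 0.04])  # red attenuates fastest in oceanic water

def attenuate(rgb, distance_m):
    """Beer-Lambert falloff: I(d) = I0 * exp(-beta * d), per color channel."""
    return np.asarray(rgb, dtype=float) * np.exp(-BETA * distance_m)

# A white patch viewed through 5 m of water loses most of its red component.
faded = attenuate([1.0, 1.0, 1.0], 5.0)
```

With these coefficients, the red channel at 5 m drops to exp(-2) ≈ 0.14 while blue retains about exp(-0.2) ≈ 0.82, which is why distant objects look blue-green.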

Comparison of Image formation models

The McGlamery-Jaffe model decomposes the light reaching the camera into three components: the directly reflected (attenuated) light, the forward-scattered reflected light, and the backscattered light. The model involves camera parameters and water properties, such as the volume scattering function and the point spread function.

The Schechner-Karpel model considers how the unattenuated reflected light is degraded as a function of the object's distance from the camera.
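This family of models is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t = exp(−β·d). A sketch of the forward (degradation) direction, with all parameter values assumed for illustration:

```python
import numpy as np

def degrade(J, depth_m, A, beta):
    """Simulate underwater degradation of a scene J (H x W x 3, in [0, 1]).

    depth_m: per-pixel camera-to-object range, H x W
    A:       background (veiling) light, one value per color channel
    beta:    per-channel attenuation coefficients (assumed values)
    """
    t = np.exp(-np.asarray(beta) * depth_m[..., None])  # transmission, H x W x 3
    return J * t + np.asarray(A) * (1.0 - t)            # direct + backscatter
```

At zero range the image equals the scene radiance; as the range grows, every pixel converges to the background light A, i.e. the water color.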

Sea-thru is a physics-based algorithm using the Akkaynak-Treibitz underwater image formation model. It does not use neural networks and was not trained on any dataset. It works without a color chart or any information about the optical qualities or depth of the water body.

Restoration

Although physics-based restoration methods rely on a scene formation model, learned neural networks and observations are also used, and the restored images can improve the performance of computer vision tasks.

The deep image prior is a convolutional neural network used to restore an image without any prior training data: a randomly initialized network is fitted to the degraded image itself to solve inverse problems such as noise reduction, super-resolution, and inpainting. Priors are used for estimating the background light and the transmission map, which determine how the image is degraded. The transmission map describes the portion of the light that is not scattered and reaches the camera; since it is a continuous function of depth, it reflects the depth information in the scene. Neural networks are fitted on the degraded image and its prior-based estimates. The estimate of the background light of a degraded image depends on the transmission map of the same image.

Outdoor color priors proposed for dehazing are a popular choice. However, the main degradation source in hazy images is the veiling effect of haze, while that in water is the selective attenuation of light. Therefore, applying outdoor priors to underwater images generally misestimates the background light and the transmission map. The dark channel prior and the haze-line prior are the main outdoor priors exploiting the veiling effect.

The dark channel prior observes that the minimum intensity across the three color channels is usually close to zero in a haze-free image, whereas in a hazy image it is increasingly shifted towards the haze color, i.e. the background light, with range. The dark channel value is smaller when objects are close. However, when the prior is applied to oceanic water to estimate the background light and transmission map, objects far from the camera, whose light is mostly attenuated, are misestimated to be close. The haze-line prior uses the overall distribution of the RGB intensities. However, it is not effective underwater, since the light is selectively attenuated and the colors do not follow the same distribution as an in-air scene under unattenuated light.
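As a sketch, the dark channel and the resulting transmission estimate can be computed as follows; the patch size and omega are the values commonly used in the literature, assumed here:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Min over the three color channels, then min over a local patch."""
    min_rgb = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    h, w = min_rgb.shape
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission_dcp(img, A, omega=0.95, patch=15):
    """DCP transmission estimate: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / np.asarray(A), patch)
```

A patch with at least one near-zero channel yields a transmission close to 1 (object estimated close, little compensation), while a patch washed out by the background light yields a low transmission.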

Water-type specific priors exploit the selective color degradation in the water type. The Underwater Dark Channel Prior (UDCP) discards the most degraded red color channel and states that objects closer to the camera, with less degraded colors, have low intensities in either the blue or the green channel. The Automatic Red Channel prior (ARC) directly exploits the loss in red intensity and the increase in blue and green intensity with range. Carlevaris-Bianco et al. (CB) proposed to consider the difference between the red color channel and the less attenuated blue and green color channels.
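UDCP differs from the outdoor dark channel only in that the red channel is dropped before the minimum is taken; a minimal sketch:

```python
import numpy as np

def udcp_dark_channel(img, patch=15):
    """Underwater dark channel: min over green and blue only (the heavily
    attenuated red channel is discarded), then min over a local patch."""
    min_gb = img[..., 1:3].min(axis=2)   # ignore img[..., 0] (red)
    pad = patch // 2
    padded = np.pad(min_gb, pad, mode="edge")
    h, w = min_gb.shape
    out = np.empty_like(min_gb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

On an underwater image where red is already near zero everywhere, the outdoor dark channel is uninformative (always ~0), while the green-blue minimum still varies with range.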

Textures are increasingly blurred with range by scattering, in all water types.

Comparison of transmission maps T estimated by outdoor, water-specific and texture priors for images captured in oceanic (rows 1–2) and coastal (rows 3–4) waters. Values vary in [0, 1]; the larger the value, the less the color is compensated, as the object is estimated to be closer to the camera. Note that the three water-specific priors are designed for oceanic water. To facilitate the comparison, the background light A for all priors is the average intensity of a 15×15 window that represents the water mass in the image (see the yellow rectangles in the first column). Priors marked with * require a known background light A to estimate T. DCP: Dark Channel Prior; Haze-line: Haze-line Prior; UDCP: Underwater Dark Channel Prior; CB: Carlevaris-Bianco et al.; ARC: Automatic Red Channel; Laplacian py.: Laplacian pyramid.

Enhancement

Enhancement methods improve the quality of images for human perception and operate in color spaces such as CIELab and HSL. CIELab expresses color with three values: L* for perceptual lightness, and a* and b* for the two opponent axes spanning the four unique colors of human vision (red–green and blue–yellow). White balancing algorithms estimate the illuminant to remove the cast; they operate in the RGB or CIELab color spaces. White balancing is the process of removing unrealistic color casts so that objects that appear white in person are rendered white in the photo. The cast is an unwanted color shift in the whole image, which can be caused, for example, by light reflected from a nearby object. White balancing removes the global color degradation but cannot handle range-dependent degradation. To increase the contrast of the images, approaches such as gamma correction and histogram equalization are applied in the RGB and HSL color spaces; they adjust each color channel's values according to the overall color cast.
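The simplest white balancing approach is the gray-world assumption (the average scene color is gray). A sketch of this idea, not one of the specific algorithms the survey compares:

```python
import numpy as np

def gray_world_balance(img):
    """White balance an RGB image (H x W x 3, floats in [0, 1]) by scaling
    each channel so that the per-channel means become equal (gray-world)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means          # boost the attenuated channels
    return np.clip(img * gain, 0.0, 1.0)
```

On a green-cast frame this boosts the red channel and removes the global cast; as noted above, it cannot undo range-dependent attenuation, since a single gain per channel is applied to every pixel.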

Noise, normally caused by backscattering or by the image acquisition process, is a degradation linked to sharpness. While denoising can be achieved by low-pass filtering in the frequency domain, this may introduce additional blurring at the edges.
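A sketch of such frequency-domain low-pass denoising on a grayscale image; the ideal circular mask and the cutoff fraction are illustrative choices:

```python
import numpy as np

def fft_lowpass(img, keep=0.2):
    """Denoise a grayscale image by zeroing high spatial frequencies.

    keep: fraction of the spectrum radius to retain (assumed parameter).
    Note: the hard cutoff also blurs edges, as discussed above.
    """
    F = np.fft.fftshift(np.fft.fft2(img))        # DC moved to the centre
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    radius = keep * min(h, w) / 2.0
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Smoother masks (e.g. Gaussian) reduce the ringing that an ideal cutoff introduces, at the cost of a less sharp frequency selection.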

Learning based methods

Underwater computer vision tasks include detection, tracking, classification of fishes or reefs, etc.

For image enhancement, neural networks are trained on target images that are free of degradation. As capturing degradation-free underwater images is difficult, target images are often synthesized with physics-based models. Target images can also be selected among candidate enhancements, using subjective tests or a Generative Adversarial Network for the selection. For weakly supervised training, the target image is not restricted to the same scene as the degraded image.

Convolutional neural networks with skip connections are used as supervised models for underwater image filtering. In these CNNs, the range is estimated by the earlier network layers and affects the output image. U-Net architectures, with skip connections between encoder and decoder, are used as autoencoders that maintain contextual information.

Generative Adversarial Network frameworks are employed in several networks, which consist of a generator network that filters the image and a discriminator network that determines whether an image is a target image or has been filtered by the network. CycleGAN is used for weakly supervised training: a first GAN learns the filtering from the degraded to the target image set, and a second GAN learns to map the filtered image back to the original degraded image. Without an explicit target image for each degraded image, CycleGAN is less stable and more difficult to train.

The loss function measures the similarity between the target image and the network-filtered image, considering color differences and image structures. For color differences, a pixel-by-pixel measure is used, calculated with the L1 norm, the L2 norm, or the mean squared error (MSE). Image gradients or structural similarity are used to measure structures.
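A sketch of such a combined loss, with an L1 color term and an image-gradient structure term (the 0.5 weight is an arbitrary assumption):

```python
import numpy as np

def l1_loss(pred, target):
    """Pixel-by-pixel color difference (mean absolute error)."""
    return np.abs(pred - target).mean()

def gradient_loss(pred, target):
    """Structure term: compare horizontal and vertical image gradients."""
    dx = lambda im: np.diff(im, axis=1)
    dy = lambda im: np.diff(im, axis=0)
    return (np.abs(dx(pred) - dx(target)).mean()
            + np.abs(dy(pred) - dy(target)).mean())

def total_loss(pred, target, w=0.5):
    # Weighted sum of color and structure terms (w is illustrative).
    return l1_loss(pred, target) + w * gradient_loss(pred, target)
```

A constant intensity shift is penalized only by the color term, while a change of edges is penalized by the gradient term as well.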

Datasets

  • Natural datasets, in which the images are degraded by the water medium itself.
  • Artificially degraded datasets, in which the degradation is produced by chemicals added to the water or applied digitally.

Subjective Tasks

For underwater image filtering, subjective task results are compared for restoration and enhancement on both coastal and oceanic water images. ARC (Automatic Red Channel), DBL (Depth-compensated Background Light restoration), WCID (Wavelength Compensation Image Dehazing), UWHL (Underwater Haze-Line), and UWCNN (Underwater Convolutional Neural Network) are used for restoration; Fusion (Color Balance and Fusion by Ancuti et al.), FUnIE, and WaterNet for enhancement.

Comparison of underwater image filtering results for images that capture the two main Jerlov categories: oceanic waters (rows 1–4) and coastal waters (rows 5–7). UWCNN-I and UWCNN-3 are UWCNN trained for type I oceanic water and for type 3 coastal water, respectively. Note that FUnIE can only process images of fixed size (256 × 256) and, for visualisation, we resized the image to align with the height of the original image.

ARC is designed for oceanic water; it slightly darkens the images owing to the use of a weighted background light to replace the color-channel-dependent transmission map.

DBL is also designed for oceanic water. Although it fails to remove the cast in green water, it compensates for the degradation along the range, as well as for the non-uniform water color.

UWHL can compensate for the color degradation in oceanic water, including with depth below the surface, although it may produce over-saturated images. UWHL can also remove the color cast in heavily green-cast images. However, the global white balancing used to remove the cast distorts the water color and introduces unnatural reds in the scene.

UWCNN, which was trained for oceanic type I waters, introduces a pink cast to oceanic images. This might be caused by the synthesized training dataset, which does not contain objects naturally present in water. Owing to the lack of appropriate training datasets, neural networks, which are to date trained on synthetic data, still underperform in the underwater image restoration task.

Conclusion

The survey paper has provided clear and detailed information about the degradation of underwater images and the enhancement techniques for improving image quality. It gives an overall insight into restoration techniques using neural networks and physics-based methods. The datasets and subjective tasks required for underwater image filtering are also covered.

References

https://arxiv.org/pdf/2012.12258.pdf

https://puiqe.eecs.qmul.ac.uk/Demo

https://blogs.mathworks.com/headlines/2020/01/20/computer-vision-algorithm-removes-the-water-from-underwater-images/

https://www.deryaakkaynak.com/research
