With the rapid development of digital technology, the digital culture industry has become an effective way to meet people's needs for a better life. To satisfy the requirements of cultural transmission, mobile devices must not only capture high-quality images but also ensure that the captured images reflect the true color information of the subjects. However, image color information is easily affected by the scene illumination, resulting in color cast; the problem is especially severe in special scenes such as backlight, night vision, and solid-color backgrounds, and greatly degrades the user shooting experience. In general, an automatic white balance (AWB) module is used to correct color information in the mobile-terminal computational photography system. However, most of the industry's AWB modules have two limitations: first, in the above-mentioned special scenes it is difficult to achieve a color correction effect that satisfies users; second, the AWB module must be individually tuned for each camera type.
The widely used AWB technology sets static color temperature parameters based on the optical statistics of pre-set scenes in the camera color space, and corrects the color information of the current scene by matching it against these pre-set scenes and applying the built-in static parameters. Manually setting static color temperature parameters for pre-set scenes causes the two limitations mentioned above. Machine-learning-based AWB technology greatly improves the performance of image color correction in the special scenes and reduces the need for manual tuning. Its effect, however, depends strongly on the correctness and coverage of the training data. Correctness means that each sample satisfies the single-uniform-illumination requirement, so that the model can be fitted as intended; coverage means that the data is captured by multiple camera types. If the data is limited to only one camera type, machine-learning-based AWB fits that camera's parameters; when the technology is migrated to other camera types, training data must be re-captured to achieve a satisfactory color correction effect, which consumes considerable manpower. Multi-camera data, by contrast, improves the generalization of machine-learning-based AWB across camera types and reduces the workload of collecting data separately for each kind of camera. Because data collection and annotation for AWB are not yet standardized in the industry, it is difficult for the data to meet the requirements of high quality, sufficient quantity, and wide coverage. It is therefore necessary to propose multi-camera data collection and annotation for AWB enhancement, ensuring the coverage of cameras and the validity of illumination, filling the standards gap in the industry, and laying a sufficient data foundation for AWB enhancement.
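To make the statistics-based correction concrete, the following is a minimal sketch of one classical optical-statistics baseline, the gray-world assumption, applied in camera RGB space. It is offered only as an illustration of the kind of correction discussed above; the function names and the gray-world method itself are illustrative and are not prescribed by this proposal.

```python
# Minimal gray-world AWB sketch (illustrative, not part of the proposal).
# Gray-world assumes the average scene reflectance is achromatic, so each
# channel's mean should equal the overall mean after correction.

def gray_world_gains(image):
    """Estimate per-channel gains from channel means (image is rows of RGB pixels)."""
    n = len(image) * len(image[0])
    means = [sum(px[c] for row in image for px in row) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]

def apply_gains(image, gains):
    """Apply per-channel gains, clipping to the [0, 1] display range."""
    return [[[min(1.0, px[c] * gains[c]) for c in range(3)] for px in row]
            for row in image]

# A uniformly green-cast "image": correction should neutralize the cast.
image = [[[0.2, 0.4, 0.2] for _ in range(4)] for _ in range(4)]
corrected = apply_gains(image, gray_world_gains(image))
```

Such statistics-based corrections fail precisely in the special scenes named above (e.g. a solid-color background violates the gray-world assumption), which is one motivation for the machine-learning-based approach.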
Existing international standards for image chromaticity, light sources, and data acquisition can provide support. For example, [ITU-R BT.2246-4 6C/211] establishes standards for high-frame-rate image acquisition of scenes illuminated by flickering lights, providing data acquisition experience for this proposal; [IEC 61966-9:2003] provides detailed specifications for digital camera color measurement, providing support for illumination uniformity testing; [ISO/CIE DIS 11664-2] defines standard light sources and observation environments, providing a reference for scene selection.
This proposal proposes multi-camera data collection and annotation for AWB enhancement. It standardizes the collection process in four aspects: setting the number of camera types, scene selection, shooting settings, and illumination uniformity detection, meeting the requirements of single uniform illumination and wide coverage. It also specifies the annotation content and format in three aspects: illumination information, illumination indicator information, and camera information, ensuring the comprehensiveness of the annotation. This proposal attempts to provide support for the following aspects:
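As an illustration of the three annotation aspects named above, the sketch below shows one possible shape for a per-image annotation record. All field names and values are assumptions chosen for the example; the actual content and format would be defined by the proposal itself.

```python
import json

# Hypothetical annotation record covering the three aspects: illumination
# information, illumination indicator information, and camera information.
# Field names and values are illustrative assumptions, not prescribed fields.
annotation = {
    "illumination": {
        "correlated_color_temperature_K": 5600,       # scene illuminant CCT
        "ground_truth_rgb_gain": [1.82, 1.00, 1.47],  # white-point gains in camera RGB
        "uniformity_check_passed": True,              # single-uniform-illumination check
    },
    "illumination_indicator": {
        "type": "gray_card",                 # neutral reference placed in the scene
        "position_xywh": [120, 96, 64, 64],  # indicator bounding box in pixels
    },
    "camera": {
        "model": "example-sensor-A",  # placeholder camera-type identifier
        "iso": 100,
        "exposure_time_s": 0.008,
    },
}

record = json.dumps(annotation, indent=2)  # serialized for dataset storage
```

A machine-readable format of this kind would let the same multi-camera dataset be consumed uniformly by different AWB training pipelines.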
1) The proposal will guide the transition of AWB enhancement from optical-statistics-based methods to machine-learning-based methods, which helps chip manufacturers enhance AWB module performance and improve its accuracy in the special scenes; the cross-camera data also reduces the workload of collecting data separately for each kind of camera.
2) For the special scenes where the AWB module provided by the chip manufacturer cannot meet user requirements, such as backlight, night vision, and solid-color backgrounds, the proposal helps relevant color constancy technologies improve the effect of color correction. In addition, by ensuring the authenticity of scene color information in the image, it can further support the performance of beautification software.
3) As a sub-standard for the computational photography processing unit, the proposal, together with other standards for the computational photography processing unit, will enrich the standard system of the mobile-terminal computational photography system and promote the standardization of the industry.