CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment

(* Equal contribution)

POSTECH ¹GSAI & ²CSE
CVPR 2024
Teaser.

We present CLIPtone, a text-based image tone adjustment framework trained in an unsupervised manner. Thanks to its strong understanding of natural language, CLIPtone successfully performs adjustments across a wide range of text descriptions, including those previously deemed challenging.

Abstract

Recent image tone adjustment (or enhancement) approaches have predominantly adopted supervised learning to learn human-centric perceptual assessment. However, these approaches are constrained by the intrinsic challenges of supervised learning. Primarily, the requirement for expertly curated or retouched images increases data acquisition costs. Moreover, their coverage of target styles is confined to the stylistic variants represented in the training data.

To surmount these challenges, we propose CLIPtone, an unsupervised learning-based approach to text-based image tone adjustment that extends an existing image enhancement method to accommodate natural language descriptions. Specifically, we design a hyper-network that adaptively modulates the pretrained parameters of the backbone model based on a text description. To assess whether the adjusted image aligns with the text description without a ground-truth image, we leverage CLIP, which is trained on a vast set of language-image pairs and thus encompasses knowledge of human perception. The major advantages of our approach are threefold: (i) minimal data collection expense, (ii) support for a wide range of adjustments, and (iii) the ability to handle novel text descriptions unseen during training. The efficacy of our approach is demonstrated through comprehensive experiments, including a user study.
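Because no ground-truth retouched image is available, CLIP itself serves as the judge of whether an adjustment matches the text. Below is a minimal PyTorch sketch of a CLIP directional loss built on that idea; the function name, prompts, and exact loss form are illustrative assumptions, not the paper's formulation.

import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()

def directional_clip_loss(src_img, adj_img, src_text, tgt_text):
    # Text direction: from a neutral source description to the target description.
    with torch.no_grad():
        tokens = clip.tokenize([src_text, tgt_text]).to(device)
        txt = clip_model.encode_text(tokens).float()
    txt_dir = F.normalize(txt[1:] - txt[:1], dim=-1)

    # Image direction: from the input image to the adjusted image
    # (both are expected as CLIP-preprocessed tensors of shape (B, 3, 224, 224)).
    img_dir = clip_model.encode_image(adj_img).float() - clip_model.encode_image(src_img).float()
    img_dir = F.normalize(img_dir, dim=-1)

    # Penalize misalignment between the image-space and text-space directions.
    return (1.0 - F.cosine_similarity(img_dir, txt_dir)).mean()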

Method

Network architecture.

CLIPtone consists of a text adapter and a tone adjustment network. Given a target text description, the text adapter computes a directional vector in the CLIP embedding space from the source to the target text description and estimates the modulation parameters ∆θ for the AdaInt module and the weight predictor of the tone adjustment network. The modulated tone adjustment network adaptively constructs an image- and text-adaptive 3D LUT by fusing basis 3D LUTs with non-uniform sampling, and finally adjusts the color values of the input image, as sketched below.
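As a rough illustration of this data flow, the hypothetical PyTorch sketch below shows a text adapter producing modulation parameters and a LUT-based tone adjustment network fusing basis 3D LUTs and applying the result via trilinear lookup. Module names, dimensions, and the use of grid_sample are assumptions, and AdaInt's non-uniform sampling is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TextAdapter(nn.Module):
    # Maps a CLIP text direction to modulation parameters (delta theta) for the backbone.
    def __init__(self, clip_dim=512, n_params=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, 256), nn.ReLU(), nn.Linear(256, n_params))

    def forward(self, text_dir):              # (B, clip_dim) source-to-target text direction
        return self.mlp(text_dir)             # (B, n_params)

class ToneAdjustmentNet(nn.Module):
    # Fuses learnable basis 3D LUTs with predicted weights and applies the fused LUT.
    def __init__(self, n_basis=3, lut_size=33, n_params=256):
        super().__init__()
        self.basis_luts = nn.Parameter(
            torch.randn(n_basis, 3, lut_size, lut_size, lut_size) * 0.01)  # RGB -> RGB LUTs
        self.weight_predictor = nn.Linear(n_params, n_basis)

    def forward(self, img, delta_theta):      # img: (B, 3, H, W) in [0, 1]
        w = self.weight_predictor(delta_theta)                          # (B, n_basis)
        lut = torch.einsum("bn,ncdhw->bcdhw", w, self.basis_luts)       # fused (B, 3, D, D, D)
        # Look up each pixel's RGB in the fused LUT (trilinear interpolation).
        grid = img.permute(0, 2, 3, 1).unsqueeze(1) * 2 - 1             # (B, 1, H, W, 3) in [-1, 1]
        out = F.grid_sample(lut, grid, mode="bilinear", align_corners=True)
        return out.squeeze(2)                                           # (B, 3, H, W)

# Usage sketch: text_dir would come from CLIP text embeddings (target minus source).
adapter, tone_net = TextAdapter(), ToneAdjustmentNet()
img = torch.rand(1, 3, 256, 256)
text_dir = torch.randn(1, 512)
adjusted = tone_net(img, adapter(text_dir))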

Results

BibTeX

@inproceedings{lee2024cliptone,
  title={CLIPtone: Unsupervised Learning for Text-based Image Tone Adjustment},
  author={Lee, Hyeongmin and Kang, Kyoungkook and Ok, Jungseul and Cho, Sunghyun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}