
Culture-inspired Multi-modal Color Palette Generation and Colorization: A Chinese Youth Subculture Case

Introduction

This repository is the official implementation of Culture-inspired Multi-modal Color Palette Generation and Colorization: A Chinese Youth Subculture Case.
Presented at The 3rd IEEE Workshop on Artificial Intelligence for Art Creation.
Links: paper | video
(the paper has not been published yet; links will be added once they are available)

Subcultural youth groups in China have their own unique color style. For example, in traditional Chinese color conventions, the combination of red and green is considered unpleasant (there is a proverb: '红配绿,赛狗屁', roughly 'red paired with green is hideous'). However, this same color combination represents a cool and rebellious style for Chinese Youth Subculture (CYS) groups, as illustrated by a number of posters found on popular CYS websites.

[Figure: red_with_green]

In order to study this unique color style and create an intelligent palette generation and colorization tool for CYS groups, we started this project.

CYS color dataset

[Figure: five example entries from the CYS color dataset]

The CYS color dataset contains 1263 images, each with a corresponding 5-color palette, descriptive text, and category. The figure above shows five examples from the dataset.

We provide 100 samples of our dataset here; the full version will be available later.
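For reference, a minimal sketch of how the released samples might be loaded, assuming a CSV layout with one row per image; the file path and the column names (image_path, palette, text, category) are hypothetical and may differ from the actual release:

# Minimal sketch of reading the sample annotations; path and column names are hypothetical.
import pandas as pd

samples = pd.read_csv("data/cys_samples.csv")  # hypothetical path to the released CSV
for _, row in samples.head(3).iterrows():
    # A 5-color palette could be stored, e.g., as five hex codes separated by spaces.
    palette = row["palette"].split()
    print(row["image_path"], row["category"], palette, row["text"][:30])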

Framework

Our framework includes two separately trained networks: the color palette generation network and the colorization network.

[Figure: overview of the two-stage training framework]

The first network is a conditional GAN (cGAN) trained with multi-modal input to generate CYS color palettes.

The second network, also a cGAN, is trained to colorize the input image according to the palette generated by the first network.
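At inference time the two networks are chained: the palette produced by the first cGAN conditions the second. The sketch below only illustrates this flow; the object and method names (generate_palette, colorize) are hypothetical and are not this repository's API.

# Conceptual sketch of the two-stage inference flow; method names are hypothetical.
def stylize(image, text, text2palette_cgan, colorization_cgan):
    # Stage 1: the multi-modal cGAN maps the image/text condition (plus noise)
    # to a 5-color CYS palette.
    palette = text2palette_cgan.generate_palette(image, text)
    # Stage 2: the second cGAN recolors the input image conditioned on that palette.
    return colorization_cgan.colorize(image, palette)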

Demo and Results

We have developed a demo system to materialize our framework, where users can obtain an image colored in the CYS style in three steps: palette generation, color adjustment, and colorization.

[Figure: screenshot of the demo system]

You can find the video of our demo on YouTube.

We used the demo system to generate some sample results:

[Figure: sample result 1]

[Figure: sample result 2]

Running Models

Setup

Make sure you have Python >= 3.6 installed.

Prepare a virtual environment and install the required packages.

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Download Pre-trained Models

As mentioned in Framework, the framework contains two cGAN models. A separate checkpoint needs to be downloaded for each: the text2palette pre-trained model and the colorization pre-trained model.

Play with Pre-trained Models

Now update the configuration for the Streamlit demo in config.ini. Set the parameters under [streamlit] to the paths where you stored the downloaded checkpoints.

...
[streamlit]
t2p_ckpt_path = /PATH/TO/TEXT2PALETTE
col_ckpt_path = /PATH/TO/COLORIZATION

Then run the demo by:

streamlit run st_demo.py

Now you can visit http://localhost:8501 to play with the pre-trained demo.

Training Models

Prepare training data

Everything you need for the preprocessing step is implemented in data_preprocess.py; a minimal usage sketch follows the steps below.

  1. Change the paths to the correct ones and run download_image_and_write_csv().

  2. If everything goes right, you will find images in ./data/images and a CSV file named preprocessed_data.csv in ./data (its first column differs from the original data).

  3. Then run augment_preprocess_data() and you will get a CSV file named augment_data.csv.
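A minimal sketch of driving both steps from Python, assuming the two functions can be imported directly from data_preprocess and that the paths inside that file have already been edited:

# Minimal sketch: run the two preprocessing steps from data_preprocess.py.
# Assumes the paths inside data_preprocess.py have already been set correctly.
from data_preprocess import download_image_and_write_csv, augment_preprocess_data

download_image_and_write_csv()   # writes ./data/images/* and ./data/preprocessed_data.csv
augment_preprocess_data()        # writes ./data/augment_data.csv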

Change the configuration in config.ini as needed.

Notice: [text2palette] is for the color palette generation network and [colorization] is for the colorization network.

Detailed description of each field:
batch_size - batch size of the training dataset
learning_rate - learning rate of the optimizer
beta_1 - parameter of the Adam optimizer
max_iteration_number - total number of training steps
print_every - print results every print_every steps
checkpoint_every - save a checkpoint every checkpoint_every steps
checkpoint_max_to_keep - the maximum number of checkpoints kept on your machine; older ones will be deleted
checkpoint_dir - the folder where you want to save your checkpoints
sample_dir - the folder where you want to save the samples generated during training
z_dim - dimension of the GAN noise vector
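For reference, a filled-in [text2palette] section might look like the following; all values are illustrative placeholders, not the settings used in the paper:

[text2palette]
batch_size = 32
learning_rate = 0.0002
beta_1 = 0.5
max_iteration_number = 100000
print_every = 100
checkpoint_every = 1000
checkpoint_max_to_keep = 5
checkpoint_dir = ./checkpoints/text2palette
sample_dir = ./samples/text2palette
z_dim = 100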

Start Training

Run text2palette_pipeline.py for training the color palette generation network

python text2palette_pipeline.py --train

and colorization_pipeline.py for training the colorization network.

python colorization_pipeline.py --train
