ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation
ImageReward is the first general-purpose text-to-image human preference reward model (RM). It is trained on a total of 137k pairs of expert comparisons, based on text prompts and corresponding model outputs from DiffusionDB. Through extensive analysis and experiments, we demonstrate that ImageReward outperforms existing text-image scoring methods, such as CLIP, Aesthetic, and BLIP, in understanding human preference in text-to-image synthesis.
Quick Start
Install Dependency
We have integrated the whole repository into a single Python package, `image-reward`. Follow the commands below to prepare the environment:
# Clone the ImageReward repository (containing data for testing)
git clone https://github.com/THUDM/ImageReward.git
cd ImageReward
# Install the integrated package `image-reward`
pip install image-reward
Example Use
We provide example images in the `assets/images` directory of this repo. The example prompt is:
a painting of an ocean with clouds and birds, day time, low depth field effect
Use the following code to get the human preference scores from ImageReward:
import os
import torch
import ImageReward as reward

if __name__ == "__main__":
    prompt = "a painting of an ocean with clouds and birds, day time, low depth field effect"
    img_prefix = "assets/images"
    generations = [f"{pic_id}.webp" for pic_id in range(1, 5)]
    img_list = [os.path.join(img_prefix, img) for img in generations]
    model = reward.load("ImageReward-v1.0")
    with torch.no_grad():
        ranking, rewards = model.inference_rank(prompt, img_list)
        # Print the result
        print("\nPreference predictions:\n")
        print(f"ranking = {ranking}")
        print(f"rewards = {rewards}")
        for index in range(len(img_list)):
            score = model.score(prompt, img_list[index])
            print(f"{generations[index]:>16s}: {score:.2f}")
The output should look like the following (the exact numbers may differ slightly depending on the compute device):
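The per-image rewards returned above can be used directly for best-of-n selection, i.e. keeping only the generation the reward model prefers. The sketch below shows that selection logic in isolation; the reward values are hypothetical placeholders standing in for the outputs of `model.score(prompt, image_path)`:

```python
# Best-of-n selection: keep the generation with the highest reward.
# The reward values below are hypothetical placeholders; in practice
# they would come from model.score(prompt, image_path).
generations = ["1.webp", "2.webp", "3.webp", "4.webp"]
rewards = [0.62, -1.30, 1.95, 0.08]  # hypothetical scores

# Pair each image with its reward and sort from best to worst.
ranked = sorted(zip(generations, rewards), key=lambda pair: pair[1], reverse=True)

best_image, best_reward = ranked[0]
print(f"best generation: {best_image} (reward = {best_reward:.2f})")
```

Higher reward means the model predicts stronger human preference, so sorting in descending order puts the preferred generation first.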
Citation
@misc{xu2023imagereward,
title={ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation},
author={Jiazheng Xu and Xiao Liu and Yuchen Wu and Yuxuan Tong and Qinkai Li and Ming Ding and Jie Tang and Yuxiao Dong},
year={2023},
eprint={2304.05977},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
More Information
ImageReward is open source on GitHub and is also hosted on huggingface.co, where the model can be tried online for free and called through an API (Node.js, Python, or HTTP).