internlm / CapRL-3B

huggingface.co
Total runs: 175
24-hour runs: 0
7-day runs: -17
30-day runs: -20
Last updated: April 16 2026
image-text-to-text

CapRL-3B

📖 Paper | 🤗 CapRL-3B Model | 🤗 CapRL-2M Dataset | 🤗 CapRL Collection | 🤗 Daily Paper

Introduction

We are excited to introduce CapRL-3B, a lightweight 3B image captioner that achieves perception capabilities comparable to those of Qwen2.5-VL-72B.

This is the first study applying Reinforcement Learning with Verifiable Rewards (RLVR) to the open-ended and subjective task of image captioning. Unlike traditional Supervised Fine-Tuning, which can lead models to memorize a limited set of annotated captions, our method lets the model explore and generate a broader range of creative and general descriptions. CapRL is a new training paradigm built on a decoupled two-stage pipeline: in the first stage, an LVLM generates rich, accurate captions; in the second stage, caption quality is evaluated by having a language-only LLM answer visual questions using only the caption (it never sees the image), so QA accuracy serves as a verifiable reward. We also built a dedicated QA-curation pipeline to ensure the quality of the questions and answers used in this second stage.
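As a rough illustration of the second stage (a minimal sketch, not the paper's actual implementation; every name below is hypothetical), the verifiable reward can be thought of as the QA accuracy that a caption-only answerer achieves:

```python
# Hypothetical sketch of CapRL's verifiable reward: score a caption by how
# many curated questions a language-only answerer gets right when it sees
# ONLY the caption, never the image. Names here are illustrative.

def caption_reward(caption: str, qa_pairs: list[tuple[str, str]], answer_fn) -> float:
    """Fraction of questions answered correctly from the caption alone."""
    if not qa_pairs:
        return 0.0
    correct = 0
    for question, gold in qa_pairs:
        prediction = answer_fn(caption, question)  # language-only LLM call in practice
        if prediction.strip().lower() == gold.strip().lower():
            correct += 1
    return correct / len(qa_pairs)

# Toy answerer standing in for the LLM: "reads" the caption for a keyword.
def toy_answer_fn(caption: str, question: str) -> str:
    return "red" if "red" in caption else "unknown"

reward = caption_reward(
    "A red kite flies over the beach.",
    [("What color is the kite?", "red")],
    toy_answer_fn,
)
```

A caption that omits information the questions probe for scores lower, which is what pushes the policy toward complete, accurate descriptions.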

By applying the CapRL training framework, initializing from Qwen2.5-VL-3B, and training on a carefully filtered 75K QA dataset, we obtained a highly capable captioner, CapRL-3B.

[Figure: Main Results on GPT2]

Key Features
  • Remarkable visual understanding of charts, infographics, and documents: CapRL-3B achieves perception accuracy and visual-information coverage comparable to Qwen2.5-VL-72B.
  • Well-organized output: CapRL-3B's outputs are well-structured, making them clear and easy to read.
  • Detailed descriptions of natural images: CapRL-3B's outputs cover the salient visual information while exhibiting fewer hallucinations.
Usage

If you want to use CapRL-3B for captioning, you can directly follow the same inference approach as the Qwen2.5-VL series.

We recommend using vLLM to speed up inference.
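If vLLM is not already installed, it is typically available from PyPI (exact version requirements for Qwen2.5-VL support may vary; check the vLLM documentation):

```shell
pip install vllm
```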

Start an OpenAI API Service

Run the command below to start an OpenAI-compatible API service:

vllm serve "/PATH/CapRL-3B" \
    --trust-remote-code \
    --tensor-parallel-size 1 \
    --pipeline-parallel-size 1 \
    --gpu-memory-utilization 0.95 \
    --served-model-name caprl \
    --port 8000 \
    --host 0.0.0.0
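Once the server is up, you can sanity-check it by listing the served models via the standard OpenAI-compatible endpoint that vLLM exposes (requires the server started above to be running):

```shell
curl http://localhost:8000/v1/models
```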

Then you can call the chat API as shown below (see the OpenAI API reference for more details):

import base64
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"
client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)
image_path = "/path/to/local/image.png"
with open(image_path, "rb") as f:
    encoded_image = base64.b64encode(f.read())
encoded_image_text = encoded_image.decode("utf-8")
base64_qwen = f"data:image/png;base64,{encoded_image_text}"
chat_response = client.chat.completions.create(
    model="caprl",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": base64_qwen
                    },
                },
                {"type": "text", "text": "What is the text in the illustration?"},
            ],
        },
    ],
    temperature=1.0,
    max_tokens=2048,  # maximum caption length; adjust as needed
    top_p=1.0,
    extra_body={
        "repetition_penalty": 1.0,
        },
)
print("Chat response:", chat_response)
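The full response object is verbose; with the standard OpenAI chat-completions schema, the generated caption itself lives at `choices[0].message.content`. A minimal sketch, using a mocked object in place of the real `chat_response` above:

```python
from types import SimpleNamespace

# Minimal stand-in for the response returned by client.chat.completions.create,
# mirroring the OpenAI chat-completions schema.
chat_response = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="A detailed caption."))]
)

# The caption text is nested under the first choice's message.
caption = chat_response.choices[0].message.content
print(caption)
```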
Cases

[Figures: Main Results on GPT2]


License

CapRL-3B is released under the Apache-2.0 license: https://choosealicense.com/licenses/apache-2.0

Model page

https://huggingface.co/internlm/CapRL-3B

Provider

internlm
