Salesforce / blip-vqa-base

Model last updated: February 3, 2025
Task: visual-question-answering

Introduction to blip-vqa-base

Model Details of blip-vqa-base

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for BLIP trained on visual question answering, base architecture (with ViT-base backbone).

[Figure: BLIP.gif, overview figure from the official BLIP repository]
TL;DR

The authors write in the abstract:

Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.
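
To make the bootstrapping idea concrete, here is a minimal sketch of the caption-bootstrapping loop (CapFilt) described in the abstract. The captioner and filter below are hypothetical stand-ins for the finetuned modules from the paper, not the actual implementation:

def captioner(image):
    # Hypothetical stand-in for the finetuned captioner from the paper.
    return "a synthetic caption"

def filter_ok(image, caption):
    # Hypothetical stand-in for the image-text matching filter.
    return caption is not None and len(caption) > 0

def capfilt(noisy_web_pairs):
    # Bootstrap a cleaner training set from noisy web image-text pairs:
    # keep web captions the filter accepts, and add synthetic captions
    # from the captioner that also pass the filter.
    clean_pairs = []
    for image, web_caption in noisy_web_pairs:
        if filter_ok(image, web_caption):
            clean_pairs.append((image, web_caption))
        synthetic = captioner(image)
        if filter_ok(image, synthetic):
            clean_pairs.append((image, synthetic))
    return clean_pairs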

Usage

You can use this model to answer natural-language questions about images (visual question answering).

Using the PyTorch model
Running the model on CPU
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

# Load the processor (image/text preprocessing) and the VQA model
processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Fetch the demo image and convert it to RGB
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Encode the image-question pair and generate an answer
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> 1
Running the model on GPU
In full precision
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> 1
In half precision (float16)
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> 1
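Alternatively, the same model can be run through the transformers pipeline API, which wraps the preprocessing and generation steps shown above (the exact output format may vary across transformers versions):

from transformers import pipeline

vqa = pipeline("visual-question-answering", model="Salesforce/blip-vqa-base")
result = vqa(
    image="https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg",
    question="how many dogs are in the picture?",
)
print(result)  # e.g. [{'answer': '1'}]; format may vary by version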
BibTeX and citation info
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi       = {10.48550/ARXIV.2201.12086},
  url       = {https://arxiv.org/abs/2201.12086},
  author    = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords  = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  title     = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

Runs of Salesforce blip-vqa-base on huggingface.co

Total runs: 172.2K
24-hour run growth: -15.6K
3-day run growth: -50.2K
7-day run growth: -104.9K
30-day run growth: -448.2K

More information about the blip-vqa-base model on huggingface.co

blip-vqa-base is released under the BSD-3-Clause license; the full license text is available here:
https://choosealicense.com/licenses/bsd-3-clause

blip-vqa-base on huggingface.co

blip-vqa-base is an AI model hosted on huggingface.co that can be used instantly. huggingface.co supports a free trial of blip-vqa-base as well as paid use, and the model can be called through an API from Node.js, Python, or plain HTTP.
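
As one example of calling the model through an API from Python, here is a sketch using the huggingface_hub client. This assumes the hosted inference endpoint for this model is available; an access token may be required:

from huggingface_hub import InferenceClient

client = InferenceClient()  # pass token="hf_..." if authentication is required
answers = client.visual_question_answering(
    image="https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg",
    question="how many dogs are in the picture?",
    model="Salesforce/blip-vqa-base",
)
print(answers)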

Salesforce blip-vqa-base online free

huggingface.co integrates blip-vqa-base, including API services, and provides a free online trial; you can try blip-vqa-base for free at the link below.

Salesforce blip-vqa-base free online trial URL on huggingface.co:

https://huggingface.co/Salesforce/blip-vqa-base

blip-vqa-base install

blip-vqa-base's code is open source and available on GitHub, where any user can find and install it. The pretrained model is also hosted on huggingface.co, so it can be used there directly for debugging and trials, and free API access is available.

blip-vqa-base install URL on huggingface.co:

https://huggingface.co/Salesforce/blip-vqa-base
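
For local use, the PyTorch examples above need only a few standard packages; a typical setup (the exact package set is an assumption) is:

pip install transformers torch pillow requests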

URL of blip-vqa-base on huggingface.co

https://huggingface.co/Salesforce/blip-vqa-base

Provider of blip-vqa-base

Salesforce
