BEiT (base-sized model, fine-tuned on ImageNet-1k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 384x384. It was introduced in the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong and Furu Wei, and first released in this repository.
Disclaimer: The team releasing BEiT did not write a model card for this model, so this model card has been written by the Hugging Face team.
Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
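As a rough illustration of this masked image modeling objective, here is a minimal sketch using the self-supervised checkpoint microsoft/beit-base-patch16-224-pt22k, which keeps the visual-token prediction head (the fine-tuned 384x384 checkpoint described on this card does not). The 40% masking ratio below is an arbitrary choice for demonstration, not the paper's masking schedule:

from transformers import BeitFeatureExtractor, BeitForMaskedImageModeling
from PIL import Image
import requests
import torch

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# the self-supervised (pre-trained only) checkpoint keeps the visual-token head
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-224-pt22k')
model = BeitForMaskedImageModeling.from_pretrained('microsoft/beit-base-patch16-224-pt22k')

pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
num_patches = (model.config.image_size // model.config.patch_size) ** 2  # 14 * 14 = 196

# mask ~40% of the patches at random (illustrative ratio only)
bool_masked_pos = torch.rand(1, num_patches) < 0.4

outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)
logits = outputs.logits  # one distribution over the 8,192 visual tokens per patch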
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at the higher resolution of 384x384.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification by mean-pooling the final hidden states of the patches rather than by placing a linear layer on top of the final hidden state of the [CLS] token.
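A quick way to verify these two architectural choices is to inspect the checkpoint's configuration (attribute names as defined by BeitConfig in the transformers library):

from transformers import BeitConfig

config = BeitConfig.from_pretrained('microsoft/beit-base-patch16-384')
print(config.use_relative_position_bias)  # True: T5-style relative position embeddings
print(config.use_mean_pooling)            # True: mean-pool patch states for classification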
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. Alternatively, one can mean-pool the final hidden states of the patch embeddings, and place a linear layer on top of that.
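As a minimal sketch of the feature-extraction use case, one can run the plain BeitModel encoder; loading the classification checkpoint into it simply drops the classifier head:

from transformers import BeitFeatureExtractor, BeitModel
from PIL import Image
import requests
import torch

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384')
model = BeitModel.from_pretrained('microsoft/beit-base-patch16-384')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size);
# token 0 is [CLS], the remaining tokens correspond to the image patches
cls_embedding = outputs.last_hidden_state[:, 0]         # [CLS]-based image feature
mean_pooled = outputs.last_hidden_state[:, 1:].mean(1)  # mean-pooled patch feature

Either feature can serve as the input to a linear classifier trained on your labeled dataset.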
Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you.
How to use
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image
import requests

# load an image from the COCO 2017 validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-base-patch16-384')
model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-384')

# resize to 384x384, rescale and normalize, and return a PyTorch tensor
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# the model predicts one of the 1,000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
Currently, both the feature extractor and model support PyTorch.
Training data
The BEiT model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21,841 classes, and fine-tuned on ImageNet, a dataset consisting of 1 million images and 1,000 classes.
Training procedure
Preprocessing
The exact details of preprocessing of images during training/validation can be found here.
Images are resized/rescaled to the same resolution (224x224 during pre-training, 384x384 during fine-tuning) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
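For reference, here is a torchvision sketch of the inference-time transform for this 384x384 checkpoint; the BeitFeatureExtractor in the usage example performs the equivalent steps internally, though exact interpolation settings may differ:

from PIL import Image
import requests
from torchvision import transforms

image = Image.open(requests.get(
    'http://images.cocodataset.org/val2017/000000039769.jpg', stream=True).raw)

preprocess = transforms.Compose([
    transforms.Resize((384, 384)),                # resize to the fine-tuning resolution
    transforms.ToTensor(),                        # rescale pixel values to [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],
                         std=[0.5, 0.5, 0.5]),    # map to roughly [-1, 1]
])
pixel_values = preprocess(image).unsqueeze(0)     # shape: (1, 3, 384, 384)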
Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the original paper.
Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
BibTeX entry and citation info
@article{DBLP:journals/corr/abs-2106-08254,
  author    = {Hangbo Bao and
               Li Dong and
               Furu Wei},
  title     = {BEiT: {BERT} Pre-Training of Image Transformers},
  journal   = {CoRR},
  volume    = {abs/2106.08254},
  year      = {2021},
  url       = {https://arxiv.org/abs/2106.08254},
  archivePrefix = {arXiv},
  eprint    = {2106.08254},
  timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{deng2009imagenet,
title={Imagenet: A large-scale hierarchical image database},
author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
booktitle={2009 IEEE conference on computer vision and pattern recognition},
pages={248--255},
year={2009},
organization={IEEE}
}