Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012 (1 million images, 1,000 classes) at resolution 224x224. It was introduced in the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Dosovitskiy et al. and first released in this repository. However, the weights were converted from the timm repository by Ross Wightman, who had already converted the weights from JAX to PyTorch. Credits go to him.
Disclaimer: The team releasing ViT did not write a model card for this model, so this model card has been written by the Hugging Face team.
Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, at the same resolution, 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. A [CLS] token is prepended to the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder. At 224x224 resolution with 16x16 patches, this yields 14x14 = 196 patch tokens, or 197 tokens including [CLS].
Through pre-training, the model learns an inner representation of images that can then be used to extract features for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.
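As an illustration, here is a minimal sketch of this feature-extraction setup using the ViTModel class from Transformers. The classifier head, the number of labels, and the training data are hypothetical placeholders, not part of this model card:

```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-224')
model = ViTModel.from_pretrained('google/vit-large-patch16-224')

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, 197, 1024) for ViT-Large at 224x224:
# 196 patch tokens plus the [CLS] token at position 0.
cls_embedding = outputs.last_hidden_state[:, 0]

# A hypothetical linear classifier on top of the [CLS] embedding:
num_labels = 10  # assumption: a 10-class downstream dataset
classifier = torch.nn.Linear(cls_embedding.shape[-1], num_labels)
logits = classifier(cls_embedding)
```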
Intended uses & limitations
You can use the raw model for image classification. See the model hub to look for fine-tuned versions on a task that interests you.
How to use
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

# Load a sample image from the COCO 2017 validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-large-patch16-224')

# Resize and normalize the image, returning PyTorch tensors
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits

# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch. TensorFlow and JAX/Flax support are coming soon, and the API of ViTFeatureExtractor might change.
Training data
The ViT model was pretrained on ImageNet-21k, a dataset consisting of 14 million images and 21k classes, and fine-tuned on ImageNet, a dataset consisting of 1 million images and 1k classes.
Training procedure
Preprocessing
The exact details of preprocessing of images during training/validation can be found here.
Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5).
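A minimal sketch of an equivalent preprocessing pipeline in torchvision, assuming bilinear resizing (ViTFeatureExtractor applies the same resize and per-channel normalization internally):

```python
import torch
from torchvision import transforms
from PIL import Image

# Equivalent preprocessing: resize to 224x224, scale pixels to [0, 1],
# then normalize each RGB channel with mean 0.5 and std 0.5,
# mapping pixel values into the range [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

image = Image.open("example.jpg").convert("RGB")  # placeholder input file
pixel_values = preprocess(image).unsqueeze(0)     # shape: (1, 3, 224, 224)
```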
Pretraining
The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224.
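As a rough illustration of these hyperparameters, here is a PyTorch sketch of linear warmup plus global-norm gradient clipping. This is not the authors' JAX/TPU training code; the stand-in model, random data, optimizer, and learning rate are assumptions for the sketch:

```python
import torch

# Illustrative placeholders only: a tiny stand-in model and random features.
model = torch.nn.Linear(1024, 1000)        # stand-in for ViT-Large + classification head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer and lr

# Linear learning-rate warmup over the first 10k steps
warmup_steps = 10_000
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps)
)

for step in range(100):                     # a few dummy steps for the sketch
    features = torch.randn(4096, 1024)      # batch size 4096, as in pre-training
    labels = torch.randint(0, 1000, (4096,))
    loss = torch.nn.functional.cross_entropy(model(features), labels)
    loss.backward()
    # Gradient clipping at global norm 1, as applied for ImageNet
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```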
Evaluation results
For evaluation results on several image classification benchmarks, we refer to Tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained at a higher resolution (384x384); increasing the model size also yields better performance.
BibTeX entry and citation info
```bibtex
@misc{dosovitskiy2020image,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Alexey Dosovitskiy and Lucas Beyer and Alexander Kolesnikov and Dirk Weissenborn and Xiaohua Zhai and Thomas Unterthiner and Mostafa Dehghani and Matthias Minderer and Georg Heigold and Sylvain Gelly and Jakob Uszkoreit and Neil Houlsby},
  year={2020},
  eprint={2010.11929},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@inproceedings{deng2009imagenet,
  title={ImageNet: A large-scale hierarchical image database},
  author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition},
  pages={248--255},
  year={2009},
  organization={IEEE}
}
```