The primary use of ViG is research on vision tasks, e.g., classification, detection, semantic segmentation, and instance segmentation, with a GLA-based (Gated Linear Attention) backbone.
The primary intended users of the model are researchers and hobbyists in computer vision, machine learning, and artificial intelligence.
Training Details
ViG is pretrained on ImageNet-1K with classification supervision.
The training data consists of around 1.3M images from the ImageNet-1K dataset.
See the paper for more details.
Evaluation
ViG is evaluated on the ImageNet-1K validation set; more details can be found in the paper.
Additional Information
Citation Information
@article{vig,
title={ViG: Linear-complexity Visual Sequence Learning with Gated Linear Attention},
author={Bencheng Liao and Xinggang Wang and Lianghui Zhu and Qian Zhang and Chang Huang},
journal={arXiv preprint arXiv:2405.18425},
year={2024}
}