# EfficientViT-l2-cls: Optimized for Mobile Deployment

Imagenet classifier and general-purpose backbone.

EfficientViT is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.

This model is an implementation of EfficientViT-l2-cls found here.

This repository provides scripts to run EfficientViT-l2-cls on Qualcomm® devices. More details on model performance across various devices can be found here.
## Model Details

**Model Type:** Image classification

**Model Stats:**

- Model checkpoint: Imagenet
- Input resolution: 224x224
- Number of parameters: 64M
- Model size: 243 MB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
## Installation

This model can be installed as a Python package via pip:

```shell
pip install "qai-hub-models[efficientvit_l2_cls]"
```
## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.
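With the token in hand, the client can be configured from the command line. This is a sketch using the `qai-hub` CLI installed alongside the package; `API_TOKEN` is a placeholder for your own token:

```shell
# One-time client configuration; replace API_TOKEN with your token
qai-hub configure --api_token API_TOKEN
```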
## Profiling Results
## How does this work?
This [export script](https://aihub.qualcomm.com/models/efficientvit_l2_cls/qai_hub_models/models/EfficientViT-l2-cls/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:
Step 1: **Compile model for on-device deployment**
To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.efficientvit_l2_cls import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace the model in memory, then compile it for the target device
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(torch_model, [torch.tensor(v[0]) for v in sample_inputs.values()])
compile_job = hub.submit_compile_job(model=pt_model, device=device, input_specs=torch_model.get_input_spec())

# The compiled model that runs on-device
target_model = compile_job.get_target_model()
```
Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
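The profiling step above can be sketched with the `qai_hub` client as follows. The `submit_profile_job` call is part of the AI Hub Python API; the wrapper function name and default device are our own choices for illustration:

```python
def profile_on_device(target_model, device_name: str = "Samsung Galaxy S23"):
    """Submit a profiling job for a compiled model on a cloud-hosted device.

    Requires `pip install qai-hub` and a configured API token.
    """
    import qai_hub as hub  # imported lazily so the sketch loads without credentials

    device = hub.Device(device_name)
    # Runs the model on a cloud-provisioned device; the returned job object
    # exposes a URL where on-device performance metrics can be viewed.
    profile_job = hub.submit_profile_job(model=target_model, device=device)
    return profile_job
```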