InceptionNetV3 is a machine learning model that can classify images from the Imagenet dataset. It can also be used as a backbone in building more complex models for specific use cases.
This model is an implementation of Inception-v3 found here.
This repository provides scripts to run Inception-v3 on Qualcomm® devices.
More details on model performance across various devices can be found here.
Profile Job summary of Inception-v3
--------------------------------------------------
Device: SA8255 (Proxy) (13)
Estimated Inference Time: 1.42 ms
Estimated Peak Memory Range: 0.61-143.48 MB
Compute Units: NPU (219) | Total (219)
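As a rough sanity check on the summary above, the estimated latency implies a single-stream throughput. The 1.42 ms figure is copied from the profile summary and will vary by device and runtime:

```python
# Back-of-envelope throughput implied by the profiled latency above.
latency_ms = 1.42  # estimated inference time from the profile summary

# Single-stream throughput: one inference per latency interval.
throughput_fps = 1000.0 / latency_ms
print(f"~{throughput_fps:.0f} inferences/sec")  # roughly 704
```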
How does this work?
This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:
Step 1: Compile model for on-device deployment
To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.
```python
import torch
import qai_hub as hub
from qai_hub_models.models.inception_v3 import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace the model using sample inputs
sample_inputs = torch_model.sample_inputs()
pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=torch_model.get_input_spec(),
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```
Step 2: Performance profiling on cloud-hosted device
After compiling the model in Step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to a provided job URL to view a variety of on-device performance metrics.
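As a minimal sketch (not the repository's own export script), submitting a profile job can look like the following. It assumes the target_model returned by Step 1 and configured AI Hub credentials, so the import and submission only happen when the helper is actually called:

```python
def profile_compiled_model(target_model, device_name="Samsung Galaxy S23"):
    """Submit a Qualcomm AI Hub profile job for a compiled model.

    Requires configured AI Hub credentials. `target_model` is the object
    returned by compile_job.get_target_model() in Step 1.
    """
    import qai_hub as hub  # imported lazily so the sketch loads without the SDK

    profile_job = hub.submit_profile_job(
        model=target_model,
        device=hub.Device(device_name),
    )
    # profile_job.url points to the dashboard page showing latency, memory,
    # and compute-unit metrics for this run.
    return profile_job
```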