This repository contains the text and image encoders of all MobileCLIP variants exported to Core ML. These Core ML models can be plugged into the demo app provided in the official MobileCLIP repo.
## Highlights
- Our smallest variant, MobileCLIP-S0, obtains zero-shot performance similar to OpenAI's ViT-B/16 model while being 4.8x faster and 2.8x smaller.
- MobileCLIP-S2 obtains better average zero-shot performance than SigLIP's ViT-B/16 model while being 2.3x faster and 2.1x smaller, and is trained on 3x fewer seen samples.
- MobileCLIP-B (LT) attains a zero-shot ImageNet accuracy of 77.2%, which is significantly better than recent works like DFN and SigLIP with similar architectures, or even OpenAI's ViT-L/14@336.
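The zero-shot numbers above come from the standard CLIP recipe: embed the image with the image encoder and each candidate class prompt with the text encoder, L2-normalize both, and take a softmax over the scaled cosine similarities. A minimal NumPy sketch of that scoring step (the embeddings here are random stand-ins, not outputs of the actual Core ML encoders, and the temperature value is illustrative):

```python
import numpy as np

def zero_shot_probs(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot classification:
    softmax over scaled cosine similarities."""
    # L2-normalize so the dot product equals cosine similarity.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)          # one logit per class prompt
    exp = np.exp(logits - logits.max())         # numerically stable softmax
    return exp / exp.sum()

# Stand-in 512-dim embeddings for one image and three class prompts;
# in the app these would come from the exported encoders.
rng = np.random.default_rng(0)
image_emb = rng.standard_normal(512)
text_embs = rng.standard_normal((3, 512))
probs = zero_shot_probs(image_emb, text_embs)
```

The predicted class is simply `probs.argmax()`; the same scoring works unchanged whichever MobileCLIP variant produced the embeddings, since all of them emit fixed-size embedding vectors.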