Readme
Code Llama is a family of Llama 2 models fine-tuned for coding. This is CodeLlama-7b, a 7-billion-parameter model tuned for code completion.
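As a minimal sketch of how a completion prompt for this model can be built: the base model continues a raw code prefix left-to-right, and the Code Llama family also supports fill-in-the-middle prompting with `<PRE>`/`<SUF>`/`<MID>` sentinel tokens. The exact sentinel handling can vary by deployment, so treat the format below as an illustration rather than a guaranteed API.

```python
def infill_prompt(prefix: str, suffix: str) -> str:
    """Build a fill-in-the-middle prompt: the model generates the code
    that belongs between `prefix` and `suffix`.

    Sentinel layout follows the Code Llama infilling format; a given
    serving stack may expose this differently (assumption, not a
    documented contract of this endpoint).
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"


# Ask the model to fill in the body of a small function.
prompt = infill_prompt("def add(a, b):\n    return ", "\n")
print(prompt)
```

For plain completion, the prompt is simply the code prefix itself; the string produced here would be sent as the `prompt` input to whatever client or HTTP endpoint serves the model.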
