rahul7star / zimage-tiny

huggingface.co
Total runs: 0
24-hour runs: 0
7-day runs: 0
30-day runs: 0
Last updated: December 17, 2025
text-to-image

Model details

This tiny model is intended for debugging. It is randomly initialized, with its config adapted from Tongyi-MAI/Z-Image-Turbo.

File sizes:

  • 2.4MB text_encoder/model.safetensors
  • 1.4MB transformer/diffusion_pytorch_model.safetensors
  • 0.5MB vae/diffusion_pytorch_model.safetensors
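Those sizes can be sanity-checked without loading torch: a .safetensors file begins with an 8-byte little-endian header length followed by that many bytes of JSON metadata describing every tensor. A minimal sketch (the tensor name and the synthetic file are illustrative, not taken from this repo):

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    # The format: 8-byte little-endian header length, then that many
    # bytes of JSON mapping tensor name -> {dtype, shape, data_offsets}.
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Build a tiny synthetic file to demonstrate the reader.
header = {"weight": {"dtype": "BF16", "shape": [4, 2], "data_offsets": [0, 16]}}
payload = json.dumps(header).encode("utf-8")
path = tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False).name
with open(path, "wb") as f:
    f.write(struct.pack("<Q", len(payload)))
    f.write(payload)
    f.write(b"\x00" * 16)  # 4*2 bf16 values, 2 bytes each

meta = read_safetensors_header(path)
print(meta["weight"]["shape"])  # [4, 2]
```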
Example usage:
import torch
from diffusers import ZImagePipeline

model_id = "tiny-random/z-image"
torch_dtype = torch.bfloat16
device = "cuda"
pipe = ZImagePipeline.from_pretrained(model_id, torch_dtype=torch_dtype)
pipe = pipe.to(device)

prompt = "Flowers and trees"
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    num_inference_steps=9,  # This actually results in 8 DiT forwards
    guidance_scale=0.0,     # Guidance should be 0 for the Turbo models
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
print(image)
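The generator=torch.Generator("cuda").manual_seed(42) argument pins the sampling noise so repeated runs produce the same image. The seeding idea itself, sketched with stdlib random as a torch-free stand-in:

```python
import random

def sample_noise(n, seed):
    # Stand-in for a seeded torch.Generator: a fixed seed makes
    # every "random" draw reproducible across runs.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

print(sample_noise(4, seed=42) == sample_noise(4, seed=42))  # True
print(sample_noise(4, seed=42) == sample_noise(4, seed=43))  # False
```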
Code to create this repo:
import json

import torch
from diffusers import (
    AutoencoderKL,
    FlowMatchEulerDiscreteScheduler,
    ZImagePipeline,
    ZImageTransformer2DModel,
)
from huggingface_hub import hf_hub_download
from transformers import AutoConfig, AutoTokenizer, Qwen3Model
from transformers.generation import GenerationConfig

source_model_id = "Tongyi-MAI/Z-Image-Turbo"
save_folder = "/tmp/tiny-random/z-image"

torch.set_default_dtype(torch.bfloat16)
scheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(
    source_model_id, subfolder='scheduler')
tokenizer = AutoTokenizer.from_pretrained(
    source_model_id, subfolder='tokenizer')

def save_json(path, obj):
    from pathlib import Path
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(obj, f, indent=2, ensure_ascii=False)

def init_weights(model):
    torch.manual_seed(42)
    with torch.no_grad():
        for name, p in sorted(model.named_parameters()):
            torch.nn.init.normal_(p, 0, 0.1)
            print(name, p.shape, p.dtype, p.device)

with open(hf_hub_download(source_model_id, filename='text_encoder/config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config = json.load(f)
    config.update({
        "head_dim": 32,
        'hidden_size': 8,
        'intermediate_size': 32,
        'max_window_layers': 1,
        'num_attention_heads': 8,
        'num_hidden_layers': 2,
        'num_key_value_heads': 4,
        'tie_word_embeddings': True,
    })
    save_json(f'{save_folder}/text_encoder/config.json', config)
    text_encoder_config = AutoConfig.from_pretrained(
        f'{save_folder}/text_encoder')
    text_encoder = Qwen3Model(text_encoder_config).to(torch.bfloat16)
    generation_config = GenerationConfig.from_pretrained(
        source_model_id, subfolder='text_encoder')
    text_encoder.generation_config = generation_config
    init_weights(text_encoder)

with open(hf_hub_download(source_model_id, filename='transformer/config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config = json.load(f)
    config.update({
        'dim': 64,
        'axes_dims': [8, 8, 16],
        'n_heads': 2,
        'n_kv_heads': 4,
        'n_layers': 2,
        'cap_feat_dim': 8,
        'in_channels': 8,
    })
    save_json(f'{save_folder}/transformer/config.json', config)
    transformer_config = ZImageTransformer2DModel.load_config(
        f'{save_folder}/transformer')
    transformer = ZImageTransformer2DModel.from_config(
        transformer_config)
    init_weights(transformer)

with open(hf_hub_download(source_model_id, filename='vae/config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config = json.load(f)
    config.update({
        'layers_per_block': 1,
        'block_out_channels': [32, 32],
        'latent_channels': 8,
        'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D'],
        'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D']
    })
    save_json(f'{save_folder}/vae/config.json', config)
    vae_config = AutoencoderKL.load_config(f'{save_folder}/vae')
    vae = AutoencoderKL.from_config(vae_config)
    init_weights(vae)

pipeline = ZImagePipeline(
    scheduler=scheduler,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    transformer=transformer,
    vae=vae,
)
pipeline = pipeline.to(torch.bfloat16)
pipeline.save_pretrained(save_folder, safe_serialization=True)
print(pipeline)
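Each with-block above follows the same pattern: download the full config, override only the size-related keys, write the result with save_json, and rebuild the component from the shrunken config. A torch-free sketch of that pattern (all key names and values here are illustrative, not the real Z-Image config):

```python
import json
import tempfile
from pathlib import Path

def save_json(path, obj):
    # Same helper as above: create parent dirs, then dump.
    Path(path).parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(obj, f, indent=2, ensure_ascii=False)

def shrink_config(config, overrides):
    # Refuse keys the source config doesn't have: a misspelled override
    # would otherwise be added silently and ignored by the model class.
    unknown = set(overrides) - set(config)
    if unknown:
        raise KeyError(f"not in source config: {sorted(unknown)}")
    return {**config, **overrides}

# Toy stand-in for a downloaded text_encoder config.
full = {"hidden_size": 2560, "num_hidden_layers": 36, "num_attention_heads": 32}
tiny = shrink_config(full, {"hidden_size": 8, "num_hidden_layers": 2})

save_folder = tempfile.mkdtemp()
save_json(f"{save_folder}/text_encoder/config.json", tiny)
print(json.loads(Path(f"{save_folder}/text_encoder/config.json").read_text()))
```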


Model URL

https://huggingface.co/rahul7star/zimage-tiny

Provider: rahul7star
