tiny-random / qwen3-next-moe

huggingface.co
Total runs: 28.8K
24-hour runs: 0
7-day runs: -329
30-day runs: 6.9K
Model last updated: September 12, 2025
text-generation

Introduction to qwen3-next-moe

Model Details of qwen3-next-moe

This tiny model is intended for debugging. It is randomly initialized using a configuration adapted from Qwen/Qwen3-Next-80B-A3B-Instruct.
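
Because every dimension is shrunk to a handful of units, the checkpoint is tiny and loads in seconds, which makes it convenient for smoke-testing serving stacks and integration code without pulling the full 80B model. A minimal sketch to confirm the size at load time (the count is computed at runtime, not quoted from the repo):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("tiny-random/qwen3-next-moe")
# Count parameters to confirm the model is small enough for quick debugging runs.
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params:,}")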

Example usage:
  • vLLM (an OpenAI-compatible client sketch follows the Transformers example below)
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 \
    vllm serve tiny-random/qwen3-next-moe \
    --tensor-parallel-size 4 \
    --max-model-len 262144 \
    --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
  • SGLang
SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 \
python -m sglang.launch_server \
    --model-path tiny-random/qwen3-next-moe \
    --tp-size 4 --context-length 262144 \
    --mem-fraction-static 0.8 \
    --speculative-algo NEXTN \
    --speculative-num-steps 3 \
    --speculative-eagle-topk 1 \
    --speculative-num-draft-tokens 4
  • Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "tiny-random/qwen3-next-moe"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype="auto",
    device_map="cuda",
)
# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8,
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
Code to create this repo:
from copy import deepcopy

import torch
import torch.nn as nn
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
    pipeline,
    set_seed,
)

source_model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"
save_folder = "/tmp/tiny-random/qwen3-next-moe"

tokenizer = AutoTokenizer.from_pretrained(
    source_model_id, trust_remote_code=True,
)
tokenizer.save_pretrained(save_folder)

config = AutoConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)
config._name_or_path = source_model_id
config.hidden_size = 8
config.intermediate_size = 32
config.head_dim = 32
config.num_key_value_heads = 8
config.num_attention_heads = 16
config.num_hidden_layers = 4
config.tie_word_embeddings = False
config.linear_num_key_heads = 8
config.linear_num_value_heads = 16
config.moe_intermediate_size = 32
config.num_experts = 32
config.num_experts_per_tok = 10
config.layer_types = config.layer_types[:4]
config.shared_expert_intermediate_size = 32
model = AutoModelForCausalLM.from_config(
    config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)
# MTP
model.mtp = nn.ModuleDict({
    "pre_fc_norm_embedding": nn.RMSNorm(config.hidden_size),
    "fc": nn.Linear(config.hidden_size * 2, config.hidden_size, bias=False),
    "norm": nn.RMSNorm(config.hidden_size),
    "pre_fc_norm_hidden": nn.RMSNorm(config.hidden_size),
    "layers": nn.ModuleList([deepcopy(model.model.layers[3])]),
})
model = model.to(torch.bfloat16)
set_seed(42)
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.1)
        print(name, p.shape)
model.save_pretrained(save_folder)
Printing the model:
Qwen3NextForCausalLM(
  (model): Qwen3NextModel(
    (embed_tokens): Embedding(151936, 8)
    (layers): ModuleList(
      (0-2): 3 x Qwen3NextDecoderLayer(
        (linear_attn): Qwen3NextGatedDeltaNet(
          (act): SiLU()
          (conv1d): Conv1d(4096, 4096, kernel_size=(4,), stride=(1,), padding=(3,), groups=4096, bias=False)
          (in_proj_qkvz): Linear(in_features=8, out_features=6144, bias=False)
          (in_proj_ba): Linear(in_features=8, out_features=32, bias=False)
          (norm): FusedRMSNormGated(128, eps=1e-06, activation=silu)
          (out_proj): Linear(in_features=2048, out_features=8, bias=False)
        )
        (mlp): Qwen3NextSparseMoeBlock(
          (gate): Linear(in_features=8, out_features=32, bias=False)
          (experts): ModuleList(
            (0-31): 32 x Qwen3NextMLP(
              (gate_proj): Linear(in_features=8, out_features=32, bias=False)
              (up_proj): Linear(in_features=8, out_features=32, bias=False)
              (down_proj): Linear(in_features=32, out_features=8, bias=False)
              (act_fn): SiLU()
            )
          )
          (shared_expert): Qwen3NextMLP(
            (gate_proj): Linear(in_features=8, out_features=32, bias=False)
            (up_proj): Linear(in_features=8, out_features=32, bias=False)
            (down_proj): Linear(in_features=32, out_features=8, bias=False)
            (act_fn): SiLU()
          )
          (shared_expert_gate): Linear(in_features=8, out_features=1, bias=False)
        )
        (input_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
        (post_attention_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
      )
      (3): Qwen3NextDecoderLayer(
        (self_attn): Qwen3NextAttention(
          (q_proj): Linear(in_features=8, out_features=1024, bias=False)
          (k_proj): Linear(in_features=8, out_features=256, bias=False)
          (v_proj): Linear(in_features=8, out_features=256, bias=False)
          (o_proj): Linear(in_features=512, out_features=8, bias=False)
          (q_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
          (k_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
        )
        (mlp): Qwen3NextSparseMoeBlock(
          (gate): Linear(in_features=8, out_features=32, bias=False)
          (experts): ModuleList(
            (0-31): 32 x Qwen3NextMLP(
              (gate_proj): Linear(in_features=8, out_features=32, bias=False)
              (up_proj): Linear(in_features=8, out_features=32, bias=False)
              (down_proj): Linear(in_features=32, out_features=8, bias=False)
              (act_fn): SiLU()
            )
          )
          (shared_expert): Qwen3NextMLP(
            (gate_proj): Linear(in_features=8, out_features=32, bias=False)
            (up_proj): Linear(in_features=8, out_features=32, bias=False)
            (down_proj): Linear(in_features=32, out_features=8, bias=False)
            (act_fn): SiLU()
          )
          (shared_expert_gate): Linear(in_features=8, out_features=1, bias=False)
        )
        (input_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
        (post_attention_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
      )
    )
    (norm): Qwen3NextRMSNorm((8,), eps=1e-06)
    (rotary_emb): Qwen3NextRotaryEmbedding()
  )
  (lm_head): Linear(in_features=8, out_features=151936, bias=False)
  (mtp): ModuleDict(
    (pre_fc_norm_embedding): RMSNorm((8,), eps=None, elementwise_affine=True)
    (fc): Linear(in_features=16, out_features=8, bias=False)
    (norm): RMSNorm((8,), eps=None, elementwise_affine=True)
    (pre_fc_norm_hidden): RMSNorm((8,), eps=None, elementwise_affine=True)
    (layers): ModuleList(
      (0): Qwen3NextDecoderLayer(
        (self_attn): Qwen3NextAttention(
          (q_proj): Linear(in_features=8, out_features=1024, bias=False)
          (k_proj): Linear(in_features=8, out_features=256, bias=False)
          (v_proj): Linear(in_features=8, out_features=256, bias=False)
          (o_proj): Linear(in_features=512, out_features=8, bias=False)
          (q_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
          (k_norm): Qwen3NextRMSNorm((32,), eps=1e-06)
        )
        (mlp): Qwen3NextSparseMoeBlock(
          (gate): Linear(in_features=8, out_features=32, bias=False)
          (experts): ModuleList(
            (0-31): 32 x Qwen3NextMLP(
              (gate_proj): Linear(in_features=8, out_features=32, bias=False)
              (up_proj): Linear(in_features=8, out_features=32, bias=False)
              (down_proj): Linear(in_features=32, out_features=8, bias=False)
              (act_fn): SiLU()
            )
          )
          (shared_expert): Qwen3NextMLP(
            (gate_proj): Linear(in_features=8, out_features=32, bias=False)
            (up_proj): Linear(in_features=8, out_features=32, bias=False)
            (down_proj): Linear(in_features=32, out_features=8, bias=False)
            (act_fn): SiLU()
          )
          (shared_expert_gate): Linear(in_features=8, out_features=1, bias=False)
        )
        (input_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
        (post_attention_layernorm): Qwen3NextRMSNorm((8,), eps=1e-06)
      )
    )
  )
)
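
For reference, the wide projections inside Qwen3NextGatedDeltaNet above are determined by the linear-attention settings rather than by hidden_size. A small sketch of the arithmetic, assuming the source config's default linear head dimensions of 128 (these two values are assumptions; they are not overridden in the creation script):

# Linear-attention dimensions implied by the tiny config
linear_num_key_heads = 8       # set in the creation script
linear_num_value_heads = 16    # set in the creation script
linear_key_head_dim = 128      # assumed default from the source config
linear_value_head_dim = 128    # assumed default from the source config

key_dim = linear_num_key_heads * linear_key_head_dim        # 1024
value_dim = linear_num_value_heads * linear_value_head_dim  # 2048

assert 2 * key_dim + value_dim == 4096        # conv1d channels in the printout
assert 2 * key_dim + 2 * value_dim == 6144    # in_proj_qkvz out_features
assert value_dim == 2048                      # out_proj in_features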

Runs of tiny-random qwen3-next-moe on huggingface.co

Total runs: 28.8K
24-hour runs: 0
3-day runs: -605
7-day runs: -329
30-day runs: 6.9K

More Information About the qwen3-next-moe Model on huggingface.co

qwen3-next-moe on huggingface.co

qwen3-next-moe is hosted on huggingface.co and can be used instantly as the tiny-random qwen3-next-moe model. huggingface.co offers a free trial of qwen3-next-moe as well as paid usage, and the model can be called through an API from Node.js, Python, or plain HTTP.

tiny-random qwen3-next-moe online for free

huggingface.co acts as an online trial and API platform for qwen3-next-moe: it hosts the model, provides API services around it, and offers a free online trial, which you can start from the link below.

Free online trial URL for tiny-random qwen3-next-moe on huggingface.co:

https://huggingface.co/tiny-random/qwen3-next-moe

qwen3-next-moe install

qwen3-next-moe is an openly available model that anyone can download and install for free. huggingface.co also hosts a ready-to-use copy of qwen3-next-moe, so users can debug and try the model directly on huggingface.co before installing it locally, and free API access is supported as well.

qwen3-next-moe install URL on huggingface.co:

https://huggingface.co/tiny-random/qwen3-next-moe
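
To work with the repository files locally (for example to inspect the config or safetensors before loading), the whole repo can be fetched with huggingface_hub; a minimal sketch:

from huggingface_hub import snapshot_download

# Download config, tokenizer, and weights; returns the local cache path.
local_dir = snapshot_download("tiny-random/qwen3-next-moe")
print("downloaded to:", local_dir)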

URL of qwen3-next-moe

qwen3-next-moe huggingface.co URL:

https://huggingface.co/tiny-random/qwen3-next-moe

Provider of qwen3-next-moe on huggingface.co

tiny-random (organization)
