city96 / Flux.1-Heavy-17B

huggingface.co
Total runs: 192
24-hour runs: 2
3-day runs: 3
7-day runs: 5
30-day runs: 169
Model last updated: November 19, 2024
text-to-image


[Cover image]

Do you feel like you have too much VRAM lately? Want to OOM on a 40GB A100? This is the model for you!


About

This is a 17B self-merge of the original 12B parameter Flux.1-dev model.

Merging was done similarly to 70B->120B LLM merges, with the layers repeated and interwoven in groups.

Final model stats:
 p layers: [    32]
 s layers: [    44]
 n params: [17.17B]
Training

Some post-merge training was done to try and reverse the extensive braindamage the model has suffered, but even after that this is mostly a proof of concept due to not having any hardware capable of properly training this. Still, I think it might be the first open source 17B image model that actually generates coherent images, even if it's just a self-merge.

You can see the text recovering with training. Leftmost image is step 0 base merge:

[Training progress images]


Usage

Good luck.

Diffusers

Should work with the standard inference pipeline; from_single_file seems to need the custom layer counts passed explicitly:

from diffusers import FluxTransformer2DModel

model = FluxTransformer2DModel.from_single_file("flux.1-heavy-17B.safetensors", num_layers=32, num_single_layers=44)
Comfy

Just load it normally via the "Load Diffusion Model" node. You need around 80 GB of system RAM on Windows for it to not swap to disk.

It requires about 35-40 GB of VRAM for inference, assuming you offload the text encoder and unload it during VAE decoding. Partial offloading works if you have enough system RAM.
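That 35-40 GB figure lines up with a back-of-the-envelope estimate: 17.17B parameters at 2 bytes each in BF16 is roughly 34 GB for the weights alone, before activations and overhead. A quick sanity check:

```python
# Back-of-the-envelope VRAM estimate for the 17.17B-parameter model in BF16.
# Weights only; activations, the text encoder, and the VAE add more on top.
params = 17.17e9
bytes_per_param = 2  # BF16 = 2 bytes per parameter

weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.1f} GB of weights")  # ~34.3 GB
```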

Training

Seems to work out of the box with ostris/ai-toolkit, at least it did when I pointed config -> process -> model -> name_or_path at the model in a local folder.


Q&A :

Should I use this model?

Not unless you want to brag about it or, God forbid, train it into something usable.

Where is the merge script?

It's a mess of three or four scripts and some questionable manual editing of some of the biases. You can replicate it by placing the layers after each other with some overlap, similar to this, and leaving the later single layers alone.
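For intuition, the passthrough-style interleave can be sketched as list surgery over block indices. Flux.1-dev has 19 double blocks and 38 single blocks, and the merge has to land on 32 and 44 respectively; the group boundaries below are made up for illustration and are NOT the ones used for this model:

```python
# Passthrough-style self-merge sketch: repeat source layer indices in
# overlapping groups, as in 70B -> 120B LLM frankenmerges.
# Group boundaries here are ILLUSTRATIVE, not the actual ones used.

def interleave(groups):
    """Concatenate overlapping [start, end) index ranges into one layer order."""
    order = []
    for start, end in groups:
        order.extend(range(start, end))
    return order

# Flux.1-dev: 19 double blocks -> 32, 38 single blocks -> 44
double_order = interleave([(0, 12), (6, 16), (9, 19)])  # 12 + 10 + 10 = 32
single_order = interleave([(0, 24), (18, 38)])          # 24 + 20 = 44

print(len(double_order), len(single_order))  # 32 44
```

The merged model is then built by copying the original block weights in this repeated order, which is why the parameter count grows without any new weights being trained.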

The merged (untrained) weights are in this repo in the raw folder. You can load them with FluxTransformer2DModel.from_single_file and then call save_pretrained if you need them in the diffusers format.

GGUF? FP8?

This model is best experienced in BF16 precision, for the full experience of running out of every kind of resource at the same time while trying to run it.

(I'd put some up, but I'm out of RunPod credits again.)

Settings? LoRA compatibility?

Just use the same settings you'd use for regular Flux. LoRAs do seem to have at least some effect, but the blocks don't line up, so don't expect them to work amazingly.

Does this generate coherent images?

Yes, but text rendering and general prompt adherence can be questionable. Example failure mode for text:

[Example image]

Was the cover image cherrypicked?

Of course.


Model URL: https://huggingface.co/city96/Flux.1-Heavy-17B
