Simple tensor parallel generation speed test on 2x3090, 4x3060 (GPTQ, AWQ, exl2)
How is there still no real GUI that supports Hunyuan or any other video generation model?
Thoughts on AMD Strix Halo
How to optimize the performance of my GPU?
Linux users
i7-12700 vs i9-12900 with 64 GB DDR5 for use in ComfyUI
Nvidia claims 2x Flux Dev gen speed across the board for all 50XX series GPUs
HowTo: Running FLUX.1 [dev] on A770 (Forge, ComfyUI)
Inference test on RTX3060 x4 vs RTX3090 x2 vs RTX4090 x1
Best Anime Checkpoints/models? Flux? Pony? Illustrious?
In which scenarios is the RTX 4060 Ti 16GB faster than the RTX 4070 12GB?
I couldn't find an updated danbooru tag list for kohakuXL/illustriousXL/Noob so I made my own.
SD Webui Forge, need help with X/Y/Z grid
2 x 3090 vs. 1 x 3090 in training
Built a server to play around with local LLMs in mind. RTX 3060 12GB. Realized that the slot is physically x16 but electrically x8. Screwed?
Krita AI plugin now supports custom comfyui workflows
Flux.dev on a 12 GB VRAM card - best setup?
3x Arc A770 > 2x Tesla P40 ???
[Requesting help] Integration of Open WebUI and Stable Diffusion using Automatic1111
Any tips for switching between multiple CUDA versions?
[Flux] How do I run inference in local Python?
Why use GGUF instead of GPTQ, EXL2 or MARLIN?
Which Linux distro do you use for CUDA 12.1 and vLLM?
SSD or NVMe for ComfyUI