ComfyUI SDXL Turbo: notes and tips compiled from Reddit discussions.
Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows.

Stability AI has released SDXL Turbo, a model that can generate an image in as little as one step; with ComfyUI, a single image takes well under a second. When it comes to sampling steps, Dreamshaper SDXL Turbo does not hold any real advantage over LCM. SDXL Turbo and SDXL Lightning are fairly new approaches that likewise produce images rapidly, in roughly 1 to 8 steps.

Getting started: Step 1: download the SDXL Turbo checkpoint. Step 2: download a sample image to test with.

One user runs a basic SDXL Turbo workflow behind a Flask app, using MediaPipe alongside it; another paints in Krita with a ComfyUI backend on an RTX 2070. With OneFlow's OneDiff optimization (compiled UNet and VAE), SDXL Turbo can run roughly 38% faster. Even an older card like an EVGA GTX 1080 Ti FTW3 (11 GB) can run it.

If you would rather not use Turbo models at all, LCM is an alternative: you need one LoRA for LCM with SD1.5 and a different LoRA to use LCM with SDXL, and either way that gives you super-fast generations from your choice of SD1.5 or SDXL checkpoints. For base SDXL, 1024x1024 is the intended output resolution, although other aspect ratios with a similar pixel count also work.
This is the first time I have tried local generation on my own computer, and I get that good vibe, like discovering Stable Diffusion all over again. SDXL Turbo achieves state-of-the-art performance with a new distillation technology, enabling single-step image generation with unprecedented quality and reducing the required step count from 50 to just one. SDXL Lightning is billed as an "improved" version of SDXL.

When faces come out rough, it appears necessary to apply FaceDetailer afterwards. There is also an "SDXL + Image Distortion" custom workflow/mini tutorial that anyone can use: it contains the whole sampler setup for SDXL plus an additional digital distortion filter. One user even installed SDXL Turbo on a server for others to use for free.

A checkpoint-merging tip shared by an SD dev on the Stable Diffusion Discord: Turbo XL checkpoint -> merge subtract -> base SDXL checkpoint -> merge add -> whatever finetune checkpoint you want. This bakes Turbo behavior into your favorite finetune.
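The subtract-then-add recipe is plain weight arithmetic: finetune + (turbo - base) transplants the Turbo "delta" onto a finetuned checkpoint. A minimal sketch of the idea, with short Python lists standing in for weight tensors (a real merge iterates over every tensor in the state dict; all values here are illustrative):

```python
# Sketch of the "merge subtract -> merge add" checkpoint recipe.
# Plain lists of floats stand in for the weight tensors of a state dict.

def merge_subtract(a, b):
    """Element-wise a - b (e.g. turbo - base, isolating the Turbo delta)."""
    return [x - y for x, y in zip(a, b)]

def merge_add(a, b):
    """Element-wise a + b (e.g. finetune + delta)."""
    return [x + y for x, y in zip(a, b)]

# Toy "weights" for three checkpoints sharing one architecture.
base     = [0.10, 0.20, 0.30]   # base SDXL
turbo    = [0.15, 0.18, 0.33]   # SDXL Turbo
finetune = [0.12, 0.25, 0.28]   # your favourite finetune

delta  = merge_subtract(turbo, base)   # what Turbo training changed
merged = merge_add(finetune, delta)    # finetune with Turbo behaviour

print([round(w, 2) for w in merged])   # [0.17, 0.23, 0.31]
```

The "simple merge" variant mentioned below is instead a weighted average of the Turbo and finetune weights; both approaches only make sense between checkpoints with identical architectures.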
You can use more steps to increase quality, and Dreamshaper SDXL Turbo is a community variant of SDXL Turbo worth trying. The proper way to use SDXL Turbo in ComfyUI is with the new SDTurboScheduler node, though it may also work with the regular schedulers. Honestly, you can probably just swap out the model and put in the turbo scheduler; LoRAs do not seem to work properly with it yet, but you can feed the images into a regular SDXL model to touch them up during generation (slower, admittedly). I then combine it with some combination of Depth, Canny and OpenPose ControlNets.

On an 8 GB RTX 3060 Ti with 32 GB of system RAM, it takes around 34 seconds per 1024x1024 image. A later benchmark with OneFlow's OneDiff optimization (compiled UNet and VAE) put the speedup at 62%.

If you are new: start by installing ComfyUI Manager (you can google that). One developer has built an application that harnesses the real-time generation capabilities of SDXL Turbo through webcam input. Developed using the Adversarial Diffusion Distillation (ADD) technique, SDXL Turbo builds on SDXL (Stable Diffusion XL), itself a significant leap forward in text-to-image models, offering improved quality and capabilities compared to earlier versions.

Wildcards are fun here too: I mainly use them to generate creatures and monsters in a location. I tried uploading the embedded workflows, but Reddit strips that metadata.
This is how fast Turbo SDXL is in ComfyUI, running on a 4090 accessed over the wireless network from another PC. When I started exploring new ways of SDXL prompting, the results improved more and more over time, and now I am just blown away by what it can do. And SDXL is just a "base model": imagine what custom-trained models will generate in the future.

It is faster for sure, but I was personally more interested in quality than speed; on weak hardware it can still crawl, with one user reporting 3 minutes per image. I am loving the SDXL Turbo-based models popping out this past week. I just want to make many fast portraits and worry about upscaling, fixing and posing later; for now I have no need for custom models or LoRAs. There is also a Live Painting workflow built on it, and an all-new technology for generating high-resolution images based on SDXL, SDXL Turbo, SD 2.1 and SD 1.5.

Basically, when using SDXL models, you can use the SDXL Turbo model to accelerate generation and get good images in 8 steps from your favorite checkpoints. Turbo itself is designed to generate a 0.25 MP image (e.g. 512x512). One caveat: FaceDetailer seems to produce faces that do not blend well with the rest of the image when used after combining SDXL and SD1.5 stages.
A second merge recipe: Turbo XL checkpoint -> simple merge -> whatever finetune checkpoint you want. (Tested on ComfyUI v1754 [777f6b15]; note it is not optimized.)

For Ultimate SD Upscale: set the tiles to 1024x1024 (or your SDXL resolution) and set the tile padding to 128. Additionally, I need to incorporate FaceDetailer into the process; for this prompt and input it might just be img2img with a very high denoise. I already follow this process in Automatic1111, but if I could build it in ComfyUI, I would not have to manually switch to img2img and swap checkpoints like I do in A1111.

If we look at comfyui\comfy\sd2_clip_config.json, SDXL seems to operate at clip skip 2 by default. I did not notice much difference using the TCD sampler versus simply using Euler A with the Simple/SGM schedule and a plain Load LoRA node.

Opinions on quality differ: some find SDXL Turbo horrible for now, while others report that with 5 steps a 4090 generates one 1344x768 image per second. I used to play around with interpolating prompts like this, rendered as batches. One user was also considering manually loading the sdxl-turbo-tensorrt model published by Stability AI.
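The 1024-pixel tile setting keeps each Ultimate SD Upscale pass at SDXL's comfortable resolution; it also determines how many tiles get diffused. A back-of-envelope sketch (the node's exact tiling layout may differ; this simply ceil-divides the output image by the tile size):

```python
import math

def tile_grid(width, height, tile=1024):
    """Rough tile count for an Ultimate-SD-Upscale-style pass.

    Each tile is diffused at `tile` x `tile`; the node additionally blends
    padding (e.g. 128 px) around each tile to hide seams. This is only a
    back-of-envelope estimate, not the node's exact layout.
    """
    cols = math.ceil(width / tile)
    rows = math.ceil(height / tile)
    return rows, cols, rows * cols

# 2x upscale of a 1024x1024 SDXL render -> 2048x2048 output:
rows, cols, n = tile_grid(2048, 2048)
print(n)  # 4 tiles, each sampled at SDXL-friendly 1024x1024
```

Bigger padding costs a little extra compute per tile but gives the sampler more surrounding context, which is why bumping it (and the mask blur) helps with visible seams.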
When you downscale the resolution a bit, it is near-real-time generation that follows your prompt as you type. There is also a LoRA based on SDXL Turbo: you can use it with any Stable Diffusion XL checkpoint and get an image in a few seconds (about 4 seconds on an RTX 3060 at 1024x768); tested on webui 1111. An example workflow using hires fix with SDXL Turbo for great results is on GitHub. Making a list of wildcards, and downloading some from Civitai, brings a lot of fun results.

SDXL Turbo is an SDXL model that can generate consistent images in a single step, and you can run it locally. As a third pass, you can further upscale 1.5x-2x with either SDXL Turbo or an SD1.5 tile upscaler.

A warning: do not install ALL the suggested nodes from ComfyUI Manager's "install missing nodes" feature! It will lead to conflicting nodes with the same name and a crash.

I just published a YouTube tutorial showing how to leverage the new SDXL Turbo model inside ComfyUI for creative workflows; it is super fast and the quality is amazing. Generation drops to about 1.5 seconds, a significant improvement, but it cannot really generate at higher resolutions without creating weird duplicated artifacts. In my own workflow, the goal was to fully utilise the two-stage architecture of SDXL, with the base and refiner models working as stages in latent space. SDXL Turbo is really cool, but currently limited: it has coherency issues and is "native" at only 512x512.
Vanilla SDXL Turbo is designed for 512x512, and it shows. The approach works with SDXL and SDXL Turbo as well as earlier versions like SD1.5; I made a preview of each step to see how the image changes from the SDXL stage to the SD1.5 stage.

InvokeAI natively supports SDXL-Turbo: just drop the HF repo ID into the model manager and let Invoke handle the installation.

On quality: 1-step Turbo has slightly less quality than SDXL at 50 steps, while 4-step Turbo has significantly more quality than SDXL at 50 steps. One artist used TouchDesigner (with a T2I-Adapter canny model, SDXL and the Turbo LoRA) to create videos in near-real time by translating user movements into img2img; each frame takes only a fraction of a second. Some favorite SDXL Turbo models so far: SDXL TURBO PLUS - RED TEAM MODEL.
Other projects around: a print-on-demand mockup generator using SDXL Turbo and IP-Adapter Plus in ComfyUI; a Text2SVD workflow combining Turbo SDXL with Stable Video Diffusion (with loopback); background replacement using segmentation plus the SDXL Turbo model; img2img with SDXL Turbo; SDXL Turbo in ComfyUI on an M1 Mac; and a ComfyUI SDXL-Turbo extension with upscale nodes. ComfyUI does not do any of this automatically. Instead of SDXL Turbo, I can fairly quickly try out a lot of ideas in SD1.5, and one generated image needed anatomy fixes, so I went back to SD1.5 for inpainting.

For timing with face restoration: Stable Diffusion takes 2-3 seconds plus 3-10 seconds of background processing per image (longer for more faces). You cannot use a CFG higher than 2, otherwise it will generate artifacts; this is also why SDXL-Turbo does not use the negative prompt.

I am a teacher working on replicating this for a graduate school project. I then created SDXL-Turbo support with the same script, with a simple mod to allow downloading sdxl-turbo from Hugging Face; it is currently in two separate scripts. Below is my XL Turbo workflow, which includes a lot of toggles and focuses on latent upscaling. Testing both merge recipes, I have found #2 to be just as speedy and coherent as #1, if not more so.
With SDXL Turbo, prompt interpolation is fast enough to do interactively, running locally on an RTX 3090. To set this up in ComfyUI, replace the positive text input with a ConditioningAverage node combining the two text inputs between which to blend.

Elsewhere: images generated with SDXL Lightning and a Turbo-mixed checkpoint at CFG 1 and 8 steps. Turbo itself runs at CFG 1, and both Turbo and the LCM LoRA start giving you garbage beyond 6-9 steps, so I suspect claims to the contrary are misleading. If you were hunting for the Turbo LoRA: just download pytorch_lora_weights.safetensors and rename it. For ControlNet preprocessing, just install Fannovel16's ComfyUI ControlNet Auxiliary nodes. Using OpenCV, I transmit information to the ComfyUI API via Python websockets.

Also being discussed: a request for an SDXL Turbo 3D Disney-style LoRA, dynamic-prompt experiments, majorly bloated workflows around the otherwise great Portrait Master node, and a tutorial on SDXL-Turbo with a refiner.
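Conceptually, a ConditioningAverage-style blend is just linear interpolation between the two prompts' embedding tensors, weighted by the node's strength slider. A small sketch with plain lists standing in for CLIP embeddings (an assumed simplification; the real node also handles pooled outputs and tensor shapes):

```python
def conditioning_average(cond_to, cond_from, strength):
    """Blend two 'embeddings' the way a ConditioningAverage-style node does:
    strength = 1.0 -> entirely cond_to, 0.0 -> entirely cond_from."""
    return [strength * a + (1.0 - strength) * b
            for a, b in zip(cond_to, cond_from)]

castle = [1.0, 0.0, 0.5]   # stand-in embedding for prompt A
forest = [0.0, 1.0, 0.5]   # stand-in embedding for prompt B

# Sweep the slider: with Turbo's sub-second generations you can watch
# the image morph as the mix moves from one prompt to the other.
for s in (0.0, 0.5, 1.0):
    print(s, conditioning_average(castle, forest, s))
```

Because Turbo regenerates in a fraction of a second, animating this strength value from 0 to 1 is what makes the interactive "morphing between prompts" effect feel live.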
Go to Civitai, download DreamShaper XL Turbo, and use the settings they recommend: 5-10 steps, the right sampler, and CFG 2. (Its page quotes it as "outperforming LCM and SDXL Turbo by 57% and 20%".)

Continuing the setup steps: Step 3: update ComfyUI. Step 4: launch ComfyUI and enable Auto Queue (under Extra Options). Step 5: generate. I get about 2x the performance from Ubuntu in WSL2 on my 4090 with Hugging Face Diffusers Python scripts for SDXL Turbo. There is an official list of recommended SDXL resolution outputs, though I have not checked it against Turbo yet. One user finally managed to use FaceSwap with SDXL-Turbo models.

Open questions about the newer fast models, such as Stable Cascade (a new model that generates images via a cascade process): is the image quality on par with basic SDXL/Turbo? What are the drawbacks? Does it support all the resolutions, and does it work with A1111? Meanwhile, some find Lightning better, producing nicer images. I have also managed to install and run the official SD TensorRT demo on my RTX 4090 machine.
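For the Diffusers route mentioned above, the commonly documented pattern is `AutoPipelineForText2Image` with one step and guidance disabled. A sketch under those assumptions (model ID and settings follow the upstream model card; the GPU-heavy call is kept inside a function so the module loads without diffusers installed):

```python
# Sketch of single-step SDXL Turbo generation with Hugging Face Diffusers.
# Turbo recipe: 1 step, guidance off (Turbo ignores CFG / negative prompts).
MODEL_ID = "stabilityai/sdxl-turbo"
TURBO_SETTINGS = {"num_inference_steps": 1, "guidance_scale": 0.0}

def generate(prompt):
    # Imported lazily so this file can be loaded without diffusers/torch.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")
    # 512x512 is Turbo's native resolution; larger sizes tend to duplicate.
    return pipe(prompt, width=512, height=512, **TURBO_SETTINGS).images[0]

# Usage (GPU + diffusers required):
#   image = generate("a cinematic photo of a fox in the snow")
#   image.save("fox.png")
```

Running this under Ubuntu in WSL2 rather than native Windows is where the roughly 2x throughput report above came from; the script itself is unchanged between the two.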
The ability to produce high-quality videos in real time is thanks to SDXL Turbo. One animation pipeline starts from an SD1.5 image, then switches to SDXL Turbo and uses the result as the base image; there is also an SDXL LoRA if you click on the dev's name. Generation itself only needs a few GB of VRAM.

SDXL generates images at a resolution of 1 MP (e.g. 1024x1024), and you cannot use as many samplers and schedulers as with the standard models. Stability AI launched SDXL Turbo to enable small-step, high-quality image generation, reducing the required step count from 50 to just 4, or even 1, at close to 512x512 resolution. See also: "SDXL-Turbo Animation | Workflow and Tutorial" and "Duchesses of Worcester" (SDXL + ComfyUI + LUMA).

Phone control is possible too. Seeing everyone posting the new SDXL Turbo workflows, one user wired it up to Siri: using SSH, the shortcut connects to the ComfyUI host server, starts the ComfyUI service (set up with NSSM), and then calls a Python example script modified to send the resulting images (four of them) to a Telegram chatbot. Non-Turbo SDXL models did not work in that setup.
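Scripting ComfyUI the way that shortcut does usually goes through its small HTTP/WebSocket API: you POST a workflow graph (the "API format" JSON exported from the UI) to the /prompt endpoint and listen on the websocket for progress. A sketch of just the request construction, with a hypothetical host address and a stub two-node graph standing in for a real exported workflow:

```python
import json
import urllib.request

COMFY_HOST = "192.168.1.50:8188"  # hypothetical ComfyUI host:port

def build_prompt_request(workflow, client_id="siri-shortcut"):
    """Wrap an API-format workflow graph into the JSON body that
    ComfyUI's POST /prompt endpoint expects."""
    body = {"prompt": workflow, "client_id": client_id}
    return urllib.request.Request(
        f"http://{COMFY_HOST}/prompt",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Stub graph: in practice, export the real one via "Save (API Format)"
# in the ComfyUI menu and load it from disk.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_turbo_1.0_fp16.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a castle at dawn", "clip": ["1", 1]}},
}

req = build_prompt_request(workflow)
print(req.full_url)            # http://192.168.1.50:8188/prompt
# urllib.request.urlopen(req)  # uncomment to actually queue the job
```

The telegram-bot part of the shortcut is then just a matter of fetching the finished images from ComfyUI's history/view endpoints and forwarding them; the queueing step above is the only ComfyUI-specific piece.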
Painting with SDXL-Turbo: what do you think about the results? Animation is harder; does anyone have an idea how to stabilise SDXL? I get either rapid movement in every frame or almost no movement. On the 3D side, people use it to create seamless materials, textures and designs for multiple 3D packages, as mockups, or as shader-node inputs.

Turbo's base resolution is 512x512. Built on the technological foundation of SDXL 1.0, SDXL Turbo adds a new technique, Adversarial Diffusion Distillation (ADD); the aim is to generate a good image in fewer than 4 steps. Ultimate SD Upscale works fine with SDXL, but you should probably tweak the settings a little bit.

SDXL takes around 30 seconds on my machine and Turbo takes around 7, so one user decided to generate all 151 original Pokémon. Another played for a few days with ComfyUI and SDXL 1.0, did some experiments, and came up with a reasonably simple yet flexible and powerful workflow. In short: SDXL Turbo accelerates image generation, delivering high-quality output in notably shorter time frames by decreasing the standard suggested step count from 30 to 1.
I was testing the SDXL Turbo model with some prompt templates from the Prompt Styler (ComfyUI), and some Pokémon were coming out really nice with the sai-cinematic template. At 1024x1024, Turbo output is a mess of random duplicated things, as with any model pushed to 2x its native resolution without hires fix or an upscaler; normal SDXL quality needs 1024x1024 at around 40 steps. I opted to use ComfyUI so I could utilize the low-VRAM mode (on a GTX 1650). Suggested sampling settings in ComfyUI: LCM sampling method, CFG scale from 1 to 2, 4 sampling steps.

A video tutorial, "[soy.lab] Create an Image in Just 1.5 Seconds Using ComfyUI SDXL-TURBO!", covers a basic setup, a multi-pass + upscale setup, and results. They actually seem to have released SD-Turbo at the same time as SDXL-Turbo. One real-time pipeline generates in about 0.2 seconds (with a T2I-Adapter ControlNet) with MediaPipe refreshing at 20 fps; one suggestion is to dedicate a slower GPU to the SDXL Turbo step. Prior to the torch and ComfyUI updates that added FP8 support, SDXL plus refiner was unusable for some, as it requires ~20 GB of system RAM or enough VRAM to fit all the models in GPU memory.

SDXL Turbo fine-tuning remains an open question: is there any script or Colab notebook for the new Turbo model?
Some further notes. SDXL was trained at 1024x1024 for its target output. Dreamshaper SDXL Turbo improves quality, but it comes with a trade-off of slower speed due to its required 4-step sampling process. In the SDXL paper they stated that the model uses the penultimate CLIP layer, though I was never sure exactly what that meant. As you go above CFG 1.0, the strength of the positive and negative reinforcement increases. I have never had good luck with latent upscaling ("Upscale Latent By") in the past.

In this guide, we walk you through installing SDXL Turbo, the latest breakthrough in text-to-image synthesis. For face restoration, search for "resnet50" in the model list and you will find it; in the examples on the linked workflow page you can see that the workflow was used to generate several images that do need the face restore (I even doubled it). Building on that, I published a video walking through how to set up and use the Gradio web interface I built for SDXL Turbo. One lingering issue: ComfyUI was not able to load the ControlNet model for some reason, even after putting it in models/controlnet.
Is there a guide for SDXL / SD Turbo distillation? There is also a series of courses designed to help you master ComfyUI and build your own workflows.

A neat iteration trick: if the SDXL Turbo preview is close enough to what I have in mind, I one-click a group-toggle node and use the normal SDXL model to iterate on Turbo's result, effectively drafting with Turbo and refining with full SDXL. TensorRT compiling is not working; when I had a look at the code, it seemed like too much work. One-step SDXL Turbo at good quality will always beat one step with LCM; upscale and face-fix afterwards and you will be surprised how much that changes the result.

An open question: why do some Turbo models give clear outputs in 1 step (such as SDXL Turbo or JibMix Turbo), while others require 4-8 steps to get there, which is barely an improvement over the ~12 you would need with a non-Turbo, non-LCM model? Is this some training-related quality/performance trade-off?

Finally: last week I shared my SDXL Turbo repository for fast image generation with Stable Diffusion, which many of you found helpful; it uses only a few steps to generate images.