ComfyUI load workflow tutorial (Reddit)


  1. youtu.be/ppE1W0-LJas - the tutorial. With a 3060 (12 GB VRAM), it sometimes takes me up to 3 minutes to load SDXL, but once loaded, all other generations are faster because you don't need to load the checkpoint again. To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. Of course, if it takes more than 5 minutes, it is clear that there is a problem.

Follow the basic ComfyUI tutorials on the ComfyUI GitHub, like the basic SD1.5 workflow.

Heya, I've been working on a few tutorials for ComfyUI over the past couple of weeks. If you are new to ComfyUI and want a good grounding in how to use it, then this tutorial might help you out. For the checkpoint, I suggest one that can handle cartoons/manga fairly easily. ComfyUI basics tutorial. If there is anything you would like me to cover in a ComfyUI tutorial, let me know.

Welcome to the unofficial ComfyUI subreddit.

Start by loading up your standard workflow - checkpoint, KSampler, positive prompt, negative prompt, etc. Then add in the parts for a LoRA, a ControlNet, and an IPAdapter.

A search of the subreddit didn't turn up any answers to my question. I then downloaded a custom workflow from here and initiated installing it from within ComfyUI.

ComfyScript is simple to read and write and can run remotely.

I'm wondering if there is a good tutorial out there that starts at step 1, sets everything up, and explains the concepts (e.g. what a latent image is).

You can then load or drag the following image in ComfyUI to get the workflow.

This guide is about how to set up ComfyUI on your Windows computer to run Flux.

That's a bit presumptuous, considering you don't know my requirements. Ideally nothing that's like "download this workflow and click 'install missing nodes'", because that never actually works.

This workflow can use LoRAs and ControlNets, and it enables negative prompting with the KSampler, dynamic thresholding, inpainting, and more.
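The "standard workflow" described in one of the comments above (checkpoint, KSampler, positive and negative prompts) can be written out in ComfyUI's API (JSON) format. This is a sketch for illustration, not a drop-in file: the node IDs, checkpoint filename, and prompt text are all placeholders.

```python
# Minimal text-to-image graph in ComfyUI's API (JSON) format.
# Each key is a node ID; a link is [source_node_id, output_index].
# The checkpoint filename is a placeholder - use one you actually have.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",          # positive prompt
          "inputs": {"text": "a castle at sunset", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",          # negative prompt
          "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 25, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "basic"}},
}
```

The LoRA, ControlNet, and IPAdapter parts mentioned above are added the same way: extra nodes spliced into the MODEL/CLIP/conditioning links.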
LoRA usage is confusing in ComfyUI. In Automatic1111, for example, you load a LoRA and control its strength by simply typing something like this: <lora:Dragon_Ball_Backgrounds_XL:0.8>.

Tutorial-wise, there are a bunch of images that can be loaded as a workflow by ComfyUI; you download the PNG and load it. You can then load or drag the following image in ComfyUI to get the workflow.

Go to the ComfyUI Manager, click "Install Custom Nodes", and search for ReActor.

Try inpainting. Try outpainting. Low quality? Try a latent upscale with two KSamplers.

Overview of the different versions of Flux. Flux hardware requirements.

Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI in your home network, masking and CLIPSeg awesomeness, and many more.

image_load_cap will load every frame if it is set to 0; otherwise it will load however many frames you choose, which determines the length of the animation.

And now for part two of my "not SORA" series.

Keyboard shortcuts:
- Ctrl + S: Save workflow
- Ctrl + O: Load workflow
- Ctrl + A: Select all nodes
- Alt + C: Collapse/uncollapse selected nodes
- Ctrl + M: Mute/unmute selected nodes
- Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
- Delete/Backspace: Delete selected nodes
- Ctrl + Backspace: Delete the current graph
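For comparison with the A1111 `<lora:...:0.8>` syntax, ComfyUI expresses the same thing structurally: a LoraLoader node sits between the checkpoint loader and whatever consumes its MODEL and CLIP outputs, with `strength_model`/`strength_clip` playing the role of the `0.8`. A sketch of that rewiring on an API-format workflow dict (the helper name and LoRA filename are made up for illustration):

```python
def add_lora(workflow, checkpoint_id, lora_name, strength=0.8):
    """Insert a LoraLoader after the checkpoint and rewire every link
    to the checkpoint's MODEL (output 0) or CLIP (output 1) through it."""
    lora_id = str(max(int(k) for k in workflow) + 1)
    for node in workflow.values():
        for key, value in node["inputs"].items():
            # API-format links look like [source_node_id, output_index].
            if (isinstance(value, list) and len(value) == 2
                    and value[0] == checkpoint_id and value[1] in (0, 1)):
                node["inputs"][key] = [lora_id, value[1]]
    workflow[lora_id] = {
        "class_type": "LoraLoader",
        "inputs": {"model": [checkpoint_id, 0], "clip": [checkpoint_id, 1],
                   "lora_name": lora_name,
                   "strength_model": strength, "strength_clip": strength},
    }
    return lora_id
```

Stacking multiple LoRAs is just calling this repeatedly, which is exactly what chaining several LoraLoader nodes in the graph does.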
I'm trying to get dynamic prompts to work with ComfyUI, but the random prompt string won't link with the CLIP Text Encode node as indicated in the diagram I have here from the GitHub page. Help, pls?

INITIAL COMFYUI SETUP and BASIC WORKFLOW

The images look better than most SD1.5-based models, with greater detail in SDXL 0.9.

Most Awaited Full Fine-Tuning (with DreamBooth effect) Tutorial. Generated images - full workflow shared in the comments - no paywall this time - explained. OneTrainer - the cumulative experience of 16 months of Stable Diffusion.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Flux Schnell is a distilled 4-step model.

ComfyUI-to-Python-Extension code can be written by hand, but it's a bit cumbersome, can't take advantage of the cache, and can only be run locally.

Flux.1 with ComfyUI: an all-in-one FluxDev workflow that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. At the same time, I scratch my head over which HF models to download and where to place the 4 Stage models.

The workflow in the example is passed into the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead.

Hey all, another tutorial; hopefully this can help anyone who has trouble dealing with all the noodly goodness of ComfyUI. In it I show some good layout practices for ComfyUI and show how modular systems can be built. The generated workflows can also be used in the web UI.
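Following the advice above about not embedding the workflow as an inline string: a small helper can load the API-format JSON from a file instead (the filename below is just an example).

```python
import json

def load_workflow(path):
    """Read an API-format workflow that was saved from ComfyUI."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# e.g. wf = load_workflow("workflow_api.json")  # hypothetical filename
```

Keeping the graph in a separate file also means you can re-export it from the UI without touching the script.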
Looks awesome! Currently I am creating a tutorial for converting ComfyUI workflows into a production-grade multi-user backend API.

You need to select the directory your frames are located in (i.e. where you extracted the frames zip file, if you are following along with the tutorial).

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, and upscaling.

Load Image node.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when one is generated). Prediffusion - this creates a very basic image from a simple prompt and sends it on as a source.

All the adapters that load images from directories that I found (Inspire Pack and WAS Node Suite) seem to sort the files by name and don't give me an option to sort them by anything else.

I see YouTubers drag images into ComfyUI and they get a full workflow, but when I do it, I can't seem to load any workflows. I have a wide range of tutorials with both basic and advanced workflows.

I tried the load methods from was-node-suite-comfyui and ComfyUI-N-Nodes, but they seem to load all of my images into RAM at once.

So for the first time you start the workflow, wait a while.

Aug 2, 2024 · Flux Dev. Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

So I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

Starting workflow.
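The directory-loading behavior the comments describe (frames sorted by filename, an image_load_cap of 0 meaning "load everything") can be mimicked in a few lines of plain Python. This is a sketch of the logic only, not the actual node implementation; the parameter names mirror the ones mentioned above.

```python
import os

def select_frames(directory, image_load_cap=0, skip_first_images=0):
    """List image frames sorted by filename, the way the
    directory-loading nodes do: a cap of 0 means every frame."""
    exts = (".png", ".jpg", ".jpeg", ".webp")
    frames = sorted(f for f in os.listdir(directory)
                    if f.lower().endswith(exts))
    frames = frames[skip_first_images:]
    if image_load_cap > 0:
        frames = frames[:image_load_cap]
    return frames
```

The name-based sort is also why zero-padded frame numbers (frame_001, frame_002, ...) matter: without padding, frame_10 sorts before frame_2.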
The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3 or not.

It covers the following topics: an introduction to Flux, and how to install and use Flux. If you see a few red boxes, be sure to read the Questions section on the page. You can find the Flux Schnell diffusion model weights here; this file should go in your ComfyUI/models/unet/ folder. You can find the Flux Dev diffusion model weights here.

https://youtu. Let me know if you are interested in collaboration.

This causes my steps to take up a lot of RAM, leading to the process being killed when RAM runs out.

Breakdown of workflow content.

I teach you how to build workflows rather than just use them. (I will be sorting out workflows for the tutorials at a later date in the YouTube description for each; many can be found in r/comfyui, where I first posted most of these.)

Ending workflow.

Somebody suggested that the previous version of this workflow was a bit too messy, so this is an attempt to address the issue while guaranteeing room for future growth (the different segments of the Bus can be moved horizontally and vertically to enlarge each section/function).

Stick to a basic SD1.5 workflow (don't download workflows from YouTube videos or advanced stuff on here!!).

Seems very hit and miss; most of what I'm getting looks like 2D camera pans.

It downloads the custom nodes and then gets to "downloading models & other files".

Workflow.

There are lots of people who want to turn their workflows into fully functioning apps, and libraries like yours will help that a lot.

Try generating basic stuff with a prompt, and read about CFG, steps, and noise.
I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD that maxes out at 560 MB/s.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Not only was I able to recover a 176x144-pixel, 20-year-old video with this; in addition, it supports the brand-new SD15 model for the ModelScope nodes by ExponentialML, an SDXL Lightning upscaler (in addition to the AnimateDiff LCM one), and a SUPIR second stage, for a gorgeous native 4K output from ComfyUI!

You can then load or drag the following image in ComfyUI to get the workflow: Flux Schnell.

Is there a way to load each image in a video (or a batch) one at a time to save memory? My goal is that I start the ComfyUI workflow, and the workflow loads the latest image in a given directory and works with it. I have a video and I want to run SD on each frame of that video.

Yesterday I was just playing around with Stable Cascade and made some movie posters to test the composition and letter writing.

And yes, this is arcane as FK, and I have no idea why some of the workflows are shared this way.

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

Dec 1, 2023 · If you've ever wanted to start creating your own Stable Diffusion workflows in ComfyUI, then this is the video for you! Learning the basics is essential for any workflow creator.

Apr 30, 2024 · Follow this step-by-step guide to load, configure, and test LoRAs in ComfyUI, and unlock new creative possibilities for your projects.

Link to the workflows, prompts and tutorials: download them here.

ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

An example of the images you can generate with this workflow:

ComfyUI's API is enough for making simple apps, but hard to write by hand.
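The "Download & drop any image" trick works because ComfyUI embeds the workflow as JSON in the PNG's text metadata (typically tEXt chunks keyed `workflow` and `prompt`); when dragging an image loads nothing, that metadata is usually missing, e.g. because the site re-compressed the image. A minimal stdlib sketch for checking whether a PNG actually carries a workflow (chunk keywords assumed as above):

```python
import json
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def extract_workflow(png_bytes):
    """Walk the PNG chunk stream and return the embedded workflow
    JSON from a tEXt chunk, or None if the image carries none."""
    if not png_bytes.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIG)
    while pos + 8 <= len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, value = data.partition(b"\x00")
            # "workflow" holds the UI graph, "prompt" the API-format graph.
            if keyword in (b"workflow", b"prompt"):
                return json.loads(value.decode("latin-1"))
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
    return None
```

JPEG and WebP re-encodes strip these chunks entirely, which is why screenshots of workflows never load.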
But it looks like I need to switch my upscaling method: the ComfyUI workflow uses the latent upscaler (nearest/exact) set to 512x912 multiplied by 2, and it takes around 120-140 seconds per image at 30 steps with SDXL 0.9.

Related resources for Flux.1, such as LoRA, ControlNet, etc.

Try to install the ReActor node directly via the ComfyUI Manager.

Jul 6, 2024 · You can construct an image generation workflow by chaining different blocks (called nodes) together.

The diagram doesn't load into ComfyUI, so I can't test it out.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and I really like how it saves the workflow info within each image it generates.

I load the models fine and connect the proper nodes, and they work, but I'm not sure how to use them properly to mimic other web UIs' behavior.

75s/it with the 14 frame model.

Once installed, download the required files and add them to the appropriate folders.

The API workflows are not the same format as an image workflow; you'll create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button.
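A file saved with "Save (API Format)" is what the server's /prompt endpoint expects, wrapped in a {"prompt": ...} payload, as in the example scripts shipped with ComfyUI. A hedged sketch, assuming the default server address:

```python
import json
import urllib.request
import uuid

def build_prompt_request(workflow, server="http://127.0.0.1:8188"):
    """Wrap an API-format workflow in the payload /prompt expects."""
    payload = {"prompt": workflow, "client_id": str(uuid.uuid4())}
    return urllib.request.Request(
        f"{server}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Actually sending it requires a running ComfyUI instance:
# with urllib.request.urlopen(build_prompt_request(wf)) as resp:
#     print(json.load(resp))  # response includes the queued prompt_id
```

Note that an image workflow (the `workflow` metadata) will not queue this way; only the API-format graph has the {"class_type": ..., "inputs": ...} shape the server resolves.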