Image size comfyui reddit
Welcome to the unofficial ComfyUI subreddit.

Howdy! I'm not too advanced with ComfyUI for SD generation yet, but I've made a lot of progress thanks to your help. If you just want to see the size of an image, you can open it in a separate browser tab and read the resolution from the tab title.

New users of civitai should be aware that the PNG (which contains the metadata) can only be downloaded from the "image view". This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

I have a workflow that is basically two user branches. Input your batched latent and VAE.

Stable Diffusion XL is trained at a base resolution of 1024 × 1024. So, if you want to change the size of the image, you change the size of the latent image.

Hey everyone, I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions. How do I do the same with ComfyUI?

I have a workflow I use fairly often where I convert or upscale images using ControlNet. A transparent PNG in the original size, containing only the newly inpainted part, will be generated. Save the new image.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

When I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create.

I want to upscale my image with a model and then select the final size of it. The option has been around for a long time in other UIs like Automatic1111 and Visions of Chaos. Is there a way to pull this off within ComfyUI?
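To make the "change the latent size" point concrete, here is a rough sketch of how pixel width/height map to a latent, assuming the usual 8× VAE downsampling — this is illustrative arithmetic, not actual ComfyUI node code:

```python
# Sketch: the Empty Latent Image node's width/height correspond to a
# latent tensor at 1/8 the pixel resolution (the SD VAE downsamples by 8).
# Shape convention (batch, channels, h, w) is an assumption for illustration.

def latent_shape(width: int, height: int, batch_size: int = 1):
    """Return the latent shape for a given pixel width/height."""
    assert width % 8 == 0 and height % 8 == 0, "SD sizes should be multiples of 8"
    return (batch_size, 4, height // 8, width // 8)

print(latent_shape(512, 512))     # SD 1.5's native size -> (1, 4, 64, 64)
print(latent_shape(1024, 1024))   # SDXL's base size     -> (1, 4, 128, 128)
```

So doubling the latent's width and height is what actually doubles the output image size.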
I have managed to push it down to 3 steps with some nifty tricks I found. The demo images aren't curated; all images just use the seed "3" with a basic prompt, so this is really useful for experimenting.

Belittling their efforts will get you banned.

A mask adds a layer to the image that tells ComfyUI what area of the image to apply the prompt to. This YouTube video should help answer your questions.

Please share your tips, tricks, and workflows for using this software to create your AI art.

Stable Diffusion 1.5 is trained on 512 × 512 images. Also, if this is new and exciting to you, feel free to post.

How to Magically Resize Your Images: The 1024px Rule That Will Change Everything. I do that a lot.

ComfyShop phase 1 is to establish the basic painting features for ComfyUI.

I can obviously pick a size when doing Text2Image, but when prompting off an existing image my final image will always just be the same size as the inspiration image. So I can't give a simple answer, but I'd say if you're still interested and need some help, we can join a Discord call or something and I can help.

The first branch has: Txt to Image, and then Image to SD Video with the new SD video models that came out.

No, you don't erase the image. Probably not what you want, but the preview chooser / image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow.

Increasing the tile size to half the image's dimensions (1536) does improve image quality, but the speed benefit diminishes.

This workflow generates an image with SD1.5.
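On getting a chosen final size from an arbitrary input image: one common behaviour (the "crop and resize" option Automatic1111 offers) is to scale the source so it covers the target, then centre-crop. A minimal sketch of that arithmetic — the helper name is made up, and no imaging library is involved:

```python
# Sketch of "crop and resize": scale the source to cover the target box,
# then centre-crop the overflow. Returns the sizes/offsets you would feed
# to an actual resize/crop node or image library.

def cover_and_crop(src_w, src_h, dst_w, dst_h):
    """Return (scaled_w, scaled_h, left, top): resize the source to
    scaled_w x scaled_h, then crop at (left, top) to get dst_w x dst_h."""
    scale = max(dst_w / src_w, dst_h / src_h)   # cover, don't letterbox
    scaled_w, scaled_h = round(src_w * scale), round(src_h * scale)
    left = (scaled_w - dst_w) // 2
    top = (scaled_h - dst_h) // 2
    return scaled_w, scaled_h, left, top

# A 1920x1080 input squeezed into a 512x512 target:
print(cover_and_crop(1920, 1080, 512, 512))  # (910, 512, 199, 0)
```

The same numbers drop straight into any crop node: resize to 910 × 512, then take the 512-wide window starting 199 pixels in.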
It's solvable; I've been working on a workflow for this for like 2 weeks, trying to perfect it for ComfyUI, but no matter what you do there is usually some kind of artifacting. It's a challenging problem to solve. Unless you really want to use this process, my advice would be to generate the subject smaller, then crop in and upscale instead.

Want 10 images? Click that button till the Queue size is 10 (or select Extra options and put 10 in Batch count).

And above all, BE NICE.

The hard part is knowing when the image is ready to be retrieved, and getting the image. A bit of an obtuse take.

Edit user.css (the file that starts with `/* Put custom styles here */`) and change the font-size to something higher than 10px, and you should see a difference.

ComfyUI Artist Inpainting Tutorial (YouTube).

First we calculate the ratios, or we use a text file.

This simple checkbox in the Automatic1111 WebUI interface allows you to generate high-resolution images that look much better than the default output. I would like to know if that is due to some reason other than images that large taking a long time.

The workflow generates an image with SD1.5, then uses Grounding Dino to mask portions of the image to animate with AnimateLCM. This way it's an end-to-end txt-to-animation pipeline.

As an input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image.

Stable Diffusion has a bad understanding of relative terms; try prompting "a puppy and a kitten, the puppy on the left and the kitten on the right" to see what I mean.

If we want to change the image size of our ComfyUI Stable Diffusion image generator, we have to type the width and height. You can't enter a latent image size larger than 8192.

It's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand.
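The "calculate the ratios" step can be sketched as follows — a hedged example, assuming the goal is a size near a ~1024 × 1024 pixel budget at a requested aspect ratio, with each dimension rounded to a multiple of 8 so the latent divides cleanly (the function name and budget are illustrative, not from any node):

```python
# Sketch: pick width/height for a given aspect ratio, keeping roughly
# a 1024x1024 pixel budget and snapping both edges to multiples of 8.

def size_for_ratio(ratio_w, ratio_h, budget=1024 * 1024):
    """Return (width, height) with about `budget` pixels at ratio_w:ratio_h."""
    unit = (budget / (ratio_w * ratio_h)) ** 0.5   # pixels per ratio unit
    width = int(round(ratio_w * unit / 8)) * 8
    height = int(round(ratio_h * unit / 8)) * 8
    return width, height

print(size_for_ratio(1, 1))    # square      -> (1024, 1024)
print(size_for_ratio(16, 9))   # widescreen  -> (1368, 768)
```

These pairs are what you would type into the Empty Latent Image node instead of re-deriving them by hand for each aspect ratio.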
I've built many ComfyUI web apps for personal business purposes and have helped others on Reddit as well.

Layer (copy & paste) this PNG on top of the original in your go-to image editing software.

I'm instead going to try to work around it by downscaling the size of the image. In this case, the image from Comfy has some extra glitches.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion.

You will need to launch ComfyUI with this option each time, so modify your bat file or launch script.

Also the exact same position of the body.

Automatic1111 would let you pick the final image size no matter what, and give you options for crop, just resize, etc.

Copy that into user.css.

So I would assume generating 4 images (with the `batch_size` property) would give me four images with seeds `1`, `2`, `3`, `4`. I think the intended workflow here is to just press the Queue Prompt button several times. To then view the generated images, click on View History and go through your generations by loading them.

Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

You can just plug the width and height from Get Image Size directly into the nodes where you need them, too.

The only way I can think of is to use Upscale Image (using Model) with 4x-UltraSharp to get my image to 4096, and then downscale with nearest-exact back to 1500.

It is not a problem with the seed, because I tried different seeds.

So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image node after this node in your workflow.

If I were to make some type of custom node, or modify the core node to allow a larger latent image size, would that break the whole process? Is there some larger reason why 8192 is the hard limit?
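The upscale-then-downscale route above is just two resizes chained; a minimal sketch of the size bookkeeping, assuming a 4× upscale model (the helper and its checks are made up for illustration):

```python
# Sketch of the two-stage resize: an NX upscale model (e.g. a 4x model)
# takes the image past the target, then a plain resize (nearest-exact in
# the text) brings it down to the wanted edge size.

def two_stage_resize(src, model_factor, target):
    """Return (after_model, final) edge sizes for model-upscale + resize."""
    after_model = src * model_factor
    if after_model < target:
        raise ValueError("model upscale still below target; run it again")
    return after_model, target

# 1024 -> 4096 via the 4x model, then down to the 1500 target:
print(two_stage_resize(1024, 4, 1500))  # (4096, 1500)
```

Downscaling from well above the target is why this detour tends to look sharper than upscaling straight to 1500.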
I have a ComfyUI workflow that produces great results. It can pretty much be scaled to whatever batch size by repetition. See if you can get the image size to be used for the empty latent's (converted) height and width.

You probably still want an EXIF viewer/remover/cleaner to double-check images, since you haven't been using this setting and presumably have prior work to sanitize of metadata.

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image. You won't get obvious seams or strange lines.

Generated images: the Automatic1111 image. (I was using the SD WebUI before.) I am getting a blurry image when using the "Realities Edge XL ⊢ ⋅ LCM+SDXLTurbo" model in ComfyUI. I got the same issue in the SD WebUI, but after using sdxl-vae-fp16-fix the images were good. When I try the same to fix this issue in ComfyUI, it's not working.

A lot of people are just discovering this technology and want to show off what they created. I started with ComfyUI 3 days ago.

In truth, 'AI' never stole anything, any more than you 'steal' from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head and getting any generative system to actually replicate it takes a considerable amount of skill and effort.

The denoise on the video generation KSampler is at 0.8.
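The maskToRegion step described above boils down to finding the mask's bounding box so only that crop is inpainted and pasted back. A pure-Python stand-in for what the Masquerade node computes (the function is illustrative, not the node's actual code):

```python
# Sketch: tight bounding box of a 0/1 mask, so inpainting can run on a
# small crop and the result can be pasted back into the full image.

def mask_region(mask):
    """mask: 2-D list of 0/1. Returns (left, top, right, bottom) with
    right/bottom exclusive, or None if the mask is empty."""
    rows = [y for y, row in enumerate(mask) if any(row)]
    cols = [x for x in range(len(mask[0])) if any(row[x] for row in mask)]
    if not rows:
        return None
    return (min(cols), min(rows), max(cols) + 1, max(rows) + 1)

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(mask_region(mask))  # (1, 1, 3, 3)
```

Crop both the image and the mask to that box, inpaint the crop, then paste it back at (left, top) — that is the whole seam-free pipeline in miniature.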
E.g.: batch index 2, length 2 would send images number 3 and 4 to the preview image node in this example.

I have tried to push the sampling step count down as low as possible.

`.comfy-multiline-input { font-size: 10px; }`

ComfyShop has been introduced to the ComfyI2I family. I think the bare minimum would be the following, but having the rest of the defaults next to it could be handy if you want to make other changes.

I first get the prompt working as a list of the basic contents of your image.

The ComfyUI image. It animates 16 frames and uses the looping context options to make a video that loops.

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface — comfyanonymous/ComfyUI.

The denoise is at 0.8 so that some of the structure of the original generated image is retained.

Please keep posted images SFW.

In the process, we also discuss SDXL architecture.

During my img2img experiments with 3072 × 3072 images, I noticed a quality drop using HyperTile with standard settings (tile size 256, swap size 2, max depth 0).

To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.

I know I can run the img-to-vid portion with a 512 × 512 input image, but I'm struggling trying to downscale the image by 2.

Batch index counts from 0 and is used to select a target in your batched images. Length defines the amount of images after the target to send ahead.

Here's how you can do it: Automatic1111. A basic description of a couple of ways to resize your photos or images so that they will work in ComfyUI. Works great.

This will be follow-along, step-by-step tutorials where we start from an empty ComfyUI canvas and slowly implement SDXL. Here, you can also set the batch size, which is how many images you generate in each run.

Hello, Stable Diffusion enthusiasts! We decided to create a new educational series on SDXL and ComfyUI (it's free, no paywall or anything).
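The batch index / length rule above is just a slice: the index counts from 0, and length selects that many images starting at the target. A one-liner sketch (names are illustrative):

```python
# Sketch of the batch selection rule: 0-based index, `length` images
# starting at the target are sent ahead.

def select_from_batch(images, batch_index, length):
    """Return `length` images starting at `batch_index` (0-based)."""
    return images[batch_index : batch_index + length]

batch = ["img1", "img2", "img3", "img4"]
# batch index 2, length 2 -> images number 3 and 4, as in the example
print(select_from_batch(batch, 2, 2))  # ['img3', 'img4']
```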
Or add the Image Gallery extension.

You set the height and the width to change the image size in pixel space.

Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-) — and no workflow metadata will be saved in any image.

The one that is shown in the "post view" is a preview JPEG (even though it looks as if it is full size), which does not have the metadata.
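The layer-the-transparent-PNG step in this inpainting loop is ordinary "over" alpha compositing, which any image editor (or, say, PIL's Image.alpha_composite) performs per pixel. A single-pixel sketch, with the alpha channel handling deliberately simplified:

```python
# Sketch: composite one RGBA pixel over another ("over" operator).
# Channels are 0-255. The output alpha uses max() as a simplification
# of the full Porter-Duff formula; it is exact for the opaque and
# fully-transparent cases shown below.

def over(fg, bg):
    """Composite foreground RGBA pixel over background RGBA pixel."""
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    a = fa / 255
    blend = lambda f, b: round(f * a + b * (1 - a))
    return (blend(fr, br), blend(fgreen, bgreen), blend(fb, bb), max(fa, ba))

# A fully opaque inpainted pixel replaces the original...
print(over((10, 20, 30, 255), (200, 200, 200, 255)))  # (10, 20, 30, 255)
# ...while a fully transparent one leaves the original untouched.
print(over((10, 20, 30, 0), (200, 200, 200, 255)))    # (200, 200, 200, 255)
```

That is why pasting the transparent PNG over the original only replaces the inpainted region and leaves the rest of the image pixel-identical.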