ComfyUI image to latent (Reddit)
So I use the batch picker, but I can't use that with Efficiency Nodes. That's why it is impossible to find/extract the seed number from images made in a batch.

I add some noise to give the denoiser a little something extra to grab onto. There isn't a "mode" for img2img.

As many of you know, there are options in sd-web-ui to select how to fit the ControlNet image to the latent.

Here's a simple node to make a latent symmetrical across the Y or X axis, which makes for some fun images if you use it in the middle of an img2img workflow, as demonstrated here. Inspired by the A1111 equivalent.

Along with the normal image preview, other methods are: Latent Upscaled 2x, and Hires fix 2x (two-pass img2img).

Welcome to the unofficial ComfyUI subreddit.

It frequently combines what are supposed to be different parts of the image into one thing.

Evening all. Also, in ComfyUI you can simply use ControlNetApply or ControlNetApplyAdvanced, which utilize ControlNet.

The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do x4 and that's often too big to process), then send it back to VAE Encode and sample it again.

I am looking for better interpolation between two images than I get with the standard RIFE/FILM image interpolation.

Not exactly sure what OP was looking for, but you can take an Image output and route it to a VAE Encode (pixels input), which has a Latent output.

Replaces the 50/50 latent image with color so it bleeds into the generated images instead of relying entirely on luck to get what you want; kinda like img2img, but you do it with like a 0.5 denoise (needed for latent, idk why though) through a second KSampler.

Hi everyone, I'm four days into ComfyUI and I am following latent tutorials. Usually I use two of my workflows.

I gave up on latent upscale.

This was the starting point of the above image: starting point. Kind of a very large "Where's Waldo" image.

Then you can run it to a Sampler or whatever.
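The symmetry node mentioned above isn't shown here, but the idea is easy to sketch. Below is a minimal stand-in, assuming latents are stored as (batch, channels, height, width) arrays the way ComfyUI keeps them; the function name `mirror_latent` is made up for illustration and is not the actual node's code:

```python
import numpy as np

def mirror_latent(latent: np.ndarray, axis: str = "x") -> np.ndarray:
    """Mirror one half of a latent onto the other to force symmetry.

    axis="x" mirrors the left half onto the right; axis="y" mirrors
    the top half onto the bottom. Expects (batch, channels, h, w).
    """
    out = latent.copy()
    if axis == "x":
        w = out.shape[-1]
        # right half becomes a flipped copy of the left half
        out[..., w // 2:] = np.flip(out[..., : w - w // 2], axis=-1)
    else:
        h = out.shape[-2]
        out[..., h // 2:, :] = np.flip(out[..., : h - h // 2, :], axis=-2)
    return out

latent = np.random.randn(1, 4, 64, 64).astype(np.float32)
symmetric = mirror_latent(latent, axis="x")
```

Dropping something like this between VAE Encode and the KSampler (with denoise below 1.0) is what produces the mirrored img2img look described above.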
You can load these images in ComfyUI to get the full workflow.

No, in txt2img. If you have created a 4-image batch, and later you drop the 3rd one into Comfy to generate with that image, you don't get the third image; you get the first.

But more useful is that you can now right-click an image in the `Preview for Image Chooser` and select `Progress this image`, which is the same as selecting its number and pressing go.

It's not a problem as long as the scale is low (< 2x) and the follow-up sampling uses a high denoise (0.35-0.5+).

E.g.: batch index 2, length 2 would send images number 3 and 4 to the image preview in this example.

Just getting to grips with Comfy. 2 images need to be generated from the KSampler. Does anyone have any…

For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

If you want to scale the latent to the input size, yes, you can use Comfyroll nodes or anything similar to get the image resolution.

Is there any node that works out of the box, or a workflow of yours for this purpose?

Oct 21, 2023 · https://latent-consistency-models.github.io/ Seems quite promising and interesting.

Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

A homogeneous image like that doesn't tell the whole story though ^^.

I've set up some math expressions to deal with it; it kinda works, but not as expected.

I'm trying to build a workflow where I inpaint a part of the image, and then AFTER the inpaint I do another img2img pass on the whole image.

Please share your tips, tricks, and workflows for using this software to create your AI art.

As an input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image.

Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-) Retouch the "inpainted layers" in your image-editing software with masks if you must.
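The batch index / length behaviour described above (batch index 2, length 2 sends images 3 and 4) is plain zero-based slicing. A sketch of that selection logic, with a made-up function name:

```python
def select_from_batch(batch, batch_index, length):
    """batch_index counts from 0; length is how many items, starting
    at the target, to send onward."""
    return batch[batch_index : batch_index + length]

images = ["image 1", "image 2", "image 3", "image 4"]
picked = select_from_batch(images, batch_index=2, length=2)
print(picked)  # ['image 3', 'image 4']
```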
Change the senders to ID 2, attach the Set Latent Noise Mask from Receiver 1 to the input for the latent, and inpaint more if you'd like. Doing this leaves the image in latent space, but allows you to paint a mask over the previous

In this case, if you enter 4 in the Latent Selector, it continues computing the process with the 4th image in the batch.

Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0.

…5 to make it the right size for the SDXL KSampler.

And above all, BE NICE.

Once ComfyUI gets to the choosing, it continues the process with whatever new computations need to be done.

But in Cotton Candy 3D it doesn't look right.

There is a latent workflow and a pixel-space ESRGAN workflow in the examples.

I am using ComfyUI and so far assume that I need a combination of detailers, upscalers, and tile ControlNet in addition to the usual components.

All of the batched items will process until they are all done.

Quite a noob.

There's "latent upscale by", but I don't want to upscale the latent image. "Upscaling with model" is an operation on normal images, and we can operate with a corresponding model, such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth.
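Several comments in this thread recommend upscaling with a fixed-factor model (like the x4 ESRGAN-family models named above) and then downscaling before re-sampling. The bookkeeping is just arithmetic; here is a sketch, where the function name and the 1536 target long side are my own choices, not anything from the thread:

```python
def size_after_model_upscale(width, height, model_factor=4, target_long_side=1536):
    """Apply a fixed-factor model upscale, then compute the downscale
    needed to bring the long side back to a workable size before the
    image is re-encoded and sampled again."""
    up_w, up_h = width * model_factor, height * model_factor
    scale = min(1.0, target_long_side / max(up_w, up_h))  # never enlarge further
    return round(up_w * scale), round(up_h * scale)

print(size_after_model_upscale(512, 512))  # (1536, 1536)
print(size_after_model_upscale(832, 512))
```

The resulting size is what you would feed into an image resize node before VAE Encode and the second sampling pass.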
I have an issue with the preview image.

I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem. Do I scale in latent space, do detailing on regions, and what in which order?

First of all, there's a 'heads-up display' (top left) that lets you cancel the Image Choice without finding the node (plus it lets you know that you are paused!). This allows you to keep sending the latent/image to "Image Receiver ID 1" until you get something painted the way you want.

It looked like IP-Adapters might…

It will output width/height, which you pass to the Empty Latent (with width/height converted to inputs).

Explore its features, templates and examples on GitHub.

Every Sampler node (the step that actually generates the image) in ComfyUI requires a latent image as an input.

You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass. Overall:
- Image upscale is less detailed, but more faithful to the image you upscale.
- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

Then use SD Upscale to split it into tiles and denoise each one using your parameters; that way you will get a grid with your images. A denoising strength of 0.5 will keep you quite close to the original image and rebuild the noise caused by the latent upscale.

Latent quality is better, but the final image deviates significantly from the initial generation. With this method, you can upscale the image while also preserving the style of the model.

Note that if the input image is not divisible by 16 (or 32 with SDXL models), the output image will be slightly blurry.

In the provided sample image from ComfyUI_Dave_CustomNode, the Empty Latent Image node features inputs that somehow connect width and height from the MultiAreaConditioning node in a very elegant fashion.

So far I've made my own image-to-image and upscaling workflows.
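The note above about inputs not divisible by 16 (or 32 for SDXL) causing blur suggests snapping dimensions before encoding. A small helper, assuming you simply round to the nearest allowed multiple (the name is illustrative):

```python
def snap_to_multiple(value, multiple=16):
    """Round a pixel dimension to the nearest multiple, never below one step."""
    return max(multiple, round(value / multiple) * multiple)

w, h = snap_to_multiple(1023), snap_to_multiple(541)
print(w, h)  # 1024 544
```

You would run both width and height through this (with `multiple=32` for SDXL) before creating the latent or resizing an input image.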
Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and vertical lines, and blurring). Is there anything I can do…

You just need to input the latent transformed by VAEEncode, instead of an Empty Latent, into the KSampler.

You can effectively do an img2img by taking a finished image and doing VAE Encode -> KSampler -> VAE Decode -> Save Image, assuming you want a sort of loopback thing.

Mar 22, 2024 · You have two different ways you can perform a "Hires Fix" natively in ComfyUI: Latent Upscale, or an Upscaling Model. You can download the workflows over on the Prompting Pixels website.

Seeing an image Unsampler'ed and then resampled back to the original image was great. I haven't been able to replicate this in Comfy.

I have a workflow I use fairly often where I convert or upscale images using ControlNet. I feed the latent from the first pass into sampler A with conditioning on the left-hand side of the image (coming from LoRA A), and sampler B with right-side conditioning (from LoRA B).

I'm looking for help making or stealing a template with a very simple load-the-image, mask, insert-prompt, inpainted-output-image flow.

Note that this extension fails to do what it is supposed to do a lot of the time.

It doesn't look like the KSampler preview window.

Oct 21, 2023 · This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, encode the image back into the latent space, and perform the sampler pass.

I believe he does; the seed is fixed, so ComfyUI skips the processes that have already executed.

I haven't tried just passing Turbo on top of Turbo though.

Best way to upscale an anime village scene image to 7168 × 4096 with ComfyUI? I've so far achieved this with Ultimate SD Upscale and the 4x-Ultramix_restore upscale model.

Upscaling latent is fast (you skip decode + encode), but garbles up the image somewhat.
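The Oct 21, 2023 method above (decode, model-upscale, re-encode, second sampler pass) contrasts with naive latent upscaling, which only expands latent data. A toy nearest-neighbour latent upscale in NumPy shows why the naive route garbles: each latent cell is just duplicated, with no new detail added (this is an illustration of the principle, not any node's actual code):

```python
import numpy as np

def latent_upscale_nearest(latent, factor=2):
    """Nearest-neighbour expansion of a (batch, channels, h, w) latent.
    This adds no information, which is why a follow-up sampling pass at
    a fairly high denoise is needed to repair the result."""
    return latent.repeat(factor, axis=2).repeat(factor, axis=3)

latent = np.random.randn(1, 4, 64, 64)
up = latent_upscale_nearest(latent)
print(up.shape)  # (1, 4, 128, 128)
```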
Belittling their efforts will get you banned.

(a) Input Image -> VAE Encode -> Unsampler (back to step 0) -> inject this noise into a latent
(b) Empty Latent -> inject noise into this latent

I have a ComfyUI workflow that produces great results.

A lot of people are just discovering this technology and want to show off what they created.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

Input your batched latent and VAE.

I recently switched to ComfyUI from AUTOMATIC1111, and I'm having trouble finding a way of changing the batch size within an img2img workflow.

Hi, guys.

If you have previously generated images you want to upscale, you'd modify the HiRes workflow to include the img2img nodes.

The quality of the image seems decent in 4 steps.

Here's a very bad workaround that I haven't tried myself yet, because I just thought about it now while taking a dump and reading your question: create a 1-step new giant image filled with latent noise.

Hi, I'm still learning Stable Diffusion and ComfyUI, and I connected the latent output from Cascade KSampler B to the latent input of the SDXL KSampler.

There is making a batch using the Empty Latent Image node's batch_size widget, and there is making a batch in the control panel.

Now this does "work", and at no time are both LoRAs loaded into the same model.

Do the same comparison with images that are much more detailed, with characters and patterns.

Please keep posted images SFW.

…0.7+ denoising, so all you get is the basic info from it.

2 options here.
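The (a)/(b) recipe above injects Unsampler-recovered noise into a latent. The injection step is commonly a linear blend; here is a minimal sketch (the blend formula is an assumption about how such an "inject noise" step works, not the code of any specific node):

```python
import numpy as np

def inject_noise(latent, noise, strength):
    """Blend noise into a latent: strength 0.0 keeps the latent untouched,
    strength 1.0 replaces it entirely with the injected noise."""
    return (1.0 - strength) * latent + strength * noise

latent = np.zeros((1, 4, 8, 8))
noise = np.random.randn(1, 4, 8, 8)
half = inject_noise(latent, noise, 0.5)
```

With route (a) the `noise` comes from unsampling a real image, so resampling walks back toward that image; with route (b) it seeds an otherwise empty latent.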
It took me hours to get one I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want them to, so I use mask-to-image, blur the image, then image-to-mask), 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part), and

Using "batch_size" as part of the latent creation (say, using ComfyUI's `Empty Latent Image` node), or simply executing the prompt multiple times, either by smashing the "Queue Prompt" button multiple times in ComfyUI, or changing the "Batch count" in the "extra options" under the button.

Now I have some cool images; I want to make a few corrections to certain areas by masking.

The Empty Latent Image will run however many you enter through each step of the workflow.

The resolution is okay, but if possible I would like to get something better.

After that I send it through a FaceDetailer and an Ultimate SD Upscale.

Which is super useful if you intend to further process the latent (like putting it through an SDXL refiner pipeline to get more details at a higher resolution than you could with image upscaling).

Sep 7, 2024 · Img2Img Examples.

On a latent image node you can say how many images are in a batch (not usually what you want), and in the "extended" options on the "generate" dialog there is a number-of-images-in-the-batch setting, or (what I use most often, which automatic1111 doesn't have) repeat indefinitely.

Img2Img works by loading an image like this example image, converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0.

Upscaling images is more general and robust, but latent can be an optimization in some situations.

This will allow for destruction-free editing down the road.

With the LCM sampler on the SD1.5 side and latent upscale, I can produce some pretty high-quality and detailed photoreal results at 1024px with total combined steps of 4 to 6, with CFG at 2.
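The difference between `batch_size` and queueing the prompt several times comes down to how noise is drawn: a batch pulls all of its initial noise from one seeded stream, while separate queue runs each start a fresh one. A NumPy illustration, using `default_rng` as a stand-in for the sampler's noise source:

```python
import numpy as np

def batch_noise(seed, batch_size, shape=(4, 8, 8)):
    """Draw initial noise for a whole batch from a single seeded stream."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((batch_size, *shape))

# Image 0 of a batch matches a single-image run on the same seed...
assert np.allclose(batch_noise(1, 4)[0], batch_noise(1, 1)[0])
# ...but image 2 of that batch is NOT what some other seed would give you.
assert not np.allclose(batch_noise(1, 4)[2], batch_noise(3, 1)[0])
```

This is why reusing a batch image's metadata seed regenerates the batch's first image rather than the one you dropped in: the seed only pins the start of the stream.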
(All black gives nice rich colors and more dramatic lighting; all white is good for a very light, styled image; a spotlight of white fading to black at the edges encourages a bright center and a darker outer image, etc.)

The second section resizes the latent image to one of the appropriate SDXL sizes, labeled for the (approximate) aspect ratio.

But the only thing I'm getting is a grey image.

The problem I have is that the mask seems to "stick" after the first inpaint.

Batch index counts from 0 and is used to select a target in your batched images; length defines the amount of images after the target to send ahead.

Both of these are of similar speed.

ComfyUI is a powerful and modular GUI for diffusion models with a graph interface.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

To create a new image from scratch you input an Empty Latent Image node, and to do img2img you use a Load Image node and a VAE Encode to load the image and convert it into a latent image.

When I change my model in the checkpoint to "anything-v3-fp16-pruned.safetensors" I can view the image clearly.

First I passed the Cascade latent output to a latent upscaler set to 0.

It's based on the wonderful example from Sytan, but I un-collapsed it and removed upscaling to make it very simple to understand.

Also, if this is new and exciting to you, feel free to post.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution.

At the moment I generate my image with a detail LoRA at 512 or 786 to avoid weird generations; I then latent upscale them by 2 with nearest and run them with 0.3 denoise.

The best method I…

Because, as I recently found out the hard way, a batch count of 3 and a fixed seed of 1 doesn't output images from seeds 1, 2 and 3, but images from seed 1, an unknown seed, and an unknown seed.

These are examples demonstrating how to do img2img.
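The "second section" described above resizes the latent to an appropriate SDXL size by aspect ratio. A sketch of that selection, using a commonly circulated (but here assumed) list of SDXL-friendly resolutions:

```python
# Commonly quoted SDXL-friendly resolutions (assumed list; extend as needed).
SDXL_SIZES = [
    (1024, 1024), (1152, 896), (896, 1152),
    (1216, 832), (832, 1216), (1344, 768), (768, 1344),
]

def closest_sdxl_size(width, height):
    """Pick the SDXL resolution whose aspect ratio best matches the input."""
    target = width / height
    return min(SDXL_SIZES, key=lambda wh: abs(wh[0] / wh[1] - target))

print(closest_sdxl_size(1920, 1080))  # (1344, 768)
```

The chosen pair would feed the width/height inputs of an Empty Latent Image or a latent resize node.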
First thing you need to do is stop the generation midway or later: like, if you have 40 steps, instruct the sampler to stop at 29. Then you upscale the unfinished photo (either as a latent or as an image; I found that it's better to upscale it as an image and re-encode it as a new latent), feed it to a new sampler, and instruct it to continue the generation.

What's worked better for me is running the SDXL image through a VAE encoder and then upscaling the latent before running it through another KSampler that harnesses SD1.5.

I modified this to something that seems to work for my needs, which is basically as follows. I want to upscale my image with a model, and then select the final size of it.

The denoise controls the amount of noise added to the image.

Latent upscalers are pure latent-data expanders and don't do pixel-level interpolation like image upscalers do.

Images are too blurry and lack details; it's like upscaling any regular image with some traditional methods.

It's using IP-Adapter to encode the images to start and end on, and then using AnimateDiff to interpolate.

I find if it's below 0.5 for latent upscale you can get issues; I tend to use a 4x UltraSharp image upscale and then re-encode back through a KSampler at the higher resolution with a 0.3 denoise; takes a bit longer but gives more consistent results than latent upscale.
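The stop-at-step trick at the top of this section (40 steps, stop at 29, upscale, continue) maps directly onto a denoise fraction for the second sampler. A tiny helper for the arithmetic (the name is mine):

```python
def remaining_denoise(total_steps, stop_at):
    """Fraction of denoising still to do when a run is interrupted at
    step `stop_at` out of `total_steps`."""
    return (total_steps - stop_at) / total_steps

print(remaining_denoise(40, 29))  # 0.275
```

So after upscaling the unfinished image and re-encoding it, the continuation sampler would run with roughly that denoise to finish the remaining steps.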