Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. A node system is a way of designing and executing complex Stable Diffusion pipelines using a visual flowchart, and ComfyUI gives you full freedom and control over that pipeline. In this guide I will try to help you with starting out and give you some starting workflows to work with.

From here on we cover the basics of using ComfyUI. Its screen works quite differently from other tools, so it may be confusing at first, but once you get used to it, it is very convenient and well worth mastering.

The models discussed here are the TencentARC T2I-Adapters for ControlNet (see the T2I-Adapter research paper), converted to safetensors. We find the usual suspects over there (depth, canny, etc.), along with T2I style, CN Shuffle, and Reference-Only CN. Style models can be used to provide a diffusion model a visual hint as to what kind of style the denoised latent should be in, and the depth checkpoint provides conditioning on depth for the StableDiffusionXL checkpoint.

Newcomers often report: "I have been trying to make the transition to ComfyUI but have had an issue getting ControlNet working." Frequently the culprit is the UI extension made for ControlNet being suboptimal for Tencent's T2I-Adapters; for a while it was not possible to use them in ComfyUI at all due to a mismatch with the LDM model (the author was engaging with @comfy to see whether any headroom could be made there). With sd-webui-controlnet 1.1.400, developed for webui 1.6 and beyond, there are plenty of new opportunities for using ControlNets and sister models in A1111 as well.

Steps to leverage the Hires Fix in ComfyUI: start by loading the example images into ComfyUI to access the complete workflow. Note that --force-fp16 will only work if you installed the latest PyTorch nightly.

To install a custom node, drop it into your ComfyUI_windows_portable\ComfyUI\custom_nodes folder and select the node from the node list (for example, the Image Processing nodes). For masking, right-click the image in a Load Image node and there should be an "Open in MaskEditor" option. The Load Image (as Mask) node can be used to load a channel of an image to use as a mask; for text-prompted masking, the ComfyUI-CLIPSeg custom node is a prerequisite.
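To make "load a channel of an image as a mask" concrete, here is a minimal sketch using plain Pillow and NumPy. This is an illustration of the idea behind the Load Image (as Mask) node, not ComfyUI's actual implementation, and the file name is a placeholder:

```python
import numpy as np
from PIL import Image

def channel_as_mask(path: str, channel: str = "A") -> np.ndarray:
    """Load one channel of an image as a float mask in [0, 1].

    Mirrors the idea behind ComfyUI's Load Image (as Mask) node:
    pick a single channel (R, G, B, or A) and treat it as a mask.
    """
    img = Image.open(path).convert("RGBA")
    channels = dict(zip("RGBA", img.split()))
    mask = np.asarray(channels[channel], dtype=np.float32) / 255.0
    return mask

mask = channel_as_mask("input.png", channel="A")  # "input.png" is hypothetical
print(mask.shape, mask.min(), mask.max())
```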
Recently a brand-new kind of control model called the T2I-Adapter was released by TencentARC for Stable Diffusion, and the sd-webui-controlnet extension has added support for several control models from the community. Each T2I checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint. A T2I-Adapter is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training a full copy of it. This also affects speed: for the T2I-Adapter the model runs once in total, not once per sampling step. These are optional files, producing similar results to the official ControlNet models, but with added Style and Color functions; hopefully inpainting support comes soon. (thibaud_xl_openpose also runs in ComfyUI and recognizes hand and face keypoints, but it is extremely slow.) A common question, accordingly, is how to use ComfyUI's ControlNet and T2I-Adapter support with SDXL 0.9.

The interface follows closely how SD works, and the code should be much simpler to understand than other SD UIs. ComfyUI's image composition capabilities allow you to assign different prompts and weights, even using different models, to specific areas of an image. For CLIPSeg-based region workflows, set a blur on the segments created. Understanding the underlying concept of the Hires Fix: its core principle lies in upscaling a lower-resolution image before its conversion via img2img. For some workflow examples and to see what ComfyUI can do, check out the ComfyUI Examples; a Japanese-language starting point is the guide "ComfyUI : ノードベース WebUI 導入&使い方ガイド" (a node-based WebUI installation and usage guide), and there are collections such as the AnimateDiff ComfyUI workflows and the "SDXL ComfyUI ULTIMATE Workflow".

Environment setup: Step 1 is to install 7-Zip (Step 2, downloading ComfyUI, is covered below). Install the ComfyUI dependencies, then run the install script. A container-based route exists too, but that method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install. Useful companions include ComfyUI's ControlNet Auxiliary Preprocessors and extensions that enhance ComfyUI with features like autocomplete filenames, dynamic widgets, node management, and auto-updates.

SDXL 1.0 allows you to generate images from text instructions written in natural language (text-to-image). Create photorealistic and artistic images using SDXL, and fine-tune and customize your image generation models using ComfyUI. Have fun; for example: "award winning photography, a cute monster holding up a sign saying SDXL, by pixar". Finally, prompt editing is supported with the [a:b:step] syntax, which replaces a by b at the given step.
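To make that syntax concrete, here is a small self-contained sketch (my own illustration, not code from any of the UIs mentioned) that resolves which prompt text is active at a given sampling step:

```python
import re

def resolve_prompt(prompt: str, step: int) -> str:
    """Resolve [a:b:N] prompt editing for a given step.

    Before step N the bracket yields "a"; from step N onward it yields "b".
    """
    pattern = re.compile(r"\[([^:\[\]]*):([^:\[\]]*):(\d+)\]")

    def swap(match: re.Match) -> str:
        a, b, n = match.group(1), match.group(2), int(match.group(3))
        return (a if step < n else b).strip()

    return pattern.sub(swap, prompt)

prompt = "a photo of a [cat:tiger:10] in the snow"
for step in (0, 9, 10, 29):
    print(step, "->", resolve_prompt(prompt, step))
```

Steps 0 and 9 print the "cat" version, steps 10 and 29 the "tiger" version, matching the rule above.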
Users sometimes ask: "I used the ControlNet T2I-Adapter style model and something went wrong; what happened?" Keep in mind that the style adapter is all or nothing, with no further options (although you can set the strength). In node terms, the Apply Style Model node outputs a CONDITIONING containing the T2I style. Older adapter checkpoints may also need conversion: once the keys are renamed to ones that follow the current T2I-Adapter standard, the model should work in ComfyUI.

On architecture, the T2I-Adapter paper describes the overall design as composed of two parts: 1) a pre-trained Stable Diffusion model with fixed parameters, and 2) several proposed T2I-Adapters trained to align internal knowledge in T2I models with external control signals. This is also why the speed comparison above holds: in the case you want to generate an image in 30 steps, a ControlNet runs at every one of those steps, while the T2I-Adapter runs once in total.

Installation: follow the ComfyUI manual installation instructions for Windows and Linux. There is an install.bat you can run to install into the portable build if it is detected, and update/update_comfyui.bat (on the standalone build) keeps it current; the converted adapters work in ComfyUI now, just make sure you update first. ComfyUI uses a workflow system to run Stable Diffusion's various models and parameters, somewhat like a desktop application. For containers, a CUDA base image along the lines of a cudnn8-runtime-ubuntu22.04 tag can be used.

A good place to start if you have no idea how any of this works: all the images in the examples repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. Images can be uploaded by opening the file dialog or by dropping an image onto the node. I leave you the link where the models are located (in the Files tab); you download them one by one. The examples cover Area Composition, Noisy Latent Composition, ControlNets and T2I-Adapter, GLIGEN, unCLIP, SDXL, Model Merging, and LCM, and the Node Guide (WIP) documents what each node does, including the Apply ControlNet and Apply Style Model conditioning nodes.

The equivalent of "batch size" can be configured in different ways depending on the task. For large stills, naive upscaling gives noticeable grid seams and artifacts like faces being created all over the place, even at 2x upscale; a tiled sampler allows for denoising larger images by splitting them up into smaller tiles and denoising those. For video, the same idea divides frames into smaller batches with a slight overlap.
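Here is what "smaller tiles with a slight overlap" means in practice: a small NumPy sketch of the general technique (not the actual tiled-sampler code) that computes overlapping tile windows and averages the overlaps so the grid seams soften:

```python
import numpy as np

def tile_windows(size: int, tile: int, overlap: int):
    """Return (start, end) windows covering `size`, sharing `overlap` pixels."""
    stride = tile - overlap
    starts = list(range(0, max(size - tile, 0) + 1, stride))
    if starts[-1] + tile < size:           # make the last tile reach the edge
        starts.append(size - tile)
    return [(s, s + tile) for s in starts]

h, w, tile, overlap = 1024, 1536, 512, 64
acc = np.zeros((h, w), dtype=np.float32)     # accumulated result
weight = np.zeros((h, w), dtype=np.float32)  # how many tiles touched each pixel

for y0, y1 in tile_windows(h, tile, overlap):
    for x0, x1 in tile_windows(w, tile, overlap):
        # Stand-in for a denoised tile coming back from the sampler.
        denoised_tile = np.ones((tile, tile), dtype=np.float32)
        acc[y0:y1, x0:x1] += denoised_tile
        weight[y0:y1, x0:x1] += 1.0

result = acc / weight  # averaging the overlapping regions blends the seams
```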
The T2I-Adapter network provides supplementary guidance to pre-trained text-to-image models such as the text-to-image SDXL model from Stable Diffusion. A T2I-Adapter is, in short, a network providing additional conditioning to Stable Diffusion, and the Apply ControlNet node can be used to provide further visual guidance to a diffusion model. While some areas of machine learning and generative models are highly technical, this manual shall be kept understandable by non-technical users.

Learn how to use Stable Diffusion SDXL 1.0 to create AI artwork, and apply your skills to various domains such as art, design, entertainment, and education. A worked example: [ SD15 - Changing Face Angle ] uses T2I plus ControlNet to adjust the angle of a face. For animation, AnimateDiff makes it easy to create short clips, but reproducing the composition you want from prompts alone is difficult; combining it with ControlNet, familiar from still-image generation, makes the intended animation much easier to reproduce (the preparation needed to use AnimateDiff and ControlNet together in ComfyUI is covered below). The AnimateDiff workflow collections encompass QR code, interpolation (2-step and 3-step), inpainting, IP Adapter, Motion LoRAs, prompt scheduling, ControlNet, and vid2vid; to modify the trigger number and other settings, utilize the SlidingWindowOptions node. In Part 3 we will add an SDXL refiner for the full SDXL process.

Two tuning notes: in FreeU, s1 and s2 scale the intermediate values coming from the input blocks that are concatenated to the output blocks, while b1 and b2 multiply half of the intermediate values coming from the previous blocks of the UNet. And because ComfyUI caches node outputs, if you fix the seed in the txt2img KSampler and repeatedly generate while adjusting the hires-fix stage, processing restarts from the changed hires-fix KSampler, so you can see it running efficiently.

To set up from source, git clone the repo and install the requirements; a Colab notebook (notebooks/comfyui_colab.ipynb) is provided, and ComfyUI Manager can handle the rest. ComfyUI remains a powerful and modular Stable Diffusion GUI and backend whose interface lets users design and execute intricate pipelines.

SDXL conditioning models are arriving quickly. SargeZT has published the first batch of ControlNet and T2I models for XL on Hugging Face, covering SDXL 1.0 Depth Vidit, Depth Faid Vidit, Depth, Zeed, Seg, Segmentation, and Scribble; although they are not yet perfect (his own words), you can use them and have fun. TencentARC, in turn, release T2I-Adapter-SDXL models for sketch, canny, lineart, openpose, depth-zoe, and depth-mid, and initial code exists to make T2I-Adapters work in SDXL with Diffusers; a training script is also included.
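As a concrete starting point, here is a sketch of the diffusers route for the T2I-Adapter-SDXL models. It follows the integration announced by the diffusers team; the repo ids and argument names below match the public release as I recall it, but treat them as assumptions and check the current diffusers documentation:

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load the canny T2I-Adapter and plug it into the SDXL base model.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")

# A precomputed canny edge map; the file name is a placeholder.
canny = load_image("canny_edges.png")

image = pipe(
    "award winning photography, a cute monster holding up a sign saying SDXL",
    image=canny,
    num_inference_steps=30,
    adapter_conditioning_scale=0.8,  # strength of the adapter guidance
).images[0]
image.save("out.png")
```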
Some node and model reference notes. The Load Image node also outputs a MASK: if there is no alpha channel, an entirely unmasked MASK is outputted. The ControlNet detect-map will be cropped and re-scaled to fit inside the height and width of the txt2img settings. CLIPSegDetectorProvider is a wrapper that enables the use of the CLIPSeg custom node as the BBox detector for FaceDetailer. Community extensions from the Chinese-language ecosystem include sd-webui-comfyui, an extension for Automatic1111's stable-diffusion-webui that embeds ComfyUI into a tab of its own; Advanced CLIP Text Encode, which contains two ComfyUI nodes allowing finer control over how prompt weights are interpreted and letting you mix different embedding methods; and the AIGODLIKE-ComfyUI custom nodes.

On the model side, TencentARC announced: "We collaborate with the diffusers team to bring the support of T2I-Adapters for Stable Diffusion XL (SDXL) in diffusers! It achieves impressive results in both performance and efficiency." T2I-Adapter is a condition-control solution that allows for precise control and supports multiple input guidance models; these are also used exactly like ControlNets in ComfyUI. Many of the new models are related to SDXL, with several models for Stable Diffusion 1.5 as well (for example coadapter-canny-sd15v1), and Safetensors/FP16 versions of the new ControlNet-v1-1 checkpoints are available. Only T2I-Adapter style models are currently supported for the style function. Watch the preprocessor settings too, e.g. the high/low thresholds on canny, and note that the Depth and Zoe-depth preprocessors are named almost the same; in the ZoeDepth codebase, models are defined under the models/ folder as models/<model_name>_<version>, with the single-metric-head models (Zoe_N and Zoe_K from the paper) sharing a common definition. A recent ComfyUI Weekly Update brought better memory management, Control LoRAs, ReVision, and T2I adapters for SDXL.

Hardware and setup: ComfyUI checks what your hardware is and determines what is best, but you can force it to do whatever you want by adding flags on the command line; for users with GPUs that have less than 3GB of VRAM, ComfyUI offers a low-VRAM mode. Launch ComfyUI by running python main.py --force-fp16. The direct-download standalone build only works for NVIDIA GPUs. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in ComfyUI\models\checkpoints. How do I share models between another UI and ComfyUI? The stock answer is ComfyUI's extra-model-paths configuration file (extra_model_paths.yaml). When embedding ComfyUI into another Python program, the pattern is to strip ComfyUI's own args and prepend the comfyui directory to sys.path. (One performance mystery from the webui world was eventually traced the same way: in the end, it turned out Vlad's fork enabled by default an optimization that wasn't enabled by default in Automatic1111.) ComfyUI Manager is a plugin for ComfyUI that helps detect and install missing plugins; this tool can save a significant amount of time and is good for prototyping.

Running on Colab: it will download all models by default, so consider storing ComfyUI on Google Drive instead of on the Colab instance. The notebook exposes options such as USE_GOOGLE_DRIVE, UPDATE_COMFY_UI, and updating the WAS Node Suite, as shown in the sketch after this paragraph.
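The Colab options referenced above come from a form cell along these lines. This is a reconstruction of the fragment quoted in the original notes; the OPTIONS bookkeeping at the bottom is my assumption about how the notebook consumes the flags:

```python
# Colab form cell: toggles for the ComfyUI notebook.
OPTIONS = {}

USE_GOOGLE_DRIVE = False       #@param {type:"boolean"}
UPDATE_COMFY_UI = True         #@param {type:"boolean"}
UPDATE_WAS_NODE_SUITE = False  #@param {type:"boolean"}
WORKSPACE = 'ComfyUI'

OPTIONS['USE_GOOGLE_DRIVE'] = USE_GOOGLE_DRIVE
OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI
OPTIONS['UPDATE_WAS_NODE_SUITE'] = UPDATE_WAS_NODE_SUITE
```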
ComfyUI Guide: Utilizing ControlNet and T2I-Adapter. If you're running on Linux, or a non-admin account on Windows, you'll want to ensure that /ComfyUI/custom_nodes, ComfyUI_I2I, and ComfyI2I have write permissions. If an existing portable install is detected, the bundled install.bat installs there; otherwise it will default to the system Python and assume you followed ComfyUI's manual installation steps. When reorganizing model folders, something like mv checkpoints checkpoints_old keeps the old files out of the way. DirectML covers AMD cards on Windows. On Colab, run ComfyUI with the Colab iframe (use this only in case the previous way with localtunnel doesn't work); you should see the UI appear in an iframe.

Node suites and extensions: WAS Node Suite is a node suite for ComfyUI with many new nodes, such as image processing, text processing, and more, and packs like Fizz Nodes extend things further. Beyond installing custom nodes, ComfyUI Manager provides a hub feature and convenience functions to access a wide range of information within ComfyUI; this project strives to positively impact the domain of AI, and along the way you can learn about the use of Generative Adversarial Networks and CLIP. Update to the latest ComfyUI and open the settings: the always-on grid and the line styles (default curve or angled lines) are available there as features. One maintainer note on precision issues (#3, #4, #5): "I have implemented the ability to specify the type when inferring, so if you encounter it, try fp32."

ComfyUI gets a fair amount of ridicule on socials because of its overly complicated workflows, and newcomer review posts from A1111 fans collect similar reactions and criticisms; even sympathetic users admit that the basic text-to-image workflow on the main page can look like far too much. On the other hand, after implementing T2I-Adapter support, comfyanonymous wrote: "after testing them out a bit I'm very surprised how little attention they get compared to controlnets." ComfyUI is still the most powerful and modular Stable Diffusion GUI and backend, with a graph/nodes interface; a showcase is ComfyUI with SDXL (Base+Refiner) + ControlNet XL OpenPose + FaceDefiner (2x). Yes, ComfyUI is hard.

For SDXL canny control you need "t2i-adapter_xl_canny.safetensors" from the link at the beginning of this post; the T2I-Adapter-SDXL Depth-Zoe model is available as well, and style models go into models/style_models (the folder ships with a put_t2i_style_model_here placeholder). The ControlNet input image will be stretched (or compressed) to match the height and width of the txt2img (or img2img) settings. T2I-Adapters are then used the same way as ControlNets in ComfyUI: using the ControlNetLoader node.
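Because adapters load through the same ControlNetLoader path, the easiest way to automate a ComfyUI adapter workflow is to export it with "Save (API Format)" (enable the dev mode options in the settings) and queue it over the local HTTP endpoint. A minimal sketch: the address and /prompt route are ComfyUI's standard local defaults, and the workflow file name is hypothetical; adjust if you changed --listen or --port:

```python
import json
import urllib.request

# A graph exported from ComfyUI with "Save (API Format)".
with open("t2i_adapter_canny_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",  # ComfyUI's default local address
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())      # the server replies with the queued prompt id
```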
A summary of all mentioned or recommended projects: ComfyUI, T2I-Adapter, stable-diffusion-webui (the Stable Diffusion web UI), stable-diffusion-ui (the easiest one-click way to install and use Stable Diffusion on your computer), and openpose-editor (an OpenPose editor for AUTOMATIC1111's stable-diffusion-webui). Large community workflows such as AP Workflow 6 bundle many of these techniques, and these workflows originate all over the web: on Reddit, Twitter, Discord, Hugging Face, GitHub, etc. All images in this post were created using ComfyUI + SDXL 0.9, and StabilityAI has shared official T2I-Adapter results in ComfyUI. The Sep 10, 2023 ComfyUI Weekly Update added DAT upscale model support and more T2I adapters; see also the section on preprocessing and ControlNet model resources. The tiled sampler discussed earlier lives in its own repo for ComfyUI.

One open style question: is there a way to omit the second picture altogether and only use the CLIP Vision style with a text prompt, i.e. CLIP Vision T2I with only a text prompt? Relatedly, the unCLIP Conditioning node can be used to provide unCLIP models with additional visual guidance through images encoded by a CLIP vision model. The motivation behind T2I-Adapter is exactly this kind of control: relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate controlling is needed. As one user put it: "I love the idea of finally having control over areas of an image for generating images with more precision, like ComfyUI can provide." IP-Adapter implementations exist across the ecosystem as well: IP-Adapter for ComfyUI (IPAdapter-ComfyUI or ComfyUI_IPAdapter_plus), IP-Adapter for InvokeAI (see the release notes), IP-Adapter for AnimateDiff prompt travel, and Diffusers_IPAdapter with more features such as supporting multiple input images (see the official Diffusers disclaimer). A Japanese author adds: right after ControlNet appeared and was implemented, T2I-Adapter was announced the very next day; since a pose collection for AI use exists on Memeplex, you can search it there and use a favorite pose or expression as the base for img2img or T2I-Adapter. (Inference, a reimagined native Stable Diffusion experience for any ComfyUI workflow, now ships in Stability Matrix and provides a browser UI for generating images from text prompts and images; if you want to open the Colab UI in another window, use the link.)

For animation: in A1111 I typically develop my prompts in txt2img, then copy the +/- prompts into Parseq, set up parameters and keyframes, then export those to Deforum to create animations. In ComfyUI, the vid2vid function reads in a batch of image frames or a video such as an mp4, applies ControlNet's Depth and OpenPose to generate a frame image for each, and creates a video from the generated frames; the output is a GIF or MP4. To launch the demo, run the following commands: conda activate animatediff, then python app.py.
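The sliding-window batching these video workflows rely on (surfaced earlier via the SlidingWindowOptions node) is easy to state in code. This is a generic illustration of overlapping frame batches, not the node's actual implementation:

```python
def frame_batches(num_frames: int, batch: int = 16, overlap: int = 4):
    """Split frame indices into overlapping batches.

    Consecutive batches share `overlap` frames so motion stays consistent
    across batch boundaries when the results are blended together.
    """
    stride = batch - overlap
    batches = []
    start = 0
    while start < num_frames:
        end = min(start + batch, num_frames)
        batches.append(list(range(start, end)))
        if end == num_frames:
            break
        start += stride
    return batches

for b in frame_batches(40, batch=16, overlap=4):
    print(b[0], "...", b[-1])   # 0...15, 12...27, 24...39
```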
A comment from an Apple user: "That's so exciting to me as an Apple hardware user! Apple's SD version is based on the diffusers work; it runs at about 12 seconds per image on 2 watts of energy using the Neural Engine. But it was behind and rigid (no embeddings, fat checkpoints, and so on)." Now we move on to the T2I adapter in ComfyUI itself. If you have another Stable Diffusion UI you might be able to reuse the dependencies. The extracted folder from the standalone download will be called ComfyUI_windows_portable; launch ComfyUI by running python main.py. Configuration lives in a .json file, and extensions add controls for gamma, contrast, and brightness. On the CoAdapter side, the fuser is the key building block: it allows different adapters with various conditions to be aware of each other and synergize to achieve more powerful composability, especially the combination of element-level style and other structural information. Note that not all diffusion models are compatible with unCLIP conditioning. (And the new AnimateDiff on ComfyUI supports unlimited context length: vid2vid will never be the same!)

The basics of ComfyUI, then, an admittedly unusual Stable Diffusion WebUI: with this node-based UI you can use AI image generation in a modular way. These files are custom workflows for ComfyUI, a super powerful node-based, modular interface for Stable Diffusion, and the community is always looking for helpful, innovative workflows that enhance people's productivity and creativity. To load a workflow, either click Load or drag the workflow onto ComfyUI; as an aside, any generated picture has the ComfyUI workflow attached as metadata, so you can drag any generated image into ComfyUI and it will load the workflow that was used to create it.
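Because the workflow rides along in the image metadata, you can also recover it programmatically. A small sketch with Pillow: ComfyUI embeds the graph in PNG text chunks, and the "workflow" and "prompt" key names below match current builds as far as I know, but treat them as an assumption and inspect img.info yourself if they are missing:

```python
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")  # hypothetical output file name

# PNG tEXt chunks land in img.info; ComfyUI stores two graphs there:
# "workflow" (the UI graph) and "prompt" (the executable API-format graph).
workflow = json.loads(img.info["workflow"])
prompt = json.loads(img.info["prompt"])

print(len(workflow.get("nodes", [])), "nodes in the embedded UI graph")
print(len(prompt), "nodes in the API-format graph")
```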