ComfyUI Workflow Examples (GitHub)


ComfyUI is a nodes/graph/flowchart interface for experimenting with and creating complex Stable Diffusion workflows without needing to write any code. For some workflow examples, and to see what ComfyUI can do, check out the ComfyUI Examples page.

ComfyUI Examples

This repo contains examples of what is achievable with ComfyUI. If you have another Stable Diffusion UI you might be able to reuse the dependencies.

Related projects:
- KitchenComfyUI: a reactflow-based Stable Diffusion GUI, as an alternative ComfyUI interface
- MentalDiffusion: Stable Diffusion web interface for ComfyUI
- CushyStudio: next-gen generative art studio (+ TypeScript SDK), based on ComfyUI
- ComfyUI Manager: plugin for ComfyUI that helps detect and install missing plugins
- ComfyUI-VideoHelperSuite: example workflows for the VideoHelperSuite nodes, including a normal audio-driven algorithm-inference workflow (a standard audio-driven video example, latest version); motion_sync extracts facial features directly from the video (with optional voice synchronization) while generating a PKL model for the reference video, and an older version is also available
- DeepFuze: a state-of-the-art deep learning tool that seamlessly integrates with ComfyUI for facial transformations, lipsyncing, face swapping, lipsync translation, video generation, and voice cloning
- kijai/ComfyUI-LivePortraitKJ: ComfyUI nodes for LivePortrait
- modal-labs/modal-examples: examples of programs built using Modal

FLATTEN: download this workflow file and load it in ComfyUI. Use the sdxl branch of this repo to load SDXL models; the loaded model only works with the Flatten KSampler, and a standard ComfyUI checkpoint loader is required for other KSamplers.

LCM: you can load this image in ComfyUI to get the workflow that shows how to use the LCM SDXL lora with the SDXL base model. The important parts are to use a low CFG and to use the "lcm" sampler with the "sgm_uniform" or "simple" scheduler.

GLIGEN Examples

Text box GLIGEN: the text box GLIGEN model lets you specify the location and size of multiple objects in the image. Here is a link to download pruned versions of the supported GLIGEN model files. Put the GLIGEN model files in the ComfyUI/models/gligen directory.

A CosXL Edit example workflow: comfyui-workflows/cosxl_edit_example_workflow.json at main · roblaughter/comfyui-workflows. Another workflow begins by using Bedrock Claude 3 to refine the image-editing prompt, generate a caption of the original image, and merge the two image descriptions into one; use natural language to generate variations of an image without re-describing the original image content.

Please consider a GitHub Sponsorship or PayPal donation (Matteo "matt3o" Spinelli). The more sponsorships, the more time I can dedicate to my open source projects; the only way to keep the code open and free is by sponsoring its development.

ComfyUI also has a mask editor that can be accessed by right-clicking an image in the LoadImage node and choosing "Open in MaskEditor".

Img2Img Examples

These are examples demonstrating how to do img2img. Img2Img works by loading an image (like this example image), converting it to latent space with the VAE, and then sampling on it with a denoise lower than 1.0. The denoise controls the amount of noise added to the image. Note that in ComfyUI txt2img and img2img are the same node: txt2img is achieved by passing an empty image to the sampler node with maximum denoise.
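The same idea can be sketched outside ComfyUI. Below is a minimal illustration using the diffusers library (an assumption for illustration only; ComfyUI does not use diffusers internally, and the model id and file names are placeholders). Here `strength` plays the role of ComfyUI's denoise:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load any SD 1.5-class checkpoint (placeholder model id).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("input.png").convert("RGB").resize((512, 512))

# strength < 1.0 keeps part of the input image's latent, exactly like
# setting denoise below 1.0 on a KSampler: lower values stay closer to
# the original image, while 1.0 ignores it entirely.
result = pipe(prompt="a watercolor cat", image=init, strength=0.6).images[0]
result.save("output.png")
```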
Always refresh your browser and click refresh in the ComfyUI window after adding models or custom_nodes.

SDXL Examples

Jun 30, 2023: My research organization received access to SDXL. ComfyUI seems to work with the stable-diffusion-xl-base-0.9 fine, but when I try to add in the stable-diffusion-xl-refiner-0.9, I run into issues. I have not figured out what this issue is about.

The SDXL base checkpoint can be used like any regular checkpoint in ComfyUI. Here is a workflow for using it: save this image, then load it or drag it onto ComfyUI to get the workflow.

Flux Schnell

For Flux Schnell you can get the checkpoint here and put it in your ComfyUI/models/checkpoints/ directory. Features: FLUX.1 ComfyUI install guidance, workflow, and example. [Last update: 01/August/2024] Note: you need to put the Example Inputs Files & Folders under the ComfyUI Root Directory\ComfyUI\input folder before you can run the example workflow.

The regular KSampler is incompatible with FLUX; instead, you can use the Impact/Inspire Pack's KSampler with Negative Cond Placeholder. (Aug 2, 2024: "Good, I used CFG but it made the image blurry; I used the regular KSampler node.")

More node packs: a ComfyUI custom node for MimicMotion (AIFSH/ComfyUI-MimicMotion). ComfyUI ControlNet aux: a plugin with preprocessors for ControlNet, so you can generate images directly from ComfyUI. A ComfyUI implementation of the ProPainter framework for video inpainting (daniabib/ComfyUI_ProPainter_Nodes). Please read the AnimateDiff repo README and Wiki for more information about how it works at its core.

Added new nodes that implement iterative mixing in combination with the SamplerCustom node from ComfyUI, which produces very clean output (no graininess). This new approach includes the addition of a noise masking strategy that may improve results further. See the documentation below for details, along with a new example workflow.

IPAdapter: I made this using the following workflow, with two images as a starting point from the ComfyUI IPAdapter node repository. Then I created two more sets of nodes, from Load Images to the IPAdapters, and adjusted the masks so that they would be part of a specific section in the whole image.

Video Examples: Image to Video

As of this writing there are two image-to-video checkpoints. Here are the official checkpoints for the one tuned to generate 14-frame videos and the one for 25-frame videos.

A CosXL Edit model takes a source image as input alongside a prompt, and interprets the prompt as an instruction for how to alter the image, similar to InstructPix2Pix.

Area Composition Examples

These are examples demonstrating the ConditioningSetArea node. Area composition with Anything-V3 + a second pass with AbyssOrangeMix2_hard. This image contains 4 different areas: night, evening, day, morning. You can load these images in ComfyUI to get the full workflow.

Scribble ControlNet

Here's a simple example of how to use controlnets; this example uses the scribble controlnet and the AnythingV3 model. The following images can be loaded in ComfyUI to get the full workflow.

AuraFlow: download aura_flow_0.safetensors and put it in your ComfyUI/checkpoints directory, then load up the following image in ComfyUI to get the workflow. The following is an older example for aura_flow_0.

Stable Cascade: for these examples I have renamed the files by adding stable_cascade_ in front of the filename, for example stable_cascade_canny.safetensors and stable_cascade_inpainting.safetensors.

The only important thing is that, for optimal performance, the resolution should be set to 1024x1024 or another resolution with the same number of pixels but a different aspect ratio.
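To find other resolutions with the same pixel budget, you can solve width x height = 1024 x 1024 for a given aspect ratio. A small illustrative script follows; the rounding to multiples of 64 is an assumption that matches common latent-size constraints, not something stated above:

```python
# Enumerate resolutions with roughly the same pixel count as 1024x1024.
target = 1024 * 1024
for aspect in (1.0, 4 / 3, 3 / 2, 16 / 9):
    # Round height and width to multiples of 64 (assumed constraint).
    height = round((target / aspect) ** 0.5 / 64) * 64
    width = round(target / height / 64) * 64
    print(f"{width}x{height} (aspect {aspect:.2f}, {width * height} pixels)")
```

This prints pairs such as 1152x896 and 1344x768, which stay close to the one-megapixel budget while changing the aspect ratio.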
Inpainting

Inpainting a cat with the v2 inpainting model. Inpainting a woman with the v2 inpainting model. It also works with non-inpainting models.

One of the embedded workflows uses the prompt: "no humans, animal focus, looking at viewer, anime artwork, anime style, key visual, vibrant, studio anime, highly detailed".

Jul 6, 2024: What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. You can construct an image generation workflow by chaining different blocks (called nodes) together. Some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

JoyCaption: try the example to recreate an image with JoyCaption and Flux, thanks to fancyfeast/joy-caption-pre-alpha. An example caption: "The image showcases a classical painting of the iconic Mona Lisa, known for its enigmatic smile and mysterious gaze. The artwork is characterized by Renaissance techniques with meticulous attention to detail in brushwork that gives it an aged appearance due to visible cracks on the surface indicating age or exposure over time. In terms of composition, she stands against a background..."

[2024/08/05] 🌩️ FLUX.1-dev has been supported.

A collection of Post Processing Nodes for ComfyUI, which enable a variety of cool image effects (EllangoK/ComfyUI-post-processing-nodes). PhotoMaker for ComfyUI (shiimizu/ComfyUI-PhotoMaker-Plus). The examples repository itself is comfyanonymous/ComfyUI_examples. 2024/03/28: Added ComfyUI nodes and workflow examples; Basic Workflow.

Dec 24, 2023: If there was a special trick to make this connection, he would probably have explained how to do it when he shared his workflow in the first post. I noticed that in his workflow image the Merge nodes had an option called "same". Perhaps there is not a trick, and this was working correctly when he made the workflow.

Serving toolkit: Inputs: websocket_url, the URL of the websocket you connect to (if you use the example it will be ws://localhost:8080). Outputs: Serving Config, a basic reference for this serving, used by the other nodes of this toolkit to get arguments and return images.

This is what the workflow looks like in ComfyUI: "A vivid red book with a smooth, matte cover lies next to a glossy yellow vase. The vase, with a slightly curved silhouette, stands on a dark wood table with a noticeable grain pattern." You can load this image in ComfyUI to get the full workflow.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. You can take many of the images you see in this documentation and drop them inside ComfyUI to load the full node structure.
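If you want to inspect that embedded workflow outside ComfyUI, a small sketch with Pillow works. ComfyUI typically stores the graph as JSON in PNG text chunks; the key names below are the usual ones, but treat them as assumptions:

```python
import json
from PIL import Image  # requires Pillow

img = Image.open("example.png")  # placeholder path to a ComfyUI-generated image
# ComfyUI usually writes two text chunks: "workflow" (the editor graph)
# and "prompt" (the API-format graph that was actually executed).
raw = img.info.get("workflow") or img.info.get("prompt")
if raw is None:
    print("No embedded workflow found.")
else:
    graph = json.loads(raw)
    # "workflow" JSON has a "nodes" list; "prompt" JSON is a dict of nodes.
    print(f"Embedded workflow with {len(graph.get('nodes', graph))} nodes")
```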
To review any workflow you can simply drop the JSON file onto your ComfyUI work area; also remember that any image generated with ComfyUI has the whole workflow embedded into itself. For example, load the .json workflow file from the C:\Downloads\ComfyUI\workflows folder. XNView, a great, light-weight and impressively capable file viewer, shows the workflow stored in the EXIF data (View → Panels → Information) and also has favorite folders to make moving and sorting images from ./output easier.

Workflow collections: common workflows and resources for generating AI images with ComfyUI, and a collection of ComfyUI workflow experiments and examples (diffustar/comfyui-workflow-collection). You can see examples, instructions, and code in this repository. Aug 1, 2024: For use cases please check out Example Workflows.

An All-in-One FluxDev workflow in ComfyUI combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. This workflow can use LoRAs and ControlNets, and enables negative prompting with the KSampler, dynamic thresholding, inpainting, and more. There is also a sample workflow for running CosXL Edit models, such as my RobMix CosXL Edit checkpoint.

Flux Controlnets: XLab and InstantX + Shakker Labs have released Controlnets for Flux. You can find the InstantX Canny model file here (rename it to instantx_flux_canny.safetensors for the example below), the Depth controlnet here, and the Union controlnet here. You can then load or drag the following image in ComfyUI to get the workflow. SD3 Controlnets by InstantX are also supported, and SD3 performs very well with the negative conditioning zeroed out, as in the SD3 Controlnet example.

Upscale Model Examples

Here is an example of how to use upscale models like ESRGAN. Put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. Here's a simple workflow in ComfyUI to do this with basic latent upscaling; for non-latent upscaling, here is an example of how the ESRGAN upscaler can be used for the upscaling step.

Video output notes: FFV1 will complain about an invalid container. You can ignore this; the resulting MKV file is readable. Additionally, if you want to use the H264 codec you need to download OpenH264 1.8.0 and place it in the root of ComfyUI (example: C:\ComfyUI_windows_portable).

Downloading a Model

Let's get started! If you're entirely new to anything Stable Diffusion-related, the first thing you'll want to do is grab a model checkpoint that you will use to generate your images. If you already have files (model checkpoints, embeddings, etc.), there's no need to re-download those.

Lora Examples

These are examples demonstrating how to use Loras. All LoRA flavours (Lycoris, loha, lokr, locon, etc.) are used this way.

In the positive prompt node, type what you want to generate (for example: high quality, best, etc.). In the negative prompt node, specify what you do not want in the output (for example: low quality, blurred, etc.). Then press "Queue Prompt" once and start writing your prompt; I also recommend enabling Extra Options -> Auto Queue in the interface. Experienced users can check my ComfyUI Advanced Understanding videos on YouTube, part 1 and part 2.
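Under the hood, a graph with those prompt nodes is plain data. Here is a minimal sketch of a txt2img graph in ComfyUI's API ("prompt") format, written as a Python dict. The node IDs, wiring, and checkpoint filename are illustrative assumptions; export a real one with the "Save (API Format)" option (available once dev mode is enabled in the settings) to see the exact structure:

```python
# A hypothetical minimal txt2img graph in ComfyUI's API ("prompt") format.
# Keys are node IDs; a value like ["1", 0] means "output slot 0 of node 1".
graph = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},  # placeholder file
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["1", 1], "text": "a cat, high quality, best"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["1", 1], "text": "low quality, blurred"}},
    "4": {"class_type": "EmptyLatentImage",  # empty latent = txt2img
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "example"}},
}
```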
Sytan SDXL ComfyUI: very nice workflow showing how to connect the base model with the refiner and include an upscaler. The workflow is the same as the one above but with a different prompt.

Hunyuan DiT Examples

Hunyuan DiT is a diffusion model that understands both English and Chinese. Download hunyuan_dit_1.2.safetensors and put it in your ComfyUI/checkpoints directory.

User question: Would it be possible to have an example workflow for ComfyUI? I have installed the node and it seems to work correctly, but I don't understand what input it needs; is it a single image?

More custom nodes: improved AnimateDiff integration for ComfyUI, as well as advanced sampling options dubbed Evolved Sampling, usable outside of AnimateDiff. ComfyUI nodes to crop before sampling and stitch back after sampling, which speeds up inpainting (lquesada/ComfyUI-Inpaint-CropAndStitch). Do you want to create stylized videos from image sequences and reference images? Check out ComfyUI-AnimateAnyone-Evolved, a GitHub repository that improves the AnimateAnyone implementation with pose support. Dynamic prompt expansion, powered by GPT-2 locally on your device (Seedsa/ComfyUI-MagicPrompt); here is a basic example of how to use it. A ComfyUI node of DTG (huchenlei/ComfyUI_DanTagGen). ComfyUI Unique3D: custom nodes that run AiuniAI/Unique3D inside ComfyUI (jtydhr88/ComfyUI-Unique3D).

Here's a quick example (workflow is included) of using a Lightning model. Quality suffers, but it's very fast, and I recommend starting with it, as faster sampling makes it a lot easier to learn what the settings do. As a reminder, you can save these image files and drag or load them into ComfyUI to get the workflow.

Nov 1, 2023: All the examples in SD 1.5 use the SD 1.5 trained models from CIVITAI or HuggingFace, as well as the gsdf/EasyNegative textual inversions (v1 and v2). You should install them if you want to reproduce the exact output from the samples (most examples use a fixed seed for this reason), but you are free to use any models!

Hardware notes: if you do not run ComfyUI locally, a non-GPU instance such as t3.small also works. If you want to run the FLUX.1 model, use at least g5.2xlarge for the fp8 version and at least g5.4xlarge for the fp16 version.

Installing ComfyUI

This guide is about how to set up ComfyUI on your Windows computer to run Flux.1. Follow the ComfyUI manual installation instructions for Windows and Linux and install the ComfyUI dependencies; there is now an install.bat you can run to install to portable if detected. Launch ComfyUI by running python main.py --force-fp16. Note that --force-fp16 will only work if you installed the latest pytorch nightly. Once loaded, go into the ComfyUI Manager and click Install Missing Custom Nodes; this should update and may ask you to click restart. Disconnect and connect again for updated group membership to take effect.
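Once the server is up (it listens on http://127.0.0.1:8188 by default), an API-format graph like the sketch in the previous section can also be queued programmatically. The following is a minimal sketch in the spirit of ComfyUI's bundled script examples; the port and file name are assumptions:

```python
import json
from urllib import request

# Load a graph that was exported with "Save (API Format)" (placeholder path).
with open("workflow_api.json", "r", encoding="utf-8") as f:
    graph = json.load(f)

# The /prompt endpoint expects {"prompt": <graph>}; the response contains a
# prompt_id that can later be used to poll /history for finished images.
payload = json.dumps({"prompt": graph}).encode("utf-8")
req = request.Request("http://127.0.0.1:8188/prompt", data=payload,
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))
```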
👏 Welcome to my ComfyUI workflow collection! To give something back, I have roughly put together a platform; if you have feedback, ideas for improvement, or want me to help implement some functionality, you can open an issue or email me at theboylzh@163.com.

Noisy Latent Composition

This example showcases the Noisy Latent Composition workflow. The value schedule node schedules the latent composite node's x position, and you can also animate the subject while the composite node is being scheduled. Here is an example: you can load this image in ComfyUI to get the workflow.

Node: Load Checkpoint with FLATTEN model. Loads any given SD1.5 checkpoint with the FLATTEN optical flow model.

ComfyUI fully supports SD1.x, SD2.x, SDXL, Stable Video Diffusion, Stable Cascade, SD3, and Stable Audio.

Controlnet examples: here is an example of how to use the Canny Controlnet, and here is an example of how to use the Inpaint Controlnet (the example input image can be found here). Here is the input image I used for this workflow. T2I-Adapter vs ControlNets.

BizyAir: [2024/08/23] 🌩️ BizyAir now supports the UltimateSDUpscale nodes upscale workflow. [2024/08/14] 🌩️ The BizyAir JoyCaption node has been released. Supported workflows include FLUX.1-dev Text to Image and FLUX.1-dev Image to Image.

More repositories: ComfyUI custom nodes for merge, grid (aka xyz-plot), and others (hnmr293/ComfyUI-nodes-hnmr). Mar 28, 2024: Dreamtalk for ComfyUI (hay86/ComfyUI_Dreamtalk). This is a custom node that lets you use TripoSR right from ComfyUI; TripoSR is a state-of-the-art open-source model for fast feedforward 3D reconstruction from a single image, collaboratively developed by Tripo AI and Stability AI. Front-end of ComfyUI, modernized (Comfy-Org/ComfyUI_frontend).

Hello, I'm wondering if the ability to read workflows embedded in images is connected to the workspace configuration. I've encountered an issue where, every time I try to drag PNG/JPG files that contain workflows into ComfyUI (including examples from new plugins and unfamiliar PNGs that I've never brought into ComfyUI before), I receive a notification stating that the workflow cannot be read.

If you're running on Linux, or on a non-admin account on Windows, you'll want to ensure that /ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions.
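A quick way to check this is sketched below; the paths are assumptions, so adjust them to match your install:

```python
import os

# Paths assumed relative to the directory containing your ComfyUI install.
paths = [
    "ComfyUI/custom_nodes",
    "ComfyUI/custom_nodes/comfyui_controlnet_aux",
]
for p in paths:
    if not os.path.isdir(p):
        print(f"{p}: missing")
    else:
        # os.access checks the effective permissions of the current user.
        print(f"{p}: writable={os.access(p, os.W_OK)}")
```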
