The user wants a ComfyUI workflow for generating high-quality images with Flux 2 Klein. The documentation provided references a "ComfyUI Flux.2 Klein 4B Guide", which describes it as an efficient 4B-parameter image generation model.
So Flux.2 Klein is an efficient 4B-parameter text-to-image model. I need to create a workflow that:
1. Loads the Flux 2 Klein model (likely using UNETLoader or CheckpointLoader)
2. Sets up text encoding for prompts
3. Uses a sampler appropriate for Flux models
4. Decodes the latent to an image
5. Saves the image
Since Flux models typically use specific sampling methods and may require specific conditioning, I should create a standard text-to-image workflow adapted for Flux 2 Klein.
Key components needed:
- CheckpointLoaderSimple or UNETLoader (depending on how Flux 2 Klein is loaded)
- CLIPTextEncode (Flux uses CLIP-L and T5XXL, but Klein might be simplified)
- EmptyLatentImage
- KSampler (or BasicScheduler + SamplerCustomAdvanced for better control)
- VAEDecode
- SaveImage
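Wired together, that chain would look roughly like this in ComfyUI's API prompt format, expressed as a Python dict. This is a sketch only: the checkpoint filename is a placeholder, and the sampler settings mirror the values I settle on further below.

import json

# Minimal text-to-image chain in ComfyUI's API prompt format (the dict POSTed to /prompt).
# Sketch only: "flux2Klein4B.safetensors" is a placeholder filename; sampler settings
# mirror the values chosen later in this plan.
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "flux2Klein4B.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "a high-quality, detailed photograph"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": ""}},  # Flux largely ignores negative prompts
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 0, "steps": 20, "cfg": 1.0,
                     "sampler_name": "euler", "scheduler": "simple", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "Flux2Klein_Output"}},
}
print(json.dumps(prompt, indent=2))

This is the runtime (API) format; the deliverable below uses the graph format saved by the ComfyUI canvas, which additionally carries node positions, sizes, and widget value arrays.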
Given that Flux 2 Klein is a 4B model, it's likely a distilled/efficient variant that should work with the standard KSampler, but with Flux-specific settings (e.g. the euler sampler with the simple scheduler and a low CFG scale).
Let me construct a proper workflow JSON with:
1. CheckpointLoaderSimple (loading the Flux 2 Klein checkpoint)
2. Two CLIPTextEncode nodes (for the positive and negative prompts - Flux typically doesn't use negative prompts, but I'll include one for completeness or leave it empty)
3. EmptyLatentImage (for the initial noise)
4. KSampler (with appropriate settings for Flux - typically 20-28 steps, CFG around 1.0-3.5, the euler sampler with the simple scheduler)
5. VAEDecode
6. SaveImage
Actually, looking at the documentation structure, Flux models often use:
- UNETLoader for the model
- DualCLIPLoader for the text encoders (CLIP_L and T5_XXL)
- VAELoader for the VAE
But since Flux 2 Klein may well ship as a single combined checkpoint, I'll use CheckpointLoaderSimple for simplicity; if it turns out to be distributed as separate diffusers-format files, the separate loaders would be swapped in instead, as sketched below.
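A hypothetical sketch of that separate-loader stage (every filename here is a guess at typical Flux component names and is not confirmed for Klein; the DualCLIPLoader pairing assumes the usual CLIP-L + T5-XXL setup):

# Hypothetical loading stage if Flux 2 Klein ships as separate diffusers-style files.
# All filenames below are placeholders; Klein may bundle or rename these components.
separate_loaders = {
    "10": {"class_type": "UNETLoader",
           "inputs": {"unet_name": "flux2_klein_4b.safetensors",   # placeholder
                      "weight_dtype": "default"}},
    "11": {"class_type": "DualCLIPLoader",
           "inputs": {"clip_name1": "clip_l.safetensors",          # placeholder
                      "clip_name2": "t5xxl_fp16.safetensors",      # placeholder
                      "type": "flux"}},                            # assumed type value
    "12": {"class_type": "VAELoader",
           "inputs": {"vae_name": "ae.safetensors"}},              # placeholder
}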
Given the user wants "high-quality", I should use:
- Higher resolution (1024x1024 or 1280x1280)
- Appropriate steps (20-28)
- CFG scale around 1.0 (Flux works well with low CFG)
- Euler or DPM++ 2M sampler
- Simple or normal scheduler
Let me create the JSON structure with proper node IDs and connections.
Node structure:
1. CheckpointLoaderSimple (id: 1) - loads flux2Klein model
2. CLIPTextEncode (id: 2) - positive prompt
3. CLIPTextEncode (id: 3) - negative prompt (optional for Flux but good to have)
4. EmptyLatentImage (id: 4) - 1024x1024
5. KSampler (id: 5) - sampling
6. VAEDecode (id: 6) - decode latent
7. SaveImage (id: 7) - save output
Connections:
- 1 (MODEL) -> 5 (model)
- 1 (CLIP) -> 2 (clip)
- 1 (CLIP) -> 3 (clip)
- 2 (CONDITIONING) -> 5 (positive)
- 3 (CONDITIONING) -> 5 (negative)
- 4 (LATENT) -> 5 (latent_image)
- 1 (VAE) -> 6 (vae)
- 5 (LATENT) -> 6 (samples)
- 6 (IMAGE) -> 7 (images)
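In the graph JSON, each of these connections becomes one record in the top-level "links" array, serialized as [link_id, from_node, from_output_slot, to_node, to_input_slot, type]. For example, three of the records from the workflow below:

# Link record layout in the saved workflow JSON:
# [link_id, from_node, from_output_slot, to_node, to_input_slot, type]
example_links = [
    [1, 1, 0, 5, 0, "MODEL"],         # checkpoint MODEL output -> KSampler "model" input
    [4, 2, 0, 5, 1, "CONDITIONING"],  # positive prompt         -> KSampler "positive" input
    [9, 6, 0, 7, 0, "IMAGE"],         # decoded image           -> SaveImage "images" input
]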
Wait, for Flux specifically the VAE is sometimes distributed separately. But assuming a consolidated checkpoint, the VAE is included.
As noted above, the "ComfyUI Flux.2 Klein 4B Guide" describes a 4B model. Flux models typically use:
- Model: The diffusion model
- CLIP: For text encoding (usually CLIP_L and T5_XXL)
- VAE: For encoding/decoding
But for a "CheckpointLoaderSimple" approach, I'll assume it's a consolidated checkpoint.
Let me verify the parameters:
- Flux models work best with CFG 1.0 (they are guidance-distilled) or a low CFG like 3.5
- Steps: 20-28
- Sampler: euler, dpmpp_2m
- Scheduler: simple, normal, or beta
I'll set:
- cfg: 1.0 (standard for Flux)
- steps: 20
- sampler_name: "euler"
- scheduler: "simple"
- denoise: 1.0
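In the graph JSON those values land in the KSampler node's widgets_values array, whose order is fixed by the node definition (seed, seed-control mode, steps, cfg, sampler_name, scheduler, denoise):

# widgets_values order for a KSampler node in the saved workflow JSON:
# [seed, control_after_generate, steps, cfg, sampler_name, scheduler, denoise]
ksampler_widgets = [0, "randomize", 20, 1.0, "euler", "simple", 1.0]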
Also, I should add metadata to indicate this is built for Flux 2 Klein.
Let me construct the JSON now.

{
  "last_node_id": 7,
  "last_link_id": 9,
  "nodes": [
    {"id": 1, "type": "CheckpointLoaderSimple", "pos": [100, 200], "size": {"0": 315, "1": 98}, "inputs": [], "outputs": [{"name": "MODEL", "type": "MODEL", "links": [1]}, {"name": "CLIP", "type": "CLIP", "links": [2, 3]}, {"name": "VAE", "type": "VAE", "links": [6]}], "widgets_values": ["flux2Klein4B.safetensors"], "title": "Load Flux 2 Klein Checkpoint"},
    {"id": 2, "type": "CLIPTextEncode", "pos": [450, 100], "size": {"0": 400, "1": 200}, "inputs": [{"name": "clip", "type": "CLIP", "link": 2}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [4]}], "widgets_values": ["masterpiece, best quality, high-resolution photograph, professional lighting, detailed textures, 8k uhd"], "title": "Positive Prompt (CLIP)"},
    {"id": 3, "type": "CLIPTextEncode", "pos": [450, 350], "size": {"0": 400, "1": 200}, "inputs": [{"name": "clip", "type": "CLIP", "link": 3}], "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [5]}], "widgets_values": ["low quality, blurry, distorted, watermark, signature, text, bad anatomy"], "title": "Negative Prompt (CLIP)"},
    {"id": 4, "type": "EmptyLatentImage", "pos": [100, 450], "size": {"0": 315, "1": 106}, "inputs": [], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [7]}], "widgets_values": [1024, 1024, 1], "title": "Empty Latent Image (1024x1024)"},
    {"id": 5, "type": "KSampler", "pos": [900, 200], "size": {"0": 315, "1": 262}, "inputs": [{"name": "model", "type": "MODEL", "link": 1}, {"name": "positive", "type": "CONDITIONING", "link": 4}, {"name": "negative", "type": "CONDITIONING", "link": 5}, {"name": "latent_image", "type": "LATENT", "link": 7}], "outputs": [{"name": "LATENT", "type": "LATENT", "links": [8]}], "widgets_values": [0, "randomize", 20, 1.0, "euler", "simple", 1], "title": "KSampler (Flux Optimized)"},
    {"id": 6, "type": "VAEDecode", "pos": [1250, 200], "size": {"0": 210, "1": 46}, "inputs": [{"name": "samples", "type": "LATENT", "link": 8}, {"name": "vae", "type": "VAE", "link": 6}], "outputs": [{"name": "IMAGE", "type": "IMAGE", "links": [9]}], "title": "VAE Decode"},
    {"id": 7, "type": "SaveImage", "pos": [1520, 200], "size": {"0": 315, "1": 270}, "inputs": [{"name": "images", "type": "IMAGE", "link": 9}], "outputs": [], "widgets_values": ["Flux2Klein_Output"], "title": "Save High Quality Image"}
  ],
  "links": [
    [1, 1, 0, 5, 0, "MODEL"],
    [2, 1, 1, 2, 0, "CLIP"],
    [3, 1, 1, 3, 0, "CLIP"],
    [4, 2, 0, 5, 1, "CONDITIONING"],
    [5, 3, 0, 5, 2, "CONDITIONING"],
    [6, 1, 2, 6, 1, "VAE"],
    [7, 4, 0, 5, 3, "LATENT"],
    [8, 5, 0, 6, 0, "LATENT"],
    [9, 6, 0, 7, 0, "IMAGE"]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "workspace_info": {"id": "flux2klein-high-quality"},
    "metadata": {
      "name": "Flux 2 Klein High Quality Image Generation",
      "description": "Optimized workflow for generating high-quality images using Flux 2 Klein 4B model. Built with anycoder - https://huggingface.co/spaces/akhaliq/anycoder",
      "author": "ComfyUI Expert",
      "tags": ["flux", "text2image", "high-quality", "4B"],
      "version": "1.0"
    }
  },
  "version": 0.4
}
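The graph above can be loaded by dropping the JSON onto the ComfyUI canvas. To queue a generation programmatically instead, the API-format prompt sketched earlier is POSTed to the /prompt endpoint; a minimal helper, assuming a locally running server on the default port:

import json
import urllib.request

def queue_prompt(prompt: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """Submit an API-format prompt dict to a running ComfyUI instance (default address assumed)."""
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. {"prompt_id": "...", ...}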