Stability AI's SDXL is finally out, so let's start using it, complete with usable demo interfaces for ComfyUI (see below). SDXL 1.0 is the official release: it consists of a Base model and an optional Refiner model used in a later stage. Because generation runs the Base and the Refiner in sequence, this is called a two-pass design, and compared with the traditional one-pass approach it produces cleaner images; in user-preference tests SDXL beats SD 1.5 across the board. SDXL has no compatibility with older models, but its image-generation quality is much higher. Note that the official sample images use none of the usual correction techniques or add-ons: no Refiner, Upscaler, ControlNet, ADetailer, TI embeddings, or LoRA. The tooling caught up quickly: after testing, it proved useful on SDXL 1.0 as well, although at one point the SDXL base model could no longer be loaded; a later update fixed that along with some other bugs. I will first try out the newest sd.next (vlad's fork) and automatic1111, both as fresh installs just for SDXL, and normally A1111 features work fine with SDXL Base and SDXL Refiner; you can even combine an SD 1.5 model with the SDXL refiner stage. I've been trying the SDXL refiner both in my own workflows and in ones copied from others, including mixed-sampler setups.

The intended refiner workflow is a handoff: run the base model for roughly half the steps (0.5x of them) and then pass the unfinished result to the refiner, which means the progress bar will only go to half before it stops; that is by design. You can also give the base and refiner different prompts. Keeping the refiner's denoise in the 0.2-0.3-ish range lets a face LoRA keep its likeness through the refining pass, and the results beat their SD 1.5-based counterparts; play around with the values to find what fits. With the 1.0 release of SDXL comes new learning for our tried-and-true workflow. Based on a local experiment, full inference with both the base and refiner model requires about 11301 MiB of VRAM, and while seven minutes per image is long, it's not unusable.

Select the SDXL 1.0 models for NVIDIA TensorRT-optimized inference if your hardware supports it. Timings for 30 steps at 1024x1024:

- A10: 9399 ms baseline (non-optimized) vs 8160 ms with TensorRT, roughly a 13% improvement
- A100: 3704 ms baseline vs 2742 ms with TensorRT, roughly a 26% improvement

The only other important setting is resolution: for optimal performance it should be 1024x1024, or another resolution with the same number of pixels but a different aspect ratio. In Part 2 we added an SDXL-specific conditioning implementation and tested the impact of the conditioning parameters on the generated images. One worry about the ecosystem: I don't want it to get to the point where people are just making models designed around looking good at displaying faces.

So what is the SDXL Refiner in the first place? SDXL's trained models are divided into Base and Refiner, each with a different role: the base handles the overall composition, and in the second step a specialized high-resolution model applies a technique called SDEdit (also known as "img2img") to the latents generated in the first step. In other words, there is a base SDXL model and an optional refiner model that can run after the initial generation to make images look better, whether that means JuggernautXL plus two refiner steps, or SDXL Base+Refiner for composition followed by an SD 1.5 finishing pass (for example upscaled with Juggernaut Aftermath, though you can of course also use the XL Refiner). Workflow packages wrap this up nicely: the AP Workflow offers a switch to choose between the SDXL Base+Refiner models and the ReVision model, a switch to activate or bypass the Detailer, the Upscaler, or both, and a (simple) visual prompt builder; to configure it, start from the orange section called Control Panel.

Two housekeeping notes. On safety: a .ckpt file can execute malicious code when loaded, which is why people were cautioned against downloading leaked checkpoints and a warning was broadcast, rather than letting anyone get duped by bad actors posing as the file sharers. On training: this tutorial covers vanilla text-to-image fine-tuning using LoRA, built around the train_text_to_image_sdxl.py script. Both stages of the pipeline are also usable in Diffusers, as sketched below.
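Below is a minimal sketch of the two-stage handoff using the diffusers SDXL pipelines. The 0.8 handoff fraction, the step count, and the prompt are illustrative choices, not prescribed values.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Load the base pipeline.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

# The refiner shares the second text encoder and the VAE with the base,
# so reuse them to save VRAM.
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a majestic lion jumping from a big stone at night"
high_noise_frac = 0.8  # fraction of the schedule handled by the base model

# The base denoises the first 80% of the schedule and returns noisy latents.
latents = base(
    prompt=prompt, num_inference_steps=30,
    denoising_end=high_noise_frac, output_type="latent",
).images

# The refiner picks up at the same point and finishes the last 20%.
image = refiner(
    prompt=prompt, num_inference_steps=30,
    denoising_start=high_noise_frac, image=latents,
).images[0]
image.save("lion.png")
```

Because denoising_end and denoising_start agree, the refiner resumes exactly where the base stopped; this is the handoff that makes the base pass's progress bar stop partway in UI frontends.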
CFG Scale and TSNR correction (tuned for SDXL) kicks in when CFG is greater than 10. What follows is an overview for developers and hobbyists of how to access the text-to-image generation model SDXL 1.0.

SDXL is a big step up from SD 1.5: quality is far higher out of the box, a degree of legible text in images is now supported, and a Refiner model has been added for polishing detail; the WebUI now supports SDXL as well. The Stable Diffusion XL (SDXL) model is the official upgrade to the v1.5 model. Per the announcement, SDXL 1.0 is "built on an innovative new architecture" with a 3.5B-parameter base model and a 6.6B-parameter ensemble (base plus refiner), making it one of the largest open image generators today. The earlier 0.9 weights were provided for research purposes only during a limited period, to collect feedback and fully refine the model before its general open release, with the 0.9 refiner aimed at img2img. About two months after SDXL came out I finally started using it seriously, so I want to collect usage tips and behavioral details here (I currently provide AI models to a company and am considering moving to SDXL).

Testing the Refiner extension. The first step is to download the SDXL models from the HuggingFace website, both base and refiner. If you run in the cloud, allow a generous volume size (512 GB in my setup). A reference render: SDXL 1.0, seed 640271075062843, on an RTX 3060 with 12 GB VRAM and 32 GB system RAM. The best balance I could find between image size (1024x720), models, steps (10 base + 5 refiner), and samplers/schedulers lets SDXL run on laptops without an expensive, bulky desktop GPU; surprisingly, 6 to 8 GB of GPU VRAM is enough to run SDXL in ComfyUI, though GPU generation (RTX 3xxx series versus older cards) makes a difference too. SDXL is trained on 1024*1024 = 1,048,576-pixel images at multiple aspect ratios, so your target size should not exceed that pixel count; a quick sketch of the arithmetic follows below.

A few gotchas. If I generate with the base model without activating the refiner extension (or simply forget to select the refiner model) and only activate it later, generation very likely runs out of memory. The SDXL refiner is also incompatible with some fine-tunes: you will get reduced-quality output if you use the base refiner with ProtoVision XL, for example. When I first added the SDXL refiner into the mix, things took a turn for the worse: suddenly the results weren't as natural, and the generated people looked noticeably less lifelike. Misconfiguring nodes can lead to erroneous conclusions, and it's essential to understand the correct settings for a fair assessment; a common troubleshooting question is whether you simply put the SDXL models in the same folder as your SD 1.5 checkpoints. Remember that the refiner is only good at refining the noise still left over from the base pass, and it will give you a blurry result if you try to use it on its own. I think developers must come forward soon to fix these issues, but at its core SDXL is just another model.

On step ratios: in 0.9 the refiner worked better for me, so I ran a ratio test to find the best base/refiner split on a 30-step run, comparing a 4:1 ratio (24 base steps out of 30) against all 30 steps on the base model alone, keeping the prompt and negative prompt fixed for the new images; upscaling tests used 1/5 of the total steps in the upscaling pass. My go-to settings are SDXL 1.0 Base+Refiner with a negative prompt optimized for photographic image generation, CFG = 10, and face enhancements. Because of the various manipulations possible with SDXL, a lot of users started to use ComfyUI with its node workflows (and a lot of people did not, because of its node workflows).
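As a quick sketch of that pixel-budget rule: keep width times height near 1,048,576 and round both sides to multiples of 64, which is the common community convention for SDXL resolutions. The aspect ratios below are illustrative, not an official bucket list.

```python
# Enumerate resolutions that keep roughly the same pixel budget as
# 1024x1024 while varying the aspect ratio.
TARGET_PIXELS = 1024 * 1024

for ratio in (1.0, 4 / 3, 3 / 2, 16 / 9):
    w = round((TARGET_PIXELS * ratio) ** 0.5 / 64) * 64
    h = round((TARGET_PIXELS / ratio) ** 0.5 / 64) * 64
    print(f"{w} x {h} -> {w * h:,} pixels (aspect {ratio:.2f})")
```

Running it prints familiar SDXL-friendly sizes such as 1152x896 and 1344x768 alongside 1024x1024.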
To use SDXL in AUTOMATIC1111 you need v1.6.0 or later, so if you haven't updated in a while, do that first. SDXL is designed as a two-stage process that reaches its full potential with the Base model plus the Refiner; the model card describes a mixture-of-experts pipeline for latent diffusion in which, in a first step, the base model generates latents that a refinement model specialized for the final denoising steps then processes further. For today's tutorial I will be using Stable Diffusion XL with the 0.9 weights, and I'll also cover how to use the Refiner in 1.0 along with the main changes.

What about SD 1.5 checkpoint files? You cannot pass SD 1.5 latents straight to SDXL because the latent spaces are different: instead you have to let the first stage VAE-decode to an image, then VAE-encode it back to a latent image with the VAE from SDXL, and then upscale, which works but is not the ideal way to run it. I created a ComfyUI workflow along these lines to use the new SDXL Refiner with old models: basically it just creates a 512x512 image as usual, then upscales it, then feeds it to the refiner. To use the refiner model interactively, navigate to the image-to-image tab within AUTOMATIC1111 (the Refiner configuration interface then appears) or install SD.Next; installing ControlNet is covered separately. If outputs look broken, re-download the latest version of the VAE and put it in your models/vae folder.

Performance notes. Steps: 30 (the last image was 50 steps, because SDXL does best at 50+). Sampler: DPM++ 2M SDE Karras, CFG set to 7 for all, resolution set to 1152x896 for all, with the SDXL refiner used for 10 steps on the SDXL images. Realistic Vision took 30 seconds per image on my 3060 Ti and used 5 GB of VRAM; SDXL took 10 minutes per image and used more. On an 8 GB card with 16 GB of RAM I see 800+ seconds when doing 2K upscales with SDXL, whereas the same job with 1.5 models is far quicker. I wanted to share my ComfyUI configuration, since many of us are using our laptops most of the time; let me know if this is at all interesting or useful (final version 3). A common recipe: generate with the base model and the 0.9 VAE, then switch to the refiner model for the final 20% of the steps. For more on sampler settings, see the guidance, schedulers, and steps discussion for the SDXL refiner.

The chart in the SDXL report evaluates user preference for SDXL (with and without refinement) over Stable Diffusion 1.5. Relatedly, the training data of SDXL had an aesthetic score for every image, with 0 being the ugliest and 10 being the best-looking, which is what the refiner's aesthetic conditioning builds on. To make full use of SDXL, you'll need to load in both models, run the base model starting from an empty latent image, and then run the refiner on the base model's output; there are two ways to wire this up, covered below. If the Automatic WebUI fights you, try ComfyUI instead; in Fooocus, a config file holding these defaults is generated inside the Fooocus folder the first time you run it. People are still confused about the correct way to use LoRAs with SDXL, and mixing and matching base and refiner models is experimental: most combinations exist "because why not" and can result in corrupt images, but some are actually useful (this applies to both SD 1.5 and SDXL; thanks @AI-Casanova for porting the compel/SDXL code). Also note that if you're not using the actual refiner model in the refiner slot, you need to bump the refiner steps.

Finally, SDXL offers negative_original_size, negative_crops_coords_top_left, and negative_target_size to negatively condition the model on image resolution and cropping; a sketch follows below.
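Here is a short sketch of how those negative size-conditioning parameters look in the diffusers SDXL pipeline. The (512, 512) values are just one way to steer the model away from results that resemble low-resolution training images.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

image = pipe(
    prompt="a studio photo of a red fox",
    negative_original_size=(512, 512),      # steer away from low-res originals
    negative_crops_coords_top_left=(0, 0),  # default value, shown for completeness
    negative_target_size=(1024, 1024),
).images[0]
```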
With the refiner the results are noticeably better, but it takes a long time to generate each image (up to five minutes). The base model was trained on the full range of denoising strengths, while the refiner was specialized on high-quality, high-resolution data and on denoising only the low-noise end of the schedule. So overall, the two-step image output from A1111 can outperform the alternatives, although hopefully future releases won't require a refiner model, because dual-model workflows are much more inflexible to work with; if it gets in the way, you can disable the refiner or its nodes in ComfyUI. Yes, in theory you would also train a second LoRA for the refiner, but since the refiner only handles small noise levels, I currently don't feel the need to. You can even pass SD 1.x models through the SDXL refiner, for whatever that's worth: use LoRAs, TIs, and so on in the style of SDXL and see what more you can do. Just know that if SDXL wants an 11-fingered hand, the refiner gives up. (And the popular offset LoRA is a LoRA for noise offset, not quite contrast.)

Next, download the SDXL model and VAE. There are two kinds of SDXL model: the basic base model, and the refiner model that improves image quality. Both can generate images on their own, but the standard flow is to generate with the base model and then finish the image with the refiner. Note: to control the strength of the refiner, adjust "Denoise Start"; satisfactory results for me sat roughly in the 0.2-0.3 range (one shared SDXL-0.9 recipe used 0.236 strength over 21 total steps). Note that for Invoke AI this step may not be required, as it is supposed to do the whole process in a single image generation.

A settings gotcha: with "Disable memmapping for loading .safetensors" enabled, the model never loaded, or rather took what felt even longer than with it disabled; disabling it made the model load, but loading still took ages, and generation used around 23-24 GB of system RAM. On the A1111 side, recent updates always show the extra-networks tabs in the UI, use less RAM when creating models (#11958, #12599), and add textual inversion inference support for SDXL. In ComfyUI, click "Manager", then "Install missing custom nodes", to pull in whatever a shared workflow such as Searge-SDXL: EVOLVED v4 requires; study the workflow and its notes to understand the basics. As for the FaceDetailer, you can use the SDXL model or any other model of your choice. One compatibility warning: SDXL most definitely doesn't work with the old ControlNet models.

On the VAE: while the bulk of the semantic composition is done by the latent diffusion model, local, high-frequency details in generated images can be improved by improving the quality of the autoencoder. SDXL-VAE-FP16-Fix was created by finetuning the SDXL VAE to keep the final output the same while making the internal activation values smaller, by scaling down weights and biases within the network; that is what stops half-precision decoding from falling over. A sketch of loading it follows below.

How do you run all of this on your own computer? If you haven't installed a Stable Diffusion WebUI before, follow an installation guide first; in fact, ComfyUI is more stable than the WebUI, and SDXL can be used directly in ComfyUI. The mental model is simple: the base model produces the raw image, and the refiner, an optional pass, adds finer details. SDXL works "fine" with just the base model, taking around 2m30s to create a 1024x1024 image on midrange hardware (SD 1.5 is far faster). I also need your help with feedback: please post your images and settings. And no matter how many AI tools come and go, human designers will always remain essential in providing vision, critical thinking, and emotional understanding.
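As a sketch, this is how the fixed VAE is typically swapped in with diffusers; madebyollin/sdxl-vae-fp16-fix is the community-published fix described above.

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The fixed VAE keeps outputs the same but scales down internal weights and
# biases so fp16 decoding no longer overflows into NaNs / black images.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
```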
Yup, all images generated in the main ComfyUI frontend have the workflow embedded in the image metadata like that (right now anything that uses the ComfyUI API doesn't). When you define the total number of diffusion steps you want the system to perform, the workflow will automatically allocate a certain number of those steps to each model according to the refiner_start value; for example, 21 steps for generation with 7 for the refiner means it switches to the refiner after 14 steps. A basic setup for SDXL 1.0 in ComfyUI places an SDXL base model in the upper Load Checkpoint node and an SDXL refiner model in the lower Load Checkpoint node; the last version of the template included the nodes for the refiner, whose prompt also carries an aesthetic-score input (the "Positive A Score"), reflecting the aesthetic scoring mentioned earlier. Grab the 1.0 base and have lots of fun with it: play around with different samplers and different amounts of base steps (30, 60, 90, maybe even higher), though for good images around 30 sampling steps with SDXL Base will typically suffice. Here's what I've found: when I pair the SDXL base with my LoRA in ComfyUI, things seem to click and work pretty well. (A common beginner question: do I need to download the remaining repository files, pytorch, vae, and unet, separately, and is there a guide for installing them?)

In AUTOMATIC1111, select the SDXL 1.0 refiner model in the Stable Diffusion Checkpoint dropdown menu for the second pass, and use Img2Img (or Img2Img batch) for the refinement itself. How it works: the refiner has been trained to denoise small noise levels of high-quality data, and as such is not expected to work as a pure text-to-image model; instead, it should only be used as an image-to-image model. To generate an image, use the base version in the "Text to Image" tab and then refine it using the refiner version in the "Image to Image" tab; SDXL output images can be improved by making use of a refiner model in exactly this image-to-image setting, and the results can be further refined into stunning, high-quality artwork. A sketch of this second usage pattern follows below.

Recall that SDXL consists of a two-step pipeline for latent diffusion: first, a base model generates latents of the desired output size. With Stable Diffusion XL you can now make more realistic images with improved face generation and produce legible text within images; the sample prompt I used as a test showed a really great result, and finetunes such as Copax XL (a finetuned SDXL 1.0 model) build on this base. You can also do Dreambooth fine-tuning of Stable Diffusion XL 0.9: I've been using the training scripts to fine-tune the base SDXL model for subject-driven generation to good effect (for cloud training, an ml.g5-class notebook instance with the 512 GB volume mentioned earlier works). Two installation notes to close: download the Fixed FP16 VAE to your VAE folder, since the original SDXL VAE is fp32-only (that's not an SD.Next limitation, it's how the original SDXL VAE is written), and note that Fooocus's style selector inserts styles into the prompt upon generation, letting you switch styles on the fly even though your text prompt only describes the scene.
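A minimal sketch of that second usage pattern in diffusers follows: the base model's output is refined as a plain img2img pass. The file name and the strength value are illustrative; the low strength is what keeps the composition intact.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")

init_image = load_image("base_output.png")  # an image saved from the base model

# Low strength (~0.2-0.3) only polishes fine detail; higher values hand too
# much of the image back to the refiner and wash out the composition.
refined = refiner(
    prompt="portrait photo, detailed skin, natural light",
    image=init_image, strength=0.25,
).images[0]
refined.save("refined.png")
```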
Here is 0.9 with the updated checkpoints: nothing fancy, no upscales, just straight refining from the latent. The joint swap system for the refiner now also supports img2img and upscaling in a seamless way, and ComfyUI allows processing the latent image through the refiner before it is rendered (like hires fix), which is closer to the intended usage than a separate img2img process; keep in mind that hires fix itself isn't a refiner stage. The SD 1.5 + SDXL Base+Refiner combination is for experiment only. There isn't an official guide for all of this, but this is what I suspect, based on comparing the base model alone against the base model followed by the refiner. I trained a LoRA model of myself using the SDXL 1.0 base, and it survives the refiner if you keep the refiner's influence low enough not to nuke the rest of the generation; strengths up to about 0.85 worked, although some steps produced weird paws. I may need to test further whether including the refiner improves finer details, and I think I would prefer it as an independent pass. (Please tell me I don't have to design my own workflow for this.)

Setup notes. For the WebUI, open the models folder inside the directory that contains webui-user.bat and place the checkpoints in Stable-diffusion; for the base SDXL model you must have both the base checkpoint and the refiner model. In InvokeAI, putting the VAE and model files manually into the models\sdxl and models\sdxl-refiner folders produced a traceback for some users, so prefer its own import mechanism. The WebUI should auto-switch to --no-half-vae (a 32-bit float VAE) if a NaN is detected; it only checks for NaNs when the NaN check is not disabled (that is, when not using --disable-nan-check), and this is a recent feature. There are also simpler routes: SDXL-native frontends that need no complex settings or parameter tuning and still produce relatively high-quality images, at the cost of the extensibility the earlier Automatic1111 WebUI and SD.Next provide, plus a ready-made Colab (camenduru/sdxl-colab). To use the NVIDIA TensorRT path mentioned earlier, you begin by building the engine for the base model. The model itself is released as open-source software; my main gripe is that, overall, all I can see is downsides to their OpenCLIP text encoder being included at all.

For upscaling, one workflow chains the SDXL 1.0 base and refiner with two further samplers to upscale to 2048px, and the Ultimate SD Upscale remains one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into overlapping tiles small enough to be digestible by SD, typically 512x512. To adjust a ComfyUI workflow, add the "Load VAE" node via right click > Add Node > Loaders > Load VAE. (A training-side caveat: the diffusers-based tutorial does not support every image-caption dataset layout.) And this is how the two-sampler workflow operates: total steps 40, with sampler 1 running the SDXL Base model for steps 0-35 and sampler 2 running the SDXL Refiner model for steps 35-40, a switch at 35/40 of the schedule.
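The step bookkeeping behind those numbers is simple enough to write down. This helper is mine, not from any toolkit; it just rounds the switch point.

```python
# With 40 total steps and a switch at 35/40, the base runs 35 steps and the
# refiner runs 5; with 30 steps and a 0.8 switch you get the 24/6 (4:1) split
# discussed earlier.
def split_steps(total_steps: int, refiner_start: float) -> tuple[int, int]:
    base_steps = round(total_steps * refiner_start)
    return base_steps, total_steps - base_steps

print(split_steps(40, 35 / 40))  # (35, 5)
print(split_steps(30, 0.8))      # (24, 6)
```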
Right now I'm sending base SDXL images to img2img, then switching to the SDXL Refiner model and denoising lightly: set denoising strength to 0.3 (this IS the refiner strength). For an example with SDXL base + SDXL refiner, if you have 10 base steps and a given refiner-start fraction, the split follows the same arithmetic sketched above. (I have heard different opinions about whether the VAE needs to be selected manually, since it is baked into the model, but to make sure I use manual mode.) Keep the scale in mind, too: SDXL's networks weigh in at billions of parameters, compared with 0.98 billion for the v1.5 model.

There are two ways to use the refiner:

1. Use the base and refiner models together to produce a refined image.
2. Use the base model to produce an image, and subsequently use the refiner model to add more details to the image (this is how SDXL was originally trained).

To set up locally, copy your existing Stable Diffusion folder wholesale and rename the copy to something like "SDXL"; this walkthrough assumes you have already run Stable Diffusion locally, and if you haven't, follow an environment-setup guide first. Then make the following changes: in the Stable Diffusion checkpoint dropdown, select the refiner sd_xl_refiner_1.0.safetensors. Refiner support has been merged into the WebUI (PR #12371), though I feel this refiner process in AUTOMATIC1111 should be automatic; updating ControlNet is a separate task, since SDXL needs the newer ControlNet models. For TensorRT, next select the base model for the Stable Diffusion checkpoint and the corresponding UNet profile. There is also a video walkthrough on installing SDXL locally and using it with Automatic1111, and the stable-diffusion-xl-0.9-usage repo is a tutorial intended to help beginners use the newly released model (SDXL 0.9 + Refiner, with its workflow shipped as a JSON file). These materials were originally posted to Hugging Face and shared with permission from Stability AI.

SDXL 1.0 introduces denoising_start and denoising_end options, giving you more control over the denoising process. Set up a quick workflow that does the first part of the denoising on the base model but, instead of finishing, stops early and passes the noisy result on to the refiner to finish the process. In ComfyUI this can be accomplished with the output of one KSampler node (using SDXL base) leading directly into the input of another KSampler node (using the refiner), as sketched below. In Part 3 we will add an SDXL refiner for the full SDXL process. Per the SDXL report, the base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance. I have also tested 1.0 with some of the custom models currently available on Civitai; some authors may have finetuned their models enough to produce sufficient detail without the refiner. Not OP, but you can train LoRAs with the kohya scripts (sdxl branch), and, while not a LoRA, there are downloadable ComfyUI nodes for sharpness, blur, contrast, saturation, and so on. Thanks for the tips on Comfy, I'm enjoying it a lot so far.
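Here is a hedged sketch of that two-KSampler handoff, written as plain Python data purely for illustration. The field names follow ComfyUI's "KSampler (Advanced)" node as I understand it; the real wiring happens in the ComfyUI graph, not in Python.

```python
# Base sampler: denoise steps 0-24 of 30, then pass the still-noisy latent on.
base_sampler = dict(
    add_noise="enable", steps=30, cfg=7.0,
    sampler_name="dpmpp_2m", scheduler="karras",
    start_at_step=0, end_at_step=24,
    return_with_leftover_noise="enable",   # hand unfinished latent to the refiner
)

# Refiner sampler: resume at step 24 and finish; no fresh noise is added
# because the incoming latent already carries the leftover noise.
refiner_sampler = dict(
    add_noise="disable", steps=30, cfg=7.0,
    sampler_name="dpmpp_2m", scheduler="karras",
    start_at_step=24, end_at_step=30,
    return_with_leftover_noise="disable",
)
```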
The best thing about SDXL, in my opinion, isn't only how much more it can achieve when you push it. If the dual-model dance annoys you, I suggest you don't use the SDXL refiner at all and use img2img instead, keeping the denoising strength low, in the 0.3 range discussed earlier. Note that this checkpoint recommends a VAE: download it and place it in the VAE folder. This is also why the training script exposes a CLI argument, --pretrained_vae_model_name_or_path, that lets you specify the location of a better VAE, such as the fp16 fix above.

A second advantage of ComfyUI is that it already officially supports the SDXL refiner model, whereas at the time of writing the Stable Diffusion WebUI had not yet fully caught up (a recent development update of the WebUI now includes merged support for the SDXL refiner). In the two-staged denoising workflow, the base SDXL model stops at around 80% of completion (use TOTAL STEPS and BASE STEPS to control how much noise goes to the refiner), leaves some noise, and sends the result to the Refine SDXL Model for completion; this is the way of SDXL. In the alternative mode, you take your final output from the SDXL base model and pass it to the refiner as a separate pass. Either way, click Queue Prompt to start the workflow. Helpfully, there is now a feature that detects the errors that occur when mixing models and CLIPs from checkpoints such as SDXL Base, SDXL Refiner, and SD 1.5, and a Cloud Inference feature can be enabled as well.

To close, a model recommendation for anime artists: Animagine XL is a high-resolution, anime-specialized SDXL model, trained on a curated dataset of high-quality anime-style images for 27,000 global steps at batch size 16 with a learning rate of 4e-7. If you like the model and want to see its further development, say so in the comments, click the heart, and like the model, and you will be notified of any future versions. The 0.9 models are supported experimentally and may require 12 GB or more of VRAM; this article draws on the sources below, lightly adapted, with some fine detail omitted.