
How to apply LoRAs like in SD WebUI to DreamShaper using python

I have been using the Stable Diffusion WebUI to try out different models and LoRAs for my application. I am now trying to do the same thing I do in the WebUI, but in Python. I have one safetensors file for DreamShaper v5 beta-2 with the VAE baked in, and two more safetensors files for two LoRAs I downloaded from civitai.com.

Here is the code I tried:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Load the DreamShaper checkpoint (VAE baked in) and swap in the DPM++ multistep scheduler
pipeline = StableDiffusionPipeline.from_ckpt("./DreamShaper_5_beta2_BakedVae.ckpt", torch_dtype=torch.float16)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
# Load the two LoRA files downloaded from civitai.com
pipeline.unet.load_attn_procs("./flat illustration.safetensors", local_files_only=True)
pipeline.unet.load_attn_procs("./improve_backgrounds.safetensors", local_files_only=True)
pipeline.to("cuda")
pipeline.enable_xformers_memory_efficient_attention()

prompt = "Flat vector illustration of a scary and ominous grassy landscape with five or more trees, a large crack in the ground, and a gigantic monster sticking up high above the crack. The monster is based on an oak tree and made up of all kinds of litter and debris, including cans and bottles. The landscape is scattered with lots of litter and debris, especially tipped over garbage cans. There are hundreds of people running away from the monster, and the environment is dusty with no texture or shading. The color scheme of the grassy landscape is green and brown. <lora:flat illustration:1> <lora:improve_backgrounds:0.85>"
nprompt = "(deformed iris, deformed pupils, anime:1.4), text, close up, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, extremely focused on people"
image = pipeline(prompt, negative_prompt=nprompt, num_inference_steps=40, guidance_scale=7.5, cross_attention_kwargs={"scale": 1}).images[0]
image.save("blue_pokemon.png")

I tried using the ckpt file for DreamShaper instead of the safetensors file to avoid the following error:

Traceback (most recent call last):
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\loratest\main.py", line 4, in <module>
    pipeline = StableDiffusionPipeline.from_ckpt("./DreamShaper_5_beta2_BakedVae.safetensors", torch_dtype=torch.float16)
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\venv\lib\site-packages\diffusers\loaders.py", line 1284, in from_ckpt
    pipe = download_from_original_stable_diffusion_ckpt(
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1062, in download_from_original_stable_diffusion_ckpt
    raise ValueError(BACKENDS_MAPPING["safetensors"][1])
KeyError: 'safetensors'

But then I got this error:

global_step key not found in model
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the `--extract_ema` flag.
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 
...
'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.8.self_attn.out_proj.bias']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\loratest\main.py", line 6, in <module>
    pipeline.unet.load_attn_procs("./flat illustration.safetensors", local_files_only=True)
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\venv\lib\site-packages\diffusers\loaders.py", line 217, in load_attn_procs
    state_dict = torch.load(model_file, map_location="cpu")
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\venv\lib\site-packages\torch\serialization.py", line 815, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\***\PycharmProjects\KnowledgeGraph\venv\lib\site-packages\torch\serialization.py", line 1033, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
MemoryError
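
A .safetensors file is not a pickle archive, so torch.load (the last frames in the traceback) cannot read it; the file has to be opened with the safetensors library instead. As a quick sanity check that the LoRA file itself is valid, it can be loaded directly, assuming the safetensors package is installed (a sketch, not part of the original code):

from safetensors.torch import load_file

# Read the LoRA state dict with the safetensors reader instead of torch.load
lora_state = load_file("./flat illustration.safetensors")

# Print a few keys and shapes to confirm the file parses as LoRA weights
for key in list(lora_state)[:5]:
    print(key, tuple(lora_state[key].shape))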

This code came from the huggingface documentation. I tried looking through the SD WebUI GitHub wiki, but did not find anything.

Again, all I want to do is apply LoRAs to DreamShaper using Python, the same way I do in SD WebUI. In case it helps, here is where I got the LoRAs:

https://civitai.com/models/19130/flat-illustration

https://civitai.com/models/42190/improve-backgrounds

Here is my solution:

pip install safetensors
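
Installing the safetensors package lets diffusers read both the original .safetensors checkpoint and the LoRA files directly, instead of falling back to torch.load. Below is a minimal sketch of the whole flow once safetensors is installed. Note that the <lora:...:weight> tags in the prompt are Automatic1111 WebUI syntax that diffusers does not parse; in diffusers the LoRA is loaded explicitly and its strength is passed per call through cross_attention_kwargs. The sketch assumes a diffusers release that provides pipeline.load_lora_weights (older releases only expose unet.load_attn_procs):

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# With safetensors installed, the .safetensors checkpoint loads directly
pipeline = StableDiffusionPipeline.from_ckpt("./DreamShaper_5_beta2_BakedVae.safetensors", torch_dtype=torch.float16)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.to("cuda")

# Load one LoRA from the current directory by file name
pipeline.load_lora_weights(".", weight_name="flat illustration.safetensors")

prompt = "Flat vector illustration of a scary and ominous grassy landscape ..."  # prompt text without the <lora:...> tags
image = pipeline(prompt, num_inference_steps=40, guidance_scale=7.5,
                 cross_attention_kwargs={"scale": 0.85}).images[0]  # global LoRA strength, like :0.85 in WebUI
image.save("output.png")

As far as I can tell, loading a second LoRA this way replaces the first, so stacking several LoRAs at different strengths (as the two <lora:...> tags do in WebUI) may need a newer diffusers release with multi-adapter support.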
