Difference between revisions of "LoRA training"

From Interaction Station Wiki
Revision as of 22:13, 16 September 2024

Training a LoRA with Kohya

These instructions should work on the computers in WH.02.110

Preparing for training

Step 1: Collect the images you want to use for training

Step 2: Create the folder structure
Create a folder on the desktop and name it after yourself or your project.
Download this zip file and unzip it in the folder you just created. The zip contains the basic folder structure and a configuration file.

Folder structure 1.png

Inside the img folder, there needs to be another folder. The name of this folder has to be <REPEATS><underscore><TRIGGERWORD><space><CLASS>
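For example, with 10 repeats, the trigger word "zina" and the class "person" (all three values are hypothetical placeholders, not prescribed names), the folder could be created like this:

```shell
# Hypothetical values: 10 repeats, trigger word "zina", class "person".
REPEATS=10
TRIGGERWORD=zina
CLASS=person
# Creates img/10_zina person  (note the space before the class)
mkdir -p "img/${REPEATS}_${TRIGGERWORD} ${CLASS}"
```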

Folder structure 2.png

Copy your images into this folder

Images in folder.png

Step 3: Create captions

We are now ready to create the captions for each image.
On the desktop find the kohya icon, and double click it.

Kohya icon.png

A terminal window opens, and after a couple of seconds you should see a URL (127.0.0.1:7860) that you can open in a browser to get a GUI. In the GUI select the Utilities tab, inside that the Captioning tab, and there select BLIP Captioning.
Make sure "Image folder to caption" points to the folder containing your images, and put your trigger word in "Prefix to add to BLIP caption". Now click the "Caption Images" button.

Captioning.png

You won't see anything happening in the GUI, but there should be some activity in the terminal.

Captioning progress terminal.png

Created captions.png
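Captioning leaves a .txt file next to each image, with the same base name, starting with your trigger word prefix. A hypothetical illustration (the folder, file name, and caption text are made up; BLIP would normally write this file for you):

```shell
# Hypothetical folder from the earlier step.
mkdir -p "img/10_zina person"
# BLIP would normally generate this caption; shown here only as an illustration
# of the trigger-word prefix followed by the generated description.
printf 'zina, a woman standing in a garden\n' > "img/10_zina person/photo01.txt"
cat "img/10_zina person/photo01.txt"
```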

Training your LoRA

Step 4: Adjust the configuration

If you didn't already, launch the kohya software as described in step 3.

Make sure you select the LoRA tab.

Load the configuration file that you downloaded earlier. We are going to leave it mostly at the default settings, but there are a few things we need to adjust.

Basic configuration.png

Change the model name to the name you want it to have, and make sure the paths to the image folder and model folder are correct.

In the Parameters section we can configure Kohya to generate sample images during training. Adjust the sample prompts so they use your trigger word and suit your purpose.

Sample prompts.png
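Kohya reads one sample prompt per line, and options such as --w/--h (image size), --s (steps) and --l (CFG scale) can be appended to a line — check the tooltip in the GUI for the full list. A hypothetical example ("zina person" stands in for your own trigger word and class):

```shell
# Hypothetical sample prompt line; flags are Kohya sample-prompt options
# (--w/--h size, --s steps, --l CFG scale).
cat > sample_prompts.txt <<'EOF'
a photo of zina person, smiling, outdoors --w 512 --h 512 --s 20 --l 7
EOF
```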

Hit the training button, and wait for the first samples to appear.

Using your LoRA in Stable Diffusion WebUI

Select the model that looks most promising to you, and copy it to the LoRA folder of Stable Diffusion WebUI.
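Assuming a default Automatic1111 WebUI install, LoRA files go into its models/Lora folder. The file and directory names below are hypothetical examples:

```shell
# Hypothetical names: your trained model file and the WebUI install directory.
LORA=myproject.safetensors
WEBUI=stable-diffusion-webui
touch "$LORA"                     # stands in for your trained model file
mkdir -p "$WEBUI/models/Lora"
cp "$LORA" "$WEBUI/models/Lora/"
```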

In the WebUI check the LoRA tab, and refresh it to see your model. Clicking on it will add your LoRA to the prompt.

You can use your LoRA in combination with different compatible checkpoint models.

You can also combine several LoRAs in the same prompt.

Try changing the weight of the model.
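In the prompt, the LoRA shows up as a tag whose last number is its weight; lowering it weakens the LoRA's effect. A hypothetical example (model name "myproject", trigger word "zina"):

```
a photo of zina person, <lora:myproject:0.8>
```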

Try changing the CFG scale.