Training a LoRA with Kohya

These instructions should work on the computers in WH.02.110

Preparing for training

Step 1: Collect the images you want to use for training

Step 2: Create the folder structure
Create a folder on the desktop and name it after yourself or your project.
Download this zip file, and unzip it in the folder you just created. Unzipping it creates the basic folder structure and a configuration file.

Folder structure 1.png

Inside the img folder, there needs to be another folder. The name of this folder has to be <REPEATS><underscore><TRIGGERWORD><space><CLASS>

Folder structure 2.png
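If you prefer to create the folder from a script instead of by hand, a minimal Python sketch could look like the one below. The project folder name, number of repeats, trigger word and class are placeholders, use your own values.

<syntaxhighlight lang="python">
from pathlib import Path

project = Path.home() / "Desktop" / "myproject"   # placeholder: your project folder
repeats, trigger, cls = 20, "zorp", "person"      # placeholders: pick your own values

# Kohya expects img/<REPEATS>_<TRIGGERWORD> <CLASS>, e.g. img/20_zorp person
image_dir = project / "img" / f"{repeats}_{trigger} {cls}"
image_dir.mkdir(parents=True, exist_ok=True)
print(image_dir)
</syntaxhighlight>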

Copy your images into this folder

Images in folder.png

Step 3: Create captions

We are now ready to create the captions for each image.
On the desktop find the kohya icon, and double click it.

Kohya icon.png

A terminal window opens, and after a couple of seconds you should see a URL (127.0.0.1:7860) that you can open in a browser to get the GUI. In the GUI select the Utilities Tab, inside that the Captioning Tab, and there select BLIP Captioning.
Make sure "Image folder to caption" points to the folder containing your images, and put your trigger word in "Prefix to add to BLIP caption". Now click the "Caption Images" button.

Captioning.png

You won't see anything happening in the GUI, but there should be some activity in the terminal.

Captioning progress terminal.png

Created captions.png
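If you want to double-check the result, a small Python sketch like the one below can verify that every image got a caption file starting with your trigger word. The folder, trigger word and caption file extension are assumptions, adjust them to your setup.

<syntaxhighlight lang="python">
from pathlib import Path

image_dir = Path.home() / "Desktop" / "myproject" / "img" / "20_zorp person"  # placeholder
trigger = "zorp"                                                              # placeholder

for img in sorted(image_dir.iterdir()):
    if img.suffix.lower() not in (".jpg", ".jpeg", ".png", ".webp"):
        continue
    # The captioning step writes a text file next to each image; the extension
    # is usually .txt, but check the caption extension setting in the GUI.
    caption = next((img.with_suffix(ext) for ext in (".txt", ".caption")
                    if img.with_suffix(ext).exists()), None)
    if caption is None:
        print("missing caption:", img.name)
    elif not caption.read_text().startswith(trigger):
        print("no trigger word prefix:", caption.name)
</syntaxhighlight>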

Training your LoRA

If you didn't already launch the kohya software, do so now as described in Step 3.

Make sure you select the LoRA tab

Load the configuration file that you downloaded earlier. We are going to leave it mostly at the default settings, but there are a few things we need to adjust.

Change the model name to the name you want it to have.

Check if the paths are correct, and adjust if necessary.

Adjust the prompts for the samples that will be generated during training.

Hit the training button, and wait for the first samples to appear.
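While training runs you can keep an eye on the output folder. Assuming the zip's folder structure puts the trained models in a model folder next to img, and that Kohya writes its sample images into a sample subfolder of it (both of these are assumptions about this particular setup), a quick Python sketch to list what has been produced so far could look like this:

<syntaxhighlight lang="python">
from pathlib import Path

model_dir = Path.home() / "Desktop" / "myproject" / "model"  # assumed output folder
sample_dir = model_dir / "sample"                            # assumed sample image folder

# Sample images generated from your prompts during training
for f in sorted(sample_dir.glob("*.png")):
    print("sample:", f.name)

# Saved LoRA checkpoints, one per saved epoch
for f in sorted(model_dir.glob("*.safetensors")):
    print("checkpoint:", f.name)
</syntaxhighlight>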

Using your LoRA in Stable Diffusion WebUI

Select the model that looks most promising to you, and copy it to the right folder in Stable Diffusion WebUI
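A minimal Python sketch for the copy step is shown below, assuming a default AUTOMATIC1111 install in your home folder; on the WH.02.110 machines the WebUI may live somewhere else, so adjust the paths. LoRA models belong in the WebUI's models/Lora folder.

<syntaxhighlight lang="python">
import shutil
from pathlib import Path

# Placeholders: the checkpoint you picked and the WebUI install location
lora = Path.home() / "Desktop" / "myproject" / "model" / "myproject.safetensors"
webui_lora_dir = Path.home() / "stable-diffusion-webui" / "models" / "Lora"

shutil.copy2(lora, webui_lora_dir / lora.name)
print("copied to", webui_lora_dir / lora.name)
</syntaxhighlight>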

In the WebUI check the LoRA tab, and refresh it to see your model. Clicking on it will add your LoRA to the prompt.

You can use your LoRA in combination with different compatible checkpoint models.

You can also combine different models.

Try changing the weight of the model.

Try changing the CFG scale.
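Clicking your LoRA adds a tag like <lora:yourmodelname:1> to the prompt (yourmodelname is a placeholder for the file name of your model). The number after the last colon is the weight: a lower value such as <lora:yourmodelname:0.6> reduces the influence of your LoRA, while the CFG scale controls how strictly the image follows the prompt as a whole.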