VQGAN+CLIP

From Interaction Station Wiki

Revision as of 13:52, 15 September 2021

Create Images from a text prompt

In this tutorial we will create images by typing text.

If you want to do this from home, you can use this Colab notebook: https://github.com/justin-bennington/somewhere-ml/blob/main/S2_GAN_Art_Generator_(VQGAN%2C_CLIP%2C_Guided_Diffusion).ipynb
How to use it is described here: https://docs.google.com/document/d/1Lu7XPRKlNhBQjcKr8k8qRzUzbBW7kzxb5Vu72GMRn2E/edit

However, it is much faster to work on the computers at the Interaction Station: everything you need is already installed on the PCs in room WH.02.110.


Step 1: Boot the PC to Ubuntu

We need to start the computer in the Ubuntu Linux operating system, so keep an eye on the screen when you start it. When it lists some options, select "Ubuntu". If the computer starts Windows instead, you missed it; just restart the computer and try again. When Ubuntu starts and asks for a login, select the user "InteractionStation" and type the password "toegangstud".

Step 2: Start the conda environment in a terminal

Click the "Show Applications" icon in the bottom left of the screen, type "terminal" in the search box, and select the terminal icon that pops up. This opens a window that lets you type commands. Type the following:

cd Projects/VQGAN_CLIP-Docker

and then

conda activate vqgan-clip

You are now ready to run things, but first we must adjust the configuration file to your wishes.
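The two commands above can be combined into a small shell sketch that first checks that the project folder exists (the path and environment name are the ones used in this tutorial; adjust them if your setup differs):

```shell
# Sketch of Step 2: enter the project folder and activate the conda environment.
# PROJECT_DIR assumes the folder layout on the station PCs described above.
PROJECT_DIR="$HOME/Projects/VQGAN_CLIP-Docker"

if [ -d "$PROJECT_DIR" ]; then
    cd "$PROJECT_DIR"
    # "conda activate" works in an interactive shell where conda has been
    # initialised; in a plain script you may need to source conda first.
    conda activate vqgan-clip
else
    echo "Project directory not found: $PROJECT_DIR"
fi
```

If activation succeeded, the terminal prompt will show the environment name, e.g. "(vqgan-clip)" at the start of the line.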

Step 3: Modify the configuration file

Click on the "Files" icon in the top left of the Ubuntu screen and navigate to Projects - VQGAN_CLIP-Docker - configs. Now open the file called "local.json".
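The exact contents of local.json depend on the version of the repository installed on the station PCs, but it is a JSON file with settings along these lines. The key names below are illustrative assumptions, not the exact keys; check them against your local file:

```json
{
    "prompts": ["a painting of a lighthouse in a storm"],
    "image_size": [256, 256],
    "max_iterations": 500,
    "output_dir": "./outputs",
    "seed": 42
}
```

The prompt text is what steers the image; settings like image size and iteration count mainly affect run time and memory use.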