YOLO



Installation

  • Go to the directory and download the YOLO weights:
cd darknet
wget https://pjreddie.com/media/files/yolov3.weights

Using YOLO

  • Using YOLO with Darknet (GPU + OpenCV). Tested on Ubuntu 16.04.

Using YOLO with an image

  • Go to the directory and run the detector for images:
cd darknet
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
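  • By default the detector only shows objects it is at least 25% confident about; you can raise or lower that cut-off with the -thresh flag, for example:
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg -thresh 0.1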

Using YOLO with a video

  • Go to the directory and run the detector for videos:
cd darknet
./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights data/ny720p2.mp4
  • Run the detector (webcam):
./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights
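  • The video and webcam demos only work when Darknet has been compiled with OpenCV (and run much faster with GPU support). Replace data/ny720p2.mp4 with the path to your own video file. If several cameras are connected, the stock Darknet demo also accepts a -c flag to choose the camera index, e.g.:
./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -c 1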

Training your own Dataset

  • Download the convolutional weights of the darknet53 model, which are pre-trained on ImageNet, and place them in the Darknet folder:
wget https://pjreddie.com/media/files/darknet53.conv.74
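  • These pre-trained weights are later used as the starting point for training; as a rough sketch, the training command will look like this (cfg/myDataset.data and cfg/yolov3.cfg stand in for the configuration files you create for your own dataset):
./darknet detector train cfg/myDataset.data cfg/yolov3.cfg darknet53.conv.74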

Step 1: Dataset

  • Download the images for the categories that you would like YOLO to detect.
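  • The annotation tool used in the next step loads one folder of images at a time, so it is easiest to keep each category in its own folder. A hypothetical layout (the folder and file names are only examples):
Downloads/dataset/cats/cat_001.jpg
Downloads/dataset/cats/cat_002.jpg
Downloads/dataset/dogs/dog_001.jpg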

Step 2: Data Annotation

  • Download the YOLO annotation tool:

https://github.com/ManivannanMurugavel/Yolo-Annotation-Tool-New-
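  • If you have git installed, one way to download the tool is to clone the repository into your Downloads folder (using the "Download ZIP" button on GitHub works as well):
cd Downloads
git clone https://github.com/ManivannanMurugavel/Yolo-Annotation-Tool-New-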
  • Enter the directory with the terminal:
cd Downloads
cd Yolo-Annotation-Tool-New-
python main.py
  • Write the path where you have placed the images of the first category of your dataset and click the "Load" button.
  • Select the category of those images in the combo box next to "Choose Class".
  • Create the bounding boxes in each of the images of this category by first clicking on the image and then clicking the "Next" button.
  • When you are done with all the images of that category, write the path of the next category and press the "Load" button.
  • Select the category of those images in the combo box next to "Choose Class".
  • Create the bounding boxes for all the images of this category.
  • Do the same for the rest of the categories.
  • When you are done, go to the terminal and close the tool by pressing CTRL+C.
  • The tool should have created one text file per image containing the coordinates of the objects in it (see the format sketch after this list).
  • Now we need to split the images of the dataset into two sets: train and test.
  • We do that by typing in the terminal (a sketch of what such a split script does is shown below):
python process.py
  • By default the split is done in a 90%/10% ratio, but this can be changed in the script.
  • This script will generate the train.txt and test.txt files, which we need to copy into the Darknet folder.
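  • The annotation files mentioned above follow the usual YOLO label convention: one line per object, with the class index followed by the box centre x, centre y, width and height, all normalised to the image size. A made-up example for an image containing two objects:
0 0.512 0.433 0.241 0.318
1 0.130 0.760 0.095 0.120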
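  • Splitting is just a matter of writing the image paths into two text files, one path per line. A minimal sketch of what a script like process.py does (the image folder, file extension and exact behaviour of the script in the repository may differ):
# split_sketch.py - hypothetical stand-in for process.py
import glob
import random

# collect all annotated images; adjust the folder/extension to your dataset
images = glob.glob("Images/*.jpg")
random.shuffle(images)

# 90% train / 10% test, as described above
split = int(0.9 * len(images))
with open("train.txt", "w") as f:
    f.write("\n".join(images[:split]) + "\n")
with open("test.txt", "w") as f:
    f.write("\n".join(images[split:]) + "\n")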


  • We need to create 3 files:
    • myDataset.data
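  • The first of these is the small text file that tells Darknet where everything for your dataset lives. A sketch of what myDataset.data typically contains (the class count and the myDataset.names file name are placeholders for your own dataset):
classes = 2
train   = train.txt
valid   = test.txt
names   = myDataset.names
backup  = backup/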




  • More information:
  • https://medium.com/@manivannan_data/how-to-train-yolov3-to-detect-custom-objects-ccbcafeb13d2
  • https://medium.com/@manivannan_data/how-to-train-yolov2-to-detect-custom-objects-9010df784f36
  • https://medium.com/@manivannan_data/yolo-annotation-tool-new-18c7847a2186

Guide

  • https://pjreddie.com/darknet/yolo/
  • Terminal basic tutorial: https://maker.pro/linux/tutorial/basic-linux-commands-for-beginners