Docker

From Interaction Station Wiki
ML Docker Image installed on the Interaction Station ML computers (Ubuntu 16.04):<br/>
  
=Installing Docker CE:=
 
*sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
 
*curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 
*More info: https://unix.stackexchange.com/questions/363048/unable-to-locate-package-docker-ce-on-a-64bit-ubuntu
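
The repository-setup and install commands for this section were lost in the revision diff. Assuming the standard Docker CE procedure for Ubuntu 16.04 from Docker's install documentation, the remaining steps would be roughly:

```shell
# Add Docker's stable apt repository for this Ubuntu release
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Refresh the package index and install Docker CE
sudo apt-get update
sudo apt-get install -y docker-ce
```

These are provisioning commands for the ML machines themselves; the package name docker-ce is the one served by Docker's apt repository.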
  
==Change Docker root dir using systemd (Don't do this, set volume instead)==
 
*systemctl status docker.service
 
*sudo nano /etc/default/docker
  
=Installing nvidia-docker:=
 
* #nvidia-docker2 is still not supported by nvidia-docker-compose
 
*docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
 
*sudo apt-get purge -y nvidia-docker
 
*curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
*distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
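
To see what the distribution variable ends up holding, here is a self-contained sketch that sources a sample os-release file (a stand-in for the real /etc/os-release on these Ubuntu 16.04 machines):

```shell
# Build a sample os-release like the one on Ubuntu 16.04 (stand-in for /etc/os-release)
cat > /tmp/os-release <<'EOF'
ID=ubuntu
VERSION_ID="16.04"
EOF
# Same pattern as the command above, pointed at the sample file
distribution=$(. /tmp/os-release; echo $ID$VERSION_ID)
echo "$distribution"
```

This prints ubuntu16.04, which is the path segment the next curl command plugs into the nvidia-docker.list URL.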
 
*curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
 
*sudo apt-get update
*sudo apt-get install -y nvidia-docker
 
*sudo pkill -SIGHUP dockerd
 
* #Test nvidia-smi with the latest official CUDA image
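
For nvidia-docker 1.0, the smoke test suggested by the comment above is usually run as follows (per the NVIDIA/nvidia-docker README; it needs a machine with the NVIDIA driver installed):

```shell
# Pull the official CUDA image and run nvidia-smi inside it to verify GPU passthrough
sudo nvidia-docker run --rm nvidia/cuda nvidia-smi
```

If the GPU table prints, nvidia-docker can see the card and the Deepo images below should work.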
 
*Link:
 
*https://github.com/NVIDIA/nvidia-docker
=Installing docker-compose:=
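
This section was left empty in the revision; at the time, the usual way to install docker-compose was to download the release binary (the version number below is an assumption, check the releases page for the current one):

```shell
# Download the docker-compose release binary (1.22.0 is an assumed version)
sudo curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

Running the curl without sudo is what triggers the Permission Denied issue linked in the next section.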
=Installing nvidia-docker-compose:=
*pip install nvidia-docker-compose
*link: https://hackernoon.com/docker-compose-gpu-tensorflow-%EF%B8%8F-a0e2011d36
*Permission denied on curl-and-save for docker-compose: https://github.com/docker/machine/issues/652
=Using Docker with nvidia-docker-compose=
*Public Docker repository (the FROM line in a Dockerfile selects one of these images)
*https://hub.docker.com/

*Dir structure:
*docker-compose.yml
*deepo
*deepo/do_not_finish.sh
*deepo/Dockerfile
*deepo_data (folder that is visible inside the deepo image)

*docker-compose.yml:
 version: '3'
 services:
   #machine name
   deepo:
     #container name
     container_name: deepo
     #path to Dockerfile
     build: deepo
     command: sh do_not_finish.sh
     volumes:
       - ./deepo_data:/media/deepo_data
     tty: true

*Dockerfile:
 FROM ufoym/deepo
 ADD do_not_finish.sh /

*Dockerfiles guide:
*https://rock-it.pl/how-to-write-excellent-dockerfiles/

*do_not_finish.sh:
 #!/bin/bash
 sh -c 'while :; do sleep 100; done'
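
A quick way to confirm that this loop never exits on its own is to run it under timeout, which has to kill it (timeout reports exit status 124 when it does):

```shell
status=0
# The same keep-alive loop, capped at 1 second by timeout
timeout 1 sh -c 'while :; do sleep 100; done' || status=$?
echo "exit status: $status"
```

Exit status 124 shows the loop was still running when timeout killed it, which is exactly the behaviour the container relies on.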

*We need the endless loop because docker-compose stops the container as soon as its command finishes
*The endless loop keeps the container alive so we can enter it with docker exec

==Run it==
*Steps 1 and 2: from the folder containing the docker-compose.yml file:
*sudo nvidia-docker-compose build
*sudo nvidia-docker-compose up

*Step 3: from another terminal:
*sudo nvidia-docker exec -it deepo bash
==Troubleshooting problems==
*Check the nvidia-docker version (needs to be version 1):
*nvidia-docker version
*More info:
*https://github.com/eywalker/nvidia-docker-compose/issues/26

*Permission denied: u'./docker-compose.yml'
*https://github.com/docker/docker-snap/issues/26
  
 

Revision as of 07:03, 13 October 2018

==Docker - clean up all the volumes==
*sudo docker system prune -a -f --volumes


=Deepo=

It includes:

*cudnn
*theano
*tensorflow
*sonnet
*pytorch
*keras
*lasagne
*mxnet
*cntk
*chainer
*caffe
*caffe2
*torch


=Installing Deepo:=

Run Deepo image with Docker:

*sudo nvidia-docker run -it ufoym/deepo:gpu bash

Run Deepo image with Docker (with python 2.7):

*sudo nvidia-docker run -it ufoym/deepo:py27 bash

=Setting up ML computers:=

*Linux distribution installed: Ubuntu 16.04

Partition made for machine learning: MachineLearning

*In Windows: Disk Management -> Resize DataStorage
*Create a new ext4 partition

Mounting the partition automatically:

Get the UUID of the MachineLearning partition

*sudo blkid

Add partition to fstab:

*sudo nano /etc/fstab
*Add these two lines at the bottom:
*UUID=(id of the MachineLearning partition) /media/MachineLearning ext4 rw,suid,dev,auto,user,async,exec 0 2
*UUID=(id of the DataStorage partition) /media/DataStorage ntfs-3g defaults,locale=en_US.UTF-8 0 0
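
To avoid typos, the MachineLearning line can be generated from the UUID reported by blkid; the UUID below is a made-up placeholder, and the third field assumes the partition was created as ext4 as above:

```shell
# Placeholder UUID -- substitute the real one from `sudo blkid`
uuid="1234abcd-0000-1111-2222-333344445555"
# Print the fstab entry: device, mount point, fstype, options, dump, pass
printf 'UUID=%s /media/MachineLearning ext4 rw,suid,dev,auto,user,async,exec 0 2\n' "$uuid"
```

The printed line can be pasted straight into /etc/fstab.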

Give write permissions to the new MachineLearning partition

*sudo chmod -R a+rwx /media/MachineLearning/
*Need extra space? Extending the partition:

https://askubuntu.com/questions/492054/how-to-extend-my-root-partition

=Installing NVIDIA Driver:=

*Set Ubuntu to boot in console mode. Type:
*sudo apt-get install systemd
*sudo systemctl set-default multi-user.target
*sudo reboot now
*Log in, and in console mode type:
*sudo add-apt-repository ppa:graphics-drivers/ppa
*sudo apt update
*sudo apt upgrade
*For a GeForce 1070 Ti (07/2018), type:
*sudo apt-get install nvidia-390
*Re-set Ubuntu to boot in graphical mode. Type:
*sudo systemctl set-default graphical.target
*sudo reboot now

Checking if the NVIDIA driver is properly installed. Type:

*nvidia-smi
*nvidia-settings

=Installing CUDA 9.0 for Ubuntu 16.04 (the latest version is not supported by TensorFlow):=
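
The actual install commands are missing from this revision. As a sketch, assuming NVIDIA's apt repository for Ubuntu 16.04 is already configured (see NVIDIA's CUDA 9.0 installation guide for the repository setup, which is not reproduced here), the remaining steps would look like:

```shell
# Assumed package name from NVIDIA's CUDA apt repository
sudo apt-get install -y cuda-9-0
# Make the CUDA 9.0 toolchain visible (append these to ~/.bashrc to persist)
export PATH=/usr/local/cuda-9.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```

After a new shell is opened, the nvcc check below should report release 9.0.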

Checking if CUDA is properly installed. Type:

*nvcc --version

=Resources used:=


=Other options:=

NTFS fstab wizard:

*sudo apt-get install ntfs-config
*sudo ntfs-config

Format a large-capacity HD with the exFAT filesystem to have access to it from Ubuntu:

*On Windows 10:
*cmd
*diskpart
*select disk # (where # is the number of the target drive)
*list part
*select part # (where # is the number of the partition)
*format fs=exfat QUICK