MediaPipe for TouchDesigner

On Apple Silicon (M1) using Rosetta2

We will install an emulated x86_64 brew using Rosetta2 alongside the native Apple Silicon brew. This way we can install the Python 3.7 that TouchDesigner expects while also installing the MediaPipe module.

There is a build of the MediaPipe module for Apple Silicon (M1) machines, linked from [this Github issue](https://github.com/google/mediapipe/issues/3277), but I couldn't find a native Python 3.7 build for Apple Silicon to run in TouchDesigner.

Install brew x86_64 using Rosetta2

$ arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Make alias for x86_64 brew

Add this to your rc file (.bashrc, .zshrc, etc.)

alias ibrew="arch -x86_64 /usr/local/bin/brew"

Install python3.7

$ ibrew install python@3.7

Alias for python@3.7 and pip3

Add to rc file

alias iPY37=/usr/local/opt/python@3.7/bin/python3
alias iPIP37=/usr/local/opt/python@3.7/bin/pip3

Now you can use `iPY37` to run x86_64 python@3.7 and `iPIP37` to install packages.
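
To make sure the aliases really point at an x86_64 Python 3.7 (and not a native arm64 one), you can run a quick check with `iPY37`. This snippet is just an optional sanity check added here, not part of the original steps:

<syntaxhighlight lang="python">
# Save as check_python.py (any throwaway name) and run: iPY37 check_python.py
import platform
import sys

print(sys.version)        # expect a 3.7.x version string
print(platform.machine()) # expect 'x86_64' when running under Rosetta 2
</syntaxhighlight>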

Install MediaPipe for python@3.7 x86_64

$ iPIP37 install mediapipe

Find python@3.7 x86_64 module path

Find the Python module path of the installed x86_64 python@3.7

$ ibrew --prefix

/usr/local

Append `/lib/python3.7/site-packages` to this path to get `/usr/local/lib/python3.7/site-packages`. This is the *Python 64-bit Module Path* you need to enter in TouchDesigner under `Preferences -> General -> Python 64-bit Module Path`.

Restart TouchDesigner

MediaPipe should now be installed! To verify the installation, go to `Dialogs -> Textport and DATs`, import the MediaPipe module with `import mediapipe as mp` and list the components with `print(dir(mp.solutions))`.
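
For reference, the check in the Textport looks roughly like this (the exact contents of `mp.solutions` depend on the installed MediaPipe version):

<syntaxhighlight lang="python">
# Type this in the Textport (Dialogs -> Textport and DATs)
import mediapipe as mp

# Lists the bundled solutions; expect names such as
# 'face_detection', 'face_mesh', 'hands' and 'pose'
print(dir(mp.solutions))
</syntaxhighlight>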

Windows

For Windows we need a parallel copy of the same Python version on the hard drive. TouchDesigner expects us to work with Python 3.7.x, so grab this version from the [Python website](https://www.python.org/downloads/windows/). I used [Python 3.7.9](https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe) for this tutorial.

Install the executable

Double-click the downloaded executable file and install Python to the `DataStorage` drive `E:/` in a folder named `Python`

Installing MediaPipe

To install MediaPipe for our newly installed Python, open Windows PowerShell and run the following commands

$ python

...
Python 3.7.9
...

>>> exit()
Note: check that the Python version corresponds with the Python you installed in step 1.

Install MediaPipe using the pip package manager:

$ pip install mediapipe
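
As an optional sanity check (not part of the original steps), you can confirm the install in the same PowerShell Python before opening TouchDesigner:

<syntaxhighlight lang="python">
# Run inside the interpreter started with `python`
import mediapipe as mp

# Prints the installed package version; an ImportError here means pip
# installed MediaPipe for a different Python than the one on your PATH
print(mp.__version__)
</syntaxhighlight>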

Open TouchDesigner starter

Download the [MediaPipe starter project](http://interactionstation.wdka.hro.nl/mediawiki/images/a/a5/Mp-starter-toe.zip), unzip it and double-click to open the project in TouchDesigner. If you installed Python 3.7 in a different folder than `E:\Python\Lib\site-packages`, you need to change the `pythonpath` variable within the `DAT Execute` OP to the correct folder.

To check that you can use MediaPipe in TouchDesigner, navigate to `Dialogs -> Textport and DATs`, import the MediaPipe module with `import mediapipe as mp` and list the components with `print(dir(mp.solutions))`.


Alternatively, you can use a DAT Execute OP and copy and paste the code below.

Starter code:

<syntaxhighlight lang="python">
# me - this DAT
#
# frame - the current frame
# state - True if the timeline is paused
#
# Make sure the corresponding toggle is enabled in the Execute DAT.

def onStart():
    import sys

    # Prepend the external site-packages folder so TouchDesigner's Python
    # can find the MediaPipe module installed with pip.
    pythonpath = "E:/Python/Lib/site-packages"
    sys.path = [pythonpath] + sys.path
    return

def onCreate():
    return

def onExit():
    return

def onFrameStart(frame):
    return

def onFrameEnd(frame):
    return

def onPlayStateChange(state):
    return

def onDeviceChange():
    return

def onProjectPreSave():
    return

def onProjectPostSave():
    return
</syntaxhighlight>

MediaPipe examples

faceDetect-CHOP

The code for the Script CHOP:

First, put a Script CHOP in an empty network and paste the code below:


<syntaxhighlight lang="python">
# me - this DAT
# scriptOp - the OP which is cooking

import numpy as np
import cv2
import sys
import mediapipe as mp

mp_face = mp.solutions.face_detection
face_detection = mp_face.FaceDetection(
    min_detection_confidence=0.7
)

# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    topPar = page.appendTOP('Face', label='Image with face')
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    scriptOp.clear()
    topRef = scriptOp.par.Face.eval()

    # Defaults used when no face is detected
    num_faces = 0
    max_area = sys.float_info.min
    width = 0
    height = 0
    xmin = 0.5
    ymin = 0.5
    lx = sys.float_info.max
    ly = sys.float_info.max
    rx = sys.float_info.max
    ry = sys.float_info.max

    if topRef:
        # Convert the referenced TOP to an 8-bit RGB image for MediaPipe
        img = topRef.numpyArray(delayed=True)
        frame = cv2.cvtColor(img, cv2.COLOR_RGBA2RGB)
        frame *= 255
        frame = frame.astype('uint8')
        results = face_detection.process(frame)

        if results.detections:
            num_faces = len(results.detections)

            # Keep the values of the largest detected face
            for face in results.detections:
                area = face.location_data.relative_bounding_box.width * face.location_data.relative_bounding_box.height
                if area > max_area:
                    width = face.location_data.relative_bounding_box.width
                    height = face.location_data.relative_bounding_box.height
                    xmin = face.location_data.relative_bounding_box.xmin + width/2.0
                    ymin = 1 - (face.location_data.relative_bounding_box.ymin + height/2.0)
                    lx = face.location_data.relative_keypoints[0].x
                    ly = 1 - face.location_data.relative_keypoints[0].y
                    rx = face.location_data.relative_keypoints[1].x
                    ry = 1 - face.location_data.relative_keypoints[1].y

                    max_area = area

    tf = scriptOp.appendChan('face')
    tw = scriptOp.appendChan('width')
    th = scriptOp.appendChan('height')
    tx = scriptOp.appendChan('tx')
    ty = scriptOp.appendChan('ty')
    leftx = scriptOp.appendChan('left_eye_x')
    lefty = scriptOp.appendChan('left_eye_y')
    rightx = scriptOp.appendChan('right_eye_x')
    righty = scriptOp.appendChan('right_eye_y')
    # The nose channels are created but not filled in by this example
    nosex = scriptOp.appendChan('nose_x')
    nosey = scriptOp.appendChan('nose_y')
    tf.vals = [num_faces]
    tw.vals = [width]
    th.vals = [height]
    tx.vals = [xmin]
    ty.vals = [ymin]

    leftx.vals = [lx]
    lefty.vals = [ly]
    rightx.vals = [rx]
    righty.vals = [ry]

    scriptOp.rate = me.time.rate

    return
</syntaxhighlight>
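
Once the Script CHOP is cooking, its channels can be read anywhere else in the network. A minimal sketch, assuming the Script CHOP is named `script1` and a webcam TOP is assigned to its custom `Face` parameter (adjust the operator name to your own network):

<syntaxhighlight lang="python">
# For example in the Textport or another script
face_chop = op('script1')  # hypothetical name of the Script CHOP above

print(face_chop['face'].eval())  # number of detected faces
print(face_chop['tx'].eval(), face_chop['ty'].eval())  # normalized center of the largest face
</syntaxhighlight>

In parameter expressions the same channels can be referenced directly, for example `op('script1')['tx']`.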

1. http://interactionstation.wdka.hro.nl/mediawiki/images/c/c5/Butter-FaceDetectionScriptCHOP.toe.zip

2. http://interactionstation.wdka.hro.nl/mediawiki/images/a/af/Cloud-face-voice-FaceDetectionScriptCHOP.20.toe.zip

FaceMeshSOP

1. http://interactionstation.wdka.hro.nl/mediawiki/images/1/11/MediaPipeFaceMeshSOP.zip

Hand-CHOP-index-finger

1. http://interactionstation.wdka.hro.nl/mediawiki/images/f/f2/Hand-index.zip

Hand-CHOP-all-fingers

1. http://interactionstation.wdka.hro.nl/mediawiki/images/d/de/MediaPipeHandCHOP-all-fingers.toe.zip

PoseCHOP

1. http://interactionstation.wdka.hro.nl/mediawiki/images/2/2d/MediaPipePoseCHOP.toe.zip

SegmentationMask

1. http://interactionstation.wdka.hro.nl/mediawiki/images/7/7e/SegmentationMask.zip