Revision as of 09:46, 19 September 2022
MediaPipe for TouchDesigner
On Apple Silicon (M1) using Rosetta2
We will install an emulated x86_64 Homebrew using Rosetta 2 alongside the native Apple Silicon Homebrew. This way we can install the Python 3.7 that TouchDesigner expects while also installing the MediaPipe module.
There is a MediaPipe module built for Apple Silicon (M1) machines, found at [this GitHub issue](https://github.com/google/mediapipe/issues/3277), but I couldn't find a native Python 3.7 build for Apple Silicon to run in TouchDesigner.
Install brew x86_64 using Rosetta2
$ arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Make alias for x86_64 brew
Add this to rc file (.bashrc, .zshrc etc)
alias ibrew="arch -x86_64 /usr/local/bin/brew"
Install python3.7
$ ibrew install python@3.7
Alias for python@3.7 and pip3
Add to rc file
alias iPY37=/usr/local/opt/python@3.7/bin/python3
alias iPIP37=/usr/local/opt/python@3.7/bin/pip3
Now you can use `iPY37` to run x86_64 python@3.7 and `iPIP37` to install packages.
Install Mediapipe for python@3.7 x86_64
$ iPIP37 install mediapipe
Find python@3.7 x86_64 module path
Find the Python Module path of installed x86_64 python@3.7
$ ibrew --prefix
/usr/local
Append `/lib/python3.7/site-packages` to this path to get `/usr/local/lib/python3.7/site-packages`. This is the *Python 64-bit Module Path* you need to enter in TouchDesigner under `Preferences -> General -> Python 64-bit Module Path`.
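The same path assembly can be written as a quick sketch (standard library only; `/usr/local` is simply the prefix `ibrew --prefix` printed here, so substitute whatever yours prints):

```python
# Sketch: build the TouchDesigner "Python 64-bit Module Path" from the
# Homebrew prefix reported by `ibrew --prefix` (assumed /usr/local here).
import posixpath

def site_packages_path(brew_prefix: str) -> str:
    """Append the python3.7 site-packages subdirectory to the brew prefix."""
    return posixpath.join(brew_prefix, "lib", "python3.7", "site-packages")

print(site_packages_path("/usr/local"))  # /usr/local/lib/python3.7/site-packages
```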
Restart TouchDesigner
MediaPipe should be installed! To verify the installation, go to `Dialogs -> Textport and DATs`, import the MediaPipe module with `import mediapipe as mp`, and list the components with `print(dir(mp.solutions))`.
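If you prefer to script the check, this small sketch (standard library only) tests whether a module can be found on `sys.path` without actually importing it; in the Textport you would call it with `"mediapipe"`:

```python
# Sketch: check whether a module (e.g. the mediapipe package installed
# above) can be found on sys.path, without importing it.
import importlib.util

def is_importable(module_name: str) -> bool:
    """Return True if module_name resolves on the current sys.path."""
    return importlib.util.find_spec(module_name) is not None

# In the TouchDesigner Textport: is_importable("mediapipe")
print(is_importable("sys"))  # a stdlib module is always importable
```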
Windows
For Windows we need a parallel copy of the same Python version on the hard drive. TouchDesigner expects us to work with Python 3.7.x, so grab this version from the [Python website](https://www.python.org/downloads/windows/). I used [Python 3.7.9](https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe) for this tutorial.
Install the executable
Double-click the downloaded executable file and install Python to the `DataStorage` drive `E:/` in a folder named `Python`.
Installing MediaPipe
To install MediaPipe for our newly installed Python, open Windows PowerShell and run the following commands:
$ python
...
Python 3.7.9
...
>>> exit()
*Note: check that the reported Python version corresponds with the Python you installed in step 1. `exit()` is typed at the Python prompt, not in PowerShell.*
Install MediaPipe using the pip package manager:
$ pip install mediapipe
Open TouchDesigner starter
Download the [MediaPipe starter project](http://interactionstation.wdka.hro.nl/mediawiki/images/a/a5/Mp-starter-toe.zip), unzip it, and double-click to open the project in TouchDesigner. If you installed Python 3.7 in a different folder than `E:\Python\Lib\site-packages`, you need to change the `pythonpath` variable within the `DAT Execute` OP to the correct folder.
To check if you can use MediaPipe in TouchDesigner, navigate to `Dialogs -> Textport and DATs`, import the MediaPipe module with `import mediapipe as mp`, and list the components with `print(dir(mp.solutions))`.
Alternatively, you can use a DAT Execute OP and paste the code below.
Starter code:
# me - this DAT
#
# frame - the current frame
# state - True if the timeline is paused
#
# Make sure the corresponding toggle is enabled in the Execute DAT.

def onStart():
    import sys

    pythonpath = "E:/Python/Lib/site-packages"
    sys.path = [pythonpath] + sys.path
    return

def onCreate():
    return

def onExit():
    return

def onFrameStart(frame):
    return

def onFrameEnd(frame):
    return

def onPlayStateChange(state):
    return

def onDeviceChange():
    return

def onProjectPreSave():
    return

def onProjectPostSave():
    return
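The line that matters in this starter is the `sys.path` prepend: placing `pythonpath` first means packages in your external site-packages folder are found before any same-named module later on the path. A standalone sketch of the pattern (the `E:/Python/Lib/site-packages` path is the example install location from above):

```python
# Sketch: prepend an external site-packages folder to the module search
# path, so its packages are resolved before anything later in sys.path.
import sys

pythonpath = "E:/Python/Lib/site-packages"  # assumed install location
if pythonpath not in sys.path:              # avoid duplicates on re-run
    sys.path = [pythonpath] + sys.path

print(sys.path[0])  # E:/Python/Lib/site-packages
```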
MediaPipe in TouchDesigner
1. Face Detection CHOP
The code for the Script CHOP:
First, place a Script CHOP in the empty network and paste the code below:
# me - this DAT
# scriptOp - the OP which is cooking

import numpy as np
import cv2
import sys
import mediapipe as mp

mp_face = mp.solutions.face_detection
face_detection = mp_face.FaceDetection(
    min_detection_confidence=0.7
)

# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    topPar = page.appendTOP('Face', label='Image with face')
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    scriptOp.clear()
    topRef = scriptOp.par.Face.eval()

    num_faces = 0
    max_area = sys.float_info.min
    width = 0
    height = 0
    xmin = 0.5
    ymin = 0.5
    lx = sys.float_info.max
    ly = sys.float_info.max
    rx = sys.float_info.max
    ry = sys.float_info.max

    if topRef:
        img = topRef.numpyArray(delayed=True)
        frame = cv2.cvtColor(img, cv2.COLOR_RGBA2RGB)
        frame *= 255
        frame = frame.astype('uint8')
        results = face_detection.process(frame)

        if results.detections:
            num_faces = len(results.detections)

            # keep only the largest detected face
            for face in results.detections:
                area = face.location_data.relative_bounding_box.width * face.location_data.relative_bounding_box.height
                if area > max_area:
                    width = face.location_data.relative_bounding_box.width
                    height = face.location_data.relative_bounding_box.height
                    xmin = face.location_data.relative_bounding_box.xmin + width/2.0
                    ymin = 1 - (face.location_data.relative_bounding_box.ymin + height/2.0)
                    lx = face.location_data.relative_keypoints[0].x
                    ly = 1 - face.location_data.relative_keypoints[0].y
                    rx = face.location_data.relative_keypoints[1].x
                    ry = 1 - face.location_data.relative_keypoints[1].y

                    max_area = area

    tf = scriptOp.appendChan('face')
    tw = scriptOp.appendChan('width')
    th = scriptOp.appendChan('height')
    tx = scriptOp.appendChan('tx')
    ty = scriptOp.appendChan('ty')
    leftx = scriptOp.appendChan('left_eye_x')
    lefty = scriptOp.appendChan('left_eye_y')
    rightx = scriptOp.appendChan('right_eye_x')
    righty = scriptOp.appendChan('right_eye_y')
    nosex = scriptOp.appendChan('nose_x')
    nosey = scriptOp.appendChan('nose_y')
    tf.vals = [num_faces]
    tw.vals = [width]
    th.vals = [height]
    tx.vals = [xmin]
    ty.vals = [ymin]

    leftx.vals = [lx]
    lefty.vals = [ly]
    rightx.vals = [rx]
    righty.vals = [ry]

    scriptOp.rate = me.time.rate

    return
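The largest-face bookkeeping in `onCook` can be tested on its own, without MediaPipe or TouchDesigner. A sketch using plain tuples `(xmin, ymin, width, height)` in place of `location_data.relative_bounding_box` (the `1 - y` step converts MediaPipe's top-left-origin normalized coordinates to TouchDesigner's bottom-left origin):

```python
# Sketch: pick the largest face from a list of normalized bounding boxes
# (xmin, ymin, width, height) and return its center, with the y axis
# flipped from MediaPipe's top-left origin to TouchDesigner's bottom-left.
def largest_face_center(boxes):
    best = None
    max_area = float("-inf")
    for (xmin, ymin, w, h) in boxes:
        area = w * h
        if area > max_area:
            max_area = area
            best = (xmin + w / 2.0, 1 - (ymin + h / 2.0))
    return best

# two mock detections; the second covers more area and wins
boxes = [(0.1, 0.1, 0.2, 0.2), (0.5, 0.5, 0.4, 0.4)]
print(largest_face_center(boxes))
```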
2. Hand Tracking CHOP - the index finger
# me - this DAT
# scriptOp - the OP which is cooking

import numpy as np
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
hands = mp_hands.Hands(
    max_num_hands=1,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5
)

# press 'Setup Parameters' in the OP to call this function to re-create the parameters.
def onSetupParameters(scriptOp):
    page = scriptOp.appendCustomPage('Custom')
    p = page.appendTOP('Image', label='Video image')
    return

# called whenever custom pulse parameter is pushed
def onPulse(par):
    return

def onCook(scriptOp):
    scriptOp.clear()

    wrist = []
    index_tip = []
    num_hands = 0

    topRef = scriptOp.par.Image.eval()
    if topRef:
        img = topRef.numpyArray(delayed=True)  # avoid shadowing the built-in 'input'
        image = cv2.cvtColor(img, cv2.COLOR_RGBA2RGB)
        image *= 255
        image = image.astype('uint8')
        results = hands.process(image)

        if results.multi_hand_landmarks:
            num_hands = len(results.multi_hand_landmarks)  # count every detected hand
            for hand in results.multi_hand_landmarks:
                wrist.append(hand.landmark[0])      # landmark 0: wrist
                index_tip.append(hand.landmark[8])  # landmark 8: index fingertip

    tf = scriptOp.appendChan('hands')
    tf.vals = [num_hands]

    if len(wrist) > 0:
        twx = scriptOp.appendChan('wrist:x')
        twy = scriptOp.appendChan('wrist:y')

        twx.vals = [wrist[0].x]
        twy.vals = [wrist[0].y]

    if len(index_tip) > 0:
        tix = scriptOp.appendChan('index_tip:x')
        tiy = scriptOp.appendChan('index_tip:y')

        tix.vals = [index_tip[0].x]
        tiy.vals = [index_tip[0].y]

    scriptOp.rate = me.time.rate

    return
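To track fingers other than the index, you only need the other landmark indices: MediaPipe's hand model numbers the wrist 0 and the thumb-to-pinky fingertips 4, 8, 12, 16, 20. A sketch with mock landmarks (`namedtuple`s standing in for the real landmark objects, which likewise expose `.x` and `.y`; the channel names mirror the `appendChan` naming above):

```python
# Sketch: extract fingertip positions from a list of 21 hand landmarks
# using MediaPipe's landmark indices (0 = wrist, 4/8/12/16/20 = tips).
from collections import namedtuple

Landmark = namedtuple("Landmark", ["x", "y"])  # mock for MediaPipe landmarks

TIPS = {"thumb": 4, "index": 8, "middle": 12, "ring": 16, "pinky": 20}

def fingertip_channels(landmarks):
    """Return a {'<finger>_tip:x'/':y': value} dict, one entry per channel."""
    chans = {}
    for name, idx in TIPS.items():
        chans[f"{name}_tip:x"] = landmarks[idx].x
        chans[f"{name}_tip:y"] = landmarks[idx].y
    return chans

# mock hand: landmark i sits at (i/20, i/20)
hand = [Landmark(i / 20, i / 20) for i in range(21)]
print(fingertip_channels(hand)["index_tip:x"])  # 0.4
```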