About Kinect

What is a Kinect? How does it work? What kind of things can you do with it?

http://en.wikipedia.org/wiki/Kinect

Here is a video that explains how it works:

{{#ev:youtube|RT7hGBY5FZU}}

(very geeky, but he explains it very well)


Here is another video you have to see!

{{#ev:youtube|dTKlNGSH9Po}}


So with the Kinect we can see in 3D. We can recognize up to 7 people within a range of about 5 meters. Because the Kinect works with infrared, it can see in the dark (very important when you do an installation using video projection). Furthermore, you can use the Kinect for "motion capture", using so-called "skeleton tracking".
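
If you want to get at that depth data yourself, here is a minimal sketch. It assumes the libfreenect Python bindings (the freenect module) and numpy are installed and a Kinect v1 is plugged in; the raw-to-meters conversion is one of the empirical approximations from the OpenKinect community, not an official formula.

<syntaxhighlight lang="python">
# Minimal sketch: grab one depth frame from the Kinect and report the
# distance range it sees. Assumes the libfreenect Python bindings (freenect)
# and numpy; the raw-to-meters formula is an empirical approximation.
import freenect
import numpy as np

# One 480x640 frame of raw 11-bit depth readings (2047 means "no reading").
depth, _timestamp = freenect.sync_get_depth()

# Keep readings that fall roughly inside the usable ~0.5-5 m range.
valid = (depth > 0) & (depth < 1030)

# Empirical raw-to-meters approximation (from the OpenKinect community).
meters = 0.1236 * np.tan(depth[valid] / 2842.5 + 1.1863)

print("closest point: %.2f m, farthest point: %.2f m" % (meters.min(), meters.max()))
</syntaxhighlight>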


The kinds of projects that use the Kinect can be roughly divided into 3 categories.

'''1. pointclouds'''

A pointcloud is all the depth readings drawn as dots in 3D space (a short code sketch of how a depth image becomes one follows at the end of this section). Some examples:

{{#ev:vimeo|22689237}}


{{#ev:vimeo|19723907}}


Pointclouds can also be used to create a 3D model.

{{#ev:youtube|quGhaggn3cQ}}
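
For reference, the math behind a pointcloud is just the pinhole camera model run backwards: every pixel (u, v) with depth z becomes a 3D point. A minimal sketch, assuming you already have a depth image in meters (for example from the snippet above) and using ballpark Kinect v1 intrinsics rather than a real calibration:

<syntaxhighlight lang="python">
# Minimal sketch: back-project a depth image (in meters) to a pointcloud.
# FX, FY, CX, CY are ballpark Kinect v1 intrinsics, not a real calibration.
import numpy as np

FX = FY = 594.0          # focal length in pixels (approximate)
CX, CY = 320.0, 240.0    # optical center in pixels (approximate)

def depth_to_pointcloud(depth_m):
    """depth_m: 480x640 array of depths in meters, 0 where there is no reading."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop "no reading" pixels

# Fake example frame: a flat wall 2 m away.
cloud = depth_to_pointcloud(np.full((480, 640), 2.0))
print(cloud.shape)   # -> (307200, 3)
</syntaxhighlight>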


'''2. mocap'''

a.k.a. motion capture and skeleton tracking. Some examples:

{{#ev:youtube|acOKVi-BNJA}}


{{#ev:youtube|RUG-Uvq-J-w}}
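
The skeleton tracking itself is done by middleware (OpenNI/NiTE, or the official Kinect SDK); what you write yourself is usually the logic on top of the tracked joints. A minimal, library-agnostic sketch, where the joint dictionary and its names are hypothetical stand-ins for whatever your tracker outputs:

<syntaxhighlight lang="python">
# Minimal sketch: a "hands above head" check on skeleton-tracking output.
# "joints" is a hypothetical dict of joint name -> (x, y, z) in meters,
# standing in for whatever OpenNI/NiTE or the Kinect SDK reports each frame.

def hands_above_head(joints):
    """True when both hands are higher than the head (y axis pointing up)."""
    head_y = joints["head"][1]
    return (joints["left_hand"][1] > head_y and
            joints["right_hand"][1] > head_y)

# Made-up joint positions for one frame:
joints = {
    "head":       (0.00, 1.60, 2.50),
    "left_hand":  (-0.30, 1.75, 2.40),
    "right_hand": (0.35, 1.80, 2.45),
}
print(hands_above_head(joints))   # -> True
</syntaxhighlight>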


'''3. blob detection'''

With blob detection we are not interested in a person's exact pose, but more in the overall silhouette, or we just want to know someone's center of mass. That way we know where a person is in space, but not the exact position of the shoulder or the left foot. A great example is here:

{{#ev:youtube|xFgvNMN2DiQ|400|left}}

[[Category:Cameras]]
[[Category:Making]]
[[Category:Machine Vision]]
[[Category:Sensors]]
[[Category:Motion Tracking]]
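
As a rough idea of how you can do such blob detection yourself: threshold the depth image to an interaction zone, find the resulting blobs, and take their centers of mass. A minimal sketch, assuming OpenCV 4 (cv2) and numpy and a depth image already converted to meters; the 0.5-2.5 m band and the area threshold are just example values:

<syntaxhighlight lang="python">
# Minimal sketch: find person-sized blobs and their centers of mass in a
# depth image. Assumes OpenCV 4 (cv2) and a depth image in meters;
# the 0.5-2.5 m band and the minimum area are example values.
import cv2
import numpy as np

def find_blob_centers(depth_m, near=0.5, far=2.5, min_area=2000):
    # Binary mask of everything inside the interaction zone.
    mask = ((depth_m > near) & (depth_m < far)).astype(np.uint8) * 255
    mask = cv2.medianBlur(mask, 5)                    # suppress speckle noise

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:             # ignore small blobs
            continue
        m = cv2.moments(c)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers                                    # (x, y) pixel position of each blob
</syntaxhighlight>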