About Kinect
What is a Kinect? How does it work? What kind of things can you do with it?
http://en.wikipedia.org/wiki/Kinect
Here is a video that explains how it works:
{{#ev:youtube|RT7hGBY5FZU}}
(very geeky, but he explains it very well)
Here is another video you have to see!
{{#ev:youtube|dTKlNGSH9Po}}
So with the Kinect we can see in 3D. We can recognize up to 7 people within a range of about 5 meters. Because the Kinect works with infrared, it can see in the dark (very important when you do an installation using video projection). Furthermore you can use the Kinect for "motion capture", using so-called "skeleton tracking".
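If you want to play with the raw data yourself, here is a minimal capture sketch. It assumes the open-source libfreenect driver and its Python wrapper ("freenect") are installed; the Microsoft SDK and OpenNI expose the same depth stream through different APIs.

<syntaxhighlight lang="python">
# Minimal sketch: grab one depth frame from a Kinect v1.
# Assumes the open-source libfreenect driver and its Python wrapper
# ("freenect") are installed; Kinect v2 / the Microsoft SDK expose the
# same data through different APIs.
import freenect

# sync_get_depth() returns (depth, timestamp); depth is a 480x640 numpy
# array of raw 11-bit sensor values (not meters).
depth, _ = freenect.sync_get_depth()

print(depth.shape)                 # (480, 640)
print(depth.min(), depth.max())    # raw values, roughly 0..2047
</syntaxhighlight>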
The kinds of projects that use the Kinect can be roughly divided into three categories.
1. Point clouds. That's all those dots in 3D. Some examples:
{{#ev:vimeo|22689237}}
{{#ev:vimeo|19723907}}
Point clouds can also be used to create a 3D model; a sketch of how to compute the raw point cloud yourself follows below.
{{#ev:youtube|quGhaggn3cQ}}
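If you would rather build the point cloud yourself than use a ready-made example, the depth image can be reprojected with a simple pinhole camera model. The sketch below assumes the depth values are already in millimeters; the intrinsic parameters are rough, commonly quoted Kinect v1 numbers, so treat them as placeholders and calibrate your own device.

<syntaxhighlight lang="python">
# Sketch: reproject a depth image (in millimeters) into an Nx3 point cloud
# with a pinhole camera model. The intrinsics are rough, commonly quoted
# Kinect v1 values -- placeholders, calibrate your own device.
import numpy as np

FX, FY = 594.2, 591.0      # focal lengths in pixels (approximate)
CX, CY = 339.5, 242.7      # principal point in pixels (approximate)

def depth_to_pointcloud(depth_mm):
    """depth_mm: (height, width) array of depth values in millimeters."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0              # to meters
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]    # drop pixels with no depth reading

# usage: cloud = depth_to_pointcloud(depth_in_millimeters)
</syntaxhighlight>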
2. Mocap, a.k.a. motion capture and skeleton tracking. Some examples (a small joint-angle sketch follows after the clips):
{{#ev:youtube|acOKVi-BNJA}}
{{#ev:youtube|RUG-Uvq-J-w}}
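Whatever tracker you use (OpenNI/NiTE, the Microsoft Kinect SDK, etc.), skeleton tracking ultimately hands you a 3D position for each joint, and gesture logic is mostly plain vector math on those points. The sketch below is not tied to any real tracker API; the joint coordinates are made-up placeholders just to show the angle calculation.

<syntaxhighlight lang="python">
# Sketch: gesture logic on top of skeleton tracking is mostly vector math.
# The joint positions below are made-up placeholders, not output from a
# real tracker (OpenNI/NiTE, Kinect SDK, ...).
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by the segments b-a and b-c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

shoulder = (0.20, 0.48, 2.0)   # placeholder (x, y, z) positions in meters
elbow    = (0.45, 0.48, 2.0)
hand     = (0.45, 0.75, 2.0)

print(joint_angle(shoulder, elbow, hand))   # 90.0 -> forearm raised upright
</syntaxhighlight>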
3. Blob detection. With blob detection we are not interested in people's exact pose, but more in the overall silhouette. Or we just want to know someone's center of mass. That way we know where a person is in space, but not the exact position of a shoulder or the left foot. A great example is here: {{#ev:youtube|xFgvNMN2DiQ|400|left}}
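A minimal blob-detection sketch is below. It assumes a depth image in millimeters (as in the capture sketch above) and uses OpenCV: threshold a depth band where you expect people, clean up the mask, find contours, and take each blob's center of mass. The depth band and area threshold are guesses to tune per installation.

<syntaxhighlight lang="python">
# Sketch: blob detection on a Kinect depth image with OpenCV (4.x assumed).
# "depth_mm" is a depth image in millimeters (see the capture sketch above).
# We only get each blob's center of mass -- no pose, no skeleton.
import cv2
import numpy as np

def find_blob_centers(depth_mm, near_mm=500, far_mm=3000, min_area=5000):
    # Keep only pixels inside the depth band where we expect people.
    mask = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8) * 255
    # Clean up sensor noise with a morphological open.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue                       # ignore small specks of noise
        m = cv2.moments(c)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers                         # (x, y) pixel position per blob
</syntaxhighlight>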