A More Flexible Approach to Utilizing Depth Cameras for Hand and Touch Interaction
Abstract
Many researchers have utilized depth cameras to track users' hands and implement various interaction methods, such as touch-sensitive displays and gestural input. With the recent introduction of Microsoft's low-cost Kinect sensor, interest in this strategy has increased. However, a review of the existing literature suggests that the majority of these systems suffer from similar limitations, stemming from the image processing methods used to extract, segment, and relate the user's body to the environment/display. This paper presents a simple, efficient method for extracting interactions from depth images that is more flexible in terms of sensor placement, display orientation, and dependency on surface reflectivity.
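The abstract does not detail the paper's extraction method, but a common baseline for depth-camera touch sensing is to model the background surface and flag pixels whose depth falls within a thin band just above it. The sketch below illustrates that general idea only; the function name, band thresholds, and toy values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def detect_touch(depth, background, near=3.0, far=15.0):
    """Flag pixels lying in a thin band above a modeled background surface.

    depth, background : 2-D arrays of per-pixel depth (mm), same shape.
    near, far         : band thresholds (mm) above the surface; values
                        here are illustrative, not from the paper.
    """
    # Positive height means the pixel is closer to the camera than the surface.
    height = background - depth
    # Pixels slightly above the surface are touch candidates; pixels far
    # above it (e.g. a hovering arm) are rejected.
    return (height > near) & (height < far)

# Toy example: a flat surface 1000 mm from the camera.
background = np.full((4, 4), 1000.0)
depth = background.copy()
depth[1, 2] = 992.0   # fingertip 8 mm above the surface -> touch candidate
depth[3, 3] = 900.0   # forearm 100 mm above the surface -> rejected
mask = detect_touch(depth, background)
```

In practice the background model would be captured per-pixel during a calibration pass rather than assumed flat, and the resulting mask would be cleaned up (e.g. with connected-component filtering) before being mapped to display coordinates.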