IJRCS – Volume 1 Issue 3 Paper 4


Authors: S. Sangeetha, R. Sujitha

Volume 01, Issue 03 | Year 2014 | ISSN: 2349-3828 | Pages: 14-19



Gesture analysis has become an increasingly interesting research domain. Compared with other tracking problems, hand tracking poses two main difficulties: the high complexity of the hand structure, which translates into a very large space of possible gestures, and the rapidity of the movements we can make with the hand or just the fingers. Recent approaches fit a 3D hand model to the observed RGB-D data through an optimization function that minimizes the error between the model and the data. However, these algorithms are highly dependent on the initialization point, which makes them impractical to run in a natural environment. To address this kind of problem, it is common to use an offline dataset of pre-learnt gestures that serves as a first rough estimate. Here we introduce a new algorithm designed to solve these problems in hand gesture processing: segmentation-based multi-threshold systems are introduced, and we present an accelerometer-based smart ring together with a similarity-matching-based extensible hand gesture recognition algorithm. Concretely, we present an algorithm that uses an articulated ICP minimization function, initialized with the parameters obtained from a dataset of hand gestures trained through a deep learning framework. This setup has two strong points. First, deep learning provides a very fast and accurate estimate of the performed hand gesture. Second, the articulated ICP algorithm captures the variability of a gesture performed by different people, or of slightly different gestures. Our proposed algorithm is evaluated and validated in several ways: independent evaluations of the deep learning framework and of the articulated ICP are performed; different real sequences are recorded to validate our approach; and, finally, quantitative and qualitative comparisons are conducted with state-of-the-art algorithms.
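To illustrate the refinement stage described above, the following is a minimal sketch of the ICP idea in its basic rigid form, not the paper's articulated variant: each model point is matched to its nearest observed point, a least-squares rigid transform is solved via SVD (the Kabsch method), and the step is repeated until the alignment error converges. All function names here are illustrative, not taken from the paper.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20, tol=1e-6):
    """Iteratively align src to dst: match nearest neighbors, solve, repeat."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest-neighbor correspondences (fine for small clouds)
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dists.argmin(axis=1)
        R, t = best_fit_transform(cur, dst[nn])
        cur = cur @ R.T + t
        err = np.linalg.norm(cur - dst[nn], axis=1).mean()
        if abs(prev_err - err) < tol:            # converged
            break
        prev_err = err
    return cur
```

Because the correspondences are recomputed from the current estimate at every iteration, the result depends strongly on the initialization, which is exactly why the abstract pairs ICP with a fast deep-learning estimate as the starting point. The articulated version additionally optimizes over per-joint parameters of the hand model rather than a single global transform.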


Keywords: Hand Gesture, Hand Recognition, Tracking, Deep Learning, Iterative Closest Point Algorithm

