NAE elects Prof. Alex (Sandy) Pentland as a Member

March 1st, 2014 Irfan Essa Posted in In The News, Sandy Pentland

Congratulations to my Ph.D. advisor, Sandy Pentland, on being elected to the National Academy of Engineering.

“For contributions to computer vision and technologies for measuring human social behavior.”

via NAE Website – Prof. Alex Pentland.


Paper: IEEE PAMI (1997) “Coding, analysis, interpretation, and recognition of facial expressions”

July 14th, 1997 Irfan Essa Posted in Affective Computing, Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Research, Sandy Pentland

Coding, analysis, interpretation, and recognition of facial expressions

Essa, I.A.; Pentland, A.P., In IEEE Transactions on Pattern Analysis and Machine Intelligence, July 1997, Volume: 19, Issue: 7, pp. 757-763, ISSN: 0162-8828, CODEN: ITPIDJ, INSPEC Accession Number: 5661539
Digital Object Identifier: 10.1109/34.598232

Abstract

We describe a computer vision system for observing facial motion by using an optimal estimation optical flow method coupled with geometric, physical and motion-based dynamic models describing the facial structure. Our method produces a reliable parametric representation of the face’s independent muscle action groups, as well as an accurate estimate of facial motion. Previous efforts at analysis of facial expression have been based on the facial action coding system (FACS), a representation developed in order to allow human psychologists to code expression from static pictures. To avoid use of this heuristic coding scheme, we have used our computer vision system to probabilistically characterize facial motion and muscle activation in an experimental population, thus deriving a new, more accurate, representation of human facial expressions that we call FACS+. Finally, we show how this method can be used for coding, analysis, interpretation, and recognition of facial expressions.
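As a rough illustration of the pipeline the abstract describes (not the paper's implementation), the sketch below estimates dense optical flow between two face frames and then expresses the observed flow as a combination of per-muscle-group flow fields, yielding a parametric activation vector. The `muscle_bases` array is a hypothetical stand-in for the basis the paper derives from its physical skin and muscle model, and OpenCV's Farneback flow stands in for the paper's optimal-estimation optical flow.

```python
# Minimal sketch under stated assumptions, not the paper's method.
import numpy as np
import cv2

def muscle_activations(prev_gray, next_gray, muscle_bases):
    """prev_gray, next_gray: HxW uint8 face images.
    muscle_bases: K x (H*W*2) array, one flattened flow field per muscle action group
    (hypothetical; the paper derives these from a physical skin/muscle model).
    Returns a length-K activation estimate."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    observed = flow.reshape(-1)                      # flatten HxWx2 flow to (H*W*2,)
    # Least-squares fit: observed flow ~ muscle_bases.T @ activations
    activations, *_ = np.linalg.lstsq(muscle_bases.T, observed, rcond=None)
    return activations
```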


Paper: IEEE PAMI (1996) “Task-specific gesture analysis in real-time using interpolated views”

December 14th, 1996 Irfan Essa Posted in Activity Recognition, Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Research, Sandy Pentland

Darrell, T.J.; Essa, I.A.; Pentland, A.P., “Task-specific gesture analysis in real-time using interpolated views,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 12, pp. 1236-1242, Dec 1996
URL: [ieeexplore.ieee.org] [DOI]

Abstract

Hand and face gestures are modeled using an appearance-based approach in which patterns are represented as a vector of similarity scores to a set of view models defined in space and time. These view models are learned from examples using unsupervised clustering techniques. A supervised learning paradigm is then used to interpolate view scores into a task-dependent coordinate system appropriate for recognition and control tasks. We apply this analysis to the problem of context-specific gesture interpolation and recognition, and demonstrate real-time systems which perform these tasks.
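A hedged sketch of the appearance-based idea (not the authors' system): each frame is reduced to a vector of normalized-correlation scores against a handful of stored view templates, and a radial basis function interpolator, fit on labelled examples, maps score vectors into a task-specific coordinate. The template set, the training pairs, and the use of SciPy's RBFInterpolator are all illustrative assumptions.

```python
# Minimal sketch under stated assumptions, not the paper's system.
import numpy as np
from scipy.interpolate import RBFInterpolator

def view_scores(frame, templates):
    """frame: HxW float array; templates: list of HxW float arrays.
    Returns one normalized-correlation score per view template."""
    f = (frame - frame.mean()) / (frame.std() + 1e-8)
    scores = []
    for t in templates:
        g = (t - t.mean()) / (t.std() + 1e-8)
        scores.append(float((f * g).mean()))
    return np.array(scores)

def fit_interpolator(train_frames, train_params, templates):
    """train_frames: list of HxW arrays; train_params: (N, 1) task coordinates
    (e.g. a pointing angle). Returns a supervised view-score interpolator."""
    X = np.stack([view_scores(f, templates) for f in train_frames])
    return RBFInterpolator(X, train_params)

# At run time: interp(view_scores(new_frame, templates)[None]) -> task value.
```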


Event: International Conference on Face and Gesture Recognition (1996).

October 13th, 1996 Irfan Essa Posted in Events, Face and Gesture, Sandy Pentland

International Conference on Face and Gesture Recognition (FG) 1996, October 13-16, 1996, Killington, Vermont



Paper: IEEE ICCV (1995) “Facial expression recognition using a dynamic model and motion energy”

June 20th, 1995 Irfan Essa Posted in Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Sandy Pentland

Essa, I.A.; Pentland, A.P. (1995), “Facial expression recognition using a dynamic model and motion energy,” In Proceedings of the Fifth International Conference on Computer Vision, June 20-23, 1995, Cambridge, MA, pp. 360-367, ISBN: 0-8186-7042-8, INSPEC Accession Number: 5028034
Digital Object Identifier: 10.1109/ICCV.1995.466916

Abstract

Previous efforts at facial expression recognition have been based on the Facial Action Coding System (FACS), a representation developed in order to allow human psychologists to code expression from static facial “mugshots.” We develop new, more accurate representations for facial expression by building a video database of facial expressions and then probabilistically characterizing the facial muscle activation associated with each expression using a detailed physical model of the skin and muscles. This produces a muscle-based representation of facial motion, which is then used to recognize facial expressions in two different ways. The first method uses the physics-based model directly, by recognizing expressions through comparison of estimated muscle activations. The second method uses the physics-based model to generate spatio-temporal motion energy templates of the whole face for each different expression. These simple, biologically plausible motion energy “templates” are then used for recognition. Both methods show substantially greater accuracy at expression recognition than has been previously achieved.
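The second (template-based) method can be caricatured in a few lines: accumulate absolute frame differences over an expression sequence into a motion energy map, then label a new sequence by its nearest stored template. This is only a rough sketch; it ignores the physics-based muscle model entirely, uses plain Euclidean distance, and the names and data layout are illustrative assumptions rather than the paper's pipeline.

```python
# Minimal sketch under stated assumptions, not the paper's method.
import numpy as np

def motion_energy(frames):
    """frames: T x H x W float array of a face sequence.
    Returns an HxW map of accumulated frame-to-frame motion energy."""
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.sum(axis=0)

def recognize(frames, templates):
    """templates: dict mapping expression name -> HxW energy template.
    Returns the name of the nearest template by Euclidean distance."""
    energy = motion_energy(frames)
    return min(templates,
               key=lambda name: np.linalg.norm(energy - templates[name]))
```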
