The Minds of the New Machines | Research Horizons | Georgia Tech’s Research News

March 15th, 2018 Irfan Essa Posted in In The News, Machine Learning

A nice write-up in Georgia Tech’s Research Horizons Magazine about ML@GT

Machine learning has been around for decades, but the advent of big data and more powerful computers has increased its impact significantly, moving machine learning beyond pattern recognition and natural language processing into a broad array of scientific disciplines. A subcategory of artificial intelligence, machine learning deals with the construction of algorithms that enable computers to learn from and react to data rather than following explicitly programmed instructions. “Machine-learning algorithms build a model based on inputs and then use that model to make other hypotheses, predictions, or decisions,” explained Irfan Essa, professor and associate dean in Georgia Tech’s College of Computing, who also directs the Institute’s Center for Machine Learning.
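Essa's description of the basic loop, fitting a model to example inputs and then using it to predict, can be illustrated with a toy sketch (the data and function names here are illustrative only, not tied to any ML@GT system):

```python
# Toy illustration: "learn" a model from examples, then predict.
# Here the model is a least-squares line fit, computed in closed form.

def fit_line(xs, ys):
    """Learn slope and intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# The algorithm is never told the rule y = 2x + 1; it infers it
# from the training examples and generalizes to a new input.
model = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(predict(model, 10))  # → 21.0
```

The point of the sketch is the division of labor: training data goes in, a model comes out, and the model, not a hand-written rule, produces the prediction.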

Source: The Minds of the New Machines | Research Horizons | Georgia Tech’s Research News


Real-Time Captcha Technique Improves Biometric Authentication | College of Computing

February 20th, 2018 Irfan Essa Posted in Computer Vision, In The News, Machine Learning

A short write-up on one of my recent publications.

A new login authentication approach could improve the security of current biometric techniques that rely on video or images of users’ faces. Known as Real-Time Captcha, the technique uses a unique challenge that’s easy for humans — but difficult for attackers who may be using machine learning and image generation software to spoof legitimate users. The Real-Time Captcha requires users to look into their mobile phone’s built-in camera while answering a randomly-selected question that appears within a Captcha on the screens of the devices. The response must be given within a limited period of time that’s too short for artificial intelligence or machine learning programs to respond. The Captcha would supplement image- and audio-based authentication techniques that can be spoofed by attackers who may be able to find and modify images, video and audio of users — or steal them from mobile devices.
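The core of the defense is a latency check: a legitimate user can answer the challenge quickly, while an attacker who must synthesize a matching face or voice response cannot. A minimal sketch of that timing gate follows (the names, structure, and threshold are hypothetical, not the authors' implementation):

```python
# Hedged sketch of the timing idea behind rtCaptcha: accept a response
# only if it arrives within a window too short for a synthesis attack.
# HUMAN_DEADLINE_S is an illustrative threshold, not a value from the paper.
import time

HUMAN_DEADLINE_S = 2.0  # hypothetical response window, in seconds

def issue_challenge():
    """Return a randomly chosen challenge plus the time it was issued."""
    return {"question": "read the digits aloud", "issued_at": time.monotonic()}

def verify_response(challenge, answered_correctly):
    """Reject late answers even if correct: latency is the liveness signal."""
    elapsed = time.monotonic() - challenge["issued_at"]
    return answered_correctly and elapsed <= HUMAN_DEADLINE_S

challenge = issue_challenge()
print(verify_response(challenge, answered_correctly=True))  # fast + correct → True
```

In the actual system the "answer" is a spoken response checked against the user's voice and face; the sketch only shows why the deadline, rather than the challenge content alone, defeats machine-generated responses.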

CITATION: Erkam Uzun, Simon Pak Ho Chung, Irfan Essa, and Wenke Lee, “rtCaptcha: A Real-Time CAPTCHA Based Liveness Detection System,” in Proceedings of the Network and Distributed System Security (NDSS) Symposium, 2018.

Source: Real-Time Captcha Technique Improves Biometric Authentication | College of Computing


TEDx Talk (2017) on “Bridging Human and Artificial Intelligence” at TEDxCentennialParkWomen

November 1st, 2017 Irfan Essa Posted in In The News, Interesting, Machine Learning, Presentations, Videos

A TEDx talk that I recently gave.

In this talk, the speaker takes you on a journey of how AI systems have evolved over time.

Dr. Irfan Essa is a professor in the School of Interactive Computing and the inaugural Director of Machine Learning at Georgia Tech. One of the fastest-growing research areas in computing, machine learning spans many disciplines that use data to discover scientific principles, infer patterns, and extract meaningful knowledge. Essa directs an interdisciplinary team studying ways machine learning connects information and actions to bring the most benefit to the most people.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at https://www.ted.com/tedx

Paper in IPCAI 2017 on “Video and Accelerometer-Based Motion Analysis for Automated Surgical Skills Assessment”

June 21st, 2017 Irfan Essa Posted in Activity Recognition, Aneeq Zia, Computer Vision, Eric Sarin, Medical, MICCAI, Vinay Bettadapura, Yachna Sharma

Paper

  • A. Zia, Y. Sharma, V. Bettadapura, E. Sarin, and I. Essa (2017), “Video and Accelerometer-Based Motion Analysis for Automated Surgical Skills Assessment,” in Proceedings of Information Processing in Computer-Assisted Interventions (IPCAI), 2017. [PDF] [BIBTEX]
    @InProceedings{2017-Zia-VAMAASSA,
      author    = {A. Zia and Y. Sharma and V. Bettadapura and E. Sarin and I. Essa},
      booktitle = {Proceedings of Information Processing in Computer-Assisted Interventions (IPCAI)},
      month     = {June},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2017-Zia-VAMAASSA.pdf},
      title     = {Video and Accelerometer-Based Motion Analysis for Automated Surgical Skills Assessment},
      year      = {2017}
    }

Abstract

Purpose: Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could help save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-based surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data).
Methods: We conduct the largest study, to the best of our knowledge, for basic surgical skills assessment on a dataset that contained video and accelerometer data for suturing and knot-tying tasks. We introduce “entropy-based” features – Approximate Entropy (ApEn) and Cross-Approximate Entropy (XApEn) – which quantify the amount of predictability and regularity of fluctuations in time-series data. The proposed features are compared to existing methods of Sequential Motion Texture (SMT), Discrete Cosine Transform (DCT), and Discrete Fourier Transform (DFT) for surgical skills assessment.
Results: We report average performance of different features across all applicable OSATS criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data. For accelerometer data, our method performs better for suturing only. We also show that fusion of video and acceleration features can improve overall performance, with the proposed entropy features achieving the highest accuracy.
Conclusions: Automated surgical skills assessment can be achieved with high accuracy using the proposed entropy features. Such a system can significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
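The "entropy-based" features center on Approximate Entropy (ApEn), which scores how predictable a time series is: regular, repetitive motion scores near zero, erratic motion scores higher. Below is a sketch of the standard Pincus formulation of ApEn; the parameters m and r are illustrative, and this is not the authors' exact feature pipeline:

```python
# Approximate Entropy (ApEn) over a 1-D time series, standard formulation:
# ApEn(m, r) = phi(m) - phi(m + 1), where phi counts, in log scale, how
# often windows of length m stay within tolerance r of each other.
import math

def apen(series, m=2, r=0.2):
    """ApEn(m, r): lower values mean a more regular, predictable signal."""
    n = len(series)

    def phi(m):
        # Embed the series into overlapping windows of length m.
        windows = [series[i:i + m] for i in range(n - m + 1)]
        count = len(windows)
        total = 0.0
        for w in windows:
            # Chebyshev distance: windows match if every sample is within r.
            similar = sum(
                1 for v in windows
                if max(abs(a - b) for a, b in zip(w, v)) <= r
            )
            total += math.log(similar / count)  # self-match keeps similar >= 1
        return total / count

    return phi(m) - phi(m + 1)

regular = [0, 1] * 50          # perfectly periodic "motion" signal
print(round(apen(regular), 3))  # → 0.0 (a periodic signal is fully predictable)
```

In the paper's setting, the series would be surgical motion data (video-derived or accelerometer traces), and lower-skill trainees would be expected to produce less regular, higher-entropy motion.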

  • Presented at The 8th International Conference on Information Processing in Computer-Assisted Interventions, in Barcelona, Spain, June 20-21, 2017.
  • Aneeq Zia received the “Young Investigator Travel Award,” given to young investigators (including Ph.D. and M.Sc. students and junior researchers) with accepted papers at IPCAI, to attend IPCAI/CARS 2017.
  • This paper was also one of the 12 papers voted by the audience for a 25-minute oral presentation and discussion session on the last day of the conference (based on 5-minute short presentations given by all authors on the first day).

Paper in AAAI’s ICWSM (2017) “Selfie-Presentation in Everyday Life: A Large-Scale Characterization of Selfie Contexts on Instagram”

May 18th, 2017 Irfan Essa Posted in Computational Journalism, Computational Photography and Video, Computer Vision, Face and Gesture, Julia Deeb-Swihart, Papers, Social Computing

Paper

  • J. Deeb-Swihart, C. Polack, E. Gilbert, and I. Essa (2017), “Selfie-Presentation in Everyday Life: A Large-Scale Characterization of Selfie Contexts on Instagram,” in Proceedings of The International AAAI Conference on Web and Social Media (ICWSM), 2017. [PDF] [BIBTEX]
    @InProceedings{2017-Deeb-Swihart-SELLCSCI,
      author       = {Julia Deeb-Swihart and Christopher Polack and Eric Gilbert and Irfan Essa},
      booktitle    = {Proceedings of The International AAAI Conference on Web and Social Media (ICWSM)},
      month        = {May},
      organization = {AAAI},
      pdf          = {http://www.cc.gatech.edu/~irfan/p/2017-Deeb-Swihart-SELLCSCI.pdf},
      title        = {Selfie-Presentation in Everyday Life: A Large-Scale Characterization of Selfie Contexts on Instagram},
      year         = {2017}
    }

Abstract

Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us full circle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.
