Paper: ACM Symposium on Interactive 3D Graphics (2009) “Human Video Textures”

March 1st, 2009 Irfan Essa Posted in ACM SIGGRAPH, Atsushi Nakazawa, Computational Photography and Video, James Rehg, Matt Flagg, Modeling and Animation, Papers, Sing Bing Kang


Matthew Flagg, Atsushi Nakazawa, Qiushuang Zhang, Sing Bing Kang, Young Kee Ryu, Irfan Essa, and James M. Rehg (2009), “Human Video Textures,” In Proceedings of the ACM Symposium on Interactive 3D Graphics and Games 2009 (I3D ’09), Boston, MA, February 27–March 1, 2009 [PDF (see Copyright) | Video in DivX | Website ]


This paper describes a data-driven approach for generating photorealistic animations of human motion. Each animation sequence follows a user-choreographed path and plays continuously by seamlessly transitioning between different segments of the captured data. To produce these animations, we capitalize on the complementary characteristics of motion capture data and video. We customize our capture system to record motion capture data that are synchronized with our video source. Candidate transition points in video clips are identified using a new similarity metric based on 3-D marker trajectories and their 2-D projections into video. Once the transitions have been identified, a video-based motion graph is constructed. We further exploit hybrid motion and video data to ensure that the transitions are seamless when generating animations. Motion capture marker projections serve as control points for segmentation of layers and nonrigid transformation of regions. This allows warping and blending to generate seamless in-between frames for animation. We show a series of choreographed animations of walks and martial arts scenes as validation of our approach.
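As a rough illustration of the transition-finding step described above, here is a minimal sketch that scores candidate transitions by combining mean Euclidean distances between 3-D marker positions and their 2-D projections into video. The equal weighting, array shapes, and plain averaged-distance form are illustrative assumptions, not the paper's exact similarity metric:

```python
import numpy as np

def transition_cost(markers_3d, markers_2d, i, j, w_3d=1.0, w_2d=1.0):
    """Score a candidate transition from frame i to frame j.

    markers_3d: (num_frames, num_markers, 3) mocap marker positions
    markers_2d: (num_frames, num_markers, 2) their projections into video
    Lower cost means a better candidate transition. The weights and the
    plain Euclidean form are illustrative, not the paper's exact metric.
    """
    d3 = np.linalg.norm(markers_3d[i] - markers_3d[j], axis=1).mean()
    d2 = np.linalg.norm(markers_2d[i] - markers_2d[j], axis=1).mean()
    return w_3d * d3 + w_2d * d2

def candidate_transitions(markers_3d, markers_2d, top_k=10, min_gap=5):
    """Return the top_k lowest-cost (i, j) frame pairs for a motion graph,
    skipping near-diagonal pairs that are trivially similar."""
    n = len(markers_3d)
    scored = []
    for i in range(n):
        for j in range(n):
            if abs(i - j) >= min_gap:
                scored.append(
                    (transition_cost(markers_3d, markers_2d, i, j), i, j))
    scored.sort()
    return [(i, j) for _, i, j in scored[:top_k]]
```

In the actual system the selected transitions become edges of a video-based motion graph, and the marker projections then drive layer segmentation and non-rigid warping to blend across each transition.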

Example Image from Project


Paper: ACM SIGGRAPH (2005) “Texture optimization for example-based synthesis”

July 25th, 2005 Irfan Essa Posted in Aaron Bobick, ACM SIGGRAPH, Computational Photography and Video, Nipun Kwatra, Papers, Research, Vivek Kwatra

Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra (2005), “Texture optimization for example-based synthesis,” In ACM Transactions on Graphics (TOG), Volume 24, Issue 3 (July 2005), Proceedings of ACM SIGGRAPH 2005, pp. 795–802, ISSN 0730-0301 (DOI | PDF | Project Site | Video | Talk)


We present a novel technique for texture synthesis using optimization. We define a Markov Random Field (MRF)-based similarity metric for measuring the quality of synthesized texture with respect to a given input sample. This allows us to formulate the synthesis problem as minimization of an energy function, which is optimized using an Expectation Maximization (EM)-like algorithm. In contrast to most example-based techniques that do region-growing, ours is a joint optimization approach that progressively refines the entire texture. Additionally, our approach is ideally suited to allow for controllable synthesis of textures. Specifically, we demonstrate controllability by animating image textures using flow fields. We allow for general two-dimensional flow fields that may dynamically change over time. Applications of this technique include dynamic texturing of fluid animations and texture-based flow visualization.
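The EM-like alternation can be sketched as follows: the E-step matches each output neighborhood to its nearest input neighborhood, and the M-step averages the overlapping matches, which minimizes the quadratic energy. This single-channel NumPy version is a simplified sketch (fixed patch size, no multi-scale refinement, no flow-field control), not the paper's implementation:

```python
import numpy as np

def extract_patches(img, p, stride):
    """Collect all p-by-p patches of img at the given stride."""
    H, W = img.shape
    out = []
    for y in range(0, H - p + 1, stride):
        for x in range(0, W - p + 1, stride):
            out.append(img[y:y + p, x:x + p])
    return np.array(out)

def texture_optimize(sample, out_shape, p=8, stride=4, iters=5, rng=None):
    """EM-like texture optimization sketch: alternate nearest-neighbor
    matching (E-step) with averaging of overlapping matches (M-step)."""
    rng = rng or np.random.default_rng(0)
    src = extract_patches(sample, p, 1).reshape(-1, p * p)  # input neighborhoods
    out = rng.uniform(sample.min(), sample.max(), out_shape)  # random init
    H, W = out_shape
    for _ in range(iters):
        acc = np.zeros(out_shape)
        cnt = np.zeros(out_shape)
        for y in range(0, H - p + 1, stride):
            for x in range(0, W - p + 1, stride):
                q = out[y:y + p, x:x + p].ravel()
                # E-step: nearest input neighborhood under L2 distance
                best = src[np.argmin(((src - q) ** 2).sum(axis=1))]
                acc[y:y + p, x:x + p] += best.reshape(p, p)
                cnt[y:y + p, x:x + p] += 1
        # M-step: average overlapping matches (minimizes the quadratic energy)
        mask = cnt > 0
        out[mask] = acc[mask] / cnt[mask]
    return out
```

The progressive whole-texture refinement, rather than greedy region-growing, is what makes the approach amenable to control: in the paper, flow fields enter as an extra term steering the same optimization.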


Papers: ACM SIGGRAPH (2003) “Graphcut textures”

July 25th, 2003 Irfan Essa Posted in Aaron Bobick, ACM SIGGRAPH, Arno Schödl, Computational Photography and Video, Greg Turk, Papers, Vivek Kwatra

Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick (2003), “Graphcut textures: image and video synthesis using graph cuts,” In ACM Transactions on Graphics (TOG), Volume 22, Issue 3, Proceedings of ACM SIGGRAPH 2003, pp. 277–286, July 2003, ISSN 0730-0301. (DOI | Paper | SIGGRAPH Video (160 MB, 50 MB) | Video Results 87 MB | Project Site)


In this paper we introduce a new algorithm for image and video texture synthesis. In our approach, patch regions from a sample image or video are transformed and copied to the output and then stitched together along optimal seams to generate a new (and typically larger) output. In contrast to other techniques, the size of the patch is not chosen a priori; instead, a graph cut technique is used to determine the optimal patch region for any given offset between the input and output texture. Unlike dynamic programming, our graph cut technique for seam optimization is applicable in any dimension. We specifically explore it in 2D and 3D to perform video texture synthesis in addition to regular image synthesis. We present approximate offset search techniques that work well in conjunction with the presented patch size optimization. We show results for synthesizing regular, random, and natural images and videos. We also demonstrate how this method can be used to interactively merge different images to generate new scenes.
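To make the seam-as-min-cut idea concrete, here is a self-contained sketch for the simplest case: two fully overlapping grayscale patches, with the seam constrained to enter from the left (patch A) and exit on the right (patch B). The 4-connected grid and the matching cost |A(s) − B(s)| + |A(t) − B(t)| follow the paper; the tiny Edmonds–Karp max-flow solver and the hard left/right boundary constraints are illustrative simplifications:

```python
import numpy as np
from collections import deque

def graphcut_seam(A, B):
    """Min-cut seam through the overlap of patches A and B (same shape,
    grayscale). Returns a boolean mask: True where output takes pixels
    from A. Edge weight between neighbors s, t is |A(s)-B(s)|+|A(t)-B(t)|."""
    H, W = A.shape
    S, T = H * W, H * W + 1          # source and sink node ids
    INF = float('inf')
    adj = [[] for _ in range(H * W + 2)]
    cap = {}

    def add_edge(u, v, c):           # undirected edge of capacity c
        if (u, v) not in cap:
            adj[u].append(v); adj[v].append(u)
            cap[(u, v)] = cap[(v, u)] = 0.0
        cap[(u, v)] += c
        cap[(v, u)] += c

    def idx(y, x):
        return y * W + x

    for y in range(H):
        for x in range(W):
            c_here = abs(float(A[y, x]) - float(B[y, x]))
            if x + 1 < W:
                add_edge(idx(y, x), idx(y, x + 1),
                         c_here + abs(float(A[y, x + 1]) - float(B[y, x + 1])))
            if y + 1 < H:
                add_edge(idx(y, x), idx(y + 1, x),
                         c_here + abs(float(A[y + 1, x]) - float(B[y + 1, x])))
    for y in range(H):
        add_edge(S, idx(y, 0), INF)       # leftmost column constrained to A
        add_edge(idx(y, W - 1), T, INF)   # rightmost column constrained to B

    # Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    while True:
        parent = {S: None}
        q = deque([S])
        while q and T not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if T not in parent:
            break
        f, v = INF, T                # bottleneck capacity along the path
        while parent[v] is not None:
            f = min(f, cap[(parent[v], v)])
            v = parent[v]
        v = T
        while parent[v] is not None: # update residual capacities
            u = parent[v]
            cap[(u, v)] -= f
            cap[(v, u)] += f
            v = u

    # Pixels still reachable from the source in the residual graph take A.
    seen = {S}
    q = deque([S])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen and cap[(u, v)] > 1e-12:
                seen.add(v); q.append(v)
    mask = np.zeros((H, W), dtype=bool)
    for y in range(H):
        for x in range(W):
            mask[y, x] = idx(y, x) in seen
    return mask
```

Because the cut is computed on an arbitrary graph rather than column by column, the same construction extends directly to 3D spatio-temporal grids for video, which is exactly the advantage over dynamic programming noted above.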


Paper: ACM SIGGRAPH (2000) “Video textures”

August 1st, 2000 Irfan Essa Posted in ACM SIGGRAPH, Arno Schödl, Computational Photography and Video, David Salesin, Papers, Research, Rick Szeliski


  • A. Schödl, R. Szeliski, D. H. Salesin, and I. Essa (2000), “Video textures,” in ACM SIGGRAPH Proceedings of Annual Conference on Computer graphics and interactive techniques, New York, NY, USA, 2000, pp. 489-498. [BIBTEX]
    @InProceedings{2000-Schodl-VT,
      address   = {New York, NY, USA},
      author    = {A. Sch{\"o}dl and R. Szeliski and D. H. Salesin and
          I. Essa},
      booktitle = {ACM SIGGRAPH Proceedings of Annual Conference on
          Computer graphics and interactive techniques},
      pages     = {489--498},
      publisher = {ACM Press/Addison-Wesley Publishing Co.},
      title     = {Video textures},
      year      = {2000}
    }



Still Image of a VideoTexture of 4 Fish, Plants, and Bubbles in a Fish Tank

This paper introduces a new type of medium, called a video texture, which has qualities somewhere between those of a photograph and a video. A video texture provides a continuous, infinitely varying stream of images. While the individual frames of a video texture may be repeated from time to time, the video sequence as a whole is never repeated exactly. Video textures can be used in place of digital photos to infuse a static image with dynamic qualities and explicit actions. We present techniques for analyzing a video clip to extract its structure, and for synthesizing a new, similar-looking video of arbitrary length. We combine video textures with view morphing techniques to obtain 3D video textures. We also introduce video-based animation, in which the synthesis of video textures can be guided by a user through high-level interactive controls. Applications of video textures and their extensions include the display of dynamic scenes on web pages, the creation of dynamic backdrops for special effects and games, and the interactive control of video-based animation.
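The core analysis step can be sketched as follows: compute an L2 distance matrix between all pairs of frames, then map distance to transition probability so that jumping from frame i to frame j is likely when frame j resembles frame i+1. The exponential mapping follows the paper; the sigma heuristic and the simple random-walk synthesizer below are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def transition_probabilities(frames, sigma_scale=0.05):
    """frames: (n, H, W) grayscale video as a float array.
    Builds the pairwise frame distance matrix D and maps it to transition
    probabilities P[i, j] proportional to exp(-D[i+1, j] / sigma), so a
    jump from frame i to frame j is likely when j resembles frame i+1.
    sigma_scale (sigma as a fraction of the mean nonzero distance) is an
    illustrative heuristic."""
    n = frames.shape[0]
    flat = frames.reshape(n, -1)
    # pairwise L2 distances between all frames
    D = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
    sigma = sigma_scale * D[D > 0].mean()
    P = np.exp(-D[1:, :] / sigma)        # row i: transitions out of frame i
    P /= P.sum(axis=1, keepdims=True)
    return P

def synthesize(P, start, length, rng=None):
    """Random walk over frame indices using P of shape (n-1, n)."""
    rng = rng or np.random.default_rng(0)
    seq = [start]
    for _ in range(length - 1):
        i = seq[-1]
        if i >= P.shape[0]:              # last frame has no natural successor;
            i = P.shape[0] - 1           # reuse the previous row as a fallback
        seq.append(int(rng.choice(P.shape[1], p=P[i])))
    return seq
```

Note that the natural successor j = i+1 has distance zero and therefore the highest probability, so ordinary playback remains the most likely behavior and low-cost jumps are taken only occasionally, which is what keeps the stream continuous yet non-repeating.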
