Temporally coherent superresolution of textured video via dynamic texture synthesis

Chih-Chung Hsu, Li-Wei Kang, Chia-Wen Lin

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)


This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the reconstructed HR details using dynamic texture synthesis (DTS). Most existing multiframe-based video superresolution (SR) methods suffer from limited reconstruction quality due to inaccurate subpixel motion estimation between frames in an LR video. To achieve high-quality reconstruction of HR details for an LR video, we propose a texture-synthesis (TS)-based video SR method, in which a novel DTS scheme renders the reconstructed HR details in a temporally coherent way, effectively addressing the temporal incoherence that arises when traditional TS-based image SR methods are applied to video frame by frame. To further reduce complexity, our method performs TS-based SR only on a set of key frames, while the HR details of the remaining nonkey frames are predicted using bidirectional overlapped block motion compensation. After all frames are upscaled, the proposed DTS-SR is applied to maintain the temporal coherence of the HR video. Experimental results demonstrate that the proposed method achieves significant subjective and objective visual quality improvement over state-of-the-art video SR methods.
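The nonkey-frame prediction step described above can be illustrated with a minimal sketch of bidirectional overlapped block motion compensation (OBMC): each block of the nonkey frame is predicted from both neighboring key frames using per-block motion vectors, with overlapping windowed blocks blended and renormalized to avoid blocking artifacts. This is not the authors' implementation; the function name, the Hann window choice, and the integer-valued motion-vector format are assumptions made for illustration.

```python
import numpy as np

def bidirectional_obmc(prev_hr, next_hr, mv_fwd, mv_bwd, block=8):
    """Illustrative sketch of bidirectional OBMC for nonkey-frame prediction.

    prev_hr, next_hr : (H, W) float arrays, the two neighboring HR key frames.
    mv_fwd, mv_bwd   : (H//block, W//block, 2) integer motion vectors (dy, dx)
                       pointing from the nonkey frame into prev_hr / next_hr.
                       (Assumed format; real codecs use subpixel vectors.)
    """
    H, W = prev_hr.shape
    out = np.zeros((H, W))
    weight = np.zeros((H, W))
    # Each block extends half a block beyond its grid cell (50% overlap)
    # and is weighted by a separable Hann window.
    ext = 2 * block
    win1d = np.hanning(ext)
    win = np.outer(win1d, win1d)
    for by in range(H // block):
        for bx in range(W // block):
            y0 = by * block - block // 2
            x0 = bx * block - block // 2
            # Blend the forward (previous key frame) and backward
            # (next key frame) motion-compensated predictions.
            for src, mv in ((prev_hr, mv_fwd[by, bx]),
                            (next_hr, mv_bwd[by, bx])):
                dy, dx = int(mv[0]), int(mv[1])
                for i in range(ext):
                    for j in range(ext):
                        y, x = y0 + i, x0 + j
                        if 0 <= y < H and 0 <= x < W:
                            # Clamp the compensated source coordinates.
                            ys = min(max(y + dy, 0), H - 1)
                            xs = min(max(x + dx, 0), W - 1)
                            out[y, x] += win[i, j] * src[ys, xs]
                            weight[y, x] += win[i, j]
    # Normalize by the accumulated window weight at each pixel.
    return out / np.maximum(weight, 1e-8)
```

Because overlapping Hann-weighted blocks are renormalized per pixel, a constant input reproduces itself exactly; the per-pixel normalization also keeps image borders, where fewer windows contribute, correctly scaled.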

Original language: English
Article number: 7001251
Pages (from-to): 919-931
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Issue number: 3
Publication status: Published - 2015 Mar 1
Externally published: Yes


Keywords

  • Video super-resolution
  • dynamic texture synthesis
  • motion-compensated interpolation
  • video hallucination
  • video upscaling

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


