TY - GEN
T1 - Dictionary learning-based distributed compressive video sensing
AU - Chen, Hung Wei
AU - Kang, Li Wei
AU - Lu, Chun Shien
PY - 2010
Y1 - 2010
N2 - We address the important issue of fully low-cost, low-complexity video compression for use in extremely resource-limited sensors/devices. Conventional motion estimation-based video compression and distributed video coding (DVC) techniques all rely on a high-cost mechanism in which sensing/sampling and compression are performed disjointly, resulting in unnecessary consumption of resources; that is, most of the acquired raw video data are discarded in the (possibly complex) compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework to "directly" acquire compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS can compressively sense each video frame in a distributed manner. At the DCVS decoder, video reconstruction can be formulated as an l1-minimization problem, solved for the sparse coefficients with respect to some basis functions. We investigate adaptive dictionary/basis learning for each frame based on training samples extracted from previously reconstructed neighboring frames and argue that a much better basis can be obtained to represent the frame, compared with fixed-basis representations and recently popular "CS-based DVC" approaches that do not rely on dictionary learning.
AB - We address the important issue of fully low-cost, low-complexity video compression for use in extremely resource-limited sensors/devices. Conventional motion estimation-based video compression and distributed video coding (DVC) techniques all rely on a high-cost mechanism in which sensing/sampling and compression are performed disjointly, resulting in unnecessary consumption of resources; that is, most of the acquired raw video data are discarded in the (possibly complex) compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework to "directly" acquire compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS can compressively sense each video frame in a distributed manner. At the DCVS decoder, video reconstruction can be formulated as an l1-minimization problem, solved for the sparse coefficients with respect to some basis functions. We investigate adaptive dictionary/basis learning for each frame based on training samples extracted from previously reconstructed neighboring frames and argue that a much better basis can be obtained to represent the frame, compared with fixed-basis representations and recently popular "CS-based DVC" approaches that do not rely on dictionary learning.
KW - Compressive sensing
KW - Dictionary learning
KW - L1-minimization
KW - Single-pixel camera
KW - Sparse representation
UR - http://www.scopus.com/inward/record.url?scp=79951779893&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79951779893&partnerID=8YFLogxK
U2 - 10.1109/PCS.2010.5702466
DO - 10.1109/PCS.2010.5702466
M3 - Conference contribution
AN - SCOPUS:79951779893
SN - 9781424471348
T3 - 28th Picture Coding Symposium, PCS 2010
SP - 210
EP - 213
BT - 28th Picture Coding Symposium, PCS 2010
T2 - 28th Picture Coding Symposium, PCS 2010
Y2 - 8 December 2010 through 10 December 2010
ER -