Context Tree-Based Image Contour Coding Using a Geometric Prior

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 26, No. 2, pp. 574-589
Main Authors: Zheng, Amin; Cheung, Gene; Florencio, Dinei
Format: Journal Article
Language: English
Published: United States, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2017
Summary: Efficient encoding of object contours in images can facilitate advanced image/video compression techniques, such as shape-adaptive transform coding or motion prediction of arbitrarily shaped pixel blocks. We study the problem of lossless and lossy compression of detected contours in images. Specifically, we first convert a detected object contour into a sequence of directional symbols drawn from a small alphabet. To encode the symbol sequence using arithmetic coding, we compute an optimal variable-length context tree (VCT) T via a maximum a posteriori (MAP) formulation to estimate the symbols' conditional probabilities. MAP can avoid overfitting given a small training set X of past symbol sequences by identifying a VCT T with high likelihood P(X|T) of observing X given T, using a geometric prior P(T) stating that image contours are more often straight than curvy. For the lossy case, we design fast dynamic programming (DP) algorithms that optimally trade off the coding rate of an approximate contour x̂, given a VCT T, against two notions of distortion of x̂ with respect to the original contour x. To reduce the size of the DP tables, a total suffix tree is derived from a given VCT T for compact table-entry indexing, reducing complexity. Experimental results show that for lossless contour coding, our proposed algorithm consistently outperforms state-of-the-art context-based schemes for both small and large training datasets. For lossy contour coding, our algorithms outperform comparable schemes in the literature in rate-distortion performance.
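
The short Python sketch below illustrates only the first step described in the summary, converting a contour into directional symbols from a small alphabet; the three-symbol alphabet, the 4-connectivity assumption, and the function names are our own illustrative choices, not the paper's exact formulation, chosen to mirror the geometric prior that straight runs dominate typical contours.

# A minimal, illustrative sketch (not the authors' implementation): convert a
# 4-connected contour, given as (x, y) pixel coordinates, into a sequence of
# relative directional symbols drawn from the small alphabet
# {'s' (straight), 'l' (left), 'r' (right)}.

def contour_to_symbols(points):
    """Return a relative chain code for a 4-connected contour."""
    # Absolute unit steps between consecutive contour points.
    steps = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        assert (abs(dx), abs(dy)) in ((1, 0), (0, 1)), "contour must be 4-connected"
        steps.append((dx, dy))

    # Map each change of direction to a relative symbol; with the image
    # y-axis pointing down, a positive cross product is a clockwise (right) turn.
    def turn(prev, cur):
        if prev == cur:
            return 's'
        cross = prev[0] * cur[1] - prev[1] * cur[0]
        return 'r' if cross > 0 else 'l'

    return [turn(p, c) for p, c in zip(steps, steps[1:])]


if __name__ == "__main__":
    # An L-shaped fragment: mostly straight with a single right turn.
    pts = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
    print(contour_to_symbols(pts))  # ['s', 's', 'r', 's']

The resulting symbol stream is what an arithmetic coder would consume, with the VCT supplying each symbol's conditional probability from its variable-length context of past symbols.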
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2016.2627813