Short note on: Two dimensional image transform

Mumbai University > EXTC > Sem 7 > Data Compression and Encryption

Marks: 5 M

Year: May 2013

1 Answer

1. Consider two-dimensional data such as the 4×4 matrix:

$$ D = \begin{bmatrix} 5 & 6 & 7 & 4 \\ 6 & 5 & 7 & 5 \\ 7 & 7 & 6 & 6 \\ 8 & 8 & 8 & 8 \end{bmatrix} $$

2. Since each of the four columns is highly correlated, we can apply our simple one-dimensional transform to the columns of D. The result is

$$C' = W\cdot D = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix} \begin{bmatrix} 5 & 6 & 7 & 4 \\ 6 & 5 & 7 & 5 \\ 7 & 7 & 6 & 6 \\ 8 & 8 & 8 & 8 \end{bmatrix} = \begin{bmatrix} 26 & 26 & 28 & 23 \\ -4 & -4 & 0 & -5 \\ 0 & 2 & 2 & 1 \\ -2 & 0 & -2 & -3 \end{bmatrix}$$

3. Each column of C’ is the transform of a column of D. Notice that the top element of each column of C’ is dominant, because the data in the corresponding column of D is correlated.
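The first stage can be checked numerically. A minimal sketch using NumPy (the variable names `W`, `D`, and `C1` are my own):

```python
import numpy as np

# The one-dimensional transform matrix W from the note
# (an unnormalized Walsh-Hadamard matrix).
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])

# The correlated data matrix D.
D = np.array([[5, 6, 7, 4],
              [6, 5, 7, 5],
              [7, 7, 6, 6],
              [8, 8, 8, 8]])

# Stage 1: transform each column of D.
C1 = W @ D
print(C1)
```

Each column of `C1` reproduces the corresponding column of C’ above, with the dominant element in the top row.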

4. We can also observe that the rows of C’ are still correlated. C’ is only the first stage in a two-stage process that produces the two-dimensional transform of the matrix D. The second stage transforms each row of C’, which is done by multiplying C’ by the transpose $W^T$. Our particular W, however, is symmetric, so we end up with $$C = C’\cdot W^T = W\cdot D\cdot W^T = W\cdot D\cdot W$$ or

$$C = \begin{bmatrix} 26 & 26 & 28 & 23 \\ -4 & -4 & 0 & -5 \\ 0 & 2 & 2 & 1 \\ -2 & 0 & -2 & -3 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix} = \begin{bmatrix} 103 & 1 & -5 & 5 \\ -13 & -3 & -5 & 5 \\ 5 & -1 & -3 & -1 \\ -7 & 3 & -3 & -1 \end{bmatrix}$$
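The full two-stage transform collapses into a single expression. A minimal sketch of both stages at once (variable names are my own):

```python
import numpy as np

# W and D from the worked example above.
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])
D = np.array([[5, 6, 7, 4],
              [6, 5, 7, 5],
              [7, 7, 6, 6],
              [8, 8, 8, 8]])

# Two-dimensional transform: stage 1 on the columns, stage 2 on the rows.
# Since W is symmetric, W.T equals W, but W.T is kept for clarity.
C = W @ D @ W.T
print(C)
```

The top-left element of `C` is 103, carrying most of the energy of D, while the bottom-right region holds the small high-frequency coefficients.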

The elements of C are decorrelated. The top-left element is dominant; it contains most of the total energy of the original D. The elements in the top row and the leftmost column are somewhat large, while the remaining elements are smaller than the original data items.

The double-stage, two-dimensional transformation has reduced the correlation in both the horizontal and vertical dimensions. As in the one-dimensional case, excellent compression can be achieved by quantizing the elements of C, especially those that correspond to higher frequencies (i.e., those located toward the bottom-right corner of C).
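The quantization step can be sketched as follows. Because the rows of W are orthogonal with squared norm 4, $W\cdot W^T = 4I$, so the inverse transform is $D = \frac{1}{16} W\cdot C\cdot W$. The threshold below is a hypothetical choice for illustration, not part of the original note:

```python
import numpy as np

# W and D from the worked example above.
W = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1, -1,  1],
              [1, -1,  1, -1]])
D = np.array([[5, 6, 7, 4],
              [6, 5, 7, 5],
              [7, 7, 6, 6],
              [8, 8, 8, 8]])

C = W @ D @ W.T                              # two-dimensional transform

# Crude quantization: zero every coefficient whose magnitude is at most
# the (hypothetical) threshold; the small values sit toward bottom-right.
threshold = 4
Cq = np.where(np.abs(C) > threshold, C, 0)

# Inverse transform: W @ W.T = 4*I, so D is recovered as (1/16) W C W.
D_approx = (W @ Cq @ W) / 16
print(np.round(D_approx, 1))
```

Half of the coefficients are discarded, yet the reconstruction stays close to D, which is the source of the compression.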

This is the essence of orthogonal transforms. The important transforms are:

1. The Walsh-Hadamard transform: It is fast and easy to compute (it requires only additions and subtractions), but its performance, in terms of energy compaction, is lower than that of the DCT.

2. The Haar transform: It is a simple, fast transform, and it is the simplest wavelet transform.

3. The Karhunen-Loève transform: It is the best one theoretically, in the sense of energy compaction (or, equivalently, pixel decorrelation). However, its coefficients are not fixed; they depend on the data to be compressed.

Calculating these coefficients (the basis of the transform) is slow, as is the calculation of the transformed values themselves. Since the coefficients are data dependent, they have to be included in the compressed stream. For these reasons, and because the DCT performs almost as well, the KLT is not generally used in practice.
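The first point above, that the Walsh-Hadamard transform needs only additions and subtractions, can be illustrated with its butterfly algorithm, which runs in O(n log n). A minimal sketch (the function name `fwht` is my own; its output is in the natural Hadamard order, a row permutation of the W used in the worked example):

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform (natural Hadamard ordering).

    Uses only additions and subtractions; the input length must be a
    power of two.
    """
    a = list(a)
    n = len(a)
    h = 1
    while h < n:
        # Butterfly stage: combine pairs of elements h apart.
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

print(fwht([5, 6, 7, 8]))  # same values as W @ (5,6,7,8), reordered
```

For the first column of D this yields [26, -2, -4, 0], the same coefficients as $W\cdot(5,6,7,8)^T = (26,-4,0,-2)^T$ up to the row ordering.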
