Wednesday, April 3, 2019

Lossless predictive coding

Objective of the project
- Generate the Huffman codebook using Huffman coding so that it can be transmitted to the decoder.
- Compare and analyze the quality of 7 different fixed linear Differential Pulse Code Modulation (DPCM) predictors to find out which one achieves the best compression ratio.
- Compare the reconstructed image against the original image to verify that the compression is lossless.

Introduction
There is strong correlation between spatially adjacent pixels. The aim of predictive coding is to remove the redundancy between consecutive pixels so that only the residual portion between the actual and predicted values (the new information) needs to be encoded. In other words, a pixel is coded as the difference between its actual value and a predicted value computed from previously decoded values. As a result, the compression ratio depends on the variance of the image, the level of quantization of the difference values, and the quality of the predictor. After predictive coding, we compress the original file by applying Differential Pulse Code Modulation (DPCM) encoding and generating a Huffman entropy codebook before transmitting the image. The decoder uses the Huffman codebook to first decode the entropy code and then decodes the DPCM values to reconstruct the image.

Experimental results

Differential Pulse Code Modulation (DPCM) Encoding
Encode the original Lena512.pgm image into DPCM values using 7 fixed linear predictor methods, where A is the left neighbor, B the upper neighbor, and C the upper-left neighbor of the current pixel (a code sketch of these predictors is given at the end of this section):
1. A
2. (A + C)/2
3. (A + B + C)/3
4. A + B - C
5. A + (B - C)/2
6. B + (A - C)/2
7. (A + B)/2

According to the 7 formulas above, we compute the difference between the predicted value and the current pixel. The resulting predictor residual table is sent to the next step, the entropy encoder. The results show that the DPCM predictor B + (A - C)/2 achieves the best compression ratio.

Entropy encoder (Huffman coding)
The following steps show how the Huffman code is generated:
Step 1: Retrieve the output of the DPCM encoding.
Step 2: Sort the DPCM values by their occurrence probabilities in descending order.
Step 3: Assign a Huffman codeword to each value according to its probability.

For example, each pair of child probabilities is added to create the parent node. The adding of probabilities continues until the root node with final probability 1.0 is reached, as shown above; this forms the Huffman coding table. A bit is then assigned to every node: the 0 bit to every left sub-tree and the 1 bit to every right sub-tree.

Average length per symbol = 1 x 0.6 + 2 x 0.2 + 3 x 0.1 + 3 x 0.1 = 1.6 bits/symbol

In our test, since the optimal predictor is B + (A - C)/2, the codebook is generated for this predictor. The DPCM values range from -75 to 87. The table below shows the first 8 of the 162 DPCM values in the Huffman codebook.

RESULT DISCUSSIONS (Optimal Predictor B + (A - C)/2, Encoding Part)
Average bits/pixel = Compressed size (bits) / (columns x rows) = Compressed size / (512 x 512)
After generating the 162 codewords, the compressed image occupies 902428 bits, so the average is roughly 3.442 bits/pixel.
Compression ratio = Original image size / Compressed image size = 8 bits / (Average bits/pixel), since the original Lena512.pgm image has a fixed resolution of 8 bits/pixel.
Therefore, the compression ratio achieved with Huffman codes is 8/3.442 = 2.3239.
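The following is a minimal sketch of the DPCM encoding step with the 7 predictors listed above. It assumes Python with NumPy, integer division for the /2 and /3 predictors, and zero-valued neighbors outside the image borders; the function and variable names are illustrative, not the code actually used in the experiment.

```python
import numpy as np

# The 7 fixed linear predictors compared in the experiment.
# A = left neighbor, B = upper neighbor, C = upper-left neighbor.
# Integer division for the /2 and /3 predictors is an assumption made here.
PREDICTORS = {
    "A":          lambda a, b, c: a,
    "(A+C)/2":    lambda a, b, c: (a + c) // 2,
    "(A+B+C)/3":  lambda a, b, c: (a + b + c) // 3,
    "A+B-C":      lambda a, b, c: a + b - c,
    "A+(B-C)/2":  lambda a, b, c: a + (b - c) // 2,
    "B+(A-C)/2":  lambda a, b, c: b + (a - c) // 2,
    "(A+B)/2":    lambda a, b, c: (a + b) // 2,
}

def dpcm_encode(img, predictor):
    """Residual = actual pixel - predicted pixel (neighbors outside the image taken as 0)."""
    img = img.astype(np.int64)                   # avoid uint8 overflow in the differences
    h, w = img.shape
    residuals = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0                 # left neighbor
            b = img[y - 1, x] if y > 0 else 0                 # upper neighbor
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0   # upper-left neighbor
            residuals[y, x] = img[y, x] - predictor(a, b, c)
    return residuals
```

Running dpcm_encode over Lena512.pgm with each entry of PREDICTORS and comparing the resulting compressed sizes is how the predictors can be ranked; in our experiment B + (A - C)/2 came out best.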
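Similarly, a minimal sketch (again an illustrative assumption, using Python's standard heapq module rather than the exact code used in the experiment) of how the Huffman codebook, the average bits/pixel and the compression ratio quoted above can be computed from the residuals:

```python
import heapq
from collections import Counter

def huffman_codebook(symbols):
    """Build a Huffman codebook {symbol: bit string} from a flat list of symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                            # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    # Each heap entry: [count, tie-breaker, [[symbol, partial code], ...]]
    heap = [[count, i, [[sym, ""]]] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)                  # least frequent subtree -> bit 0
        hi = heapq.heappop(heap)                  # next least frequent   -> bit 1
        for pair in lo[2]:
            pair[1] = "0" + pair[1]
        for pair in hi[2]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], tie, lo[2] + hi[2]])
        tie += 1
    return {sym: code for sym, code in heap[0][2]}

def compression_stats(symbols, rows=512, cols=512):
    """symbols: flat list of DPCM residual values for the whole image,
    e.g. residuals.flatten().tolist() from the sketch above."""
    book = huffman_codebook(symbols)
    compressed_bits = sum(len(book[s]) for s in symbols)
    avg_bits_per_pixel = compressed_bits / (rows * cols)
    compression_ratio = 8.0 / avg_bits_per_pixel  # original image is 8 bits/pixel
    return book, avg_bits_per_pixel, compression_ratio
```

Applied to the residuals of the B + (A - C)/2 predictor, this is the computation behind the figures reported above (162 codewords, 902428 bits, roughly 3.442 bits/pixel, ratio 2.3239).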
To write the compressed data into a binary file, every 8 adjacent bits are grouped together and converted into an integer. These integers are written to a file and later read back for Huffman decoding.

Entropy decoder (Huffman Decoding)
The Huffman decoder uses the compressed file to regenerate the DPCM table (a minimal code sketch of this decoding process is given at the end of this section):
Step 1: The integers in the compressed file are read and converted back into a stream of binary bits.
Step 2: Bits are read one at a time and accumulated until they match a codeword in the codebook; the corresponding DPCM value for that codeword is then retrieved.
Step 3: Step 2 is repeated until the bit stream is exhausted, yielding the regenerated DPCM table, which is then passed to DPCM decoding.

For example, after converting the integers into a stream of bits, the encoded bit stream of the first few pixels is 110111001101110011011101. According to the Huffman lookup table, the corresponding DPCM values are 87, 87, 86.

DPCM Decoding
After decoding the DPCM values, the reconstructed image is retrieved. According to the images above, the compression is lossless: there is no difference between the original image and the decompressed one, either in the displayed images (in practice) or in the DPCM values before encoding and after decoding (in theory).
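Below is a minimal sketch of the decoding side, under the same assumptions as the encoder sketches above (Python with NumPy, illustrative function names, zero-valued neighbors outside the image); the codebook argument is the dictionary produced by the huffman_codebook sketch.

```python
import numpy as np

def huffman_decode(bits, codebook):
    """Walk the bit stream and emit a DPCM value whenever the accumulated
    bits match a codeword (the prefix property guarantees a unique match)."""
    inverse = {code: sym for sym, code in codebook.items()}
    values, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:
            values.append(inverse[current])
            current = ""
    return values

def dpcm_decode(residuals, predictor):
    """Rebuild the image by adding each residual to its prediction,
    computed from already-reconstructed neighbors (0 outside the image)."""
    h, w = residuals.shape
    img = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if x > 0 and y > 0 else 0
            img[y, x] = residuals[y, x] + predictor(a, b, c)
    return img
```

With the experiment's codebook, the example bit stream above (110111001101110011011101) decodes to the DPCM values 87, 87, 86, and dpcm_decode with the same predictor reproduces the original pixels exactly, which is what makes the scheme lossless.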
Result Discussion

Analyzing Different Predictors for Lena.pgm
From the diagram above, the compression ratio obtained when only one neighboring pixel (Predictor 1: A) is used for prediction is lower than when two or three neighboring pixels are used. Likewise, predictors that use 2 neighboring pixels have a lower compression ratio than those that use 3 neighboring pixels.

Analyzing Different Predictors for other images
To further evaluate the performance of the predictors, 4 different images were chosen to study their effect on predictor performance. From the diagrams above, the compression ratio of methods 3 to 6 ((A + B + C)/3, A + B - C, A + (B - C)/2 and B + (A - C)/2) is higher than that of the rest. Since there is strong correlation between adjacent pixels, any predictor that exploits the connection between adjacent pixels produces a good compression ratio; consequently, the methods that use only 1 or 2 neighboring pixels cannot exploit this connection well.

For example, Predictor A (only 1 pixel used) is the first method. Its encoded values depend only on the previous value in the same row, so the values in the first column cannot be predicted; in other words, this predictor does not use the connection between neighboring pixels appropriately.

On the other hand, images whose pixel values are concentrated around the center of the intensity range have a lower compression ratio than images whose values spread over the entire white-grey-black scale.

Since the experiments were conducted only on .pgm (greyscale) images, the compression ratio of color images (RGB, YCbCr or HSV color spaces) could not be determined.

Conclusion
According to all the experiments and diagrams above, there is no single predictor that achieves the best compression ratio for every image, because different images need different predictors to achieve the best results. In general, since there is strong correlation between spatially adjacent pixels, any predictor that exploits the connection between adjacent pixels produces a good compression ratio; in our experiments, compression methods 3 to 6 produce better compression than the rest.

In practice, static Huffman coding is popular and easy to implement, but it does not achieve the theoretically optimal number of bits per symbol, because each Huffman codeword must be an integer number of bits long: if the probability of a symbol is 0.9, the optimal codeword length would be about 0.15 bits, yet Huffman coding still spends 1 bit on it (a short numerical check of this figure is given at the end of this post). Moreover, if the symbol probabilities are unknown or unstable (the source changes), dynamic Huffman coding should be chosen instead, but its implementation is considerably more complicated. On the other hand, to achieve non-integer-length coding with probabilities derived in real time, arithmetic coding is a good alternative; however, it is slow because it requires many multiplications (and, in some cases, divisions).
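As a quick numerical check of the 0.15-bit figure mentioned in the conclusion (an illustration, not part of the original experiment):

```python
import math

# A symbol with probability 0.9 carries -log2(0.9) bits of information,
# which is the theoretical lower bound per occurrence; Huffman coding
# must still spend a whole bit on it.
print(f"{-math.log2(0.9):.3f} bits")   # -> 0.152 bits
```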
