Huffman Compression
Full Name | Huffman encoding
Compression ID | ImGearCompressions.HUFFMAN = 2
ImageGear Assembly | ImageGear23.Core Assembly
Bit Depth | Any
File Formats | NCR, PCD, TIFF, KodakRAW, NEF, PEF
IG Platforms Support | WIN32, WIN64, Unix, Unix64, .NET, .NET64, MAC
Developed in 1952 by David Huffman, this is one of the older compression methods. The encoding and decoding processes are complex relative to today's standards, but the compression ratio can be high if the image contains many repeated data values. It is best used for images with little or no pixel noise, e.g., cartoons or drawings with large areas of the same color and intensity (like a monotone sky). This compression scheme is often used by other compression algorithms for extra compression.
The Huffman method uses a conversion table that assigns a code to each value based on its frequency of occurrence. The file is scanned and the frequency of every value is tallied. Using a binary tree, values are then paired off by frequency, beginning with the least frequent. Each time two subtrees are merged into a new branch, the codes of all values beneath that branch grow by one bit, so the rarest values at the bottom of the tree accumulate the longest codes, while the values that occur the most (at the top of the tree) end up with the shortest codes.
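The following is a minimal, hypothetical sketch in Python (not ImageGear's implementation) of the pairing process described above: the two least frequent subtrees are merged repeatedly, and each merge adds one bit to the codes of every value beneath it.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table: frequent values get short codes."""
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, tree); a tree is either a
    # plain value (leaf) or a (left, right) tuple (internal node).
    heap = [(f, i, v) for i, (v, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # single distinct value: one 1-bit code
        return {heap[0][2]: "0"}
    tick = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # the two least frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick, (t1, t2)))  # ...are paired off
        tick += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):         # each new branch adds one bit
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                               # leaf: the code is complete
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

# The most frequent value receives the shortest code:
print(huffman_codes("aaaaabbbcd"))  # e.g. {'a': '0', 'b': '11', 'c': '100', 'd': '101'}
```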
A potential problem with this compression method is decoding: the file's variable-length codes can cause a bit to be dropped from or added to the end of a line, thereby throwing off subsequent lines of data.
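To make this fragility concrete, here is a small, hypothetical decoding sketch (plain Python with a made-up three-symbol code table, not ImageGear's decoder): dropping a single bit shifts every code boundary that follows it, so the rest of the stream decodes to different values.

```python
def huffman_decode(bits, codes):
    """Greedy prefix decoding of a bit string against a code table."""
    lookup = {code: value for value, code in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in lookup:                  # a complete variable-length code
            out.append(lookup[buf])
            buf = ""
    return out, buf                        # buf holds any trailing partial code

codes = {"A": "0", "B": "10", "C": "11"}
bits = "0101011"                           # encodes A B B C
print(huffman_decode(bits, codes))         # (['A', 'B', 'B', 'C'], '')
# Drop the first bit: every later code boundary shifts, and the stream
# now decodes to a different sequence of values.
print(huffman_decode(bits[1:], codes))     # (['B', 'B', 'C'], '')
```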