This glossary contains terminology used by Accusoft in both its software products and its documentation. Since these terms come from many different disciplines and because many of the terms have different meanings in each discipline, each glossary entry is followed by the name of the field from which the definition is taken.
For a detailed description, refer to the references listed in the Bibliography. For definitions of ImageGear licensing terminology, refer to ImageGear Licensing and Deployment Kit Terminology.
To find terms that start with a numeral (0-9), look under the spelled-out form; for example, the term "8-bit gray level" can be found as "eight bit gray level."
Absolute coordinates refer to a common origin, for example, the upper left corner of a display screen. This is the opposite of relative coordinates.
Accusoft Application Program Interface. See API (software).
Red, Green, Blue - the 3 colors used to create all other colors when direct, or transmitted, light is used (as in a video monitor). They are called additive primaries because, when these three colors are superimposed, they produce white.
A method of filling in data that is missing due to under-sampling. In imaging, this usually involves the process of removing jagged edges by interpolating values in-between pixels of contrast. These methods are most often used to remove or reduce the stair-stepping artifact found in digital high contrast images.
Area Of Interest. An area of interest is a rectangle within an image defined as two points within the image. An AOI can be written as (x1,y1)-(x2,y2). All AOIs are parallel with the image's axes. See ROI (Accusoft image processing).
Application Programmer's Interface. The set of routines that make up a library or toolkit. Sometimes called a binding.
The proportion of an image's size given in terms of the horizontal length versus the vertical height. An aspect ratio of 4:3 indicates that the image is 4/3 times as wide as it is high.
A curve created from endpoints and two or more control points that serve as positions for the shape of the curve. Originated by P. Bezier (~1962) for use in car body descriptions.
A raster operation that moves a block of bits representing a portion of an image or scene from one location in the frame buffer to another. Usually written as "bit blt".
See histogram (imaging).
An optimized movement of a large block of computer memory from one location to another. Used for moving images or sub-images to and from areas of computer memory.
bit_block_transfer.
An image is a bitmap if it contains a value for each of its pixels. This is the opposite of a vector image, where a small set of values generates an object.
A hypothetical 2-D plane containing a single bit of memory for each pixel in an image. If each 8-bit pixel is thought of as a stack of 8 coins, and an image as many rows and columns of these stacked coins, then the 3rd bit plane would be the plane consisting of the 3rd coin from each stack.
The smallest rectangle that fits around a given object. In imaging, the rectangle is usually rotationally restricted to be parallel to both image axes.
Format originator: Microsoft Corporation
16011 NE 36th Way, Box 97917
Redmond, WA 98073
A function that is passed to another function as a parameter. The receiving function can then call the call-back function. This is used to change the behavior of a given routine without the routine knowing beforehand what the call-back is expected to do.
A 2-dimensional, equally spaced grid that assigns to every point in the plane one and only one coordinate pair (x, y). In imaging, each point is usually referred to as a pixel, and the x and y values take on integer values. Most images use the top-left as (0,0), or the origin. See coordinates.
An image blending function that replaces pixels of a specified hue range with pixels from a second image. This is often referred to as the weatherman effect because most weather forecasters use a solid blue or green background to make it look as if they are standing in front of a huge weather map. It is important to remember that it is the hue that is used in the blending function and not the intensity or saturation.
Commission Internationale de l'Eclairage (International Commission on Illumination). A standards organization that provides specifications for the description of device-independent color.
The clipboard is a Windows data structure used to exchange data between applications. It is a common area where one application places data and others can access it. These operations are usually referred to as Cut (place data in) and Paste (take data out).
See MPEG (image compression).
Cyan, Magenta, Yellow, (K) black. Computer monitors are additive, but color printers are subtractive. Instead of combining light from monitor phosphors, printers coat paper with colored pigment that removes specific colors from the illumination light.
CMY is the subtractive color model that corresponds to the additive RGB model. Cyan, magenta, and yellow are the color complements of red, green, and blue. Due to the difficulties of manufacturing pigments that produce black when mixed together, a separate black ink is often used and is referred to as K (`B' is already used for blue).
See Look-Up-Table (computer hardware).
See color space (imaging).
A mathematical coordinate system (space) for assigning numerical values to colors. There are many ways to define such spaces, each with its own benefits and problems.
An image processing method for saving valuable disk and memory space by reducing the amount of space required to save a digital image. The graphics data is rewritten allowing it to be represented by a smaller set of data. Do not confuse this with encoding. See lossless (image compression) and lossy (image compression).
The ratio of a file's uncompressed size over its compressed size.
A 2-dimensional blob, for example, a region of interest (ROI), where at least one tangent can be drawn that touches the blob at two different locations and has a point between the two contacts that does not touch the blob.
In simpler words, if a rubber band could be snugly wrapped around a concave blob there would be places where the rubber band lifts off and does not touch the blob. Concave is the opposite of convex.
A 2-dimensional blob, for example, a region of interest (ROI), where every tangent that can be drawn touches the blob at a continuous stretch of the blob's surface with no gaps.
In simpler words, if a rubber band could be snugly wrapped around a convex blob there would be no places where the rubber band lifts off and is not touching the blob. Convex is the opposite of concave.
An image processing operation that is used to spatially filter an image. A convolution is defined by a kernel that is a small matrix of fixed numbers. The size of the kernel, the numbers within it, and a single normalizer value define the operation that is applied to the image. The kernel is applied to the image by placing the kernel over the image to be convolved and sliding it around to center it over every pixel in the original image.
At each placement the numbers (pixel values) from the original image are multiplied by the kernel number that is currently aligned above it. The sum of all these products is tabulated and divided by the kernel's normalizer. This result is placed into the new image at the position of the kernel's center. The kernel is translated to the next pixel position and the process repeats until all image pixels have been processed.
As an example, a 3x3 kernel holding all `1's with a normalizer of 9 performs a neighborhood averaging operation. Each pixel in the new image is the average of the 3x3 neighborhood (the pixel and its 8 neighbors) from the original.
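As an illustration only (not an ImageGear routine; the function name and parameters are hypothetical), the following C sketch convolves an 8-bit grayscale image, stored row by row, with a 3x3 kernel and a divisor-style normalizer. With an all-ones kernel and a normalizer of 9 it reproduces the neighborhood averaging example above.

/* Convolve an 8-bit grayscale image with a 3x3 kernel.
   src and dst are width*height buffers; border pixels are copied unchanged. */
void convolve3x3(const unsigned char *src, unsigned char *dst,
                 int width, int height, const int kernel[3][3], int normalizer)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                dst[y * width + x] = src[y * width + x];    /* skip the border */
                continue;
            }
            int sum = 0;
            for (int ky = -1; ky <= 1; ky++)                /* center the kernel */
                for (int kx = -1; kx <= 1; kx++)
                    sum += kernel[ky + 1][kx + 1] * src[(y + ky) * width + (x + kx)];
            sum /= normalizer;                              /* apply the normalizer */
            if (sum < 0)   sum = 0;                         /* clamp to 8 bits */
            if (sum > 255) sum = 255;
            dst[y * width + x] = (unsigned char)sum;
        }
    }
}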
A pair of numbers that represent a specific location in a two-dimensional plane, for example, an image or on a map.
An image processing method for removing the region near the edge of the image, but keeping the central area.
Format originator: Intel
Device-Dependent Bitmap. A Windows image specification that depends on the capabilities of a specific graphics display controller. Since a DDB is matched to the current graphics controller, it is fast and easy to display: large blocks of memory need only be copied to the controller.
DIB (Windows).
When an image or other digital data set is compressed and stored, it is not usable until it is decompressed into its original form.
The coordinates of the coordinate system that describes the computer screen in physical units.
Software written to work on a specific set of hardware platforms. Since these routines make use of physical device attributes, they may behave differently on other devices and most often will not work on them at all.
A set of low-level software routines that work with and control a specific hardware device. The names and functions are often standardized across many similar devices. This allows higher level software to use the hardware as a generic device. This frees the higher-level software from dealing with the particulars of specific devices and allows devices to be interchanged.
Software or data structures that are designed to work with or on a wide set of hardware platforms.
Device-Independent Bitmap is a Windows-defined image format specification. It is called device-independent because of its straightforward, common-denominator, format. It has all the information that a basic digital image needs and is laid out in a simple specification. Its simplicity makes it an ideal format for holding images that need to be shared by several programs.
See MPEG (image compression)
The method of using neighborhoods of display pixels to represent one image intensity or color. This method allows low-intensity resolution display devices to simulate higher resolution images. For example, a binary laser printer can use block patterns to display grayscale images.
halftone (imaging)
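As a hedged sketch of one common dithering method (ordered dithering with a 2x2 Bayer matrix; the function is hypothetical and not tied to any particular driver), the following C fragment reduces an 8-bit grayscale image to an on/off binary image:

/* Ordered (Bayer) dithering of an 8-bit grayscale image to a binary image.
   The 2x2 threshold matrix is scaled to the 0-255 pixel range. */
void dither2x2(const unsigned char *src, unsigned char *dst, int width, int height)
{
    static const int bayer[2][2] = { { 0, 2 },
                                     { 3, 1 } };
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            int threshold = (bayer[y % 2][x % 2] * 255) / 4;   /* 0, 127, 191, 63 */
            dst[y * width + x] = (src[y * width + x] > threshold) ? 255 : 0;
        }
}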
Dynamic Linked Library. A compiled and linked collection of computer functions that are not directly bound to an executable. These libraries are linked at run-time by Windows. Since Windows is in charge of managing (loading, linking, and removing) the DLLs, they are available to all executables currently running. Each executable links to a shared DLL, saving memory by avoiding redundant copies of the same functions. DLLs allow a new level of modularity by providing a means to modify and update executables without re-linking: just copy a new version of the DLL to the correct disk directory.
Dots Per Inch. The number of printer dots that can be printed in one inch. The printer's resolution is defined by the number of dots per inch: lower resolution = fewer dots per inch, higher resolution = more dots per inch.
In an image, an edge is a region of contrast or color change. Edges are useful in machine vision since optical edges often mark the boundary of physical objects.
A method that isolates and locates an optical edge in a digital image.
An edge map is the output of an image-processing filter that transforms an image into an image where intensity represents a change in the contrast (optical edge) of the original image.
An image where each pixel has 8-bits of information. An 8-bit pixel contains one of 256 possible values. There are two common types of 8-bit images: grayscale and indexed color.
In a grayscale image, each pixel takes one of 256 shades of gray, and the shades are linearly distributed from 0 (black) to 255 (white). An 8-bit grayscale image does not require a palette but may have one.
An indexed color image is always a palette image. Each pixel is used as an index to the palette. These images can have up to 256 different colors. This includes hues as well as shades. Indexed 8-bit images are good for low color resolution images that do not need processing. They are one-third the size of full-color RGB images, but because the pixel values are not linear, many image-processing algorithms cannot work with them. They must be promoted to 24-bit for image processing.
This indicates 8-bit grayscale. 8-bit gray level is used to distinguish between 8-bit indexed color (8i) and 8 bit grayscale. An 8-bit gray level DIB image is one where each pixel in the bitmap is unchanged by its palette when displayed. Each palette entry is the same as its index.
This indicates 8-bit indexed color. 8i is used throughout this manual to distinguish between 8-bit grayscale (8-bit gray level) and 8-bit indexed color. An 8-bit indexed color DIB is one where each 8-bit pixel value in the bitmap is used as an index to the palette.
The palette dictates which RGB color the pixel displays. These images are compact ways of storing color images. However, they are difficult to process because the pixel values are palette indices and no longer have any certain numeric ordering.
The format for storing uncompressed data (binary, ASCII, etc.), how it is packed (e.g. 4-bit pixels may be packed at a rate of two pixels per byte), and the unique set of symbols used to represent the range of data items.
Format originator: Adobe Systems, Inc.
1585 Charleston Road
Mountain View, CA 94039
An image-processing algorithm that redistributes the frequency of image pixel values allowing equal representation for any given continuous range of values. In an ideal world, an equalized image has the same number of pixels in the range from 10-20 as it does from 200-210. However, since digital images have quantized intensity values, the range totals are rarely identical but usually close.
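A minimal C sketch of histogram equalization for an 8-bit grayscale image follows (assuming a simple width*height pixel buffer; the function name is hypothetical): build the histogram, accumulate it into a cumulative distribution, and remap each pixel through the resulting table.

/* Histogram equalization of an 8-bit grayscale image, in place. */
void equalize(unsigned char *pixels, long count)
{
    long hist[256] = { 0 };
    long cdf[256];
    unsigned char map[256];

    for (long i = 0; i < count; i++)         /* tabulate the histogram */
        hist[pixels[i]]++;
    cdf[0] = hist[0];
    for (int v = 1; v < 256; v++)            /* cumulative distribution */
        cdf[v] = cdf[v - 1] + hist[v];
    for (int v = 0; v < 256; v++)            /* build the remapping table */
        map[v] = (unsigned char)(((long long)cdf[v] * 255) / count);
    for (long i = 0; i < count; i++)         /* apply it to every pixel */
        pixels[i] = map[pixels[i]];
}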
See MPEG (image compression)
A specification for holding computer data in a disk file. The format dictates what information is present in the file and how it is organized.
An image-processing filter is a transform that removes a specified quantity from an image. For instance a spatial filter removes high, medium or low spatial frequencies from an image.
An image file format that allows 4 bits per pixel. This image can contain up to 16 (2^4) different colors or levels of gray.
A single picture, usually taken from a collection of images for example, a movie or video stream.
A computer peripheral that stores and sometimes manipulates digital images.
Image-processing algorithms that operate on a single image.
See special effects (image processing)
Gain and level are image-processing terms that correspond to the brightness and contrast controls on a television. The gain is the "contrast", and the level is the "brightness." By changing the level, the entire range of pixel values is linearly shifted brighter or darker. Gain, on the other hand, linearly stretches or shrinks the intensity range, altering the contrast.
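A hedged one-pixel sketch in C (hypothetical function; gain is a multiplier, level an offset, and the result is clamped to the 8-bit range):

/* Apply gain ("contrast") and level ("brightness") to an 8-bit pixel. */
unsigned char gain_level(unsigned char pix, double gain, int level)
{
    int result = (int)(gain * pix) + level;
    if (result < 0)   result = 0;      /* clamp to the 8-bit range */
    if (result > 255) result = 255;
    return (unsigned char)result;
}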
A non-linear function that is used to correct the inherent non-linearities of cameras and monitors. The intensity response of the luminescent phosphor on a raster display is non-linear. Gamma correction is an adjustment to the pixel intensity values that makes up for this inherent non-linearity.
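As a sketch only (the function is hypothetical and the gamma value is device dependent; roughly 2.2 is a common assumption for CRT-style displays), gamma correction of a single 8-bit pixel can be written in C as:

#include <math.h>

/* Gamma-correct one 8-bit pixel value. */
unsigned char gamma_correct(unsigned char pix, double gamma)
{
    double normalized = pix / 255.0;                  /* scale to 0.0-1.0 */
    double corrected  = pow(normalized, 1.0 / gamma); /* undo the non-linearity */
    return (unsigned char)(corrected * 255.0 + 0.5);  /* back to 0-255 */
}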
A class of image processing transforms that alter the location of pixels. This class includes rotates and warps.
Name: Graphics Interchange Format
Format originator: CompuServe Inc.
500 Arlington Center Blvd.
Columbus, OH 43220
This format uses the LZW compression created by Unisys. It is the same as the LZW compression used in the TIFF file format, except that the bytes are reversed and the string table is upside-down.
All GIF files have a palette. Some GIF files can be interlaced - the raster lines are stored out of order (every 8th line, then every 4th, then every other line, then the rest) so a coarse preview appears before the whole file arrives. This dates from when GIF files were usually received over a modem.
Graphical User Interface. A computer-user interface that uses graphical objects and a mouse for user interaction, for example Microsoft Windows.
A collection of software routines that work on digital images. These collections usually contain routines for drawing various graphical objects, for example, lines, circles, and rectangles.
A shade of gray assigned to a pixel. The shades are usually positive integer values taken from the grayscale. In an 8-bit image a gray level can have a value from 0 to 255.
A range of gray levels. Zero is usually black and higher numbers indicate brighter pixels.
A CCITT standard for transmission of facsimile data. It compresses black and white images using a combination of differential, run length and Huffman coding.
The reproduction of a continuous-tone image on a device that does not directly support continuous output. This is done by displaying or printing a pattern of small dots that simulate the desired output color or intensity. These methods are used extensively in magazines and newspapers.
A handle references a data object. A handle is a type of pointer but it usually contains, internally, more information about the referenced object.
A tabulation of pixel value populations displayed as a bar chart where the x-axis represents all the possible pixel values and the y-axis is the total image count of each given pixel value. A histogram counts how many pixels in the image have a given intensity value or range of values.
Each histogram intensity value or range of values is called a bin. Each bin contains a positive number that represents the number of pixels in the image that fall within the bin's range. A typical 8-bit grayscale histogram contains 256 bins. Each bin has a range of a single intensity value. Bin 0 contains the number of pixels in the image that have a grayscale value of 0, or black; bin 255 contains the number of white (255) pixels. When the collection of bins is ordered (0-255) and charted, the graph displays the intensity distribution of all the image's pixels.
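A minimal C sketch of building such a 256-bin histogram (hypothetical function, assuming an 8-bit grayscale pixel buffer):

/* Tabulate a 256-bin histogram; bins[v] ends up holding the number of
   pixels in the image whose intensity is exactly v. */
void histogram(const unsigned char *pixels, long count, long bins[256])
{
    for (int v = 0; v < 256; v++)
        bins[v] = 0;
    for (long i = 0; i < count; i++)
        bins[pixels[i]]++;
}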
Hue, Saturation, and Lightness. A method that describes any color as a triplet of real values. The hue represents the color or wavelength of the color. It is sometimes called tone and is commonly known as color. The hue is taken from the standard color wheel and is calibrated in degrees.
Saturation is the depth of the color. It states how gray the color is. It is a real valued parameter from 0.0 to 1.0 with 0.0 indicating full gray and 1.0 representing pure hue.
Lightness determines how black or white a color is. It ranges from 0.0 to 1.0 but with 0.0 representing black and 1.0 white. A lightness of 0.5 is a pure hue.
Hue, Saturation, and Value.
A method of encoding symbols that varies the length of the code in proportion to its information content. Groups of pixels that appear frequently in an image are coded with fewer bits than those of lower occurrence.
Intensity, Hue, and Saturation.
There are many digital image formats. Some of these are: TIFF, DIB, GIF, and JPEG. The image format specification dictates which image information is present and how it is organized in memory. Many formats support various sub-formats or `flavors'.
The general term "image processing" refers to a computer discipline wherein digital images are the main data object. This type of processing can be broken down into several sub-categories: compression, image enhancement, image filtering, image distortion, image display and coloring, and image editing.
machine vision
An image where each pixel value is used as an index to a palette for interpretation before the pixel is displayed. These images contain a palette that is initialized specifically for a given image. The pixel values are usually 8-bit and the palette 24-bit (8-red, 8-green, and 8-blue).
(eight) 8-bit image (digital imaging)
An image processing operation where each pixel is subtracted from the maximum pixel value allowed. This produces a photographic negative of the original. For an 8-bit image the inverse function is:
invert(pix) = 255-pix;
For an 8-bit RGB image the function is:
invert(Rpix) = 255-Rpix;
invert(Gpix) = 255-Gpix;
invert(Bpix) = 255-Bpix;
A term used to describe the visual appearance of lines and shapes in raster pictures that results from a grid of insufficient spatial resolution.
Joint Photographic Experts Group. A collaborative specification of the CCITT and the ISO for image compression. The standard JPEG compression algorithm, which is used by ImageGear, is a lossy compression scheme - it loses data.
Format originator: Joint Photographic Experts Group
A small matrix of pixels, usually no bigger than 9x9, that is used as an operator during image convolution. The kernel is set prior to the convolution in a fashion that emphasizes a particular feature of the image. Kernels are often used as spatial filters, each one tuned to a specific spatial frequency that the convolution is intended to highlight.
convolution (image processing).
A dictionary-based, lossless image compression method that gives fair compression ratios; most files compress at about 2:1.
See gain & level (imaging).
A collection of software functions that can be called upon by a higher level program. Most libraries are collections of similar routines, for example, those used for graphical or image processing.
DLL (Microsoft Windows)
A look-up-table, or LUT, is a contiguous block of computer memory that holds the precomputed values of a function of one variable. The LUT is set up so that the function's variable is used as an address, or offset, into the memory block. The value that resides at this memory location becomes the function's output. Because the LUT values need only be initialized once, LUTs are very useful for image processing due to their inherent high speed.
LUT[pixel_value] = f(pixel_value)
LUTs come in various widths, usually given in bits. An n x m bit LUT has 2^n addresses or stored values (256 for an 8-bit index), and each value is m bits wide.
If the second dimension is left off it can be assumed to be equal to the first. In grayscale image processing, LUTs are commonly 8x8, and the bit widths are usually assumed.
A linear LUT, sometimes called a NOP LUT or pass through, is a LUT that is initialized to output the same values as the input. NOP_LUT[pixel_value] = pixel_value.
See palette (digital imaging).
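As an illustrative C sketch (the names are hypothetical, assuming 8x8 LUTs), a linear NOP LUT and a binary-threshold LUT can be initialized once and then reused for any number of pixels:

/* Build two 8x8 LUTs: a pass-through (NOP) LUT and a threshold LUT. */
unsigned char nop_lut[256];
unsigned char threshold_lut[256];

void init_luts(int threshold)
{
    for (int v = 0; v < 256; v++) {
        nop_lut[v]       = (unsigned char)v;             /* output equals input */
        threshold_lut[v] = (v >= threshold) ? 255 : 0;   /* binarize at threshold */
    }
}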
A method of image compression where there is no loss in quality when the image is uncompressed. The uncompressed image is mathematically identical to its original. Lossless compression usually achieves lower compression ratios than lossy compression.
A method of image compression where some image quality is sacrificed in exchange for higher compression ratios. The amount of quality degradation depends on the compression algorithm used and on a user-selected quality setting.
Look-Up-Table. See Look-Up-Table (computer hardware).
A LUT transform is an image processing method that takes an image and passes each pixel, one at a time, through a pre-set LUT. Each new pixel is a function of one and only one pixel from the original image and is arranged in the same location.
Any image-processing algorithm that transforms a single pixel into another single pixel, both from the same location, can be performed quickly using a LUT.
Square_root_LUT[pixel_value] = sqrt(pixel_value)
Look-Up-Table (computer hardware)
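A hedged C sketch of the square-root LUT transform above (hypothetical function name; the result is rescaled so the output still spans 0-255):

#include <math.h>

/* Apply a square-root LUT transform to an 8-bit grayscale image in place.
   The LUT is initialized once; each pixel then costs a single look-up. */
void sqrt_transform(unsigned char *pixels, long count)
{
    unsigned char lut[256];
    for (int v = 0; v < 256; v++)
        lut[v] = (unsigned char)(sqrt(v / 255.0) * 255.0 + 0.5);
    for (long i = 0; i < count; i++)
        pixels[i] = lut[pixels[i]];
}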
Lempel-Ziv-Welch. See Lempel-Ziv-Welch (data compression).
A sub-discipline of artificial intelligence that uses video cameras or scanners to obtain information about a given environment. Machine vision processes extract information from digital images about the objects in them. This is the opposite of computer graphics, which takes data describing objects as input and produces an output image. Machine vision takes an image in and outputs some level of description of the objects in it (e.g., color, size, brightness).
image processing.
See neighborhood process (image processing).
An image spatial filtering operation based on an input pixel and its 8 neighbors. The resulting value is the median (the 5th of the 9 sorted values). A median filter is often used to reduce spike or speckle noise in a grayscale image. It has an advantage over convolution smoothing: it better preserves edges.
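A minimal C sketch of the 3x3 median filter described above (hypothetical function; border pixels are simply copied):

/* 3x3 median filter for an 8-bit grayscale image stored row by row. */
void median3x3(const unsigned char *src, unsigned char *dst, int width, int height)
{
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++) {
            if (x == 0 || y == 0 || x == width - 1 || y == height - 1) {
                dst[y * width + x] = src[y * width + x];    /* copy the border */
                continue;
            }
            unsigned char window[9];
            int n = 0;
            for (int ky = -1; ky <= 1; ky++)                /* gather the 3x3 neighborhood */
                for (int kx = -1; kx <= 1; kx++)
                    window[n++] = src[(y + ky) * width + (x + kx)];
            for (int i = 0; i < 8; i++)                     /* sort the 9 values */
                for (int j = i + 1; j < 9; j++)
                    if (window[j] < window[i]) {
                        unsigned char t = window[i];
                        window[i] = window[j];
                        window[j] = t;
                    }
            dst[y * width + x] = window[4];                 /* the 5th (median) value */
        }
}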
An imaging process where one existing image is gradually transformed into a second existing image. The result is a sequence of in-between images that, when played sequentially as in a film loop, gives the appearance of the starting image being transformed into the second image.
Morphing is made up of a collection of image processing algorithms. The two major groups are: warps and blends. Do not confuse this with morphology.
Moving Picture Experts Group. An ISO specification for the compression of digital-broadcast quality full-motion video and sound.
A class of image-processing routines that works on neighborhoods of pixels. Each pixel in the new image is computed as a function of the neighborhood of the corresponding pixel in the original image. The neighborhood is defined by a kernel that is set once for each image to be processed.
point process (image processing)
An image comprised of pixels that contain only a single bit of information. Each pixel is either on or off. Normally, "on" is white and "off" is black.
See MPEG (image compression).
An image or sub-image that can be placed over a given image. The pixels from the original image are not altered, but the overlay can be viewed as if they had been. Usually used to place temporary text and annotation marks, for example, arrows on an image.
A binary image is usually stored packed in computer memory (8 pixels per byte). In this case, each byte is referred to as being filled with packed bits. This saves space but makes reading and writing any individual pixel harder, since most computers cannot directly access memory in chunks smaller than a byte.
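As a hedged C sketch (hypothetical helpers; bit 7 of each byte is assumed to be the leftmost pixel), reading and writing one pixel of a packed 1-bit raster line looks like this:

/* row points to the first byte of a packed raster line; x is the column. */
int get_packed_pixel(const unsigned char *row, int x)
{
    return (row[x >> 3] >> (7 - (x & 7))) & 1;       /* isolate the bit */
}

void set_packed_pixel(unsigned char *row, int x, int on)
{
    unsigned char mask = (unsigned char)(1 << (7 - (x & 7)));
    if (on)
        row[x >> 3] |= mask;                         /* set the bit   */
    else
        row[x >> 3] &= (unsigned char)~mask;         /* clear the bit */
}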
A digital image palette is a collection of 3 look-up-tables, or LUTs, that are used to define a given pixel's display color. One LUT is for red, one for green, and one for blue. The number of entries in the LUTs depends on the width (in bits) of the image's pixels.
A palette image requires its palette in order to be displayed in a fashion that makes sense to the viewer. This is often the case for color 8-bit images. Without a palette describing what color each pixel needs for display, this type of image would most likely be displayed as randomly selected noise.
A grayscale palette is one where each of the 3 LUTs is linear: the output is whatever is input. Since the color components (R, G, B) are equal, any pixel value is displayed as a shade of gray.
Look-Up-Table (computer hardware)
A sub-discipline of machine vision where images are searched for specific patterns. Optical character recognition or "OCR" is one type of pattern recognition, where images are searched for the letters of the alphabet.
Format originator: ZSoft Corp.
450 Franklin Road Suite 100
Marietta, GA 30067
An abbreviated version of the term PIcture (X) ELement. This is the most fundamental element of a digital image. A digital image is made up of rows and columns of points of light. Each indivisible point of light is a pixel. Each pixel in an image is addressed by its column (x) and its row (y) usually written as the coordinate pair (x, y). An 8-bit pixel can take on one of 256 values. A 24-bit pixel has 3, 8-bit components for each of the primary colors, red, green, and blue.
A class of image processing transforms where every pixel is taken, one at a time, from an image and mathematically transformed into a new value with no input from any other pixel in the image. A point process is a degenerate neighborhood process where the kernel is a 1x1 matrix, in other words a single pixel.
An alternative to the usual Cartesian method of addressing image pixels. Polar coordinates use an angle and a radius from an origin instead of a column and row.
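The conversion between the two systems is standard trigonometry; a small C sketch (hypothetical helpers, angle in radians):

#include <math.h>

void polar_to_xy(double radius, double angle, double *x, double *y)
{
    *x = radius * cos(angle);
    *y = radius * sin(angle);
}

void xy_to_polar(double x, double y, double *radius, double *angle)
{
    *radius = sqrt(x * x + y * y);
    *angle  = atan2(y, x);      /* angle measured from the positive x axis */
}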
A special effect that decreases the number of colors or grayscale colors in an image. The default image pixel contains 256 levels of gray or 256 levels of red, green, and blue. Using this effect reduces these numbers.
A method of assigning color to ranges of a grayscale image's pixel values. Most often used to highlight subtle contrast gradients or for visually quantifying pixel values. The applied color usually has no correspondence to the original scene. The colors are used only as a guide or highlight.
A term that describes a single row of a digital image. A raster image is made up of rows of pixels. This is opposed to vector images, where an image is made up of a list of polygon nodes. A raster is sometimes called a scan-line.
Relative coordinates refer to a position identified by its distance from a local origin.
The process of displaying an image. The final and actual displayed image is said to be rendered.
There are two types of resolution in digital images: spatial and intensity. Spatial resolution is the number of pixels per unit of length along the x and y axes. Intensity resolution is the number of quantized levels that a pixel can have.
Red, Green, Blue. A triplet of numeric values that describe a color.
Red, Green, Blue, Quad. A set of four numbers used to describe a color. The fourth number is always set to zero. This creates an efficient color LUT or palette. It is more efficient because most computers find multiplying by 4 easier than by 3, as is the case with an RGB triplet.
Region Of Interest. A region of interest, or ROI, is a specification and data structure that allows for the definition of arbitrarily shaped regions within a given image, often called sub-images. An ROI can be thought of as a placeholder that remembers a location within an image. ROIs are of several types, each defined in a manner that makes sense for its type.
ROIs are either a rectangle (also called an AOI), square, circle, or a segment list. A rectangle is defined by any two points in the image. From these two points one and only one rectangle can be drawn. A square is defined by a single point and a single length. A circle is defined by its center and radius. A segment list is an arbitrary list of triplets (x, y, xlen); a single point and a length to the right.
Every point in an image is either inside or outside of a given ROI.
Most image processing functions in this package work only within a given ROI. The ROI can encompass the entire image.
AOI (Image Processing)
See raster (imaging)
Screen coordinates are those of the actual graphics display controller. The origin is almost always at the upper left-hand corner of the display.
coordinates
A contiguous section of a raster line. It is defined in physical coordinates by the triplet of its leftmost point and length (x, y, length).
A skew is image distortion that often occurs when a scanner is sampling an image and the image slides to either side before the scan is complete. This has the effect of transforming squares into rhombuses.
Any image processing transform that is applied mostly for its artistic value. Special effects include wipes, transitions, barn doors, etc.
An image processing method that takes a given image and assures that the intensity distribution fills the entire range of possible values. An 8-bit image that is stretched always has at least one pixel with a value of zero and one with a value of 255. The term comes from the before and after histograms of the given image. A stretch operation linearly stretches the histogram so that it spans the range from the minimum to the maximum possible pixel value.
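A minimal C sketch of a linear stretch for an 8-bit grayscale image (hypothetical function; it remaps the actual minimum-maximum range onto 0-255):

/* Linear contrast stretch, in place. */
void stretch(unsigned char *pixels, long count)
{
    unsigned char lo = 255, hi = 0;
    for (long i = 0; i < count; i++) {          /* find the actual range */
        if (pixels[i] < lo) lo = pixels[i];
        if (pixels[i] > hi) hi = pixels[i];
    }
    if (hi == lo)                               /* flat image: nothing to stretch */
        return;
    for (long i = 0; i < count; i++)            /* remap onto 0-255 */
        pixels[i] = (unsigned char)(((pixels[i] - lo) * 255) / (hi - lo));
}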
Format originator: Truevision, Inc.
7340 Shadeland Station
Indianapolis, IN 46255
Tagged Image File Format.
Format originator: Aldus Corp
411 First Ave South
Seattle, WA 98104, and
Microsoft Corp
16011 NE 36th Way
Redmond, WA 98073
A small copy of an image. Thumbnails are used to display many images on the screen at once.
An algorithm that takes an image, alters it, and outputs a new image. Sometimes written as `xform'.
Three numbers used together to represent a single quantity or location, for example, RGB or (x, y, z).
A 24-bit image contains pixels made from RGB triplets.
A sequence of still images, transmitted and displayed in synchronous order, that gives the appearance of live motion.
A geometric image processing routine that distorts an image by spatially compressing and stretching regions.
Format originator: Microsoft Corp
16011 NE 36th Way
Redmond, WA 98073
The real valued coordinates that make sense for the object, treating it as if it really exists. The world coordinates of a house on a map would be in miles or longitude and latitude. This is the opposite of screen, device or model coordinates.
Format originator: WordPerfect Corp
A mathematical method for referring to a pixel from a digital image. Since most digital images are maintained as a Cartesian matrix of pixels, each pixel has a unique address that can be described as an x or horizontal displacement from the origin and a y or vertical displacement from the origin.
coordinates
Shorthand for transform.
Y (luminance), I, and Q. YIQ is the color model used for U.S. commercial television. It was designed to be backwards compatible with old black-and-white television sets. "Y", or luminance, is a weighted average of the red, green, and blue that gives more weight to red and green than to blue. The I and Q components contain the color information; together they are called chrominance.
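As a hedged sketch, the RGB-to-YIQ conversion below uses the commonly published NTSC weights (exact coefficients vary slightly between references; the function is hypothetical):

/* Convert one RGB pixel (0-255 per component) to YIQ. */
void rgb_to_yiq(unsigned char r, unsigned char g, unsigned char b,
                double *y, double *i, double *q)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;    /* luminance */
    *i = 0.596 * r - 0.274 * g - 0.322 * b;    /* chrominance */
    *q = 0.211 * r - 0.523 * g + 0.312 * b;
}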
A mathematical method that refers to a pixel's intensity from a digital image. An image can be written as: I(x,y)=z