The basic digital image is composed of a two-dimensional array of numbers. Each number in the array represents the value of the smallest visual element, a pixel. The indexed location of the pixel value in the array corresponds to the X and Y location of the pixel within the image, as measured from the top-left corner. The value of a pixel at an X and Y location in a digital grayscale image, f(x, y), represents the brightness of the pixel in a range from black to white, as seen in Figure 1.1. Let us assume that the total numbers of pixels are 300 (0-299) in the X direction and 250 (0-249) in the Y direction. The image can then be represented by an array of size 300 × 250 that holds one value per pixel.
Figure 1.1 Grayscale image.
Each image pixel value is related to the brightness of the image at that specific location. For a given camera, the maximum value recorded for an image pixel is generally determined by a characteristic of the camera referred to as the bit depth. If the bit depth is k, then as many as 2^k levels of brightness can be represented. For example, if the bit depth is 8 bits, then a pixel can take 256 values (2^8) in the range between 0 and 255.
A grayscale image pixel most often carries only brightness information that can be represented in 8 bit values, and such an image is therefore often referred to as an 8 bit image. A pixel value of 0 is the darkest (black) pixel, whereas a value of 255 is the brightest (white) pixel. For a better understanding, Figure 1.1 shows a magnified portion of an image where the X pixel locations range from 85 to 91 and the Y locations from 125 to 130, within a total of 300 × 250 pixels in the image. At the pixel location of 85 along the X direction and 125 along the Y direction, the image pixel value is f(85, 125) = 197, which is closer to 255 than to 0 and is therefore rendered closer to the bright end of the scale (white). On the other hand, the value of the image pixel at X = 91 and Y = 125 is 14, which is close to 0 and thus relatively dark (black).
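The array view just described can be sketched directly in code. The short Python/NumPy snippet below is only an illustration of the idea (the book itself works with the NI Vision Development Module in LabVIEW); the [y, x] indexing convention is NumPy's, and the pixel values are taken from the Figure 1.1 example.

```python
import numpy as np

# An 8 bit grayscale image with 300 pixels in X and 250 pixels in Y.
# NumPy stores images row by row, so the array is indexed as [y, x].
width, height = 300, 250
image = np.zeros((height, width), dtype=np.uint8)  # bit depth 8 -> values 0..255

levels = 2 ** 8          # number of brightness levels for an 8 bit image
print(levels)            # 256

# Pixel values from the magnified region in Figure 1.1.
image[125, 85] = 197     # f(85, 125): close to 255, rendered nearly white
image[125, 91] = 14      # f(91, 125): close to 0, rendered nearly black

print(image[125, 85], image[125, 91])   # 197 14
```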
Because each pixel is represented by a single value, grayscale images are often used in machine vision applications as a starting point to measure the length or size of an object and to find similar image patterns via pattern matching. Grayscale images can be acquired from digital monochrome or color cameras. When a color image is acquired, it can easily be converted to a grayscale image by using the color plane extraction function provided by the NI Vision Development Module.
The most commonly used image format for determining the existence, location, and size of an object is the binary image. A binary image pixel takes one of two values; in most cases the object has the value 1 and the background has the value 0. Since only two values are used, it is often called a 1 bit image (bit depth of 1, or 2^1 = 2 levels). To make a binary image, a grayscale image is commonly used as the starting point, together with a threshold value. If the object of interest is bright against a dark background, a pixel whose grayscale value is larger than the chosen threshold is classified as object (binary value 1), and a pixel whose value is less than the threshold is classified as background (binary value 0). However, it should be noted that in some cases the dark parts of an image represent the object and the bright parts make up the background.
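As a rough illustration of this thresholding step (the book performs it with the corresponding NI Vision functions; this NumPy sketch only mirrors the idea), a grayscale image can be converted to a binary image by comparing every pixel against a chosen threshold:

```python
import numpy as np

def to_binary(gray, threshold, bright_object=True):
    """Convert an 8 bit grayscale image into a binary (0/1) image.

    bright_object=True : pixels above the threshold become the object (1).
    bright_object=False: dark pixels are treated as the object instead.
    """
    if bright_object:
        return (gray > threshold).astype(np.uint8)
    return (gray <= threshold).astype(np.uint8)

# Example: a bright object on a dark background, threshold chosen at 128.
gray = np.array([[ 10,  20, 200],
                 [ 15, 210, 220],
                 [ 12,  18,  25]], dtype=np.uint8)

print(to_binary(gray, threshold=128))
# [[0 0 1]
#  [0 1 1]
#  [0 0 0]]
```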
Once the grayscale image is converted to a binary image, various image processing functions can be used. For example, the particle analysis function provides the size, area, and center of the object. Prior to particle analysis, morphology functions are often used to modify aspects of the image for better or more reliable results. For example, we may want to remove unnecessary parts from the binary image or repair parts of an object that were obviously misrepresented during the grayscale-to-binary conversion. By using the morphology functions in the LabVIEW Vision Development Module, we can increase the accuracy of image analysis based on the binary image. Details of this process will be discussed later.
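The book carries out these steps with the morphology and particle analysis functions of the Vision Development Module; the sketch below only approximates the same workflow in Python with SciPy, assuming a small hand-made binary image.

```python
import numpy as np
from scipy import ndimage

# A small binary image: a 4 x 4 object plus one isolated "noise" pixel.
binary = np.array([[0, 0, 0, 0, 0, 0, 1],
                   [0, 1, 1, 1, 1, 0, 0],
                   [0, 1, 1, 1, 1, 0, 0],
                   [0, 1, 1, 1, 1, 0, 0],
                   [0, 1, 1, 1, 1, 0, 0],
                   [0, 0, 0, 0, 0, 0, 0]], dtype=np.uint8)

# Morphology: opening removes the isolated pixel (it also rounds off corners).
cleaned = ndimage.binary_opening(binary)

# "Particle analysis": label connected regions, then measure each one.
labels, count = ndimage.label(cleaned)
areas = ndimage.sum(cleaned, labels, index=range(1, count + 1))
centers = ndimage.center_of_mass(cleaned, labels, index=range(1, count + 1))

print(count)    # 1 -> the noise pixel is gone, one particle remains
print(areas)    # area of the particle in pixels (12, corners shaved by opening)
print(centers)  # [(2.5, 2.5)] -> (row, column) center of the particle
```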
Digital color images from digital cameras are usually described by three color values: R (red), G (green), and B (blue). The three color values that represent an image pixel describe the color and brightness of the pixel. In other words, the brightness and color of the pixels in an image obtained from a digital color camera are generally defined by the combination of the R, G, and B values, and all possible colors can be represented by these three primary colors. A digital color image is often referred to as a 24 or 32 bit image. Figure 1.2 shows the basic concept of a 32 bit color image. Of the four 8 bit fields in a 32 bit word, one is used for each of the R, G, and B components; the remaining 8 bit field is not used. This layout follows from the computer's natural representation of an integer as a 32 bit number.
Figure 1.2 32 bit color image.
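Such a 32 bit pixel can be pictured as four 8 bit fields packed into a single integer. The plain Python sketch below is only for illustration (it is not NI Vision code, and the 0x00RRGGBB field order is an assumption made here to match the idea of Figure 1.2):

```python
def pack_rgb(r, g, b):
    """Pack 8 bit R, G, B values into one 32 bit word; the top 8 bits stay unused."""
    return (r << 16) | (g << 8) | b

def unpack_rgb(pixel):
    """Extract the 8 bit R, G, B components from a packed 32 bit pixel."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

pixel = pack_rgb(255, 0, 0)    # bright red
print(hex(pixel))              # 0xff0000
print(unpack_rgb(pixel))       # (255, 0, 0)
```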
Figure 1.3 shows an example of a color image. The total size of the image is 800 × 600: the X direction has 800 (0-799) pixels and the Y direction has 600 (0-599) pixels. Each pixel has three component values representing R, G, and B. For example, the image value at X = 600, Y = 203, f(600, 203), is R = 196, G = 176, B = 187.
Figure 1.3 Color image (f(x, y): 0 ≤ R ≤ 255, 0 ≤ G ≤ 255, 0 ≤ B ≤ 255).
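Viewed as an array, a color image of this size holds three 8 bit values per pixel. The NumPy sketch below illustrates this; the [y, x] indexing and the R, G, B channel order are assumptions of the illustration, with the values taken from the Figure 1.3 example.

```python
import numpy as np

# Shape (rows, columns, channels) = (600, 800, 3): one 8 bit value per channel.
width, height = 800, 600
color = np.zeros((height, width, 3), dtype=np.uint8)

# Store the example pixel f(600, 203) = (R=196, G=176, B=187).
color[203, 600] = (196, 176, 187)

r, g, b = color[203, 600]
print(r, g, b)   # 196 176 187
```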
For a better explanation, a USB camera was used to acquire the images via a LabVIEW VI, as shown in Figure 1.4. As seen in the lower part of Figure 1.4, the total size of the image (the number of pixels) is 640 × 480. The pixel location is defined by its X and Y coordinates, where the upper left is (0, 0) and the lower right is (639, 479). Each of the RGB values in a pixel is an 8 bit value, which corresponds to an integer range of 0-255. When the mouse cursor is moved over the acquired image, the RGB values of the pixel under the cursor are shown at the bottom of the window. In the example shown in Figure 1.4, the RGB values at the mouse X/Y image position (257, 72) are (255, 253, 35).
Figure 1.4 Acquired color image.
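The acquisition in Figure 1.4 is performed by a LabVIEW VI. Purely for comparison, the Python/OpenCV sketch below grabs one 640 × 480 frame and reads the pixel under a given cursor position; the camera index 0 and the coordinates (257, 72) are assumptions taken from the figure's example.

```python
import cv2

cap = cv2.VideoCapture(0)                  # first attached USB camera (assumed)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # request a 640 x 480 image
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

ok, frame = cap.read()                     # frame is a (480, 640, 3) array
cap.release()

if ok:
    x, y = 257, 72                         # cursor position from Figure 1.4
    b, g, r = frame[y, x]                  # OpenCV stores channels as B, G, R
    print(f"RGB at ({x}, {y}) = ({r}, {g}, {b})")
```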
The color and brightness of each pixel are given by the combination of its RGB values. For example, R (red) ranges between 0 and 255. If the value is close to 0, the red component is so dark that it appears nearly black. On the other hand, if the value of R is 255, the red component is at its brightest and appears as bright red. The green and blue components have the same property. If R = 255, G = 0, and B = 0, the pixel appears bright red. If all three RGB values are 255, the pixel appears white (a bright pixel), whereas if all RGB values are 0, the pixel is dark (black).
An alternative representation of a color image, HSL (hue, saturation, and luminance), can be used instead of RGB (Table 1.1). Each of the three HSL components is also generally represented by an 8 bit value. By choosing appropriate HSL values, any color and brightness can be represented in a pixel.
Table 1.1 The meaning of HSL.
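As a standard-library sketch of the RGB-to-HSL idea (Python's colorsys module implements the closely related HLS model with floating point components between 0 and 1, so the 8 bit scaling below is only an approximation of how NI Vision's HSL planes are stored):

```python
import colorsys

def rgb8_to_hsl8(r, g, b):
    """Convert 8 bit R, G, B values to approximate 8 bit H, S, L values."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 255), round(s * 255), round(l * 255)

print(rgb8_to_hsl8(255, 0, 0))      # bright red -> hue 0, full saturation, mid luminance
print(rgb8_to_hsl8(255, 255, 255))  # white      -> zero saturation, maximum luminance
```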
Figure 1.5 shows the basic components of an imaging system. Image acquisition hardware requires a camera, a lens, and a lighting source. To get an image from the camera to the computer, we need to select the most appropriate camera communication interface (bus), which connects the camera to the computer. Some cameras require specific standardized communication buses integrated into computer interface cards called frame grabbers; examples of standardized frame grabber buses are Analog, Camera Link, and Gigabit Ethernet (GigE). Other cameras connect to the computer over more common communication interfaces such as USB, Ethernet, or FireWire, which are provided as standard in most computers.
Figure 1.5 Basic components of an imaging system.
Software is also needed to display and extract information from images. In this book, image processing techniques will be described for the purpose of processing and...