Camera Basics


1、 Overview of Machine Vision Cameras

The main function of a camera is to acquire images. Early machine vision cameras were analog cameras, which had to be used together with an image acquisition (frame grabber) card: the card converted the analog electrical signal into a digital signal and transmitted it to the computer. With the development and spread of digital interface technologies such as IEEE 1394, GigE, USB 3.0, Camera Link, and CoaXPress, analog cameras have gradually been replaced by digital cameras. A digital camera follows a defined communication protocol and converts the acquired image into a digital electrical signal inside the camera, so it has become the mainstream camera type in machine vision systems.


2、 Composition and working principle of CCD camera
1. CCD Sensor
A charge-coupled device (CCD) is a semiconductor imaging device that senses light and converts the optical signal into an electrical signal. It offers high sensitivity, good resistance to strong light, small size, long life, and resistance to vibration. A typical CCD camera consists of the CCD chip, a driving circuit, a signal processing circuit, a video output module, an electronic interface circuit, an optical-mechanical interface, and so on. The more pixels the CCD chip contains, the higher the image resolution.


2. Working principle of CCD camera
Light from the object is focused onto the CCD chip through the optical lens. Driven by the pulses provided by the driving circuit, the CCD performs the conversion, storage, transfer, and readout of photo-generated charge, thereby converting the optical signal into an electrical signal. The signal processing circuit receives the electrical signal from the CCD and performs pre-processing such as sample-and-hold, correlated double sampling, and automatic gain control, then synthesizes the video signal, converting the CCD output into the required video format.


3、 Composition and working principle of CMOS camera
1. CMOS Sensor
CMOS stands for complementary metal-oxide semiconductor. CMOS technology integrates the image sensor array, driving and control circuits, signal processing circuit, analog-to-digital converter, and the fully digital interface circuit on one chip, which improves the integration level and design flexibility of CMOS cameras. A CMOS camera is composed of the CMOS sensor module, a DSP, and peripheral circuits. The photosensitive chip converts the incident optical signal into an electrical signal, collects and stores the charge, and outputs it to an FPGA. The FPGA pre-processes the signal and buffers it in memory for subsequent transmission; the image is then sent to the receiving terminal through the corresponding communication interface.


2. The working principle of CMOS camera

In the photoelectric conversion stage, the workflow of a CMOS camera is similar to that of a CCD camera: both use photodiodes for photoelectric conversion. The main difference is how image data are read out. In a CCD sensor, the charge of each pixel row is shifted row by row to the bottom of the array, read out there, and then amplified by an amplifier at the edge of the sensor, so the output signal is highly uniform. In a CMOS sensor, each pixel unit integrates its own amplifier, and A/D conversion is done on chip, so the CMOS chip can directly output digital signals. Most CMOS cameras also integrate an FPGA or DSP processing module that can pre-process the image data, for example filtering and correction.




4、 Camera Classification

1. By Chip Type and Sensor Structure

Classification / Main Differences

CMOS
(1) CMOS is built on metal-oxide semiconductor material;
(2) Manufacturing cost and power consumption are low;
(3) Color reproduction is relatively weak;
(4) CMOS chips offer rolling shutter exposure and global exposure. Rolling shutter exposure is suitable for still objects; moving objects show smearing and distortion. Global exposure can capture still or moving objects.

CCD
(1) CCD is built on single-crystal semiconductor material;
(2) Cost and power consumption are high;
(3) Color reproduction is relatively strong, with good image sharpness and clarity;
(4) Exposure is global (full-frame).

Linear Array (Line Scan) Camera
(1) The chip is a single line of pixels;
(2) There must be relative motion between the camera and the object to form an image;
(3) The price is relatively high;
(4) Very high line frequency and horizontal resolution;
(5) Because of the large data volume, the transmission interface is usually GigE, Camera Link, or CoaXPress.

Area Array Camera
(1) The chip is a two-dimensional array; common lens mounts are C, CS, and F;
(2) Objects can be imaged at rest or in motion;
(3) Prices vary with performance;
(4) Two-dimensional image information is obtained in real time and can be measured directly; data transmission interfaces include GigE, IEEE 1394, USB, Camera Link, and others.

2. By Image Mode

Classification / Main Differences
Color camera: the image is in color
Monochrome camera: the image is grayscale


3. By Signal Output Mode

Classification / Main Differences

Analog Camera
(1) Generally low resolution and slow acquisition; a typical frame rate is 30 frames per second; inexpensive;
(2) Analog cameras use progressive or, more commonly, interlaced scanning; image transmission is vulnerable to noise interference, which degrades image quality;
(3) The output is an analog signal; analog-to-digital conversion is done outside the camera by an image acquisition card;
(4) Mostly used in real-time monitoring and other security applications; market share is gradually shrinking.

Digital Camera
(1) Resolutions range from 0.3 to 120 megapixels, with high acquisition speed and a wide range of prices;
(2) Both rolling shutter and global exposure are available; image quality is good;
(3) The output is a digital signal; A/D conversion is completed inside the camera;
(4) Gradually replacing analog cameras.

4. Other Classification Methods

By Resolution
It can be divided into ordinary-resolution cameras and high-resolution cameras;
By Output Signal Speed
It can be divided into ordinary-speed cameras and high-speed cameras;
By Spectral Response
It can be divided into visible-light (ordinary) cameras, infrared cameras, ultraviolet cameras, etc.


5、 Main Parameters and Applications of Area Array Cameras
1. Sensor Size

The sensor size (target size) of an area array camera is measured by the diagonal length of the chip; for a line scan camera it is measured by the horizontal length of the chip. The sensor sizes commonly used in industry for area array cameras are listed in the table below; actual dimensions vary slightly between products.

Chip    Horizontal H (mm)   Vertical V (mm)   Diagonal D (mm)
1"      12.8                9.6               16.0
2/3"    8.8                 6.6               11.0
1/2"    6.4                 4.8               8.0
1/3"    4.8                 3.6               6.0
1/4"    3.2                 2.4               4.0
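The diagonal in the table follows from the horizontal and vertical dimensions by the Pythagorean theorem. A minimal Python sketch (with the table's values hard-coded for illustration) checks this:

```python
import math

# Nominal sensor formats from the table above: (horizontal, vertical) in mm
formats = {
    '1"':   (12.8, 9.6),
    '2/3"': (8.8, 6.6),
    '1/2"': (6.4, 4.8),
    '1/3"': (4.8, 3.6),
    '1/4"': (3.2, 2.4),
}

for name, (h, v) in formats.items():
    d = math.hypot(h, v)  # diagonal = sqrt(H^2 + V^2)
    print(f'{name:4s}  D = {d:.1f} mm')
```

Note that the nominal "inch" designation does not equal 25.4 mm of diagonal; it is a historical convention inherited from vidicon camera tubes.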



2. Pixel (in the Image)

A pixel is the smallest unit of an image. As shown in the figure below, when the original image is partially enlarged, each small cell represents one pixel, and each pixel corresponds to one gray value.


3. Pixel (on the Sensor)

A sensor pixel is the smallest photosensitive unit on the camera chip; each sensor pixel corresponds to one pixel in the image.



4. Pixel Size
Pixel size is the actual physical size of each pixel on the camera chip. Common sizes are 2.2 µm, 3.45 µm, 3.75 µm, 4.8 µm, 5.5 µm, 5.86 µm, 7.4 µm, etc. For a chip of the same size, the larger the pixel, the more photons it can collect, the higher the sensitivity of the chip, and the brighter the image.
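Pixel size, resolution, and chip size are linked: the active sensor width is roughly the horizontal pixel count times the pixel pitch. A small sketch (the 2448-pixel resolution and 3.45 µm pitch here are illustrative values, not from the text):

```python
def sensor_width_mm(h_pixels, pixel_size_um):
    """Approximate active sensor width: pixel count x pixel pitch (converted to mm)."""
    return h_pixels * pixel_size_um / 1000.0

# Illustrative: 2448 pixels at 3.45 um pitch -> about 8.4 mm,
# close to the 8.8 mm horizontal size of a 2/3" chip
print(f'{sensor_width_mm(2448, 3.45):.2f} mm')
```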


5. Pixel Depth
The number of bits used to store each pixel is called the pixel depth. For a monochrome camera, the pixel depth defines the number of gray levels from dark to bright. For example, with a pixel depth of 8 bits the output image has 2^8 = 256 gray levels (0-255); with a pixel depth of 10 bits it has 2^10 = 1024 gray levels (0-1023). A larger pixel depth allows more accurate measurement but also slows down the system. In industry, 8-bit pixel depth is generally used.
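The gray-level arithmetic above can be checked in a couple of lines of Python:

```python
def gray_levels(depth_bits):
    """Number of distinct gray values for a given pixel depth."""
    return 2 ** depth_bits

for bits in (8, 10, 12):
    n = gray_levels(bits)
    print(f'{bits}-bit depth: {n} levels (0..{n - 1})')
```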


6. Resolution

The resolution is determined by the number of pixels on the chip. For an area array camera it is the number of horizontal pixels multiplied by the number of vertical pixels. For example, a resolution of 1280 (H) × 1024 (V) means 1280 pixels per line and 1024 lines, i.e. about 1.3 megapixels. When imaging the same field of view, the higher the resolution, the more clearly details are shown. Commonly used camera resolutions are about 0.3, 1.3, 2, 5, 10, 29, 71, and 120 megapixels.
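The megapixel figure is simply the product of the two pixel counts; a minimal sketch:

```python
def megapixels(h_pixels, v_pixels):
    """Total pixel count expressed in megapixels."""
    return h_pixels * v_pixels / 1e6

# 1280 x 1024 = 1,310,720 pixels, i.e. about 1.3 MP
print(f'{megapixels(1280, 1024):.2f} MP')
```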


7. Accuracy
Accuracy is the actual physical size represented by each pixel in the image:

Accuracy = field of view in one direction / camera resolution in that direction

For example, if the field of view is 32 mm wide and the horizontal resolution of the camera is 1600 pixels, the accuracy of the vision system is 0.02 mm per pixel. In practice, to improve system stability, the theoretical accuracy of a machine vision system is usually chosen higher than the required accuracy.
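The accuracy formula applied to the worked example above, as a sketch (the helper name is ours):

```python
def pixel_accuracy(fov_mm, resolution_px):
    """Physical size represented by one pixel (mm/pixel)."""
    return fov_mm / resolution_px

# 32 mm field of view over 1600 horizontal pixels -> 0.02 mm per pixel
print(f'{pixel_accuracy(32.0, 1600):.3f} mm/pixel')
```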


8. Frame Rate / Line Frequency
The acquisition frequency of a camera is expressed as frame rate for area array cameras and line frequency for line scan cameras. The frame rate of an area array camera is given in fps (frames per second), i.e. how many images the camera can acquire per second; one image is one frame. For example, 15 fps means the camera can capture up to 15 images per second. The line frequency of a line scan camera is given in Hz, where 1 Hz corresponds to acquiring one line; a line frequency of 50 kHz means the camera scans 50,000 lines per second. In general, the higher the resolution, the lower the frame rate or line frequency.
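As a quick illustration of line frequency (with hypothetical numbers): the time a line scan camera needs for one image is the line count divided by the line frequency.

```python
def acquisition_time_s(num_lines, line_freq_hz):
    """Seconds a line scan camera needs to acquire num_lines rows."""
    return num_lines / line_freq_hz

# At 50 kHz, a 10,000-line image takes 0.2 s
print(acquisition_time_s(10_000, 50_000))
```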


9. Gain

Gain is the amplification ratio between the output signal and the input signal; it is used to raise the overall brightness of the image. Gain is expressed in dB.
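For a signal-amplitude ratio, gain in dB relates to the linear amplification factor by 20·log10; a sketch:

```python
import math

def gain_db(amplification):
    """Convert a linear signal amplification factor to decibels (amplitude ratio)."""
    return 20 * math.log10(amplification)

# Doubling the signal is about 6 dB; a 10x amplification is 20 dB
print(f'{gain_db(2):.1f} dB, {gain_db(10):.1f} dB')
```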


10. External Trigger
Industrial cameras generally support external triggering: image acquisition is controlled by an external signal, i.e. the camera acquires one image each time it receives a trigger signal. In practice, the external trigger function can be combined flexibly with external sensors.
*Note: in installations that use the camera's external trigger alongside other electrical equipment such as DC/AC motors, frequency converters, or contactors, poor signal shielding can easily cause interference on the trigger signal and affect camera operation.
*Signal output: when the external trigger function is used, the external light source is usually triggered together with the camera: the light is on while the camera is acquiring and off otherwise. Some cameras provide a signal output that emits a trigger signal to switch the light source on and off in step with image acquisition.


11. Exposure Time / Exposure Mode
Exposure time is the time during which light falls on the camera's sensor chip. In general, the longer the exposure time, the brighter the image. In externally triggered synchronous acquisition, the exposure time can either follow the line period or be set to a fixed value. Industrial cameras expose either line by line (line exposure, i.e. rolling shutter) or one whole frame at a time (frame exposure, all pixels exposed simultaneously). Line scan cameras expose line by line, and a fixed line frequency can be selected.


12. Smear (Motion Blur)
When shooting a moving object, if the exposure time does not match the speed of motion, the object is imaged repeatedly across pixels, producing a blurred trail.






13. Dynamic Range
The dynamic range of a camera indicates the range of light intensities it can detect, from the darkest to the brightest measurable signal. For a given camera, the dynamic range is a fixed value that does not change with external conditions. It can be expressed as a ratio, in dB, or in bits. The larger the dynamic range, the better the camera adapts to different light intensities, the richer the tonal levels it can represent, and the wider the color space it covers. In the linear response region, the dynamic range is defined as the ratio of the saturation exposure to the noise-equivalent exposure:
Dynamic range = full-well capacity of the photosensitive element / equivalent noise signal
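The ratio above is often quoted in dB (20·log10 of the ratio). A sketch with hypothetical sensor numbers:

```python
import math

def dynamic_range_db(full_well_e, noise_e):
    """Dynamic range in dB from full-well capacity and noise floor (in electrons)."""
    return 20 * math.log10(full_well_e / noise_e)

# Hypothetical sensor: 20,000 e- full well, 10 e- noise -> 2000:1, about 66 dB
print(f'{dynamic_range_db(20_000, 10):.1f} dB')
```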


 

14. Noise / SNR

Noise: the grainy component of an image produced by the photosensitive chip while receiving light and outputting the signal; it also refers to spurious pixels that should not appear in the image.
Signal-to-noise ratio (SNR): the ratio of the real image signal to the image noise in the camera system.
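One common way to estimate SNR from a nominally uniform image patch is the mean divided by the standard deviation, expressed in dB; a sketch with made-up pixel values:

```python
import math
import statistics

def snr_db(pixels):
    """Estimate SNR of a flat (uniform) image patch as mean/stddev, in dB."""
    mean = statistics.mean(pixels)
    std = statistics.stdev(pixels)
    return 20 * math.log10(mean / std)

# Hypothetical gray values from a nominally uniform patch
patch = [100, 102, 98, 100, 101, 99]
print(f'{snr_db(patch):.1f} dB')  # about 37 dB
```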



15. Spectral Response
Spectral response refers to the ability of the camera chip to respond to light of different wavelengths, usually represented by a spectral response curve: as shown in the figure, the x-axis is the wavelength of the incident light and the y-axis is the response. According to the response spectrum, cameras can be divided into visible-light cameras (400 nm-1000 nm, with a peak around 500 nm-600 nm), infrared cameras (above 700 nm), and ultraviolet cameras (200 nm-400 nm). The camera with the appropriate spectral response is chosen according to the application.





6、 Camera Standards
1. GenICam

GenICam (Generic Interface for Cameras) provides a universal programming interface for all kinds of cameras, making cameras from different brands interchangeable. Its purpose is to establish a single application programming interface (API) across the whole industry.






*The GenICam standard consists of several modules:
(1) GenTL (Generic Transport Layer) standardizes the transport layer programming interface. It covers camera enumeration, camera register access, streaming data, and asynchronous event delivery. Because GenTL is a relatively low-level interface, end users usually work through a software development kit rather than using GenTL directly. Its main purpose is to ensure that drivers and software development kits from different suppliers work together seamlessly.
(2) GenApi (Generic Application Programming Interface) defines the application-level interface for configuring the camera. A camera description file lists all of the camera's features (standard and custom) and defines their mapping to camera registers. The file format is XML-based and therefore human-readable. This file is usually stored in the camera firmware and is retrieved by the software development kit when the camera is first connected to the system.
(3) SFNC (Standard Features Naming Convention) standardizes the name, type, meaning, and use of the camera features in the camera description file, ensuring that cameras from different suppliers use the same name for the same function.
(4) GenCP (Generic Control Protocol) standardizes the packet layout of the control protocol and is used by interface standards to reuse parts of the control path. Members of the GenICam standards group maintain a reference implementation that can parse camera description files. The code is written in C++ and is free to use. It is highly portable and compatible with a range of operating systems and compilers. Most available software development kits use this reference implementation, which ensures a high degree of interoperability.


 

 


2. Hardware Transmission Interface
For digital cameras, the commonly used interfaces are GigE, USB, IEEE 1394 (1394a and 1394b), Camera Link, and CoaXPress. Different interfaces have different characteristics.
(1) GigE Interface
The GigE Vision standard is a widely used camera interface standard based on the Ethernet (IEEE 802.3) communication standard. It was released in May 2006 and revised in 2010 (version 1.2) and 2011 (version 2.0). GigE Vision supports multiple stream channels and enables fast, accurate image transmission over long distances using standard Ethernet cables. Hardware and software from different vendors can interoperate seamlessly over an Ethernet connection.
Gigabit Ethernet, as a mature high-speed Ethernet technology, offers ten times the transmission speed of 100 Mbit/s Fast Ethernet at a low price, making it a cost-effective data transmission solution. Like USB, the 10/100 Mbit/s Ethernet interface is a standard interface on PCs, and upgrading from a 10/100 Mbit/s network to Gigabit Ethernet requires no changes to network applications, components, or the operating system, which maximizes compatibility and saves cost. The GigE interface is therefore a widely used data transmission interface.
(2) USB Interface
The USB3 Vision standard was launched in late 2011, and version 1.0 was published in January 2013. Although the standard is relatively new, the machine vision industry is no stranger to USB technology: the simple plug-and-play installation and high performance of the USB interface have long attracted users. Experts from several companies established a standard that meets the varied needs of the machine vision industry. It supports existing USB host hardware and almost all operating systems, and images can be transferred from the camera directly into the user's buffer using hardware direct memory access (DMA). Because it adopts the camera control concept of the GenICam standard, end users can easily integrate USB3 Vision into existing systems.
(3) IEEE1394 Interface
The IEEE 1394 interface, also known as FireWire, is based on the serial standard developed by Apple in 1987. The maximum transmission speed of 1394a is 400 Mbps and that of 1394b is 800 Mbps. Digital devices are connected directly through the 1394 interface, giving true point-to-point transmission, and the interface supports hot swapping. In industrial applications, IEEE 1394 is a relatively stable data transmission interface, but it has gradually withdrawn from the mainstream market.
(4) Camera Link Interface
The Camera Link standard is formulated, revised, and released by the AIA. Camera Link was developed from Channel Link technology. It standardizes the interface between the digital camera and the image acquisition card, with unified physical connector and cable definitions, so any Camera Link camera and frame grabber can be physically interconnected. Camera Link is mainly used for high-speed data transmission and is most common on line scan cameras; compared with other interface types, it is more expensive.
The Camera Link standard was first released in 2000. It is a robust and complete communication link: it standardizes the connection between the camera and the image acquisition card and defines a full interface covering data transmission, camera timing, serial communication, and real-time signaling to the camera. Camera Link is a non-packetized protocol and remains the simplest camera-to-frame-grabber interconnection standard. The current version 2.0 specification includes the Mini Camera Link connector, Power over Camera Link (PoCL), PoCL-Lite (a minimal PoCL interface supporting the Base configuration), and cable performance specifications.
(5) Coaxpress Interface
The CoaXPress (CXP) standard was released in December 2010. CoaXPress provides a high-speed interface between the camera and the frame grabber and supports long cable runs. In its simplest form, CoaXPress uses a single coaxial cable that transfers image data from the camera to the frame grabber at 6.25 Gbit/s, simultaneously carries control data from the frame grabber to the camera at 20.8 Mbit/s, and can supply the camera with up to 24 V of power. Link aggregation is supported, spreading the data over several coaxial cables. Version 1.1 added support for the smaller DIN 1.0/2.3 connector.
CoaXPress supports real-time triggering, including triggering of ultra-high-speed line scan cameras. Over the standard 20.8 Mbit/s uplink the trigger latency is about 3.4 microseconds (µs); with the high-speed uplink option the trigger latency is typically around 150 ns. CoaXPress can currently support the fastest cameras on the market with a large margin: six links in one connector can reach up to 3.6 GB/s.