Today, our lives depend heavily on sensors. As an extension of the human "five senses," sensors perceive the world, and can even observe details that the human body cannot. This ability is also essential for the intelligent society of the future.

However, no matter how good a single sensor is, it still cannot meet requirements in many scenarios. For example, an expensive automotive lidar can determine from its point cloud that there is an obstacle ahead, but to know exactly what that obstacle is, the on-board camera has to "take a look"; and to sense how the object is moving, millimeter-wave radar may also need to help out.

This process is like the familiar parable of the blind men and the elephant. Based on its own characteristics and strengths, each sensor can only see one aspect of the measured object. Only by integrating all of this characteristic information can a complete and accurate picture be formed. This method of combining multiple sensors is called "sensor fusion."

Avnet: Say goodbye to the blind men and the elephant; sensor fusion is standard equipment for an intelligent society

A more rigorous definition of sensor fusion is: the use of computer technology to automatically analyze and synthesize information and data from multiple sensors or sources, under certain criteria, to complete the decision-making and estimation that the information-processing task requires. The sensors serving as data sources may be of the same type (homogeneous) or different types (heterogeneous), but they are not simply stacked together; they must be deeply integrated at the data level.

In fact, examples of sensor fusion are common in our daily lives. Broadly speaking, sensor fusion serves three main purposes:

· Gain a global view. A single sensor may have only one function or insufficient performance, but combined, sensors can accomplish higher-level work. For example, the familiar 9-axis MEMS motion sensor unit is actually a combination of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis electronic compass (geomagnetic sensor). Only through such sensor fusion can accurate motion-sensing data be obtained, providing users with a realistic, immersive experience in high-end VR and other applications.

· Refine detection granularity. For example, in geolocation, GPS and other satellite positioning technologies have an accuracy of around ten meters and cannot be used indoors. If local positioning technologies such as Wi-Fi, Bluetooth, and UWB are fused in, or a MEMS inertial unit is added, the accuracy of positioning and motion monitoring for indoor objects can be improved by orders of magnitude.

· Provide safety redundancy. Autonomous driving is the most typical example: the information obtained by the various on-board sensors must back up and corroborate one another to be truly safe. When autonomous driving rises to L3 and above, millimeter-wave radar is introduced alongside the on-board camera; for L4 and L5, lidar is basically standard equipment, and fusing in data collected through the V2X vehicle network may even be considered.
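The 9-axis motion unit mentioned above is a good place to see what fusion actually buys you. A classic lightweight technique (not specific to any particular product) is the complementary filter: the gyroscope tracks fast motion but drifts over time, while the accelerometer senses the gravity direction without drift but with noise; blending the two yields a stable tilt estimate. The sketch below is a minimal illustration with invented readings and a made-up blend weight:

```python
import math

def complementary_filter(pitch_prev, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyro and accelerometer readings into one pitch estimate (degrees).

    The gyro term integrates smoothly but drifts; the accelerometer term is
    noisy but drift-free. Blending with weight `alpha` keeps the best of each.
    """
    pitch_gyro = pitch_prev + gyro_rate * dt                   # short-term: integrate angular rate
    pitch_accel = math.degrees(math.atan2(accel_x, accel_z))   # long-term: gravity direction
    return alpha * pitch_gyro + (1 - alpha) * pitch_accel

# Simulated situation: the device is actually tilted 10 degrees, the gyro
# reports a small spurious drift rate, and we filter 50 samples at 100 Hz.
pitch = 0.0
for _ in range(50):
    pitch = complementary_filter(
        pitch,
        gyro_rate=0.1,                        # deg/s of drift
        accel_x=math.sin(math.radians(10)),   # gravity seen at 10 degrees
        accel_z=math.cos(math.radians(10)),
        dt=0.01,
    )
print(round(pitch, 2))  # converging toward the true 10-degree tilt
```

The same idea scales up: production IMU fusion typically uses a Kalman filter instead of a fixed blend weight, but the division of labor between the sensors is identical.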

In short, sensor-fusion technology acts like a "coach," molding sensors with different strengths into a team that works together, complements one another, and wins the game.

Once the sensors to be fused have been selected, how to fuse them is the next issue to consider. Sensor-fusion architectures fall into three types according to how the fusion is performed:

· Centralized: In centralized fusion, the raw data from each sensor is sent directly to the central processor for fusion processing. The advantages are high precision and algorithmic flexibility; however, the large volume of data places heavy demands on the central processor's computing power, and the latency of transmitting that data must also be considered, making this architecture difficult to implement.

· Distributed: In a distributed architecture, each sensor's raw data is processed close to the sensor, and only the results are sent to the central processor for fusion into a final answer. This approach needs little communication bandwidth and offers fast computation and good reliability. However, because the raw data is filtered and pre-processed locally, some information is lost, so in principle the final accuracy is not as high as with centralized fusion.

· Hybrid: As the name suggests, this combines the two approaches: some sensors feed raw data to centralized fusion while others fuse locally. Because it retains the advantages of both, a hybrid framework is highly adaptable and stable, but the overall system structure is more complicated, incurring extra cost in data communication and computation.
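The centralized/distributed trade-off can be made concrete with a toy example. In the sketch below (sensor names and readings are invented), the centralized path ships every raw sample to the center, while the distributed path ships only a per-node summary; here the fused results agree because the local summary is lossless, but once nodes filter or discard data the two paths diverge, which is the information loss the distributed architecture accepts in exchange for bandwidth:

```python
# Toy range readings from three hypothetical sensors (illustrative numbers).
raw = {
    "radar":  [10.2, 10.4, 10.3],
    "lidar":  [10.25, 10.35],
    "camera": [10.5, 10.1, 10.3, 10.2],
}

# Centralized: every raw sample travels to the central processor,
# which fuses them all at once (here, a simple mean).
all_samples = [x for readings in raw.values() for x in readings]
centralized = sum(all_samples) / len(all_samples)

# Distributed: each node sends only a compact summary (local mean, count);
# the central processor fuses the summaries, weighting by sample count.
summaries = [(sum(r) / len(r), len(r)) for r in raw.values()]
distributed = sum(m * n for m, n in summaries) / sum(n for _, n in summaries)

print(centralized, distributed)  # equal here; lossy local filtering would separate them
```

Note how the distributed path moved 3 numbers per node instead of the full sample streams, which is exactly the bandwidth saving the text describes.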

Sensor-fusion schemes can also be classified by the stage of data processing at which fusion occurs. Generally speaking, data processing goes through three levels: data acquisition, feature extraction, and identification/decision-making. Performing information fusion at different levels involves different strategies, suits different application scenarios, and produces different results.

According to this idea, sensor fusion can be divided into data-level fusion, feature-level fusion and decision-level fusion.

· Data-level fusion: The data collected by multiple sensors is fused directly. However, data-level fusion can only handle data collected by the same type of sensor; it cannot process heterogeneous data collected by different kinds of sensors.

· Feature-level fusion: Feature vectors that reflect the attributes of the monitored object are extracted from the sensor data, and fusion performed on these features is called feature-level fusion. This approach is feasible because a subset of key feature information can stand in for the full data set.

· Decision-level fusion: Building on feature extraction, discrimination, classification, and simple logical operations are performed to make identification judgments; on this basis, information fusion is completed according to application requirements and higher-level decisions are made. This is decision-level fusion, and it is generally application-oriented.
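The three levels can be lined up in one small sketch. Everything below is invented for illustration (an obstacle-detection scenario with made-up thresholds): data-level fusion averages raw samples from two identical range sensors, feature-level fusion compresses the fused stream into a small feature vector, and decision-level fusion combines per-sensor verdicts by majority vote:

```python
def data_level(samples_a, samples_b):
    """Homogeneous raw samples (two identical range sensors) merged directly."""
    return [(a + b) / 2 for a, b in zip(samples_a, samples_b)]

def feature_level(ranges):
    """Extract a compact feature vector (closest distance, closing amount)
    that stands in for the full raw data."""
    return {"min_range": min(ranges), "closing": ranges[0] - ranges[-1]}

def decision_level(votes):
    """Each sensor makes its own obstacle/no-obstacle call; fuse by majority."""
    return sum(votes) > len(votes) / 2

# Data level: fuse two synchronized range streams sample-by-sample.
fused = data_level([10.0, 9.5, 9.0], [10.2, 9.7, 8.8])
# Feature level: reduce the fused stream to a few descriptive numbers.
features = feature_level(fused)
# Decision level: combine this sensor's verdict with two other (invented) ones.
decision = decision_level([features["min_range"] < 9.5, True, False])
print(features, decision)
```

Moving up the levels trades raw detail for compactness: the decision-level voter never sees a single range sample, only three booleans.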

There are no fixed rules for choosing a sensor-fusion strategy and architecture; the choice depends on the specific application, and factors such as computing power, communication, safety, and cost must also be weighed to reach the right decision.

Whichever architecture is used, you may find that sensor fusion is largely a software job, with the main focus and difficulty in the algorithms. Developing efficient algorithms tailored to the actual application has therefore become the top priority of sensor-fusion development.

On the algorithm side, the introduction of artificial intelligence is a clear development trend for sensor fusion. Artificial neural networks can imitate the judgment and decision-making process of the human brain and can keep learning and evolving, which undoubtedly accelerates the development of sensor fusion.
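To make the neural-network idea tangible at the smallest possible scale, the sketch below trains a single logistic neuron to fuse two confidence scores (a hypothetical camera and radar) into one obstacle probability. The training pairs, learning rate, and epoch count are all invented; a real system would use a proper framework and far richer inputs, but the principle of learning fusion weights from data rather than hand-tuning them is the same:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Invented training pairs: (camera_conf, radar_conf) -> obstacle present?
data = [((0.9, 0.8), 1), ((0.1, 0.2), 0), ((0.8, 0.9), 1), ((0.2, 0.1), 0)]

# A single neuron: two learned weights plus a bias, trained by
# plain gradient descent on the logistic loss.
w = [0.0, 0.0]
b = 0.0
for _ in range(1000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y                 # gradient of logistic loss w.r.t. the pre-activation
        w[0] -= 0.5 * err * x1
        w[1] -= 0.5 * err * x2
        b -= 0.5 * err

# A strongly corroborated detection should now score near 1.
print(round(sigmoid(w[0] * 0.85 + w[1] * 0.9 + b), 2))
```

The learned weights play the role of the hand-chosen blend factors in classical filters; a deeper network simply learns more of the fusion pipeline the same way.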

Although software is critical, hardware also has opportunities to show its strengths in sensor fusion. For example, if all the fusion algorithms run on the main processor, its load becomes very heavy. A popular approach in recent years is therefore to introduce a sensor hub, which processes sensor data independently, outside the main processor and without its involvement. This reduces the main processor's load on the one hand, and on the other lowers system power consumption by shortening the main processor's active time, which is essential in power-sensitive applications such as wearables and the Internet of Things.
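The power-saving logic of a sensor hub can be sketched in a few lines. In this toy model (threshold, sample stream, and class names are all invented), the hub pre-filters a stream of motion samples and invokes the main processor's handler only when something crosses a wake threshold, so the expensive processor sleeps through the quiet samples:

```python
class SensorHub:
    """Toy model of a sensor hub that gates access to the main processor."""

    def __init__(self, wake_threshold=1.5):
        self.wake_threshold = wake_threshold
        self.wakeups = 0  # how many times the main CPU had to wake

    def process(self, sample, main_cpu_handler):
        # Cheap pre-processing runs here on the hub; the main CPU stays
        # asleep unless the motion magnitude crosses the threshold.
        if abs(sample) > self.wake_threshold:
            self.wakeups += 1
            main_cpu_handler(sample)

# Simulated accelerometer magnitudes: mostly idle, two real motion events.
events = []
hub = SensorHub()
for s in [0.1, 0.2, 2.0, 0.3, 1.8, 0.1]:
    hub.process(s, events.append)

print(hub.wakeups, events)  # the main CPU woke only for the two large samples
```

Real hubs run richer always-on processing (step counting, gesture detection, low-rate fusion), but the payoff is the same: six samples arrived and the main processor handled only two.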

According to market research data, demand for sensor fusion systems will grow from US$2.62 billion in 2017 to US$7.58 billion in 2023, a compound annual growth rate of about 19.4%. Two clear trends can be predicted for the future development of sensor-fusion technology and applications:

Driven by autonomous driving, the automotive market will be the most important arena for sensor-fusion technology, and more new technologies and solutions will be born there.

In addition, application diversification will accelerate. Beyond the high-performance, safety-critical applications of the past, sensor-fusion technology in consumer electronics will find enormous room for development.

In short, sensor fusion gives us a more effective way to gain insight into the world, keeping us clear of the predicament of the blind men and the elephant, and letting us build a smarter future on that insight.
