The Edge-Intelligence Future Touched by a Rudimentary AI Blind Cane



Anyone who spends time online probably knows that public transport in many cities has gradually begun to accept guide dogs. One widely circulated figure is that, as of 2017, there were only 116 guide dogs in China, rarer than giant pandas, while roughly 8 million blind people could use their services. The difficulty and expense at every stage, from breeding and training to apprenticeship and placement, make the guide dog a "luxury" for the blind: each one costs between 120,000 and 150,000 yuan and lives only a dozen or so years. Even for those lucky enough to have one, how are they supposed to get around once the dog retires? Long after the hot search fades, this remains a question a civilized society should keep thinking about and improving on.

Recently, Kürşat Ceylan, a blind man in Turkey, built an AI blind cane based on Arm's latest processor and NPU. It may open a window for many more visually impaired people, and it caught our attention. So what technical conditions does AI need to meet before it can help the visually impaired take part in public life safely over the long term?

The AI blind cane: quite difficult to compare with a guide dog

To judge whether AI plus a cane can really help the blind travel smoothly, we might as well walk through the key working abilities of a guide dog. First, a guide dog needs to identify obstacles accurately.

That means not only avoiding potholes, cars, pedestrians, railings and the like on the road, but also recognizing key traffic information such as traffic lights, so that the user can travel smoothly. Anyone familiar with AI knows that, with machine vision plus cameras and sensors, detecting environmental obstacles is not hard. Ceylan therefore built map navigation, obstacle-detection algorithms, LED warning lights, a microphone and more into the AI cane; through its ultrasonic detector, it can successfully detect obstacles 160 cm high.
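As a rough illustration of how such an ultrasonic check works (a minimal sketch, not Ceylan's actual firmware; the warning threshold and the echo timings are made-up values), the cane essentially turns an echo's round-trip time into a distance and warns when that distance drops below a safety margin:

```python
# Minimal sketch of an ultrasonic obstacle check (illustrative only).
# A real cane would read echo timings from a sensor such as an HC-SR04;
# here the timing is passed in so the logic stays self-contained.

SPEED_OF_SOUND_CM_PER_S = 34300  # approximate speed of sound in air

def echo_to_distance_cm(echo_seconds: float) -> float:
    """Convert a round-trip echo time into a one-way distance in cm."""
    return (echo_seconds * SPEED_OF_SOUND_CM_PER_S) / 2

def should_warn(echo_seconds: float, warn_below_cm: float = 100.0) -> bool:
    """Trigger a warning (vibration / voice prompt) when an obstacle is close."""
    return echo_to_distance_cm(echo_seconds) < warn_below_cm

if __name__ == "__main__":
    # A 5 ms round trip corresponds to roughly 86 cm, so the cane warns.
    for echo in (0.005, 0.02):
        d = echo_to_distance_cm(echo)
        print(f"echo={echo*1000:.1f} ms  distance={d:.0f} cm  warn={should_warn(echo)}")
```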

At the same time, a guide dog needs to lead the blind person safely around those obstacles. A dog on duty wears a small harness with a rigid handle and guides its owner to walk or stop at the right moments. Beyond that, guide dogs make their own judgments from real-time information and sometimes practice "intelligent disobedience": if an order to move forward proves unsafe, they will refuse even when the owner insists. A blind cane is different; the initiative rests entirely with the user. Even if a voice assistant plus an AI inference chip can issue independent safety warnings, such "eyes" can hardly restrain the owner's movements, so a certain safety risk remains. And if someone were injured because the equipment failed, society has not yet worked out the chain of liability and the ethical questions that would follow.

More importantly, a guide dog also needs to blend into the blind person's life. After living with its owner for a while, a guide dog becomes very familiar with the owner's routine: commuting routes, behavioral habits, the supermarkets visited most often, the friends met regularly, and so on. AI can reproduce this kind of personalized memory through deep learning with neural networks. The catch is that training such models consumes a great deal of computing power, which means the cane's algorithms can only be trained by uploading data to the cloud, and that inevitably introduces latency and information-privacy concerns.
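As a toy illustration (not the cane's real software; the class and data layout below are assumptions), the lighter part of that "personalized memory" could in principle stay on the device, with only heavyweight model training pushed to the cloud:

```python
# Illustrative sketch: a tiny on-device "habit memory" that counts where the
# user tends to go at a given weekday and hour, so routine prompts need no
# cloud round trip. Heavier model training would still happen off-device.
from collections import Counter
from datetime import datetime
from typing import Optional

class HabitMemory:
    def __init__(self) -> None:
        # (weekday, hour, place) -> visit count, kept locally for privacy
        self.visits = Counter()

    def record(self, place: str, when: datetime) -> None:
        self.visits[(when.weekday(), when.hour, place)] += 1

    def likely_destination(self, when: datetime) -> Optional[str]:
        """Most frequent place seen at this weekday and hour, if any."""
        candidates = {
            place: count
            for (wd, hr, place), count in self.visits.items()
            if wd == when.weekday() and hr == when.hour
        }
        return max(candidates, key=candidates.get) if candidates else None

memory = HabitMemory()
memory.record("supermarket", datetime(2020, 11, 2, 18))   # a Monday, 6 pm
memory.record("supermarket", datetime(2020, 11, 9, 18))   # the next Monday
print(memory.likely_destination(datetime(2020, 11, 16, 18)))  # supermarket
```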

As for the special emotional bond and trust a guide dog builds with its owner, and the way it helps expand the owner's social life, the AI cane obviously cannot compete until super artificial intelligence becomes a reality. Overall, the AI cane can already handle functions such as navigation and obstacle avoidance at the audio-visual level, but it still falls short of a guide dog in judgment, reasoning and emotion. Limited, relatively safe environments such as office buildings may be the first scenes where the AI cane proves its value.

This raises a new question: why hasn't edge intelligence, which claims to rescue AIoT, changed our lives as scheduled?

Since its introduction, edge computing has been seen as an excellent complement to 5G, AI and cloud computing. If cloud computing is the "ultimate brain" of the connected world, edge computing is the vast network of "nerve endings" that handles its many "subconscious" reactions. The AI cane is an excellent scene for edge computing: it needs to interact and judge in real time, for example deciding on its own that it is safe to cross when the light turns green, rather than uploading the traffic-light image to the cloud and issuing a walking reminder only after layers of server-side judgment. That greatly reduces the travel risk caused by latency and also relieves the load on cloud computing.
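A toy sketch of the difference, with assumed (not measured) latencies and a stand-in classifier, shows why the local path matters for something as time-critical as crossing a street:

```python
# Toy comparison of a local edge decision vs. a cloud round trip for the
# "green light -> walk" judgment. Latencies are illustrative, not measured.
import time

CLOUD_ROUND_TRIP_S = 0.30   # assumed upload + queueing + inference + reply time
LOCAL_INFERENCE_S = 0.02    # assumed on-device NPU inference time

def classify_light(frame) -> str:
    """Stand-in for a real traffic-light classifier."""
    return "green" if frame.get("dominant_color") == "green" else "red"

def decide_locally(frame) -> str:
    time.sleep(LOCAL_INFERENCE_S)      # inference happens on the cane itself
    return "walk" if classify_light(frame) == "green" else "wait"

def decide_via_cloud(frame) -> str:
    time.sleep(CLOUD_ROUND_TRIP_S)     # frame uploaded, result sent back
    return "walk" if classify_light(frame) == "green" else "wait"

frame = {"dominant_color": "green"}
for decide in (decide_locally, decide_via_cloud):
    start = time.perf_counter()
    action = decide(frame)
    print(f"{decide.__name__}: {action} in {time.perf_counter() - start:.2f}s")
```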

Edge computing, which lets the "cloud brain" take it easy, can also help the industry resolve three contradictions on the road to pervasive AIoT intelligence. The first is the contradiction between computing power and cost. To meet the real-time and availability requirements of on-device AI inference, a large amount of data has to be processed locally: either every terminal carries its own high-performance AI chip, which is clearly unrealistic from a cost-control perspective, or sufficient edge AI computing power is deployed in the physical scenario itself. Of course, meeting the computing needs of massive AIoT connections also requires upgrading the network pipeline, for instance by building 5G edge data centers, training high-performance algorithms, and competing for computing resources such as NPUs and GPUs.

The second is the contradiction between real-time response and power consumption. A device like a guide cane must not only respond in real time but also run complex AI tasks such as object detection, voice recognition, gesture monitoring and even face recognition, on top of wide-range sensing, which directly drives power consumption up: the battery lasts only five hours. In other words, a blind person who goes out in the morning may come home at night with a dead device. Edge computing can filter and analyze the huge data flow at the terminal and shorten the transmission path from device to cloud, which naturally eases the power-consumption problem.
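A minimal sketch of that filtering idea, with an invented threshold and event format: only readings that indicate a nearby obstacle ever leave the device, so the radio, one of the biggest power consumers, stays idle most of the time.

```python
# Sketch of edge-side filtering: raw distance readings are screened locally,
# and only meaningful events are queued for upload, keeping the radio idle.
# The threshold and the event format are illustrative assumptions.

OBSTACLE_THRESHOLD_CM = 120  # only obstacles closer than this count as events

def filter_readings(distances_cm):
    """Keep only readings that indicate a nearby obstacle."""
    return [d for d in distances_cm if d < OBSTACLE_THRESHOLD_CM]

raw = [250, 240, 118, 95, 230, 260, 80, 300]   # one reading per sensor tick
events = filter_readings(raw)

print(f"{len(raw)} raw readings -> {len(events)} uploaded events: {events}")
# 8 raw readings -> 3 uploaded events: [118, 95, 80]
```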

The third is the contradiction between convenience and security. Everyone knows that cooperating IoT devices can make life far more convenient, but in an era when smart door locks and cameras are frequently targeted by hackers, data is easily exploited by people with ulterior motives. Many companies even require that AI be deployed on their own private clouds, which limits the use of many cutting-edge technologies and makes operation and maintenance harder. By processing and filtering sensitive data close to where it is produced, instead of shipping everything to a public cloud, edge computing can narrow that attack surface while keeping advanced AI within reach.
