Artificial intelligence has made incredible progress, but it often requires absurd amounts of data and computing power to get there. Now, some AI researchers are focusing on making the technology as efficient as possible.
Last week, researchers showed that it is possible to squeeze a powerful AI vision algorithm onto a simple, low-power computer chip that can run for months on battery power. The trick could help bring more advanced artificial intelligence capabilities, such as image and voice recognition, to home appliances and wearable devices, as well as medical gadgets and industrial sensors. It could also help keep data private and secure by reducing the need to send anything to the cloud.
“This result is very exciting for us,” says Song Han, an assistant professor at MIT who is leading the project. While the work is a lab experiment for now, it “can quickly transition to real-world devices,” Han says.
Microcontrollers are relatively simple, inexpensive, low-power computer chips found in billions of products, including car engines, power tools, TV remote controls, and medical implants.
The researchers essentially devised a way to shrink deep-learning algorithms, the large neural network programs that loosely mimic the way neurons connect and fire in the brain. Over the past decade, deep learning has driven huge advances in AI, and it is the foundation of the current AI boom.
Deep-learning algorithms generally run on specialized computer chips that divide up the parallel calculations needed to train and run a network more efficiently. Training the language model known as GPT-3, which can generate convincing text when given a prompt, took the equivalent of cutting-edge AI chips running at full tilt for 355 years. Such demands have driven up sales of GPUs, chips well suited to deep learning, as well as a growing number of AI-specific chips for smartphones and other gadgets.
There are two parts to the new approach. First, the researchers use an algorithm to explore possible neural network architectures, looking for one that fits the computational constraints of the microcontroller. The second part is a compact, memory-efficient software library for running the network. The library is designed in concert with the network architecture to eliminate redundancy and account for the microcontroller's limited memory. “What we do is like finding a needle in a haystack,” Han says.
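To give a rough flavor of that first step, the sketch below enumerates a handful of hypothetical convolutional network configurations, estimates each one's memory footprint with a crude cost model, and discards any candidate that would not fit an assumed microcontroller budget. The budget figures, the cost model, and the proxy_accuracy stand-in are illustrative assumptions, not details of the researchers' actual method.

```python
# Minimal sketch of a resource-constrained architecture search.
# All numbers and helper functions here are hypothetical placeholders.
from itertools import product

MCU_SRAM_BYTES = 320 * 1024    # assumed on-chip RAM budget for activations
MCU_FLASH_BYTES = 1024 * 1024  # assumed storage budget for weights

def estimate_footprint(width, depth, resolution):
    """Very rough cost model: weight storage and peak activation memory (int8)."""
    weight_bytes = depth * (width * width * 9)          # 3x3 conv kernels per layer
    activation_bytes = resolution * resolution * width  # largest feature map
    return weight_bytes, activation_bytes

def proxy_accuracy(width, depth, resolution):
    """Stand-in for training/evaluating each candidate (the expensive part in practice)."""
    return 0.5 + 0.001 * width + 0.002 * depth + 0.0005 * resolution

best = None
for width, depth, resolution in product([16, 24, 32, 48], [6, 9, 12], [96, 128, 160]):
    flash, sram = estimate_footprint(width, depth, resolution)
    if flash > MCU_FLASH_BYTES or sram > MCU_SRAM_BYTES:
        continue  # candidate does not fit the chip; discard it
    acc = proxy_accuracy(width, depth, resolution)
    if best is None or acc > best[0]:
        best = (acc, width, depth, resolution)

print("best feasible candidate (accuracy, width, depth, resolution):", best)
```

In the real system, the search and the inference library are co-designed, so the cost model would reflect how the library actually schedules computation on the chip rather than a back-of-the-envelope estimate like the one above.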
The researchers created a computer vision algorithm able to identify 1,000 types of objects in images with 70% accuracy; the best previous low-power algorithms achieved only around 54%. It also required just 21% of the memory and cut latency by 67% compared with existing methods. The team showed similar performance for a deep-learning algorithm that listens for a particular “wake word” in an audio stream. Han says further improvements should be possible by refining the methods used.
“It is indeed quite impressive,” says Jae-sun Seo, an associate professor at Arizona State University who works on resource-limited machine learning.
“Commercial applications could include smart glasses, augmented reality devices that constantly perform object detection,” says Seo. “And edge devices with voice recognition on the device without connecting to the cloud.”
John Cohn, a researcher at the MIT-IBM Watson AI Lab and part of the team behind the work, says some IBM customers are interested in using the technology. He says an obvious use would be in sensors designed to predict problems with industrial machinery. Currently, these sensors need to be wirelessly networked so that the computation can be done remotely, on a more powerful system.
Another important application could be in medical devices. Han says he started working with colleagues at MIT on devices that use machine learning to continuously monitor blood pressure.