Researchers Develop AI-Powered Backpack to Help Visually Impaired Navigate Alone
March 29, 2021
by Youpal Guest Writer

Artificial intelligence (AI) developer Jagadish K. Mahendran and his team at the Institute for Artificial Intelligence, University of Georgia, have designed an AI-powered backpack that can help the visually impaired navigate and perceive the world around them.

The voice-activated backpack can help detect common challenges such as traffic signs, hanging obstacles, crosswalks, moving objects and changing elevations, Intel said in a statement which also features a video demonstration of the system.

The backpack carries a host computing unit (such as a Raspberry Pi, Chromebook, or laptop) that connects to a vest jacket concealing a camera, while a fanny pack holds a power unit capable of providing approximately eight hours of use. Luxonis OAK-D sensors, powered by the open-source DepthAI repositories, are affixed to the vest, which is then connected to the computing unit in the backpack. Three tiny holes in the vest offer viewports for the OAK-D unit, which is attached to the inside.

The OAK-D unit can simultaneously run advanced neural networks while providing depth information from two stereo cameras and colour information from a single 4K camera. It contains an on-chip edge AI processor compatible with OpenVINO, Intel’s AI and computer vision toolkit.
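The stereo pair on the OAK-D recovers depth using the classic pinhole-stereo relation: depth is proportional to the camera's focal length times the baseline between the two lenses, divided by the pixel disparity between matched points. A minimal sketch of that relation (the focal length and baseline figures below are illustrative values, not the OAK-D's actual calibration):

```python
def disparity_to_depth_m(disparity_px, focal_px, baseline_m):
    """Convert a stereo disparity (pixels) to depth (metres).

    Pinhole stereo relation: depth = focal_length * baseline / disparity.
    The calibration numbers used in the example below are assumptions
    for illustration only.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


# Example: with an assumed 800 px focal length and a 7.5 cm baseline,
# a 40-pixel disparity corresponds to 1.5 m.
print(disparity_to_depth_m(40, focal_px=800, baseline_m=0.075))
```

Note how nearby objects produce large disparities and hence small depth values, which is why stereo depth is most precise exactly where obstacle warnings matter most: at close range.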

A USB-enabled GPS is mounted on top of the backpack and connects to the host computing unit, while the user interacts with the system through a Bluetooth-enabled earphone, issuing voice queries and commands to which the system responds with verbal information. As the user moves through their surroundings, the system audibly conveys information about common obstacles, including signs, tree branches and pedestrians, and warns of upcoming crosswalks, curbs, staircases and entryways.
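The core of that feedback loop is a simple decision step: detections with estimated distances come in, and only the ones close enough to matter are turned into short spoken phrases. A toy sketch of the idea (the labels, threshold and phrase format are assumptions, not taken from the actual system):

```python
def announce(detections, warn_distance_m=2.0):
    """Turn (label, distance_m, bearing) detections into spoken phrases.

    Stands in for the backpack's audio-feedback step: only obstacles
    closer than warn_distance_m are announced. The 2 m threshold and
    the phrase wording are illustrative assumptions.
    """
    phrases = []
    for label, distance_m, bearing in detections:
        if distance_m <= warn_distance_m:
            phrases.append(f"{label}, {distance_m:.1f} metres, {bearing}")
    return phrases


# A distant pedestrian is ignored; nearby obstacles are spoken.
print(announce([("crosswalk", 1.5, "ahead"),
                ("tree branch", 0.8, "left"),
                ("pedestrian", 6.0, "right")]))
```

Filtering by distance before speaking keeps the audio channel uncluttered, which matters for a user who relies on hearing for situational awareness.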

It essentially takes away the need for other assistance, such as a cane or guide dog.

According to the release, the system uses Intel’s OpenVINO toolkit for inference. Custom models were trained on GPUs and then converted to OpenVINO’s format for inference; a few pretrained TensorFlow Lite models were also used.

The project is non-commercial, and its code, models and datasets will be open-sourced as contributors join. In addition, the complete project will be published as a research paper in the near future, Intel said.

When Mahendran began developing the system, visual assistance for navigation ranged from GPS-based, voice-assisted smartphone apps to camera-enabled smart walking sticks. None of these was designed to capture the visual scene accurately enough to give visually impaired users the detailed picture of their surroundings needed for independent navigation.

“Last year when I met up with a visually impaired friend, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help. This motivated me to build the visual assistance system with OpenCV’s Artificial Intelligence Kit with Depth (OAK-D), powered by Intel,” Mahendran was quoted as saying in a statement from Intel.

Mahendran recently won the grand prize at the Intel-sponsored OpenCV Spatial AI 2020 Competition, the world’s largest spatial AI competition.
