ASU researchers debut ViWi-BT, an AI/computer vision mmWave beam guide

The cellular industry’s shift from long-distance radio signals to short-distance millimeter waves is one of the 5G era’s biggest changes, expected to continue with submillimeter waves over the next decade. To more precisely direct millimeter wave and future terahertz-frequency signals toward user devices, Arizona State University researchers have developed ViWi-BT, a vision-wireless framework that improves beam tracking using computer vision and deep learning.

Smartphones have historically operated much like other long-distance radios, scanning the airwaves for omnidirectional tower signals and tuning into whichever was strongest or closest. But in the 5G and 6G eras, networks of small cells will use beamforming antennas to aim their signals in specific directions toward discovered client devices, which may be weighing connections from multiple base stations at once. ViWi-BT’s goal is to use AI and a device’s camera or lidar hardware to identify physical obstacles and opportunities for the beam targeting process, enabling “vision-aided wireless communications.”
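To make the beam targeting idea concrete, here is a minimal numpy sketch of how a base station might sweep a fixed beam codebook and pick the strongest beam for a user. The codebook size, antenna count, and single-path channel model are illustrative assumptions, not details drawn from the ASU work.

```python
import numpy as np

def dft_codebook(n_antennas: int, n_beams: int) -> np.ndarray:
    """Build an illustrative DFT beam codebook: one steering vector per column."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_beams)
    k = np.arange(n_antennas)
    return np.exp(1j * np.pi * np.outer(k, np.sin(angles))) / np.sqrt(n_antennas)

def select_beam(channel: np.ndarray, codebook: np.ndarray) -> int:
    """Exhaustive beam sweep: return the index of the highest-power beam."""
    powers = np.abs(codebook.conj().T @ channel) ** 2
    return int(np.argmax(powers))

# Toy single-path channel arriving from a random direction (an assumption,
# not a channel model from the research).
rng = np.random.default_rng(0)
n_ant, n_beams = 64, 128
true_angle = rng.uniform(-np.pi / 2, np.pi / 2)
channel = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(true_angle))
print("strongest beam index:", select_beam(channel, dft_codebook(n_ant, n_beams)))
```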

In short, a system with ViWi-BT capabilities will learn about its 3D environment from a database of previously transmitted millimeter wave beams and visual images, then predict the optimal beams for future users moving through the same space. The framework is trained on visual and wireless signal information covering static elements (buildings, roads, and open sky), the common locations of moving impediments (vehicles and people), and generally open spaces. Based on that knowledge, the system can predict where it needs to send both direct line-of-sight beams and reflected non-line-of-sight beams, adjusting each based on live information about known conditions.
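The pairing of visual scenes with beam histories can be pictured as simple training records. The sketch below shows one hypothetical way such a dataset might be organized; the field names, history length, and prediction horizon are assumptions for illustration, not the actual ViWi-BT schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViWiSample:
    """One hypothetical training record pairing vision and wireless data.
    Field names are illustrative guesses, not the actual ViWi-BT schema."""
    image: np.ndarray     # camera frame of the scene around the base station
    beam_log: list[int]   # codebook indices of beams served to one user over time

def make_training_pairs(sample: ViWiSample, history: int = 8, horizon: int = 3):
    """Slice a logged beam sequence into (image, past beams, future beams)
    triples for supervised sequence prediction."""
    seq = sample.beam_log
    return [
        (sample.image, seq[t - history:t], seq[t:t + horizon])
        for t in range(history, len(seq) - horizon + 1)
    ]

# Example: a 16-step beam log yields 6 supervised pairs.
sample = ViWiSample(image=np.zeros((128, 128, 3)), beam_log=list(range(16)))
print(len(make_training_pairs(sample)))  # 6
```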

The researchers have developed simulations of the model’s physical environment, distilling highly detailed 3D objects into simpler approximations that the computer can use for calculations more efficiently, with “no major impact on the accuracy” of results. Each object is assigned a fixed or moving role in the simulation, along with its real-world electromagnetic properties at 28GHz millimeter wave frequencies, so that absorption, reflection, and diffraction can be taken into account.
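As a rough illustration of what those simplified objects might look like in code, the sketch below models a scene element with an assumed material permittivity and computes a first-order Fresnel reflection coefficient. The class layout and material values are placeholders, not the researchers’ simulation format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneObject:
    """Simplified scene element, in the spirit of the distilled 3D models
    described above. Fields and material values are placeholders."""
    name: str
    moving: bool             # fixed (building, road) vs. moving (vehicle, person)
    rel_permittivity: float  # assumed relative permittivity near 28GHz

    def normal_incidence_reflectance(self) -> float:
        """Fresnel power reflection coefficient for a wave hitting the surface
        head-on from air; a first-order stand-in for full ray tracing."""
        n = np.sqrt(self.rel_permittivity)
        return float(((1 - n) / (1 + n)) ** 2)

# Placeholder material values, loosely in the range reported for mmWave bands.
wall = SceneObject("concrete wall", moving=False, rel_permittivity=5.3)
window = SceneObject("glass window", moving=False, rel_permittivity=6.3)
for obj in (wall, window):
    print(f"{obj.name}: reflects ~{obj.normal_incidence_reflectance():.0%} of incident power")
```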

Predictions are made by a recurrent neural network (RNN) trained on beam sequences previously observed by base stations within the space. While the RNN does well at predicting a single beam’s future direction without computer vision assistance, it performs considerably worse when asked to predict three or five beams ahead, and deeper training doesn’t close the gap. Adding properly trained computer vision to the mix, ASU’s researchers say, would enable the system to identify possible future impediments, reflective surfaces, and users’ motion patterns within those spaces.
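A minimal PyTorch sketch of this kind of recurrent beam predictor appears below: an embedding over discrete beam indices feeds a GRU, whose final state is mapped to scores over the codebook. The specific layer types and sizes are assumptions, since the article doesn’t detail the architecture.

```python
import torch
import torch.nn as nn

class BeamRNN(nn.Module):
    """Recurrent next-beam predictor over a discrete beam codebook.
    Layer choices (embedding + GRU and their sizes) are assumptions."""
    def __init__(self, n_beams: int = 128, emb: int = 64, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(n_beams, emb)  # beam index -> vector
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_beams)   # scores over the codebook

    def forward(self, beam_history: torch.Tensor) -> torch.Tensor:
        """beam_history: (batch, seq_len) of past beam indices.
        Returns (batch, n_beams) logits for the next beam."""
        x = self.embed(beam_history)
        _, h = self.rnn(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1])

# Toy usage: predict the next beam for four users from 8-step histories.
model = BeamRNN()
histories = torch.randint(0, 128, (4, 8))
next_beams = model(histories).argmax(dim=-1)
print(next_beams.shape)  # torch.Size([4])
```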

Though the research is still in its early stages, it’s likely to become increasingly important for bolstering performance as millimeter wave and submillimeter wave systems become necessary for ultra-low latency communications. At a minimum, it might pave the way for base stations with their own camera hardware, a development that could transform modern-day surveillance into actionable intelligence that improves wireless communications.


Author: Jeremy Horwitz.
Source: VentureBeat
