NTT demonstrates low-latency AI video analysis using remote GPUs over IOWN APN

Thursday, March 5, 2026 at 06:40 AM

NTT has demonstrated low-latency AI video analysis using remote GPUs connected via its IOWN All-Photonics Network (APN). The setup enables high-speed, real-time data processing suited to remote robot control and industrial AI applications.

Context

In March 2026, NTT and NTT DOCOMO demonstrated a low-latency AI video analysis system that leverages remote GPU resources connected via the IOWN All-Photonics Network (APN). By implementing In-Network Computing Edge technology, the companies established a method to control AI inference directly from the 5G core network. This architectural shift allows high-volume video data from simplified devices to be processed by distant, high-performance servers with minimal delay, meeting the strict latency requirements of 6G-era remote robot control.

The demonstration is a milestone in the IOWN roadmap, which targets a 100x reduction in power consumption and a 200x reduction in end-to-end latency by 2030. By centralizing GPU power in data centers while maintaining real-time responsiveness, NTT addresses the physical space and energy constraints of localized AI hardware.

The approach enables the deployment of lightweight, autonomous robots and wearable XR devices that lack onboard processing power, positioning IOWN APN as a foundational infrastructure for the next generation of industrial AI and the global semiconductor supply chain.

Related Companies

NTT (9432)