Telecoms Turn to AI-Native Networks for 5G and 6G Readiness

According to industry analyses, the telecom industry's evolution toward AI-native network architectures is accelerating as carriers and vendors seek to deliver dynamic network slicing, real-time optimization, and flexible resource allocation.
What AI-Native Infrastructure Actually Means
AI-native telecom infrastructure goes beyond traditional virtualization or cloud-native network functions. Instead, it integrates machine learning, real-time analytics, and automated orchestration directly into core network components. Under this model:
The data plane is optimized for AI-driven decision-making, enabling faster routing, dynamic traffic management, and latency-sensitive load balancing.
The radio access network (RAN) leverages intelligent layers that can adapt in near-real time to changing conditions—traffic load, interference, or user mobility—without human intervention.
Distributed edge and cloud compute resources, often in partnership with hyperscalers or AI-chip producers, support both network functions and emerging AI workloads (e.g., analytics, IoT, AR/VR).
Industry observers note that this helps operators support complex use cases such as network slicing, real-time analytics, and AI services delivered over 5G or future 6G networks.
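The adaptive behavior described above can be illustrated with a minimal sketch of a closed control loop: observe cell conditions, decide on actions, apply them without human intervention. All names, fields, and thresholds here are hypothetical; a real AI-native RAN would learn its policy rather than use fixed rules.

```python
from dataclasses import dataclass

@dataclass
class CellMetrics:
    # Snapshot of near-real-time conditions for one cell (hypothetical fields).
    load: float           # fraction of radio resources in use, 0..1
    interference_db: float
    handover_rate: float  # handovers per second

def adapt_cell(metrics: CellMetrics) -> dict:
    """Toy policy: map observed conditions to RAN actions.

    Simple thresholds stand in for a learned model, purely to
    illustrate the observe -> decide -> act loop.
    """
    actions = {}
    if metrics.load > 0.8:
        actions["offload_to_neighbor"] = True        # shed traffic under heavy load
    if metrics.interference_db > -90:
        actions["lower_tx_power"] = True             # mitigate interference
    if metrics.handover_rate > 5.0:
        actions["widen_handover_hysteresis"] = True  # damp ping-pong mobility
    return actions

# A congested, noisy cell triggers all three adaptations at once.
busy_cell = CellMetrics(load=0.9, interference_db=-85, handover_rate=6.2)
print(adapt_cell(busy_cell))
```

The point of the sketch is the loop itself: the decision logic runs continuously against live metrics, so the network reacts in near-real time rather than waiting for an operator.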
AI-Native Wireless Stack: Cisco and Nvidia
In October 2025, Cisco and Nvidia unveiled an “AI-native wireless stack” designed to enable telecom operators to scale current 5G networks and prepare for 6G. The stack includes an AI-optimized data-plane switch, a cloud reference architecture for neocloud and sovereign-cloud environments, and support for both RAN and core networking functions.
According to their announcement, the stack integrates 5G RAN software, user-plane and core network components, and specialized 6G application support — signaling a foundational shift in how future wireless networks may be built.
Telcos’ Strategic Shift Toward AI Infrastructure
Major operators see AI-native infrastructure as a path to growth and a way to manage network complexity. As outlined in a 2025 analysis, telecom firms may leverage their existing assets — fiber, towers, and edge data centers — to support the growing AI economy, offering connectivity and compute for both network- and third-party AI services.
Moreover, by adopting AI-native architectures and investing in slicing and cloud-native/AI-native stacks, operators aim to support enterprise verticals (e.g., industrial IoT, remote healthcare, AR/VR, autonomous vehicles) that demand ultra-low latency and reliability.
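The slicing requirement can be made concrete with a minimal admission-control sketch: a slice request carries a latency bound and a bandwidth demand, and is placed on the lowest-latency edge site that can honor both. The class names, sites, and figures are illustrative assumptions, not any operator's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SliceRequest:
    name: str
    max_latency_ms: float   # end-to-end latency bound the tenant requires
    bandwidth_mbps: float

@dataclass
class EdgeSite:
    name: str
    latency_ms: float       # latency this site can deliver to the tenant
    free_mbps: float

def place_slice(req: SliceRequest, sites: list[EdgeSite]) -> Optional[str]:
    """Admit the slice on the lowest-latency site that meets both the
    latency bound and the capacity demand; reject if none qualifies."""
    feasible = [s for s in sites
                if s.latency_ms <= req.max_latency_ms
                and s.free_mbps >= req.bandwidth_mbps]
    if not feasible:
        return None  # reject rather than silently violate the SLA
    best = min(feasible, key=lambda s: s.latency_ms)
    best.free_mbps -= req.bandwidth_mbps  # reserve capacity for the slice
    return best.name

sites = [EdgeSite("metro-edge", 4.0, 500.0), EdgeSite("regional-dc", 12.0, 2000.0)]
surgery = SliceRequest("remote-surgery", max_latency_ms=5.0, bandwidth_mbps=100.0)
print(place_slice(surgery, sites))  # only metro-edge is under the 5 ms bound
```

Rejecting infeasible requests up front, rather than best-effort placement, is what lets a slice carry the hard latency and reliability guarantees these verticals demand.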
Growing Investment in AI Infrastructure
A 2025 industry forecast estimates that operators worldwide will invest heavily in AI infrastructure — both for internal network optimization and for external AI services.
This trend reflects the recognition that next-generation telecom networks will need to be capable not just of carrying data, but of hosting and accelerating AI workloads themselves.
Technical Foundations: From RAN to 6G-Ready Architectures
Recent research supports the technical viability of AI-native network architectures. For instance:
A study titled The Interplay of AI-and-RAN: Dynamic Resource Allocation for Converged 6G Platform proposes a framework that dynamically allocates compute resources (e.g., GPUs) between latency-sensitive RAN workloads and general AI tasks. The authors observe that with proper resource management, RAN performance can be preserved while supporting AI workloads.
Another paper, AI-Driven Digital Twins: Optimizing 5G/6G Network Slicing with NTNs, shows that using AI-based digital twins and reinforcement learning for Non-Terrestrial Networks (NTNs) can improve latency performance by up to 25% compared with static slicing, while optimizing resource utilization.
Concerns about explainability and safety in AI-native RANs are also being addressed. For example, a recent proposal called XAI-on-RAN: Explainable, AI-native, GPU-Accelerated RAN Towards 6G outlines hybrid models that balance latency, GPU utilization, and explainability — a critical factor for mission-critical applications in vertical industries.
These developments suggest that AI-native infrastructure is not merely speculative: realistic technical pathways exist for integrating AI with core telecom network functions, while preserving performance, reliability, and transparency.
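The converged-platform idea from the AI-and-RAN study can be boiled down to a strict-priority sketch: latency-sensitive RAN workloads get GPUs first, and best-effort AI jobs borrow whatever is left. The cited framework uses far more sophisticated dynamic allocation; this toy policy only illustrates the core trade-off, and its numbers are invented.

```python
def allocate_gpus(total_gpus: int, ran_demand: int, ai_queue: int) -> tuple[int, int]:
    """Strict priority: satisfy RAN demand first (capped by the pool),
    then lend the remaining GPUs to queued best-effort AI jobs."""
    ran_gpus = min(ran_demand, total_gpus)
    ai_gpus = min(ai_queue, total_gpus - ran_gpus)
    return ran_gpus, ai_gpus

# Quiet hour: RAN needs little, so AI jobs soak up the idle accelerators.
assert allocate_gpus(8, ran_demand=2, ai_queue=10) == (2, 6)
# Peak hour: RAN demand spikes, and AI work is squeezed out, not the RAN.
assert allocate_gpus(8, ran_demand=7, ai_queue=10) == (7, 1)
```

This ordering is what preserves RAN performance while still monetizing idle capacity: the AI share expands and contracts with radio load, never the other way around.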
Why the Shift Matters — From Economic Pressures to New Revenue Streams
Telecom operators are facing slower traditional growth, increasing infrastructure costs, and pressure from cloud and hyperscale competitors. According to industry consultants, AI infrastructure offers a new growth avenue — allowing telcos to monetize not just connectivity, but also compute, edge services, and AI-ready network platforms.
By delivering flexible network slicing, low-latency performance, and integrated AI capabilities — from RAN to core to edge — carriers can better serve enterprise clients deploying latency-sensitive applications (industrial automation, remote surgery, AR/VR, autonomous systems) while also supporting future AI workloads that require distributed computing and network-aware resource allocation.
The adoption of AI-native architectures could also give operators a competitive edge relative to pure hyperscalers or cloud-only providers by offering a tightly coupled network-plus-compute proposition that leverages ownership of the “last mile,” edge infrastructure, and transport backbone.
Challenges and Risks: Engineering, Economics, and Complexity
Despite the promise, the transition toward AI-native telecom networks is not without significant challenges. Industry analysts warn that proposals — especially those built around large monolithic AI models managing RAN — risk underestimating the engineering complexity, resource demands, and economic trade-offs. Key challenges include:
Resource allocation and orchestration: Running AI workloads alongside latency-sensitive network functions (RAN, core) demands precise scheduling and partitioning of compute, often GPUs. As shown in academic frameworks, this is technically feasible — but only with sophisticated orchestration and dynamic resource management.
Transparency and explainability: When AI is used to control critical network operations — especially for vertical, mission-critical services — operators and regulators will demand visibility into decision logic. Emerging proposals for “explainable AI on RAN” attempt to address this, but standardization and validation remain nascent.
Economic viability: Building AI-native stacks, deploying edge infrastructure, and maintaining distributed compute for AI workloads remain capital-intensive. Carriers must balance upfront investments against long-term returns, especially in regions where traditional telco growth is slow.
Interoperability and vendor fragmentation: Networks often comprise a patchwork of vendors and legacy systems. Integrating AI-native components across RAN, core, and transport layers — possibly from different vendors — raises interoperability risks and management complexity.
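The transparency challenge above can be made concrete with a minimal sketch: a controller wrapper that records the inputs, the chosen action, and a human-readable rationale for every automated decision. This is only a lightweight stand-in for real XAI techniques (attribution, surrogate models), and every name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str
    inputs: dict
    rationale: str  # human-readable justification for auditors and regulators

@dataclass
class AuditableController:
    """Wraps a control policy so every automated action leaves an
    explanation trail that operators can inspect after the fact."""
    log: list = field(default_factory=list)

    def decide(self, cell_load: float) -> Decision:
        if cell_load > 0.8:
            d = Decision("throttle_best_effort", {"cell_load": cell_load},
                         f"load {cell_load:.0%} exceeded the 80% threshold")
        else:
            d = Decision("no_op", {"cell_load": cell_load},
                         f"load {cell_load:.0%} within normal bounds")
        self.log.append(d)  # persist the decision and its justification
        return d

ctrl = AuditableController()
print(ctrl.decide(0.92).rationale)
```

Even this trivial audit trail hints at the cost of explainability: every control action now carries bookkeeping, which is part of the latency-versus-transparency balance the XAI-on-RAN proposal addresses.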
The telecom industry’s migration toward AI-native infrastructure marks a defining shift in how networks will be designed, operated, and monetized. By embedding AI into the data plane, RAN, and edge compute — and by partnering with hyperscalers and AI-chip suppliers — operators aim to deliver on the promise of ultra-low latency, network slicing, and 6G readiness. While technical and economic challenges remain, the investments already underway suggest this is more than hype: it is a fundamental reimagining of telecom infrastructure for the AI era.
