Huawei Reveals Xinghe AI Fabric 2.0 to Power Always-On Data Centres

Huawei has rolled out its upgraded Xinghe AI Fabric 2.0 platform at the HUAWEI CONNECT 2025 Data Center Innovation Summit in Shanghai, positioning it as a foundational infrastructure for always-available, high-performance data centre networks. The new platform builds on the original 2018 AI Fabric but incorporates deeper AI integration, improved reliability and autonomous operations.

The architecture of Xinghe 2.0 is structured in three layers: AI Brain, AI Connectivity, and AI Network Elements, each designed to drive automation, performance and resilience. At the summit, Arthur Wang, President of Huawei’s Data Center Network Domain, described how the AI Brain layer uses the StarryWing Digital Map and NetMaster agent to enable drag-and-drop orchestration, cross-domain automation, and diagnosis of up to 80 per cent of faults without human intervention. The AI Connectivity layer embeds Huawei’s Network-Scale Load Balancing algorithms to optimise throughput and boost training/inference efficiency by about 10 per cent, while introducing a multi-level reliability scheme called iReliable. The AI Network Elements layer uses Huawei’s CloudEngine and XH series switches, along with StarryLink optical modules, to offer precision traffic visibility, group isolation and built-in security.
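Huawei has not published the Network-Scale Load Balancing algorithm, but the general idea it names, steering flows using fabric-wide utilisation rather than a static per-flow hash, can be sketched as follows. All names and the selection rule here are illustrative assumptions, not Huawei's implementation.

```python
# Hypothetical sketch of load-aware path selection versus static ECMP.
# Static ECMP hashes each flow onto a fixed path regardless of congestion;
# a network-scale view can instead pick the least-loaded candidate path.

def static_ecmp(flow_id: int, paths: list[str]) -> str:
    """Classic ECMP: hash the flow onto a path, ignoring current load."""
    return paths[hash(flow_id) % len(paths)]

def load_aware_pick(path_util: dict[str, float]) -> str:
    """Pick the path with the lowest current utilisation (0.0 to 1.0)."""
    return min(path_util, key=path_util.get)

# Example: three spine paths with different utilisation levels.
util = {"spine-1": 0.92, "spine-2": 0.35, "spine-3": 0.60}
print(load_aware_pick(util))  # -> spine-2
```

The contrast is the point: a hash-based scheme can keep landing elephant flows on an already-hot link, while a utilisation-aware choice spreads load, which is the kind of effect behind the claimed training/inference efficiency gains.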

Huawei’s demonstration booth displayed 800GE systems, such as the 128×800GE fixed switch XH9330 and the liquid-cooled 128×400GE XH9230-LC, alongside its StarryLink optical modules. These exhibits underscored how the company aims to marry high-density hardware with the intelligent fabric.

One of the breakthroughs Huawei claims is in stability. Through innovations such as loss-resistant optical-module channels and the detection of contamination or loose connections, the platform is intended to maintain continuous operations even under hardware stress. Huawei asserts a ten-fold reliability improvement in cluster training environments.

In application, Huawei says the system will reduce new service rollout times from days to minutes. For general-purpose computing, unified simulation of switches and security devices shrinks the change window; for AI workloads, the integrated training-inference scheduler intelligently switches GPU roles to maximise utilisation and deliver up to 10 per cent gains in inference throughput.
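The "training-inference scheduler" that switches GPU roles is proprietary, but the concept, reassigning GPUs between training and inference pools as inference demand fluctuates, can be illustrated with a minimal sketch. The thresholds, function names, and one-GPU-per-step policy below are assumptions made for illustration only.

```python
# Illustrative sketch of a training/inference GPU role switcher.
# When the inference queue backs up, one training GPU is borrowed;
# when it drains, a GPU is returned to training. Thresholds are invented.

def rebalance(roles: dict[str, str], queue_depth: int,
              high: int = 100, low: int = 10) -> dict[str, str]:
    """Flip at most one GPU toward the overloaded role per scheduling tick."""
    roles = dict(roles)  # work on a copy
    if queue_depth > high:
        # Inference backlogged: convert the first available training GPU.
        for gpu, role in roles.items():
            if role == "train":
                roles[gpu] = "infer"
                break
    elif queue_depth < low:
        # Inference idle: return one GPU to the training pool.
        for gpu, role in roles.items():
            if role == "infer":
                roles[gpu] = "train"
                break
    return roles

pool = {"gpu0": "train", "gpu1": "train", "gpu2": "infer"}
print(rebalance(pool, queue_depth=250))
```

Moving one GPU per tick keeps the reassignment gradual; a production scheduler would also have to drain in-flight training steps before a role switch, which this sketch omits.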

Huawei positions the launch within a broader push to transform networks from passive infrastructure to active intelligence. It now frames its Xinghe stack as composed of four pillars: AI Campus, Intelligent WAN, AI Fabric 2.0, and AI Network Security. The updated security module uses an AI detection engine and a graph-based source-tracing capability, claiming a 95 per cent detection rate for unknown threats and up to 100-hop traceability.

The move comes as Huawei is also detailing parts of its chip strategy, announcing plans for the Ascend 950 in 2026 followed by the 960 and 970 chips in subsequent years. These will power new computing nodes such as the Atlas 950 and Atlas 960, forming a supernode architecture to rival established GPU-based clusters. Industry analysts see the upgraded fabric as essential if Huawei is to close the gap with Nvidia's cluster interconnects and stay competitive in AI infrastructure. Earlier assessments had raised concerns about Huawei's reliance on optics and in-house interconnect designs in large-scale deployments.

One external validation came via Gartner's 2025 Magic Quadrant for Data Center Switching, in which Huawei is positioned as a Leader; the report cites completeness of vision, partly reflecting strength in AI-driven fabric initiatives such as Xinghe. Still, sceptics point out that claims around autonomous operations, zero outages and high utilisation often run into real-world complexity: heterogeneous hardware ecosystems, patchy third-party device support and software bugs could erode these theoretical gains at scale.