Sip & Learn: The New Geography of AI: From Core Hubs to Edge Clusters and the Network Consequences
As AI infrastructure shifts from a handful of mega-hubs to a more fragmented map of power-first compute clusters, network architecture is becoming the decisive factor in whether distributed AI operates as one system or as several disconnected sites. New “AI-ready” sites are increasingly chosen for access to power and cooling rather than for proximity to legacy fibre crossroads, forcing the ecosystem to rethink interconnect design from the outset. In this session, EXA Infrastructure and Ciena bring together the network, data centre and AI compute perspectives to explore how topology choices (metro mesh, long-haul, data centre infrastructure and subsea) are evolving, and how latency-sensitive, real-time multimodal workloads pull compute closer to the edge, reshaping resiliency, routing and investment priorities. Panellists will explore:
- AI clusters vs. traditional hubs: the “power-first, fibre-second” reality. How should operators evaluate connectivity readiness, and what does “right-sized” network capacity look like for training-heavy vs. inference-heavy clusters?
- Topology choices in a fragmented footprint: what becomes the “glue”? How do long-haul deployment strategies change when there are many smaller AI clusters? What is the evolving role of subsea for distributed AI capacity balancing and inter-region model movement? What are the practical trade-offs between performance, route diversity, and speed-to-deploy?
- Latency-sensitive AI use cases: why inference pulls compute toward the edge. Which workloads genuinely require edge proximity, and which can stay centralised? How do latency requirements shape interconnect assumptions (buffering, path diversity, optical vs. packet choices)? What does “good enough” latency mean in practice for enterprise AI adoption? (See the sketch after this list.)
- Operating model & economics: building networks that keep up with AI cadence. How do you design for rapid scaling without building stranded capacity? What partnership models accelerate deployment (carrier-to-DC-to-neocloud collaboration)? How do automation, observability and security change in a distributed AI footprint?
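To make “good enough” latency concrete, here is a minimal back-of-envelope sketch using the common rule of thumb of roughly 5 µs of one-way propagation delay per km of fibre (light travels at about two-thirds of c in silica). The route distances and the 20 ms interactive-inference budget below are illustrative assumptions, not figures from the panel; propagation alone suggests why real-time inference tends to favour metro or regional proximity while training and model-movement traffic can tolerate long-haul and subsea paths.

```python
# Back-of-envelope fibre propagation latency. Uses the common
# ~5 microseconds-per-km one-way rule of thumb; route distances
# and the latency budget are illustrative assumptions only.

US_PER_KM = 5.0  # approx. one-way propagation delay in fibre (microseconds)

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation delay in milliseconds over a fibre route."""
    return 2 * route_km * US_PER_KM / 1000.0

if __name__ == "__main__":
    # Hypothetical route lengths, chosen to span the topologies discussed.
    routes = {
        "metro edge site (~50 km)": 50,
        "regional hub (~500 km)": 500,
        "cross-continent long-haul (~4,000 km)": 4_000,
        "subsea inter-region (~6,500 km)": 6_500,
    }
    budget_ms = 20.0  # hypothetical RTT budget for interactive inference
    for name, km in routes.items():
        rtt = round_trip_ms(km)
        verdict = "fits" if rtt <= budget_ms else "exceeds"
        print(f"{name}: ~{rtt:.1f} ms RTT ({verdict} a {budget_ms:.0f} ms budget)")
```

Under these assumptions, a metro route adds only about 0.5 ms of round-trip propagation while a subsea inter-region path adds roughly 65 ms before any queuing or processing delay, which is the gap driving the edge-vs-centralised question above.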
