
Everything About Networking in the Azure Cloud

Did you know Microsoft owns and runs more than 600,000 kilometres of networking cables across the planet? Those cables connect more than 70 Azure regions and over 400 data centres!

Reading time: 5 minutes | Published: 27 January 2026

In short:

  • VNets/subnets feel simpler in Azure, but the same fundamentals still apply: routing, isolation, and security decisions make or break production.
  • Design networking early (tenants/subscriptions, hub-spoke), because retrofitting rules and segmentation later is painful and risky.
  • Use central control points (hub firewall/NSGs) and avoid direct spoke-to-spoke to keep a clean zero-trust posture.
  • Hybrid connectivity is where complexity jumps. VPN is basic, but ExpressRoute/Virtual WAN are often needed for enterprise reliability and scale.

 

In the early days, networking was all physical: routers, switches, cables, things you could touch. Today, in the cloud, all the "fun stuff" happens in code: software-defined networking (SDN). When you create a virtual network (VNet), Azure spins up servers and configures routing behind the scenes.

You don’t see the hardware anymore, but the logic remains; it's still important to understand it.

 

Why networking still matters in Azure

Virtual Networks (VNets)

In Azure, VNets replace the old VLANs you’d use on-prem. You don’t create a physical switch anymore. Instead, you define a virtual network, and Azure takes care of routing in the background. It may seem simple, but the complex logic is still there.

“Developers start using Azure or AWS, or any cloud, and then just deploy something. But before workloads go to production, we need security and isolation. That's when networking really becomes a thing.”

Wesley Haakman, Principal Azure Architect & Microsoft MVP

Subnets

Subnets work differently, too. For example, if you create a subnet like 172.16.0.0/16, Azure reserves five IP addresses: the first four and the last (172.16.0.0–172.16.0.3 and 172.16.255.255). The first is the network address, the second is the virtual router/default gateway, the next two are reserved for Azure DNS, and the last is the broadcast address.
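You can compute those reserved addresses for any subnet yourself. A minimal sketch using Python's standard `ipaddress` module (the function name is ours, not an Azure API):

```python
import ipaddress

def azure_reserved_ips(cidr: str) -> list[str]:
    """Return the five IPs Azure reserves in a subnet:
    network address, default gateway, two Azure DNS addresses,
    and the broadcast address."""
    net = ipaddress.ip_network(cidr)
    first_four = [str(net.network_address + i) for i in range(4)]
    return first_four + [str(net.broadcast_address)]

print(azure_reserved_ips("172.16.0.0/16"))
# → ['172.16.0.0', '172.16.0.1', '172.16.0.2', '172.16.0.3', '172.16.255.255']
```

The same logic explains why the smallest usable Azure subnet is a /29: eight addresses minus five reserved leaves only three for your resources.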

Even though routing between subnets is automatic inside a VNet, networking becomes important as workloads move toward production. If you connect multiple VNets, you’ll often need user-defined routes (UDRs). For example, in a hub-and-spoke design, traffic between spokes might need to pass through a central hub firewall. Altogether, it’s important to understand how Azure reserves IPs, manages routing, and handles subnet isolation. Any misunderstanding can break communication or compromise security in your environment.
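To see why a UDR changes where traffic goes, it helps to model Azure's route selection: longest prefix match wins, and on a tie a user-defined route beats the system route. A simplified conceptual sketch (the addresses and route table are hypothetical, and real Azure also considers BGP routes and service tags):

```python
import ipaddress

# Hypothetical effective routes for a spoke subnet: the system default
# route to the internet, the system route for the VNet address space,
# and a UDR forcing internet-bound traffic through a hub firewall.
routes = [
    {"prefix": "0.0.0.0/0",  "next_hop": "Internet",       "origin": "system"},
    {"prefix": "10.0.0.0/8", "next_hop": "VNetLocal",      "origin": "system"},
    {"prefix": "0.0.0.0/0",  "next_hop": "10.0.0.4 (NVA)", "origin": "udr"},
]

def effective_route(dest: str) -> dict:
    """Pick the route Azure would use: longest prefix match first;
    on an equal prefix length, a user-defined route wins."""
    ip = ipaddress.ip_address(dest)
    matches = [r for r in routes if ip in ipaddress.ip_network(r["prefix"])]
    return max(matches, key=lambda r: (
        ipaddress.ip_network(r["prefix"]).prefixlen,
        r["origin"] == "udr",
    ))

print(effective_route("8.8.8.8")["next_hop"])   # → 10.0.0.4 (NVA)
print(effective_route("10.1.2.3")["next_hop"])  # → VNetLocal
```

This is exactly what a hub-and-spoke UDR does in practice: it overrides the built-in "straight to the internet" route without touching traffic that stays inside the VNet.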


Design for isolation and security

You can’t just connect everything to a VNet. To secure and isolate traffic:

  • Use a hub-and-spoke model: all traffic from spokes routes through a central hub, where you can place an Azure Firewall (for centralised filtering and traffic inspection) or a network virtual appliance.
  • You can also use Network Security Groups (NSGs) at the subnet or resource level. The same old principles apply in the cloud: deny everything by default, then open what’s necessary.
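That deny-by-default behaviour follows from how NSG rules are evaluated: rules are checked in ascending priority order, the first match wins, and a built-in DenyAll rule sits at the bottom. A simplified model with a hypothetical rule set (allow HTTPS, deny everything else):

```python
# Simplified NSG inbound evaluation: lowest priority number is checked
# first, the first matching rule applies, and the built-in
# DenyAllInbound rule (priority 65500) catches everything else.
rules = [
    {"priority": 100,   "port": 443, "access": "Allow"},  # our custom rule
    {"priority": 65500, "port": "*", "access": "Deny"},   # built-in default
]

def evaluate(port: int) -> str:
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["port"] == "*" or rule["port"] == port:
            return rule["access"]
    return "Deny"

print(evaluate(443))  # → Allow
print(evaluate(22))   # → Deny
```

Real NSG rules also match on source/destination address, protocol, and direction, but the priority-then-first-match logic is the same.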

In Azure, resources in the same subnet communicate automatically. That’s different from traditional networking, where you needed physical connections. You need to plan which VNets or subnets can communicate and enforce isolation with firewalls or NSGs.

Mind you:

Retrofitting rules later is hard, so start with tenants, subscriptions, and governance, then design networking. 

Zero Trust works best in a hub-and-spoke topology. Connect spokes only to the hub, not to each other. Even one open port across multiple VNets can expose sensitive resources.

Most Azure services integrate with networking. Private endpoints, VMs, and front-end services like Application Gateway or Front Door can work with firewalls to secure traffic while maintaining flexibility. Finally, always avoid direct spoke-to-spoke connections unless necessary, as misconfigurations can create unintended paths and compromise security.

 

Connecting Azure to on-prem is where it gets tricky

Seamlessly connecting on-premises environments to Azure is critical, particularly for enterprises seeking to extend their infrastructure. 

But connecting on-prem environments adds another layer of complexity. VPNs work for simple setups, but enterprise networks often require ExpressRoute or Azure Virtual WAN for higher bandwidth and reliability.

  • ExpressRoute: Adds private, high-speed connectivity but brings extra complexity. It involves third-party carriers, BGP routing, and coordination between multiple teams. It’s powerful, but not something you configure casually.
  • Azure Virtual WAN: If you need global performance without the ExpressRoute headache, it's often a better fit. It gives enterprise-grade connectivity over Microsoft’s backbone with simpler management.

 

Troubleshooting and monitoring

Once your network is up and running, diagnosing issues in Azure differs from on-prem. You no longer ping cables or trace physical hops; instead, you inspect virtual traffic flows.

Fortunately, many checks are automated: Azure monitors backbone connections, backend pools, and overall network health, so you'll know a connection is down without running tests constantly. Health probes and diagnostic logs help track connectivity and traffic, and tools like Network Watcher let you inspect flows and identify blocked or failing connections. This reduces the need for continuous logging, though selective logging remains useful for diagnosing issues.

Cost and latency still matter

Azure networking isn’t free, and design choices affect both cost and latency. Cross-region traffic adds latency, and its charges can add up quickly. Therefore, keep dependent workloads in the same region and plan for inter-region data transfer charges. Plan your network with these trade-offs in mind, as performance, reliability, and cost are closely linked.
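A quick back-of-the-envelope calculation shows how fast cross-region traffic adds up. The per-GB rate below is a placeholder assumption, not an official Azure price; check the Azure bandwidth pricing page for real numbers:

```python
# HYPOTHETICAL per-GB inter-region transfer rate, for illustration only.
RATE_PER_GB = 0.02  # assumed rate, not an official Azure price

def monthly_transfer_cost(gb_per_day: float, days: int = 30) -> float:
    """Estimate a month of inter-region transfer cost at the assumed rate."""
    return round(gb_per_day * days * RATE_PER_GB, 2)

print(monthly_transfer_cost(500))  # 500 GB/day → 300.0 per month at the assumed rate
```

Even a modest replication or backup flow between regions turns into a recurring line item, which is why chatty workloads belong in the same region.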

 

Azure Networking Best Practices

To get the most out of Azure networking, follow these practices where possible:

  • Design early: plan your network before deploying workloads to prevent problems later. Retrofitting isolation and security is far from easy.
  • Plan tenants and subscriptions first: Set governance and account structure before designing VNets.
  • Centralise control points: Keep monitoring and traffic filtering at hub locations; avoid scattering controls across multiple VNets.
  • Separate ingress and egress flows: Use different firewalls or gateways for inbound and outbound traffic to keep rules and auditing simple.
  • Be mindful of cost: Track inter-region traffic and high-speed links to prevent unexpected bills.
  • Use selective logging: Logging everything is expensive and rarely necessary. Instead, leverage diagnostics and Network Watcher to target specific issues.
  • Regional planning matters: Place chatty or dependent workloads in the same region to minimise latency and cost.

 

Closing thoughts

All said, networking remains essential in today’s cloud infrastructure. In some ways it has become easier, but it can still be very complex, given the endless ways you can approach it, not to mention all the different features.

The tools have changed, and you no longer see routers the way you did in the early days. But the logic hasn’t.

And even now, it’s still the backbone of everything in Azure.

If you understand networking (routes, subnets, isolation, and security), you’ll understand the cloud faster and design better environments. It’s still valuable knowledge and always will be.


Get in touch!

Intercept can help you secure your Azure cloud so you can focus on delivering value to your customers and driving business.