
Hub and Spoke Routing with DNS

In this lab you will configure a Hub‑and‑Spoke network so that spoke↔spoke traffic is routed through a pre‑configured Linux NVA in the hub using User‑Defined Routes (UDRs). You will also create a Private DNS Zone and link all VNets to enable name resolution across spokes. You will validate routing and name resolution using the two spoke VMs over SSH.


Pre‑Provisioned Resources

The following resources are already deployed for you:

  • VNets & Subnets
    • vnet-hub (10.0.0.0/16) with subnet snet-hub-shared (10.0.1.0/24)
    • vnet-spoke1 (10.1.0.0/16) with subnet snet-spoke1 (10.1.1.0/24)
    • vnet-spoke2 (10.2.0.0/16) with subnet snet-spoke2 (10.2.1.0/24)
  • Linux NVA (hubnva) in snet-hub-shared
    • IP forwarding: Enabled (pre‑configured)
    • Public IP: None (you do not connect to it)
  • Spoke VMs (Ubuntu 22.04)
    • spk1vm in snet-spoke1
    • spk2vm in snet-spoke2
    • Tools pre‑installed: nginx, curl, dnsutils, traceroute, netcat
  • NSGs: Spoke subnet NSGs allow inbound SSH (TCP 22) and HTTP (TCP 80) from any source, so students can access the spoke VMs for lab tasks.

Connection Details

Use SSH to connect to the spoke VMs:

Terminal window
ssh labuser@<spk1vm-public-ip>
ssh labuser@<spk2vm-public-ip>
  • Password: GravelCore2024!

Your Tasks


Task 1: Create Hub↔Spoke VNet Peerings (enable forwarded traffic)

Create bi‑directional peerings between vnet-hub and each spoke VNet. You will do this Hub → Spoke and Spoke → Hub for both spokes.

A. vnet-hub ↔ vnet-spoke1

  1. Go to vnet-hub → Peerings → + Add.
    • Peering link name (Hub to Spoke1): peer-hub-to-spoke1
    • Remote virtual network: vnet-spoke1
    • Allow virtual network access: Enabled
    • Allow forwarded traffic: Enabled
    • Allow gateway transit / Use remote gateways: Disabled
    • Create
  2. Go to vnet-spoke1 → Peerings → + Add.
    • Peering link name (Spoke1 to Hub): peer-spoke1-to-hub
    • Remote virtual network: vnet-hub
    • Allow virtual network access: Enabled
    • Allow forwarded traffic: Enabled
    • Create

B. vnet-hub ↔ vnet-spoke2

Repeat the same steps for Hub ↔ Spoke2:

  • peer-hub-to-spoke2 and peer-spoke2-to-hub with Allow forwarded traffic = Enabled.

Result: The hub is permitted to forward traffic between spokes once UDRs send packets to the NVA.
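
The four peering links above can also be created with the Azure CLI. This is a sketch, assuming all three VNets live in the resource group RG-HubSpoke-Lab (the group named later for the DNS zone):

```shell
# Hub <-> Spoke1, both directions, with forwarded traffic allowed
az network vnet peering create \
  --resource-group RG-HubSpoke-Lab --name peer-hub-to-spoke1 \
  --vnet-name vnet-hub --remote-vnet vnet-spoke1 \
  --allow-vnet-access --allow-forwarded-traffic

az network vnet peering create \
  --resource-group RG-HubSpoke-Lab --name peer-spoke1-to-hub \
  --vnet-name vnet-spoke1 --remote-vnet vnet-hub \
  --allow-vnet-access --allow-forwarded-traffic

# Hub <-> Spoke2, same pattern
az network vnet peering create \
  --resource-group RG-HubSpoke-Lab --name peer-hub-to-spoke2 \
  --vnet-name vnet-hub --remote-vnet vnet-spoke2 \
  --allow-vnet-access --allow-forwarded-traffic

az network vnet peering create \
  --resource-group RG-HubSpoke-Lab --name peer-spoke2-to-hub \
  --vnet-name vnet-spoke2 --remote-vnet vnet-hub \
  --allow-vnet-access --allow-forwarded-traffic
```

Note that `--allow-forwarded-traffic` on the spoke-to-hub links is what lets traffic that the NVA forwards from one spoke enter the other spoke.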

(Diagram: lab topology with VNet peerings configured)


Task 2: Create UDRs in Each Spoke (next hop = Hub NVA 10.0.1.4)

Create a route table per spoke, add a route for the other spoke’s address space, and associate the table to the spoke subnet.

A. Spoke1 → Spoke2 via NVA

  1. Create Route Table named rt-spoke1 (location = same as resource group).
  2. In rt-spoke1 → Routes → + Add:
    • Route name: spoke2-via-nva
    • Address prefix destination: CIDR block
    • Address prefix: 10.2.0.0/16
    • Next hop type: Virtual appliance
    • Next hop address: 10.0.1.4
  3. In rt-spoke1 → Subnets → + Associate:
    • Virtual network: vnet-spoke1
    • Subnet: snet-spoke1

B. Spoke2 → Spoke1 via NVA

  1. Create Route Table named rt-spoke2.
  2. Add route:
    • Route name: spoke1-via-nva
    • Address prefix: 10.1.0.0/16
    • Next hop type: Virtual appliance
    • Next hop address: 10.0.1.4
  3. Associate rt-spoke2 to vnet-spoke2 / snet-spoke2.

Routing result:
Spoke1 → Spoke2: Spoke1 VM → snet-spoke1 (rt-spoke1) → NVA 10.0.1.4 → vnet-spoke2 → Spoke2 VM
Spoke2 → Spoke1: Spoke2 VM → snet-spoke2 (rt-spoke2) → NVA 10.0.1.4 → vnet-spoke1 → Spoke1 VM
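
Both route tables, routes, and subnet associations above can be created with the Azure CLI as well. A sketch, assuming the resource group RG-HubSpoke-Lab:

```shell
# Route table for spoke1: send spoke2-bound traffic to the NVA
az network route-table create \
  --resource-group RG-HubSpoke-Lab --name rt-spoke1

az network route-table route create \
  --resource-group RG-HubSpoke-Lab --route-table-name rt-spoke1 \
  --name spoke2-via-nva --address-prefix 10.2.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

az network vnet subnet update \
  --resource-group RG-HubSpoke-Lab \
  --vnet-name vnet-spoke1 --name snet-spoke1 \
  --route-table rt-spoke1

# Mirror configuration for spoke2
az network route-table create \
  --resource-group RG-HubSpoke-Lab --name rt-spoke2

az network route-table route create \
  --resource-group RG-HubSpoke-Lab --route-table-name rt-spoke2 \
  --name spoke1-via-nva --address-prefix 10.1.0.0/16 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4

az network vnet subnet update \
  --resource-group RG-HubSpoke-Lab \
  --vnet-name vnet-spoke2 --name snet-spoke2 \
  --route-table rt-spoke2
```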

(Diagram: spoke route tables directing inter-spoke traffic through the hub NVA)


Task 3: Create a Private DNS Zone and Link VNets

You will create a Private DNS Zone and link all three VNets, then add A records for both spoke VMs.

  1. Create Private DNS Zone:
    • Name: lab.internal
    • Resource group: RG-HubSpoke-Lab
  2. In the zone, go to Virtual network links → + Add (repeat for each VNet):
    • Link name: link-hub → Virtual network: vnet-hub → Enable auto-registration: Disabled → OK
    • Link name: link-spoke1 → Virtual network: vnet-spoke1 → Auto-registration: Disabled → OK
    • Link name: link-spoke2 → Virtual network: vnet-spoke2 → Auto-registration: Disabled → OK
  3. Add A records:
    • Record set name: spk1vm → IP address: 10.1.1.4 → TTL: 300 → OK
    • Record set name: spk2vm → IP address: 10.2.1.4 → TTL: 300 → OK

Result: You can resolve spk1vm.lab.internal and spk2vm.lab.internal across VNets.
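
The zone, links, and records above map to the Azure CLI as follows. A sketch (TTL is left at the CLI default rather than the 300 used in the portal steps):

```shell
# Create the private zone
az network private-dns zone create \
  --resource-group RG-HubSpoke-Lab --name lab.internal

# Link all three VNets with auto-registration disabled
for v in hub spoke1 spoke2; do
  az network private-dns link vnet create \
    --resource-group RG-HubSpoke-Lab --zone-name lab.internal \
    --name link-$v --virtual-network vnet-$v \
    --registration-enabled false
done

# A records for the two spoke VMs
az network private-dns record-set a add-record \
  --resource-group RG-HubSpoke-Lab --zone-name lab.internal \
  --record-set-name spk1vm --ipv4-address 10.1.1.4

az network private-dns record-set a add-record \
  --resource-group RG-HubSpoke-Lab --zone-name lab.internal \
  --record-set-name spk2vm --ipv4-address 10.2.1.4
```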

(Diagram: private DNS zone lab.internal linked to all three VNets)


Task 4: Validate

A. Validate Spoke1 → Spoke2 via Hub NVA

  1. SSH into spk1vm using its Public IP and the provided password.
  2. Run:
    nslookup spk2vm.lab.internal
    curl -sI http://spk2vm.lab.internal
    traceroute spk2vm.lab.internal
    Expected:
    • nslookup returns 10.2.1.4
    • curl returns HTTP headers from nginx on spk2vm
    • traceroute first hop = 10.0.1.4 (the NVA)

B. Validate Spoke2 → Spoke1 via Hub NVA

  1. SSH into spk2vm using its Public IP and the provided password.
  2. Run:
    nslookup spk1vm.lab.internal
    curl -sI http://spk1vm.lab.internal
    traceroute spk1vm.lab.internal
    Expected:
    • nslookup returns 10.1.1.4
    • curl returns HTTP headers from nginx on spk1vm
    • traceroute first hop = 10.0.1.4

C. Verify Effective Routes in the Portal

  • Open NIC → Effective routes for each spoke VM:
    • On spk1vm NIC, confirm a user route for 10.2.0.0/16 with Next hop type = Virtual appliance and Next hop = 10.0.1.4.
    • On spk2vm NIC, confirm a user route for 10.1.0.0/16 with Next hop = 10.0.1.4.
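
The same effective-route check can be done from the CLI. A sketch, where `<spk1vm-nic>` is the NIC name shown on the VM's Networking blade (the NIC names are not listed above, so this is a placeholder):

```shell
# Dump the effective routes of spk1vm's NIC; look for the user route
# to 10.2.0.0/16 with next hop type VirtualAppliance and 10.0.1.4
az network nic show-effective-route-table \
  --resource-group RG-HubSpoke-Lab \
  --name <spk1vm-nic> \
  --output table
```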

Success Criteria

  • Peerings exist both ways for Hub↔Spoke1 and Hub↔Spoke2 with Allow forwarded traffic = Enabled.
  • Route tables are associated with each spoke subnet and contain a route to the other spoke CIDR with Next hop = Virtual appliance (10.0.1.4).
  • Private DNS lab.internal resolves spk1vm and spk2vm to their private IPs from either spoke.
  • End‑to‑end tests succeed:
    • curl http://spk2vm.lab.internal from spk1vm
    • curl http://spk1vm.lab.internal from spk2vm
    • traceroute first hop shows 10.0.1.4 in both directions.

Optional Challenge (Stretch)

  • Blackhole Test (Routing control):
    On rt-spoke1, add a more specific route, e.g. 10.2.1.0/24, with Next hop type = None. (A route table cannot contain two routes with the same prefix, and the longest prefix match wins, so the /24 overrides the existing /16 route for the spoke2 subnet.)
    • Re‑run curl from spk1vm to spk2vm: the request should fail.
    • Remove the blackhole route to restore connectivity.
  • Add a third spoke:
    Add a new VNet/subnet/VM (vnet-spoke3, 10.3.0.0/16, VM at 10.3.1.4), create a route in each existing spoke to 10.3.0.0/16 via 10.0.1.4, add DNS spk3vm.lab.internal, and validate from all spokes.
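
The blackhole route for the stretch exercise can be added and removed from the CLI as well. A sketch using a more specific prefix (10.2.1.0/24), since a route table cannot hold two routes with an identical prefix and longest-prefix match decides which route applies:

```shell
# Blackhole spoke2's subnet from spoke1 (more specific than 10.2.0.0/16)
az network route-table route create \
  --resource-group RG-HubSpoke-Lab --route-table-name rt-spoke1 \
  --name blackhole-spoke2 --address-prefix 10.2.1.0/24 \
  --next-hop-type None

# Restore connectivity by deleting the override
az network route-table route delete \
  --resource-group RG-HubSpoke-Lab --route-table-name rt-spoke1 \
  --name blackhole-spoke2
```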

Exact Routing Summary

  • Spoke1 → Spoke2: Spoke1 VM → snet-spoke1 (rt-spoke1) → NVA 10.0.1.4 → vnet-spoke2 → Spoke2 VM
  • Spoke2 → Spoke1: Spoke2 VM → snet-spoke2 (rt-spoke2) → NVA 10.0.1.4 → vnet-spoke1 → Spoke1 VM
  • NVA configuration is pre‑done; students never touch it.
  • No Bastion. Only spk1vm and spk2vm have Public IPs.