
Azure AKS vnet to another vnet communication

We have a managed AKS cluster running a few application pods. In the same subscription, we have a few servers in a different resource group and a different VNet. We need these two VNets to communicate. I have configured VNet peering between the two VNets, but the communication is still not happening.

When I add a rule like "Allow port 443 from all networks" to the NSG of the virtual machines, everything works fine.
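For reference, the broad "allow 443 from anywhere" rule described above can be created with the Azure CLI roughly as follows. The resource group, NSG name, and priority here are placeholders, not values from the original post:

```shell
# Sketch: allow inbound TCP 443 on the VMs' NSG from any source.
# rg-servers, vm-nsg, and the priority are placeholder values.
az network nsg rule create \
  --resource-group rg-servers \
  --nsg-name vm-nsg \
  --name AllowHttpsInbound \
  --priority 300 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --source-address-prefixes '*'
```

A tighter rule would replace `'*'` with the address prefix of the AKS VNet, or, as it turns out later in this thread, with the cluster's outbound public IP.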

Troubleshooting steps already done:

  1. Verified the VNet peering.
  2. Took the API server IP address from the kubeconfig file and added it to the NSG of the VMs in the other resource group.

Neither step resolved the issue. Could you please help me fix it?
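The peering mentioned in step 1 can be sketched with the Azure CLI as below. All names and the subscription ID are placeholders; note that peering must be created in both directions, or traffic will not flow:

```shell
# Sketch: bidirectional VNet peering between the AKS VNet and the
# servers' VNet. Resource groups, VNet names, and <sub-id> are placeholders.
az network vnet peering create \
  --resource-group rg-aks \
  --vnet-name vnet-aks \
  --name aks-to-servers \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-servers/providers/Microsoft.Network/virtualNetworks/vnet-servers" \
  --allow-vnet-access

az network vnet peering create \
  --resource-group rg-servers \
  --vnet-name vnet-servers \
  --name servers-to-aks \
  --remote-vnet "/subscriptions/<sub-id>/resourceGroups/rg-aks/providers/Microsoft.Network/virtualNetworks/vnet-aks" \
  --allow-vnet-access

# Both peerings should report "Connected":
az network vnet peering show \
  --resource-group rg-aks --vnet-name vnet-aks \
  --name aks-to-servers --query peeringState
```

If `peeringState` shows `Initiated` rather than `Connected`, the reverse-direction peering is missing.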

I would suggest trying to connect the VNets through VPN gateways.

From an Azure virtual network, connecting to another virtual network is essentially the same as connecting to an on-premises network via a site-to-site (S2S) VPN.

You will need to go through the following steps:

  1. Create VNetA and VNetB and the corresponding local networks.
  2. Create a dynamic-routing (route-based) VPN gateway for each virtual network.
  3. Connect the VPN gateways.
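The steps above can be sketched with the Azure CLI roughly as follows. All resource names, SKUs, and the shared key are placeholder assumptions, and each VNet must already contain a subnet named `GatewaySubnet`; gateway provisioning can take 30+ minutes:

```shell
# Sketch: route-based VPN gateway in each VNet, then a VNet-to-VNet
# connection. rg-a/rg-b, vnet-a/vnet-b, and the key are placeholders.
az network public-ip create -g rg-a -n gw-a-pip --sku Standard
az network vnet-gateway create -g rg-a -n gw-a \
  --vnet vnet-a --public-ip-address gw-a-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

az network public-ip create -g rg-b -n gw-b-pip --sku Standard
az network vnet-gateway create -g rg-b -n gw-b \
  --vnet vnet-b --public-ip-address gw-b-pip \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# Connect the gateways: one connection in each direction,
# using the same shared key on both sides.
az network vpn-connection create -g rg-a -n a-to-b \
  --vnet-gateway1 gw-a --vnet-gateway2 gw-b --shared-key 'PLACEHOLDER'
az network vpn-connection create -g rg-b -n b-to-a \
  --vnet-gateway1 gw-b --vnet-gateway2 gw-a --shared-key 'PLACEHOLDER'
```

Note that for two VNets in the same subscription, peering is usually cheaper and lower-latency than a gateway; the gateway route mainly helps when you need transitive routing or encryption in transit.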

Please see the referenced document for implementing the solution mentioned above.

For more information on the difference between VNet peering and VNet gateways, you can refer to this document.

The AKS resources are behind an internal load balancer, so peering alone did not help. I had to allow the public IP address provisioned during AKS creation in the NSG. After adding that public IP (available in the MC_rg-*** resource group), everything started working.
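Locating that public IP and allowing it can be sketched as below. The cluster and group names are placeholders; the node resource group (the `MC_*` group mentioned above) is reported by `az aks show`:

```shell
# Sketch: find the AKS node resource group and its public IP(s),
# then allow that IP on the VMs' NSG. All names are placeholders.
az aks show -g rg-aks -n aks-cluster --query nodeResourceGroup -o tsv

az network public-ip list -g MC_rg-aks_aks-cluster_eastus \
  --query '[].ipAddress' -o tsv

az network nsg rule create \
  --resource-group rg-servers --nsg-name vm-nsg \
  --name AllowAksOutboundIp --priority 310 \
  --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 443 \
  --source-address-prefixes <aks-public-ip>
```

This works because traffic leaving the cluster toward the peered VNet's public endpoints is SNATed through that outbound public IP, so the VMs' NSG sees it as the source address rather than a pod or node IP.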
