Cloud Networking: Virtual Networks, VPNs, and Connectivity

Cloud networking encompasses the logical and physical mechanisms through which cloud-hosted resources communicate with each other, with on-premises infrastructure, and with end users. This page describes the structural components of cloud networking — virtual private clouds, routing constructs, VPN configurations, and dedicated connectivity options — along with the scenarios that drive architecture decisions and the classification boundaries separating major connectivity approaches. For practitioners evaluating cloud architecture design or organizations assessing migration paths, networking topology is a foundational constraint that shapes performance, security, and cost simultaneously.


Definition and scope

Cloud networking refers to the set of virtualized and physical constructs that enable IP-based communication within and between cloud environments. Unlike traditional data center networking, cloud networking is defined almost entirely through software — routing tables, firewall rules, access control lists, and network address translation are configured via APIs rather than physical device provisioning.

NIST SP 800-145, the foundational US government definition of cloud computing, identifies "broad network access" as one of five essential characteristics of cloud services — meaning that cloud resources are inherently network-dependent from inception. This distinguishes cloud networking from ancillary infrastructure: it is a core service attribute, not an optional add-on.

The scope of cloud networking covers four primary domains:

  1. Virtual network segmentation — logical isolation of workloads within a provider's infrastructure
  2. Internet connectivity — inbound and outbound traffic management through gateways and load balancers
  3. VPN connectivity — encrypted tunnels linking cloud environments to remote or on-premises networks
  4. Dedicated/private interconnects — physical or carrier-provisioned links that bypass the public internet entirely

These domains are addressed in NIST SP 800-146, Cloud Computing Synopsis and Recommendations, which treats network boundaries as a primary risk surface requiring explicit architectural attention. Cloud networking decisions also intersect directly with cloud security posture and cloud identity and access management policy enforcement.


How it works

Cloud virtual networks are instantiated as Virtual Private Clouds (VPCs) — isolated logical network partitions within a provider's physical infrastructure. Each VPC receives a user-defined private IP address range (typically drawn from RFC 1918 space: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and is subdivided into subnets mapped to availability zones or regions.
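As a sketch of the address planning described above, carving a VPC's RFC 1918 range into per-zone subnets can be modeled with Python's standard ipaddress module. The CIDR block and zone names here are illustrative, not tied to any particular provider:

```python
import ipaddress

# Illustrative VPC CIDR drawn from RFC 1918 space
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into /20 subnets, one per availability zone
subnets = list(vpc.subnets(new_prefix=20))
zones = ["zone-a", "zone-b", "zone-c"]  # hypothetical zone names

for zone, subnet in zip(zones, subnets):
    print(f"{zone}: {subnet} ({subnet.num_addresses} addresses)")
```

Each /20 yields 4,096 addresses; providers typically reserve a handful per subnet for internal routing and DNS, so the usable count is slightly lower.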

Traffic routing within a VPC follows these structural steps:

  1. Subnet creation — Address space is divided into subnets, each associated with a routing table.
  2. Route table configuration — Rules specify next-hop destinations for traffic matching each IP prefix.
  3. Gateway attachment — An internet gateway (for public traffic) or NAT gateway (for private outbound traffic) is attached to enable external connectivity.
  4. Security layer enforcement — Network access control lists (NACLs) provide stateless filtering at the subnet level; security groups provide stateful filtering at the instance level.
  5. Peering or transit connections — Two or more VPCs can exchange traffic directly through VPC peering relationships or via a central transit gateway acting as a regional hub.
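
The route-table step above amounts to longest-prefix matching: a packet's destination is compared against every prefix, and the most specific match determines the next hop. A minimal sketch using the standard library (the route entries and next-hop names are hypothetical):

```python
import ipaddress

# Hypothetical route table: prefix -> next hop
routes = {
    "10.0.0.0/16": "local",           # intra-VPC traffic
    "192.168.0.0/16": "vgw-onprem",   # site-to-site VPN to on-premises
    "0.0.0.0/0": "igw-public",        # default route to the internet gateway
}

def next_hop(dst: str) -> str:
    """Return the next hop for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [
        (net, hop)
        for prefix, hop in routes.items()
        if addr in (net := ipaddress.ip_network(prefix))
    ]
    # Longest prefix (largest prefixlen) wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.4.7"))     # local
print(next_hop("192.168.1.1"))  # vgw-onprem
print(next_hop("8.8.8.8"))      # igw-public
```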

VPN connectivity extends this topology off-premises. A site-to-site VPN establishes an IPsec-encrypted tunnel between the cloud VPC and a corporate data center or branch office. A client VPN (sometimes called a remote-access VPN) connects individual endpoints to the cloud network using certificate-based or identity-provider-integrated authentication. The Internet Engineering Task Force (IETF) specifies the IPsec security architecture in RFC 4301; tunnel key negotiation is handled separately by the Internet Key Exchange protocol (IKEv2, RFC 7296).
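One practical prerequisite for a site-to-site tunnel: the VPC and on-premises address ranges must not overlap, or return traffic cannot be routed unambiguously. A quick pre-flight check with the standard library (the CIDRs below are illustrative, and deliberately chosen to conflict):

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")       # cloud side
onprem_cidr = ipaddress.ip_network("10.0.128.0/17")  # on-premises side

if vpc_cidr.overlaps(onprem_cidr):
    print(f"Conflict: {vpc_cidr} overlaps {onprem_cidr}; renumber one side")
else:
    print("Address plans are disjoint; safe to establish the tunnel")
```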

Dedicated interconnects — marketed under names like AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect — provision physical circuits through carrier partners, delivering throughput options that typically range from 50 Mbps to 100 Gbps with predictable latency profiles unavailable over shared internet paths. The broadband infrastructure standards referenced by the FCC provide a public baseline for distinguishing carrier-grade connectivity from consumer-grade alternatives.
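The spread between 50 Mbps and 100 Gbps matters in practice. As a rough back-of-envelope illustration (ignoring protocol overhead and assuming decimal terabytes), moving 10 TB takes dramatically different amounts of time across that range:

```python
def transfer_hours(data_tb: float, link_mbps: float) -> float:
    """Hours to move data_tb terabytes over a link_mbps link (no overhead)."""
    bits = data_tb * 1e12 * 8          # terabytes -> bits
    return bits / (link_mbps * 1e6) / 3600

for mbps in (50, 1_000, 10_000, 100_000):
    print(f"{mbps:>7} Mbps: {transfer_hours(10, mbps):8.1f} hours")
```

At 50 Mbps the transfer takes roughly 444 hours (about 18.5 days); at 100 Gbps it takes under 15 minutes. This is why bulk migrations often justify a dedicated circuit even when steady-state traffic would fit over a VPN.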

For workloads at the network edge, the architectural considerations covered under edge computing and cloud introduce additional routing complexity where latency-sensitive traffic must be handled before reaching a central cloud region.


Common scenarios

Cloud networking configurations align to four recurring deployment patterns:

Single-VPC application hosting — A self-contained application stack deployed within one VPC, using public subnets for load balancers and private subnets for application and database tiers. This model is standard for SaaS applications where no on-premises dependency exists.

Hybrid cloud connectivity — An enterprise links its on-premises directory services, storage systems, or ERP infrastructure to cloud workloads via site-to-site VPN or dedicated interconnect. This scenario dominates cloud migration projects, where legacy systems cannot be moved immediately. Cloud migration planning requires resolving routing, DNS, and firewall policy before workloads transition.

Multi-region or multi-cloud mesh — Organizations operating in geographically distributed regions or across two or more providers implement transit gateway architectures or SD-WAN overlays to manage inter-region and inter-provider routing from a central policy plane. This scenario is common in cloud disaster recovery architectures where failover regions must maintain live routing adjacency.

Regulated workload isolation — Compliance requirements under frameworks such as NIST SP 800-53 (for federal systems) or HIPAA mandate network segmentation between data classifications. Isolated VPCs with no peering routes between production and development environments, combined with VPC flow log auditing, satisfy boundary enforcement controls. These controls are directly related to the posture described in cloud compliance and regulations.


Decision boundaries

Selecting among VPN, dedicated interconnect, and VPC-native connectivity involves discrete tradeoffs across five dimensions:

Dimension                Site-to-Site VPN                 Dedicated Interconnect     VPC Peering / Transit Gateway
Traffic path             Public internet (encrypted)      Private carrier circuit    Provider backbone only
Setup time               Minutes to hours                 Days to weeks              Minutes
Bandwidth ceiling        ~1.25 Gbps per tunnel (typical)  Up to 100 Gbps             Provider-limited
Latency predictability   Variable (internet-dependent)    High                       High (intra-provider)
Cost structure           Per-hour + data transfer         Port fee + data transfer   Data transfer only
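The tradeoffs in the table can be read as a rough decision procedure. The sketch below encodes one plausible policy; the 1.25 Gbps threshold mirrors the typical per-tunnel ceiling noted above, but the rule set is illustrative, not provider guidance:

```python
def pick_connectivity(bandwidth_gbps: float, needs_private_path: bool,
                      cloud_to_cloud: bool) -> str:
    """Illustrative policy mapping requirements to a connectivity option."""
    if cloud_to_cloud:
        return "VPC peering / transit gateway"   # stays on the provider backbone
    if needs_private_path or bandwidth_gbps > 1.25:
        return "Dedicated interconnect"          # bypasses the public internet
    return "Site-to-site VPN"                    # encrypted over the internet

print(pick_connectivity(0.5, False, False))  # Site-to-site VPN
print(pick_connectivity(10, True, False))    # Dedicated interconnect
print(pick_connectivity(2, False, True))     # VPC peering / transit gateway
```

Real selection also weighs setup time and cost structure, which a one-shot function like this deliberately omits.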

The NIST Cybersecurity Framework (CSF) places network segmentation and traffic encryption among the core protective measures of its Identify and Protect functions — a framing that makes VPN a minimum baseline rather than an advanced capability for any workload handling sensitive data.

VPN vs. dedicated interconnect is primarily a bandwidth and compliance question. Organizations subject to FedRAMP High authorization must use connections that meet specific confidentiality and integrity controls; a shared internet path, even when encrypted, may not satisfy boundary protection requirements under NIST SP 800-53 Rev 5, control SC-7 (Boundary Protection). A dedicated interconnect eliminates the public internet segment entirely.

VPC peering vs. transit gateway resolves to scale. Peering relationships are non-transitive — if VPC A peers with VPC B, and VPC B peers with VPC C, VPC A cannot reach VPC C through that chain. Transit gateway introduces a hub-and-spoke topology that scales to hundreds of VPCs under a single routing domain, at additional per-attachment and per-GB cost.
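The non-transitivity rule can be made concrete. Treating peering relationships as point-to-point links, a VPC reaches only its direct peers, never peers-of-peers; a transit gateway, modeled as a hub attached to every spoke, restores full reachability. A sketch with hypothetical VPC names:

```python
# Peering links are point-to-point and non-transitive
peering = {("A", "B"), ("B", "C")}

def peered(x: str, y: str) -> bool:
    """Reachable over peering only if a direct link exists."""
    return (x, y) in peering or (y, x) in peering

print(peered("A", "B"))  # True  (direct peer)
print(peered("A", "C"))  # False (no transit through B)

# A transit gateway is a hub: every attached VPC reaches every other
attached = {"A", "B", "C"}

def via_tgw(x: str, y: str) -> bool:
    return x in attached and y in attached and x != y

print(via_tgw("A", "C"))  # True
```

The cost of that convenience is the per-attachment and per-GB pricing noted above, plus a single routing domain that must be managed centrally.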

Cloud cost management analysis consistently identifies data egress charges as the largest variable in cloud networking budgets — a structural pricing dynamic across all major providers. Cloud performance optimization strategies frequently target network topology first, since misrouted traffic introduces latency that cannot be corrected at the application layer.


