<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y-wMkHFcpMgOJtYixfAy_w.png" /></figure><p>Google Cloud is one of the major cloud “hyperscalers”. Hyperscalers are designed for massive capacity: they possess immense data center networks spread globally, allowing them to handle the enormous computing demands of large enterprises and applications with vast user bases. With that, hyperscalers can enable applications with unsurpassed scalability, reliability and global reach.</p><p>In this article we will explore the levels of application availability achievable in Google Cloud, with a focus on private internal networks. We’ll also provide actual infrastructure configuration examples.</p><p>Let’s imagine a business-critical web application or API that provides an important service to business customers or end users, potentially internal ones. The business typically requires the application to minimize its downtime and to remain accessible and responsive to its users. A common measure of success is the application service uptime, often aiming for targets like “99.99%” (“four nines”) or even “99.999%” (“five nines”), which translate into roughly 53 and 5 minutes of allowed downtime per year respectively.</p><p>The typical mechanisms that an application design can rely upon to improve availability (as measured by uptime) are:</p><ul><li><strong>Redundancy</strong> — run the application on multiple independent hardware instances</li><li><strong>Load Balancing </strong>— distribute incoming network traffic across multiple application instances running on multiple independent hardware instances</li><li><strong>Failover</strong> — mechanisms to automatically detect failures and switch operation to a working application instance seamlessly</li><li><strong>Monitoring & Alerting</strong> — robust monitoring systems to detect problems quickly and preferably proactively notify the team responsible for addressing them</li><li><strong>Self-healing</strong> — ability of the application components to restart themselves or re-provision failing resources with minimal manual intervention</li></ul><p>In this article we will concentrate on how Google Cloud can help with the first three means of improving cloud application availability: redundancy, load balancing and failover.</p><h3><strong>Redundancy</strong></h3><p>A single application instance, or an application running in a single failure domain, cannot sustain an underlying hardware failure, so the application would not be available to end users during a hardware outage:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mW4jv8zKfyr1rRv9xf-t7g.png" /><figcaption>Fig. 1: Single application instance on single GCE VM</figcaption></figure><p>If our business objectives require addressing only a single Google Compute Engine (GCE) VM outage, we need to apply <strong>Redundancy</strong> and <strong>Load Balancing</strong> to improve application availability and resilience to that failure scenario:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DpDuW6cKQnoVzTmeuk35Vw.png" /><figcaption>Fig. 2: Multiple application instances in single GCE Zone</figcaption></figure><p>This setup addresses the outage of a single GCE VM or application instance.</p><p>Google Cloud hardware is organized into <a href="https://cloud.google.com/compute/docs/regions-zones/zone-virtualization"><em>clusters</em></a>. 
A cluster represents a set of compute, network, and storage resources supported by building, power, and cooling infrastructure. Infrastructure components typically support a single cluster, ensuring that clusters share few dependencies. However, components with highly demonstrated reliability and downstream redundancy can be shared between clusters. For example, multiple clusters typically share a utility grid substation because substations are extremely reliable and clusters use redundant power systems.</p><p>A <a href="https://cloud.google.com/compute/docs/regions-zones"><em>zone</em></a> is a deployment area within a region, and Compute Engine implements a layer of abstraction between zones and the physical clusters where the zones are hosted. Each zone is hosted in one or more clusters, and you can check the <a href="https://cloud.google.com/compute/docs/regions-zones/zone-virtualization">Zone virtualization</a> article for more details about that mapping.</p><p>To simplify reasoning without sacrificing accuracy, it is fair to assume that a GCE zone is a deployment area within a geographic region mapped to one or more clusters that can fail together, e.g. because of a power supply outage.</p><p>A GCE zone outage is <a href="https://status.cloud.google.com/summary">not an impossible scenario</a>, and a highly reliable application on Google Cloud typically seeks to sustain its service during such an unfortunate event by running application replicas in multiple zones:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BGaUrlTA6NCfc0RbYaClkQ.png" /><figcaption>Fig. 3: Multiple application instances in multiple GCE Zones</figcaption></figure><p>With the high zone availability <a href="https://cloud.google.com/compute/sla">SLA levels</a> provided by Google Compute Engine, the application setup in Figure 3 should be sufficient for the majority of business use cases, even for very demanding customers requiring high application service SLA levels.</p><p>Unfortunately, a full region outage is also <a href="https://www.businessinsider.com/google-cloud-data-center-london-outage-hottest-day-record-uk-2022-7">not an impossible scenario</a>.</p><p>The power of cloud hyperscalers lies especially in that they provide customers with significantly better tools to survive disasters similar to <a href="https://www.reuters.com/article/idUSKBN2B20NT/">this one</a> than smaller cloud providers can offer. Amongst other things, that is what differentiates “hyperscalers” from small-scale or localized cloud service providers. In Google Cloud an application can run its replicas not only on power-independent hardware within one data center or geographic location (likely connected to the same nearby power plant) but also across geographic locations and even across continents!</p><p>This brings us to the next level of application redundancy that is possible with Google Cloud: multi-regional application deployment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jE7J83TUsHoguxvrK5OOMg.png" /><figcaption>Fig. 
4: Multiple application instances in multiple GCE Regions with global external load balancing</figcaption></figure><p>With that, a business-critical application can now have a strategy for an entire site (region) failure and promise uptime to its critical clients even in that unlikely case.</p><h3>Load Balancing</h3><p>There needs to be some magic happening in order to seamlessly direct clients from around the world to the application instance replicas running in multiple geographic locations. And not only that. Whenever a VM, a GCE zone or even a full region goes down, that magic needs to seamlessly redirect application clients to healthy locations in a surviving region.</p><p>What are the options for load balancing that Google Cloud provides?</p><p>In Figure 4 the load balancer is located in Google Cloud but outside of any particular region. That kind of global service can be provided by the following <a href="https://cloud.google.com/load-balancing/docs/application-load-balancer">types</a> of Google Cloud load balancers:</p><ul><li>Global External Application Load Balancer</li><li>Classic Application Load Balancer in Premium Tier</li><li>Global External proxy Network Load Balancer</li><li>Classic Proxy Network Load Balancer</li></ul><p>Load balancers of all of these types balance traffic coming from clients on the internet to the workloads running on Google Cloud.</p><p>An enterprise organization on Google Cloud would typically keep its VPC networks private and expose application workloads only to internal company clients, which are also often located across the world.</p><p><em>Internal</em> load balancers on Google Cloud restrict access to the application to clients in internal networks only. Unlike the global external ones, <em>internal</em> load balancers on Google Cloud currently rely on regional infrastructure. Availability of the applications exposed by internal load balancers can hence be affected by a single cloud region outage.</p><p>That means that for internal clients the multi-regional application deployment depicted in Figure 4 logically changes to:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xauNw4H1lLTDmhzblFpk2g.png" /><figcaption>Fig. 5: Multiple application instances in multiple GCE Regions with internal load balancing</figcaption></figure><p>The choice of internal load balancers on Google Cloud is even wider:</p><ul><li>Regional Internal Application Load Balancer</li><li>Cross-region Internal Application Load Balancer</li><li>Regional internal proxy Network Load Balancer</li><li>Cross-region internal proxy Network Load Balancer</li><li>Internal passthrough Network Load Balancer</li></ul><p>There is an open question with regional internal load balancing though. How would application clients detect a full region outage and seamlessly fail over to a healthy region (the high bar we have set for ourselves)?</p><p>To address that challenge we can resort to a well-known technique called <a href="https://en.wikipedia.org/wiki/Round-robin_DNS">DNS load balancing</a> (or round-robin DNS).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*17RU8OR_tQTt7IOethwokQ.png" /><figcaption>Fig. 
6: Internal DNS load balancing with regional application backends</figcaption></figure><p>The fully managed Google Cloud DNS service offers a convenient tool to set up such cross-regional client access: <a href="https://cloud.google.com/dns/docs/policies-overview#geolocation-policy">Geolocation routing policies</a>. The feature “<em>lets you map traffic originating from source geographies (Google Cloud regions) to specific DNS targets. Use this policy to distribute incoming requests to different service instances based on the traffic’s origin. You can use this feature with the internet, with external traffic, or with traffic originating within Google Cloud and bound for internal passthrough Network Load Balancers. Cloud DNS uses the region where the queries enter Google Cloud as the source geography.</em>”</p><p>With Cloud DNS Geolocation routing policies in the setup depicted in Figure 6, application clients automatically receive from the Cloud DNS server the IP address of the internal load balancer nearest to their geographic location.</p><p>Please note that Google Cloud DNS is a fully managed <em>global</em> service offering impressive <a href="https://cloud.google.com/dns/sla">SLO targets</a>. The DNS cache on the application client side helps keep the application service available in the rare case of a Cloud DNS service outage.</p><p>In fact, many parts of the <a href="https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-internal">Cross-region Internal Application Load Balancers </a>are <em>global</em> Google Cloud resources as well. Here is a more detailed diagram borrowed from the public Google Cloud pages:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PFjluXOPFbz_x475" /><figcaption>Fig. 7: Global resources of the internal cross-region load balancer</figcaption></figure><h3>Failover</h3><p>But what exactly happens to the client connections and overall application availability in case of an individual GCE VM, zone or region outage? How would a DNS service know that it needs to resolve application hostnames to a different IP address to direct clients to another (healthy) region?</p><p>The Cloud DNS service, and its Geolocation routing policies in particular, has yet another feature which completes the multi-regional application deployment puzzle: <a href="https://cloud.google.com/dns/docs/zones/manage-routing-policies#health-checks">Health Checks</a>.</p><p>For Internal Passthrough Network Load Balancers (L4), Cloud DNS checks the health information on the load balancer’s individual backend instances to determine if the load balancer is healthy or unhealthy. Cloud DNS applies a default 20% threshold: if at least 20% of backend instances are healthy, the load balancer endpoint is considered healthy (for example, with five backend VMs the endpoint stays healthy as long as at least one of them passes its health check). DNS routing policies mark the endpoint as healthy or unhealthy based on this threshold, routing traffic accordingly.</p><p>For Internal Application Load Balancers and Cross-region Internal Application Load Balancers, Cloud DNS checks the overall health of the internal Application Load Balancer, and lets the internal Application Load Balancer itself check the health of its backend instances.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m6FRNR7-8iegZykNfqZcSA.png" /><figcaption>Fig. 
8: Internal DNS load balancing with cross-region application backends with health checks</figcaption></figure><p>Cloud DNS health checks are a crucial solution component for achieving maximum application uptime. Without the ability to test the current status of the internal load balancers and application backends, Cloud DNS would not be able to decide which IP address should be returned to clients on their DNS requests in case of a region outage. Hence, seamless client failover to the healthy application instances would not be possible.</p><p>Please note that in order to achieve the best results, the Time-To-Live (TTL) parameter of the application A record in Cloud DNS needs to be set to a minimal value. It could even be zero, in which case clients would contact DNS for the current IP before every call to the application service. The choice of the DNS record TTL value is a tradeoff between the application availability requirements on one side and DNS service load and client response latency on the other.</p><p>Internal load balancers maintain their own application backend health checks (Cloud DNS health checks use a different mechanism), and in the case of the cross-region internal Application Load Balancers, a load balancer operating in a particular region can automatically fail over and redirect client requests to the application replicas running in another, healthy region.</p><p>This setup addresses the “partial” region outage scenario. That is when only the application backend instances are unavailable (e.g. GCE VMs are down or there is an error in the application preventing it from accepting incoming connections) but other services in the affected region (such as networking and load balancing) continue working.</p><h3>Configuration with Managed Instance Groups</h3><p>Let’s combine all pieces of the HA solution discussed above into a single picture and see how the Google Cloud resources need to be configured together to achieve the desired effect.</p><p>The GCE Managed Instance Group based scenarios discussed in this article are also relevant to GKE, the managed Kubernetes service on Google Cloud. Kubernetes node pools in GKE are implemented as GCE MIGs. Hence, a Kubernetes workload deployed to GKE can be made multi-regional by deploying the application service to several GKE clusters in distinct regions. The load balancer resources for such a setup can be provisioned using</p><ul><li><a href="https://gateway-api.sigs.k8s.io/">Kubernetes Gateway API</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gateway-api#gateway_controller">GKE Gateway Controller</a> in GKE clusters</li><li><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress">Multi Cluster Ingress</a> resources in GKE clusters</li><li>Terraform resources (outside of GKE cluster)</li></ul><p>In this example we will use Terraform, a common tool for declarative cloud infrastructure definition and provisioning, to set up the load balancers, but it is also possible to achieve the same setup using the other two approaches.</p>
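<p>If you would like to follow along, the full configuration used throughout the rest of the article lives in a public GitHub example and can be cloned and applied from its example directory. A quick sketch of the steps (the repository is the one linked in the next sentence):</p><pre>git clone https://github.com/GoogleCloudPlatform/professional-services.git<br>cd professional-services/examples/cloud-dns-load-balancing</pre><p>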
You can find the full Terraform example in <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing">this</a> GitHub project.</p><p>Our first example will be based on the regional <a href="https://cloud.google.com/load-balancing/docs/internal">Internal Network Passthrough Load Balancers</a> and Google Compute Engine (GCE) <a href="https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups">Managed Instance Groups</a> (MIGs).</p><p>In the last section of this article we’ll discus pros and cons of different load balancer and application backends combinations.</p><p><a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/mig.tf">First</a> we define the GCE MIGs in two Google Cloud regions:</p><pre>// modules/mig/mig.tf:<br>module "gce-container" {<br> source = "terraform-google-modules/container-vm/google"<br> container = {<br> image = var.image<br> env = [<br> {<br> name = "NAME"<br> value = "hello"<br> }<br> ]<br> }<br>}<br><br>data "google_compute_default_service_account" "default" {<br>}<br>module "mig_template" {<br> source = "terraform-google-modules/vm/google//modules/instance_template"<br> version = "~> 10.1"<br> network = var.network_id<br> subnetwork = var.subnetwork_id<br> name_prefix = "mig-l4rilb"<br> service_account = {<br> email = data.google_compute_default_service_account.default.email<br> scopes = ["cloud-platform"]<br> }<br> source_image_family = "cos-stable"<br> source_image_project = "cos-cloud"<br> machine_type = "e2-small"<br> source_image = reverse(split("/", module.gce-container.source_image))[0]<br> metadata = merge(var.additional_metadata, { "gce-container-declaration" = module.gce-container.metadata_value })<br> tags = [<br> "container-vm-test-mig"<br> ]<br> labels = {<br> "container-vm" = module.gce-container.vm_container_label<br> }<br>}<br><br>module "mig" {<br> source = "terraform-google-modules/vm/google//modules/mig"<br> version = "~> 10.1"<br> project_id = var.project_id<br><br> region = var.location<br> instance_template = module.mig_template.self_link<br> hostname = "${var.name}"<br> target_size = "1"<br> <br> autoscaling_enabled = "true"<br> min_replicas = "1"<br> max_replicas = "1"<br> named_ports = [{<br> name = var.lb_proto<br> port = var.lb_port<br> }] <br><br> health_check_name = "${var.name}-http-healthcheck"<br> health_check = {<br> type = "http"<br> initial_delay_sec = 10<br> check_interval_sec = 2<br> healthy_threshold = 1<br> timeout_sec = 1<br> unhealthy_threshold = 1<br> port = 8080<br> response = ""<br> proxy_header = "NONE"<br> request = ""<br> request_path = "/"<br> host = ""<br> enable_logging = true<br> }<br>}<br><br>// mig.tf<br>module "mig-l4" {<br> for_each = var.locations<br> source = "./mig"<br> project_id = var.project_id<br> location = each.key<br> network_id = data.google_compute_network.lb_network.id<br> subnetwork_id = data.google_compute_subnetwork.lb_subnetwork[each.key].id<br> name = "failover-l4-${each.key}"<br> image = var.image<br>}</pre><p><a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/l4-rilb-mig.tf">Then</a>, let’s define two Cross-regional Internal Network Passthrough Load Balancers (L4 ILBs), each in respective region:</p><pre>// modules/l4rilb/l4-rilb.tf<br>locals {<br> named_ports = [{<br> name = var.lb_proto<br> port = var.lb_port<br> }]<br> health_check = {<br> type = var.lb_proto<br> check_interval_sec = 1<br> 
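 // Aggressive probe timings: combined with the 1s check interval above and unhealthy_threshold = 5 below,<br> // a failing backend is marked unhealthy after roughly five seconds.<br>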
healthy_threshold = 4<br> timeout_sec = 1<br> unhealthy_threshold = 5<br> response = ""<br> proxy_header = "NONE"<br> port = var.lb_port<br> port_name = "health-check-port"<br> request = ""<br> request_path = "/"<br> host = "1.2.3.4"<br> enable_log = false<br> }<br>}<br><br>module "l4rilb" {<br> source = "GoogleCloudPlatform/lb-internal/google"<br> project = var.project_id<br> region = var.location<br> name = "${var.lb_name}"<br> ports = [local.named_ports[0].port]<br> source_tags = ["allow-group1"]<br> target_tags = ["container-vm-test-mig"]<br> health_check = local.health_check<br> global_access = true<br><br> backends = [<br> {<br> group = var.mig_instance_group<br> description = ""<br> failover = false<br> },<br> ]<br>}<br><br>// l4-rilb-mig.tf<br>module "l4-rilb" {<br> for_each = var.locations<br> source = "./modules/l4rilb"<br> project_id = var.project_id<br> location = each.key<br> lb_name = "l4-rilb-${each.key}"<br> mig_instance_group = module.mig-l4[each.key].instance_group<br> image = var.image<br> network_id = data.google_compute_network.lb_network.id<br> subnetwork_id = data.google_compute_subnetwork.lb_subnetwork[each.key].name<br><br> depends_on = [ <br> google_compute_subnetwork.proxy_subnetwork <br> ]<br>}</pre><p>And now let’s also <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/dns-l4-rilb-mig.tf">add</a> the global Cloud DNS record set configuration:</p><pre>// dns-l4-rilb-mig.tf<br>resource "google_dns_record_set" "a_l4_rilb_mig_hello" {<br> name = "l4-rilb-mig.${google_dns_managed_zone.hello_zone.dns_name}"<br> managed_zone = google_dns_managed_zone.hello_zone.name<br> type = "A"<br> ttl = 1<br><br> routing_policy {<br> dynamic "geo" {<br> for_each = var.locations<br> content {<br> location = geo.key<br> health_checked_targets {<br> internal_load_balancers {<br> ip_address = module.l4-rilb[geo.key].lb_ip_address<br> ip_protocol = "tcp"<br> load_balancer_type = "regionalL4ilb"<br> network_url = data.google_compute_network.lb_network.id<br> port = "8080"<br> region = geo.key<br> project = var.project_id<br> }<br> }<br> }<br> }<br> } <br>}</pre><p>After we apply the Terraform configuration to the target Google Cloud project:</p><pre>terraform init<br>terraform plan<br>terraform apply</pre><p>we get all solution infrastructure components including a test application running in the GCE VMs in two distinct Google Cloud regions needed to perform end-to-end testing.</p><p>Let’s see how the clients can now access our application.</p><p>For testing of continuous request flow we can use the <a href="https://fortio.org/">Fortio</a> tool, which is a common tool for testing service mesh application performance. 
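</p><p>Before starting the load test it can be useful to check which internal load balancer address Cloud DNS currently hands out to this client. A small sketch, assuming the dig utility is installed on the jumpbox; the hostname matches the one used by the Fortio command below (adjust it to the A record actually created in your zone):</p><pre>dig +short l4mig.hello.zone</pre><p>The answer should be the IP address of the internal passthrough load balancer in the region nearest to the VM issuing the query.</p>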
<p>We will run Fortio from a GCE VM attached to the same VPC where the load balancers are installed:</p><pre>gcloud compute ssh jumpbox<br><br>docker run fortio/fortio load --https-insecure -t 1m -qps 1 http://l4mig.hello.zone:8080</pre><p>The results after a minute of execution should look similar to the following:</p><pre>IP addresses distribution:<br>10.156.0.11:8080: 1<br>Code 200 : 258 (100.0 %)<br>Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620<br>Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983<br>All done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps</pre><p>Note the IP address in the output: it belongs to the L4 internal regional load balancer in the nearest region, which is receiving all of the calls.</p><p>In a second console window, SSH into the VM of the GCE MIG in the nearest region:</p><pre>export MIG_VM=$(gcloud compute instances list --format="value[](name)" --filter="name~l4-europe-west3")<br>export MIG_VM_ZONE=$(gcloud compute instances list --format="value[](zone)" --filter="name=${MIG_VM}")<br><br>gcloud compute ssh --zone $MIG_VM_ZONE $MIG_VM --tunnel-through-iap --project $PROJECT_ID<br><br>docker ps</pre><p>Now let’s run the load test in the first console window again.</p><p>While the test is running, switch to the second console window and stop the application container (its ID is shown by docker ps):</p><pre>docker stop ${CONTAINER}</pre><p>Switch to the first console window and notice the failover happening. The output at the end of the execution should look like the following:</p><pre>IP addresses distribution:<br>10.156.0.11:8080: 16<br>10.199.0.48:8080: 4<br>Code -1 : 12 (10.0 %)<br>Code 200 : 108 (90.0 %)<br>Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620<br>Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983<br>All done 120 calls (plus 4 warmup) 83.180 ms avg, 2.0 qps</pre><p>Please note that the service VM in the Managed Instance Group has been automatically restarted. This functionality is provided by Google Compute Engine Managed Instance Groups and implements the fourth component of the application high availability posture from the beginning of the article: <strong><em>self-healing</em></strong>.</p><h3>Cloud Run Backends</h3><p>Let’s consider a second scenario and assume that our cloud-native application is implemented as a <a href="https://cloud.google.com/run">Cloud Run</a> service.</p><p>The Cloud Run based scenarios discussed in this article are relevant for <a href="https://cloud.google.com/functions/docs/concepts/version-comparison#new-in-2nd-gen">Cloud Functions</a> (2nd generation) application backends as well. Cloud Functions can be configured as load balancer backends similarly to Cloud Run instances, using the same Serverless Network Endpoint Group resources.</p><p>The overall multi-region application deployment changes slightly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tBWHPTGg6H1Tne0vW6Ip0w.png" /><figcaption>Fig. 
9: Internal DNS load balancing with cross-region application backends in Cloud Run</figcaption></figure><p>We <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/modules/cr/cr2.tf">start</a> with defining two regional Cloud Run instances allowing invocations from unauthenticated clients:</p><pre>// modules/cr/cr2.tf<br>resource "google_cloud_run_v2_service" "cr_service" {<br> project = var.project_id<br> name = "cr2-service" <br> location = var.location<br> launch_stage = "GA"<br><br> ingress = "INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER"<br> custom_audiences = [ "cr-service" ]<br><br> template {<br> containers {<br> image = "gcr.io/cloudrun/hello" # public image for your service<br> }<br> }<br> traffic {<br> percent = 100<br> type = "TRAFFIC_TARGET_ALLOCATION_TYPE_LATEST"<br> }<br>}<br><br>resource "google_compute_region_network_endpoint_group" "cloudrun_v2_sneg" {<br> name = "cloudrun-sneg"<br> network_endpoint_type = "SERVERLESS"<br> region = var.location<br> cloud_run {<br> service = google_cloud_run_v2_service.cr_service.name<br> }<br>}<br><br>resource "google_cloud_run_v2_service_iam_member" "public-access" {<br> name = google_cloud_run_v2_service.cr_service.name<br> location = google_cloud_run_v2_service.cr_service.location<br> project = google_cloud_run_v2_service.cr_service.project<br> role = "roles/run.invoker"<br> member = "allUsers"<br>}<br><br>// cr.tf<br>module "cr-service" {<br> for_each = var.locations<br> source = "./modules/cr"<br> project_id = var.project_id<br> location = each.key<br> image = var.image<br>}</pre><p>And <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/l7-crilb-cr.tf">then</a> define global Internal Cross-Region Application Load Balancer resources:</p><pre>// modules/l7crilb/l7-crilb.tf<br>resource "google_compute_global_forwarding_rule" "forwarding_rule" {<br> for_each = var.subnetwork_ids<br> project = var.project_id<br> <br> name = "${var.lb_name}-${each.key}"<br><br> ip_protocol = "TCP"<br> load_balancing_scheme = "INTERNAL_MANAGED"<br> port_range = var.lb_port<br> target = google_compute_target_https_proxy.https_proxy.self_link<br> network = var.network_id<br> subnetwork = each.value<br>}<br><br>resource "google_compute_target_https_proxy" "https_proxy" {<br> project = var.project_id<br><br> name = "${var.lb_name}"<br> url_map = google_compute_url_map.url_map.self_link<br><br> certificate_manager_certificates = [<br> var.certificate_id<br> ]<br> lifecycle {<br> ignore_changes = [<br> certificate_manager_certificates<br> ]<br> }<br>}<br><br>resource "google_compute_url_map" "url_map" {<br> project = var.project_id<br><br> name = "${var.lb_name}"<br> default_service = google_compute_backend_service.backend_service.self_link<br>}<br><br>resource "google_compute_backend_service" "backend_service" {<br> project = var.project_id<br><br> load_balancing_scheme = "INTERNAL_MANAGED"<br> session_affinity = "NONE"<br> <br> dynamic "backend" {<br> for_each = var.backend_group_ids<br> content {<br> group = backend.value<br> balancing_mode = var.balancing_mode<br> capacity_scaler = 1.0 <br> }<br> }<br><br> name = "${var.lb_name}"<br> protocol = var.backend_protocol<br> timeout_sec = 30<br><br> // "A backend service cannot have a healthcheck with Serverless network endpoint group backends"<br> health_checks = var.is_sneg ? 
null : [google_compute_health_check.health_check.self_link]<br><br> outlier_detection {<br> base_ejection_time {<br> nanos = 0<br> seconds = 1<br> }<br> consecutive_errors = 3<br> enforcing_consecutive_errors = 100<br> interval {<br> nanos = 0<br> seconds = 1<br> }<br> max_ejection_percent = 50<br> }<br><br>}<br><br>resource "google_compute_health_check" "health_check" {<br> project = var.project_id<br><br> name = "${var.lb_name}"<br> http_health_check {<br> port_specification = "USE_SERVING_PORT"<br> }<br>}<br><br>// l7-crilb-cr.tf<br>module "l7-crilb-cr" {<br> source = "./modules/l7crilb"<br> project_id = var.project_id<br> lb_name = "l7-crilb-cr"<br><br> network_id = data.google_compute_network.lb_network.name<br> subnetwork_ids = { for k, v in data.google_compute_subnetwork.lb_subnetwork : k => v.id }<br> certificate_id = google_certificate_manager_certificate.ccm-cert.id<br> backend_group_ids = [ for k, v in module.cr-service : v.sneg_id ]<br> is_sneg = true<br>}</pre><p>Please note that all load balancer related resources in this case are global (not regional).</p><p>In this demo case we need to <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/dns-l7-crilb-cr.tf">define</a> the Cloud DNS resources as well:</p><pre>// dns-l7-crilb-cr.tf<br>resource "google_dns_record_set" "a_l7_crilb_cr_hello" {<br> name = "l7-crilb-cr.${google_dns_managed_zone.hello_zone.dns_name}"<br> managed_zone = google_dns_managed_zone.hello_zone.name<br> type = "A"<br> ttl = 1<br><br> routing_policy {<br> dynamic "geo" {<br> for_each = var.locations<br> content {<br> location = geo.key<br> health_checked_targets {<br> internal_load_balancers {<br> ip_address = module.l7-crilb-cr.lb_ip_address[geo.key]<br> ip_protocol = "tcp"<br> load_balancer_type = "globalL7ilb"<br> network_url = data.google_compute_network.lb_network.id<br> port = "443"<br> project = var.project_id<br> }<br> }<br> }<br> }<br> } <br>}</pre><p>For each region where the Cloud Run instance with our application is running we need to create a dedicated Cloud DNS routing policy.</p><p>Let’s now apply the Terraform to the target Google Cloud project and see how the clients can access our Cloud Run application.</p><p>Similarly to the Network Passthrough Load Balancer case described in the previous section, we’ll call our application endpoint, now exposed by Cloud Run, via the configured FQDN hostname:</p><pre>gcloud compute ssh jumpbox</pre><pre>docker run fortio/fortio load --https-insecure \<br> -t 5m -qps 1 https://l7-crilb-cr.hello.zone</pre><p>The results after a minute of execution should look similar to the following:</p><pre>IP addresses distribution:<br>10.156.0.55:443: 4<br>Code 200 : 8 (100.0 %)<br>Response Header Sizes : count 8 avg 216 +/- 0 min 216 max 216 sum 1728<br>Response Body/Total Sizes : count 8 avg 226 +/- 0 min 226 max 226 sum 1808<br>All done 8 calls (plus 4 warmup) 17.066 ms avg, 1.4 qps</pre><p>With our Fortio setup of one call per second, all calls have reached their destination.</p><p>The IP address that shows up in the output is the IP of the L7 internal cross-regional load balancer in the nearest region that is receiving all of our calls at the moment.</p>
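<p>Optionally, open a third console window and keep an eye on what Cloud DNS returns for the application hostname during the experiment that follows. This is a small sketch; it assumes the watch and dig utilities are available on the jumpbox:</p><pre>watch -n 5 "dig +short l7-crilb-cr.hello.zone"</pre><p>As long as the load balancer infrastructure in the nearest region stays up, this keeps returning the same regional load balancer IP address, even while individual backends are removed.</p>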
<p>To simulate a Cloud Run backend service outage while the Fortio test started in the previous step is still running, we can delete, in the second console window, the backend resource in the nearest region from the load balancer backend service definition, e.g.:</p><pre>gcloud compute backend-services remove-backend l7-crilb-cr \<br> --network-endpoint-group=cloudrun-sneg \<br> --network-endpoint-group-region=europe-west3 \<br> --global</pre><p>We can also check to which regions the load balancer sends the traffic:</p><pre>gcloud compute backend-services list --filter="name:l7-crilb-cr"<br><br>NAME BACKENDS PROTOCOL<br>l7-crilb-cr us-central1/networkEndpointGroups/cloudrun-sneg HTTPS</pre><p>There is only one backend left running in the remote region. Yet, the Fortio results in the first console session show no hiccup or interruption:</p><pre>IP addresses distribution:<br>10.156.0.55:443: 4<br>Code 200 : 300 (100.0 %)<br>Response Header Sizes : count 300 avg 216.33333 +/- 1.106 min 216 max 220 sum 64900<br>Response Body/Total Sizes : count 300 avg 226.33333 +/- 1.106 min 226 max 230 sum 67900<br>All done 300 calls (plus 4 warmup) 193.048 ms avg, 1.0 qps</pre><p>What we have seen so far was the failover on the backend side of the Internal Cross-Region Application Load Balancer. That is, the client application (Fortio) was still accessing the load balancer IP address in the nearest europe-west3 region. That can also be verified by running host l7-crilb-cr.hello.zone, which will return the internal load balancer IP address from the subnetwork in the europe-west3 region.</p><p>What would happen in case of a full local region outage?</p><p>The first use case discussed above (Network Passthrough Load Balancer with MIG backends) illustrates that case. The Cloud DNS L4 health checks for the Network Passthrough Load Balancer test the connection all the way through to the actual application process running in the GCE VMs (it is not possible to configure this type of load balancer with <a href="https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts">Serverless Network Endpoint Groups</a> backends for Cloud Run instances) and automatically flip the IP address for the application service hostname to the load balancer IP address in another region.</p><p>Unfortunately, the <a href="https://cloud.google.com/dns/docs/zones/manage-routing-policies#health-checks">Cloud DNS health checks</a> for application (L7) load balancers cannot detect the outage of the application backend service with that fidelity level yet. Regional and Cross-region Application Load Balancers are built on Envoy proxies internally, and the Cloud DNS health checks for Envoy proxy based load balancers only check the state and availability of the Envoy proxy instances, not the application backends themselves.</p><p>If an application running in Cloud Run malfunctions (e.g. as a result of an internal program error) and returns 500 response codes, Cloud DNS won’t detect that and won’t switch the load balancer IPs for the application hostname. That situation would, however, be detected by the <a href="https://cloud.google.com/load-balancing/docs/https/setting-up-global-traffic-mgmt#configure_outlier_detection">Outlier Detection</a> feature of the Internal Cross-region Application Load Balancer, which would redirect traffic to the healthy backend based on the rate of successful calls towards each backend.</p><p>A missing load balancer backend is not considered an outage by the Cloud DNS health checks, though. 
When the backend resources of a Cross-region Application Load Balancer are not configured properly, when the load balancer has no backends at all, or when its backends are malfunctioning, Cloud DNS will not take action and will not flip the load balancer IPs automatically. The Cloud DNS health checks only check the availability of the internal Google Cloud infrastructure (Envoy proxies) supporting the application (L7) load balancers.</p><p>Yet, in case of a full Google Cloud region outage the load balancer infrastructure itself would become unavailable, and the Cloud DNS health checks would detect that and act as expected.</p><h3>Options Choice</h3><p>When designing a highly available application service distributed across multiple regions on Google Cloud, we need to consider the current constraints of the Google Cloud services and pick a combination that supports the application requirements.</p><p>Here are the constraints and trade-offs that you should consider when picking the Google Cloud load balancer type for your distributed application.</p><p><strong>1. External vs Internal Load Balancers</strong></p><p>The Global External Application Load Balancer offers the tools for building a geographically distributed application service with the best availability guarantees.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-l8le_X_Hupbj5M4CIqaGw.png" /><figcaption>Fig. 10: External application load balancer with backends in Cloud Run</figcaption></figure><p>Instead of relying upon the DNS load balancing trick, it provides application endpoint availability to clients via global anycast Virtual IP addresses and a smart global network infrastructure that routes client traffic to the infrastructure and services in the healthy regions.</p><p>The Google Cloud <em>internal</em> load balancer infrastructure is reachable via regional IP addresses instead and hence requires an additional mechanism, the Cloud DNS load balancing suggested in this article, to address the full regional outage scenario.</p><p><strong>2. Managed Instance Groups vs Cloud Run Backends</strong></p><p>The Network Passthrough (L4) Load Balancers cannot be configured with Cloud Run backends. They don’t support the Serverless Network Endpoint Groups (NEGs) required for Cloud Run and Cloud Functions based backends at the moment.</p><p>Hence, if you are building a multi-regional application service that should only be available in the <em>internal</em> company VPC network and would like to address the majority of possible outage scenarios (full region outage, individual regional Google Cloud service outage, application service malfunction), then a Network Passthrough (L4) Load Balancer with GCE Managed Instance Groups is the only option for the application backends. Please remember that GCE MIGs are the mechanism supporting GKE node pools as well. Hence, the GCE MIG backend option is also applicable to Kubernetes workloads running in GKE.</p><p>An important consideration for the Cloud Run backends in multiple regions is authentication.</p><p>In order to seamlessly continue service for authenticated Cloud Run clients, the Cloud Run instances in different regions must be configured with <a href="https://cloud.google.com/run/docs/configuring/custom-audiences">Custom Audiences</a>. That way, the token that a client passes along with an authenticated call can be validated and accepted by the Cloud Run backends in all regions. Please note that Custom Audiences is a feature available in Cloud Run instances of the 2nd generation.</p>
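<p>To sanity check an authenticated call through the shared hostname, a client can mint an ID token for the custom audience configured in the Cloud Run Terraform above (cr-service). A sketch, assuming a service account with the run.invoker role that the caller may impersonate (replace the placeholder); the -k flag mirrors the --https-insecure Fortio flag used earlier because the demo certificate is not publicly trusted:</p><pre>TOKEN=$(gcloud auth print-identity-token \<br> --impersonate-service-account=invoker-sa@my-project.iam.gserviceaccount.com \<br> --audiences="cr-service")<br><br>curl -k -H "Authorization: Bearer ${TOKEN}" https://l7-crilb-cr.hello.zone</pre><p>Because the regional Cloud Run services declare the same custom audience, the same token is accepted no matter which region Cloud DNS and the load balancer route the request to.</p>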
<p>Cloud Run instances of the 1st generation can be used in the suggested multi-regional setup only when the application service does not need to authenticate clients.</p><p><strong>3. Network Passthrough (L4) vs Application (L7) Load Balancers</strong></p><p>Selecting the load balancer type also depends on the functionality that a load balancer can provide. In the case of a Passthrough (L4) load balancer, the application would need to implement the following tasks itself (to name a few):</p><ul><li>terminate TLS connections</li><li>authenticate incoming calls</li><li>implement request routing</li></ul><p>The Application (L7) load balancers can help with that, but their internal versions address fewer failure scenarios than the Network Passthrough (L4) load balancer based solution because of the current feature level of the Cloud DNS health check mechanism. For example, Cloud DNS would not flip an application service IP address if the application is experiencing an internal malfunction (e.g. returning 50x error codes) or if the load balancer backend is unavailable or missing altogether.</p><p>This is not a problem with External Application (L7) load balancers, since no Cloud DNS load balancing solution is needed to expose the application in a highly available way across multiple regions.</p><p>The “partial” or individual regional service infrastructure outage scenarios mentioned above are, however, handled by the Internal Cross-Region Application Load Balancers on their backend side. In addition, the optional <a href="https://cloud.google.com/load-balancing/docs/https/setting-up-global-traffic-mgmt#configure_outlier_detection">Outlier Detection</a> load balancer configuration can help detect application level malfunctions, at the cost of losing a certain percentage of actual client requests during the outage.</p><h3>Conclusion</h3><p>Google Cloud goes beyond the usual redundant deployments and offers architects and developers tools for building highly available application services across multiple geographic locations, including internal, security-restricted corporate use cases.</p><p>The choice of a particular combination of Google Cloud resources for improving multi-regional application availability depends on the individual application’s requirements and on the features currently supported in Google Cloud services such as network load balancers and Cloud DNS.</p><p>Enterprise security features in Google Cloud services get special attention and differentiate Google Cloud from other cloud hyperscalers. Please check one of my previous articles, <a href="https://medium.com/google-cloud/application-secrets-encryption-in-kubernetes-and-anthos-products-ae5de5905224">Application Secrets Encryption in Google Cloud Kubernetes products</a>, for an example of what is possible with Google Cloud products.</p><hr><p><a href="https://medium.com/google-cloud/multi-region-ha-in-google-cloud-823b9f706578">Multi-region HA in Google Cloud</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>
Author
Link
Published date
Image url
Feed url
Guid
Hidden blurb
--- !ruby/object:Feedjira::Parser::RSSEntry title: Multi-region HA in Google Cloud published: 2024-04-19 05:15:57.000000000 Z categories: - networking - infrastructure - google-cloud-platform url: https://medium.com/google-cloud/multi-region-ha-in-google-cloud-823b9f706578?source=rss----e52cf94d98af---4 entry_id: !ruby/object:Feedjira::Parser::GloballyUniqueIdentifier is_perma_link: 'false' guid: https://medium.com/p/823b9f706578 content: '<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y-wMkHFcpMgOJtYixfAy_w.png" /></figure><p>Google Cloud is one of the remarkable cloud “hyperscalers”. Hyperscalers are designed for massive capacity. They possess immense data center networks spread globally, allowing them to handle the enormous computing demands of large enterprises and applications with vast user bases. With that, Hyperscalers can enable applications with unsurpassed capabilities in scalability, reliability and global reach.</p><p>In this article we will try to explore the levels of possible application availability in Google Cloud with a focus on private internal networks. We’ll also provide actual infrastructure configuration examples.</p><p>Let’s imagine a business critical web application or API that provides its important service to the, potentially internal, business customers or end users. Often the business needs require the application to minimize its downtime, make it accessible to the users and responsive most of the time. A common measure of success of such metric is the application service uptime metric often aiming for targets like “99.99%” (“four nines”) or even “99.999%” (“five nines”) which translate into very few minutes of allowed downtime per year.</p><p>The typical mechanisms that the application design can rely upon to improve application Availability (as measured by uptime) are</p><ul><li><strong>Redundancy</strong> — run application on multiple independent hardware instances</li><li><strong>Load Balancing </strong>— distribute incoming network traffic across multiple application instances running on multiple independent hardware instances</li><li><strong>Failover</strong> — mechanisms to automatically detect failures and switch operation to a working application instance seamlessly</li><li><strong>Monitoring & Alerting</strong> — robust monitoring systems to detect problems quickly and preferably proactively notify the team responsible for addressing them</li><li><strong>Self-healing —</strong> ability of the application components restart themselves or re-provision failing resources with minimal manual intervention</li></ul><p>In this article we will concentrate on how Google Cloud can help with the first three means of improving cloud application availability: redundancy, load balancing, failover.</p><h3><strong>Redundancy</strong></h3><p>A single application instance or application running in a single failure domain cannot sustain underlying hardware failure and hence the application would not be available to the end users in case of an underlying hardware outage:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*mW4jv8zKfyr1rRv9xf-t7g.png" /><figcaption>Fig. 
1: Single application instance on single GCE VM</figcaption></figure><p>If our business objectives require addressing only a single Google Compute Engine (GCE) VM outage we would need to apply <strong>Redundancy</strong> and <strong>Load Balancing</strong> in order to improve application availability and resilience to that failure scenario:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DpDuW6cKQnoVzTmeuk35Vw.png" /><figcaption>Fig. 2: Multiple application instances in single GCE Zone</figcaption></figure><p>This setup is addressing the single GCE VM or application instance outage failure scenario.</p><p>Google Cloud hardware is organized into <a href="https://cloud.google.com/compute/docs/regions-zones/zone-virtualization"><em>clusters</em></a>. A cluster represents a set of compute, network, and storage resources supported by building, power, and cooling infrastructure. Infrastructure components typically support a single cluster, ensuring that clusters share few dependencies. However, components with highly demonstrated reliability and downstream redundancy can be shared between clusters. For example, multiple clusters typically share a utility grid substation because substations are extremely reliable and clusters use redundant power systems.</p><p>A <a href="https://cloud.google.com/compute/docs/regions-zones"><em>zone</em></a> is a deployment area within a region and Compute Engine implements a layer of abstraction between zones and the physical clusters where the zones are hosted. Each zone is hosted in one or more clusters and you can check the <a href="https://cloud.google.com/compute/docs/regions-zones/zone-virtualization">Zone virtualization</a> article for more details about that mapping.</p><p>To simplify reasoning without sacrificing accuracy it would be fair to assume that a GCE zone is a deployment area within a geographic region mapped to one or more clusters that can fail together, e.g. because of the power supply outage.</p><p>GCE zone outage is <a href="https://status.cloud.google.com/summary">not an impossible scenario</a> and a highly reliable application on Google Cloud typically seeks to sustain its service during such unfortunate event by running application replicas in multiple zones:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BGaUrlTA6NCfc0RbYaClkQ.png" /><figcaption>Fig. 3: Multiple application instances in multiple GCE Zones</figcaption></figure><p>With high zone availability <a href="https://cloud.google.com/compute/sla">SLA levels</a> provided by the Google Cloud Compute engine, the application setup in Figure 3 should be sufficient for majority of business use cases even for very demanding customers requiring high application service SLA levels.</p><p>Unfortunately, a full region outage is also <a href="https://www.businessinsider.com/google-cloud-data-center-london-outage-hottest-day-record-uk-2022-7">not an impossible scenario</a>.</p><p>The power of cloud hyperscalers is especially in that they provide customers with significantly better tools to survive disasters similar to <a href="https://www.reuters.com/article/idUSKBN2B20NT/">this one</a>, for example, than other cloud providers. Amongst other things, that is what differentiates “Hyperscalers” from small-scale or localized cloud service providers. 
In Google Cloud an application can run its replicas not only on power independent hardware within one data center or geographic location (probably connected to the same power plant in the neighborhood) but also across geographic location and even across continents!</p><p>So we are coming to the next level of application redundancy that is possible with Google Cloud: multi-regional application deployment.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*jE7J83TUsHoguxvrK5OOMg.png" /><figcaption>Fig. 4: Multiple application instances in multiple GCE Regions with global external load balancing</figcaption></figure><p>With that, a business critical application can now have a strategy for the entire site (region) failure and promise uptime to its critical clients even in that unlikely case.</p><h3>Load Balancing</h3><p>There needs to be some magic happening in order to seamlessly direct clients from around the world to the application instance replicas running in multiple geographic locations. And not only that. Whenever a VM, GCE zone or even full region goes down that magic needs to seamlessly redirect application clients to the healthy locations in other surviving region.</p><p>What are the options for load balancing that Google Cloud provides?</p><p>On the picture in Figure 4 the load balancer is located in Google Cloud but outside of any particular region. That kind of a global service can be provided by the following <a href="https://cloud.google.com/load-balancing/docs/application-load-balancer">types</a> of Google Cloud Load balancers:</p><ul><li>Global External Application Load Balancer</li><li>Classic Application Load Balancer in Premium Tier</li><li>Global External proxy Network Load Balancer</li><li>Classic Proxy Network Load Balancer</li></ul><p>Load balancers of all of these listed types load balancing traffic coming from the clients on the internet to the workloads running on Google Cloud.</p><p>An enterprise organization on Google Cloud would keep VPC networks private and expose application workloads to the internal company clients, which are also often located across the world.</p><p><em>Internal</em> load balancers on Google Cloud restrict access to the application to the clients in internal networks only. Unlike global external, <em>internal</em> load balancers on Google Cloud currently rely on the regional infrastructure. Availability of the applications exposed by internal load balancers can hence be affected by a single cloud region outage.</p><p>That means that for the internal clients the multi-regional application deployment depicted in Figure 4 logically changes to:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*xauNw4H1lLTDmhzblFpk2g.png" /><figcaption>Fig. 5: Multiple application instances in multiple GCE Regions with internal load balancing</figcaption></figure><p>The choice of internal load balancers on Google Cloud is even bigger:</p><ul><li>Regional Internal Application Load Balancer</li><li>Cross-region Internal Application Load Balancer</li><li>Regional internal proxy Network Load Balancer</li><li>Cross-region internal proxy Network Load Balancer</li><li>Internal passthrough Network Load Balancer</li></ul><p>There is an open question with the regional internal load balancing though. 
How would application clients know and seamlessly failover to the healthy region in case of a full region outage (a high bar challenge we have set us up to)?</p><p>To address that challenge we can revert to a well known technique called <a href="https://en.wikipedia.org/wiki/Round-robin_DNS">DNS load balancing</a> (or round-robin DNS).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*17RU8OR_tQTt7IOethwokQ.png" /><figcaption>Fig. 6: Internal DNS load balancing with regional application backends</figcaption></figure><p>Fully managed Google Cloud DNS service offers an important and convenient tool to setup such cross-regional client access, it is called <a href="https://cloud.google.com/dns/docs/policies-overview#geolocation-policy">Geolocation routing policies</a> and it “<em>lets you map traffic originating from source geographies (Google Cloud regions) to specific DNS targets. Use this policy to distribute incoming requests to different service instances based on the traffic’s origin. You can use this feature with the internet, with external traffic, or with traffic originating within Google Cloud and bound for internal passthrough Network Load Balancers. Cloud DNS uses the region where the queries enter Google Cloud as the source geography.</em>”</p><p>Using Cloud DNS Geolocation routing policies in the setup depicted in Figure 6 application clients will automatically receive IP address of the Internal Load Balancer nearest to their geographic location from the Cloud DNS server.</p><p>Please note, that the Google Cloud DNS is a fully managed <em>global</em> service offering impressive <a href="https://cloud.google.com/dns/sla">SLO targets</a>. DNS cache on the application client side helps sustaining the application service available in the rare case of possible Cloud DNS service outage.</p><p>In fact, many parts of the <a href="https://cloud.google.com/load-balancing/docs/l7-internal/setting-up-l7-cross-reg-internal">Cross-region Internal Application Load Balancers </a>are <em>global</em> Google Cloud resources as well. Here is a more detailed diagram borrowed from the public Google Cloud pages:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PFjluXOPFbz_x475" /><figcaption>Fig. 7: Global resources of the internal cross-region load balancer</figcaption></figure><h3>Failover</h3><p>But what exactly happens to the client connections and overall application availability in case of an individual GCE VM, zone or region outage? How would a DNS service know that it needs to resolve application hostnames to a different IP address to direct clients to another (healthy) region?</p><p>The Cloud DNS service and its Geolocation routing policies in particular has yet another feature which completes the multi-regional application deployment puzzle. It is <a href="https://cloud.google.com/dns/docs/zones/manage-routing-policies#health-checks">Health Checks</a>.</p><p>For Internal Passthrough Network Load Balancers (L4), Cloud DNS checks the health information on the load balancer’s individual backend instances to determine if the load balancer is healthy or unhealthy. Cloud DNS applies a default 20% threshold, and if at least 20% of backend instances are healthy, the load balancer endpoint is considered healthy. 
DNS routing policies mark the endpoint as healthy or unhealthy based on this threshold, routing traffic accordingly.</p><p>For Internal Application Load Balancers and Cross-region Internal Application Load Balancers, Cloud DNS checks the overall health of the internal Application Load Balancer, and lets the internal Application Load Balancer itself check the health of its backend instances.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*m6FRNR7-8iegZykNfqZcSA.png" /><figcaption>Fig. 8: Internal DNS load balancing with cross-region application backends with health checks</figcaption></figure><p>Cloud DNS health checks are a crucial solution component for achieving maximum application uptime. Without Cloud DNS ability to test the current status of the internal load balancers and application backends it would not be able to reason about which IP address exactly should be returned to the clients on their DNS requests in case of a region outage. Hence the seamless client failover to the health application instances would not be possible.</p><p>Please note that in order to achieve best results, the Time-To-Live parameter of the application A-record in Cloud DNS needs to be set to a minimal value. It could even be zero, in which case applications would contact DNS for a current IP before every call to the application service. The choice of the DNS record TTL value is a tradeoff between the application availability requirements and DNS service load and client response latency.</p><p>Internal load balancers maintain their own application backends health checks (Cloud DNS health checks are using a different mechanism) and in case of the cross-region internal application load balancers a load balancer operating in a particular region can automatically failover and redirect client requests to the application replicas running in another healthy region.</p><p>This setup addresses the “partial” region outage scenario. That is when only application backend instances are not available (e.g. GCE VMs are down or there is a error in the application preventing it from accepting incoming connections) but other services in the affected region (such as networking and load balancing) continue working.</p><h3>Configuration with Managed Instance Groups</h3><p>Let’s combine all pieces of an HA solution discussed before into a single picture and see how the Google Cloud resources need to be configured together to achieve the desired effect.</p><p>GCE Managed Instance Groups based scenarios, discussed in this article, are also relevant to the managed Kubernetes service on Google Cloud, GKE, as well. Kubernetes node pools in GKE are implemented as GCE MIGs. Hence, Kubernetes workload deployed to GKE on Google Cloud can be made multi-regional by deploying the application service to several GKE clusters in distinct regions. 
<p>Internal load balancers maintain their own application backend health checks (Cloud DNS health checks use a different mechanism), and in the case of the cross-region internal application load balancers a load balancer operating in a particular region can automatically fail over and redirect client requests to the application replicas running in another, healthy region.</p><p>This setup addresses the “partial” region outage scenario, that is, when only the application backend instances are unavailable (e.g. the GCE VMs are down or there is an error in the application preventing it from accepting incoming connections) but other services in the affected region (such as networking and load balancing) continue working.</p><h3>Configuration with Managed Instance Groups</h3><p>Let’s combine all pieces of the HA solution discussed before into a single picture and see how the Google Cloud resources need to be configured together to achieve the desired effect.</p><p>The GCE Managed Instance Groups based scenarios discussed in this article are relevant to GKE, the managed Kubernetes service on Google Cloud, as well. Kubernetes node pools in GKE are implemented as GCE MIGs. Hence, Kubernetes workloads deployed to GKE on Google Cloud can be made multi-regional by deploying the application service to several GKE clusters in distinct regions. The load balancer resources for such a setup can be provisioned using</p><ul><li><a href="https://gateway-api.sigs.k8s.io/">Kubernetes Gateway API</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gateway-api#gateway_controller">GKE Gateway Controller</a> in GKE clusters</li><li><a href="https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress">Multi Cluster Ingress</a> resources in GKE clusters</li><li>Terraform resources (outside of the GKE cluster)</li></ul><p>In this example we will use Terraform, a common tool for declarative cloud infrastructure definition and provisioning, to set up the load balancers, but it is also possible to achieve the same setup using the other two approaches. You can find the full Terraform example in <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing">this</a> GitHub project.</p><p>Our first example will be based on the regional <a href="https://cloud.google.com/load-balancing/docs/internal">Internal Passthrough Network Load Balancers</a> and Google Compute Engine (GCE) <a href="https://cloud.google.com/compute/docs/instance-groups#managed_instance_groups">Managed Instance Groups</a> (MIGs).</p><p>In the last section of this article we’ll discuss the pros and cons of different load balancer and application backend combinations.</p><p><a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/mig.tf">First</a> we define the GCE MIGs in two Google Cloud regions:</p><pre>// modules/mig/mig.tf:<br>module "gce-container" {<br> source = "terraform-google-modules/container-vm/google"<br> container = {<br> image = var.image<br> env = [<br> {<br> name = "NAME"<br> value = "hello"<br> }<br> ]<br> }<br>}<br><br>data "google_compute_default_service_account" "default" {<br>}<br>module "mig_template" {<br> source = "terraform-google-modules/vm/google//modules/instance_template"<br> version = "~> 10.1"<br> network = var.network_id<br> subnetwork = var.subnetwork_id<br> name_prefix = "mig-l4rilb"<br> service_account = {<br> email = data.google_compute_default_service_account.default.email<br> scopes = ["cloud-platform"]<br> }<br> source_image_family = "cos-stable"<br> source_image_project = "cos-cloud"<br> machine_type = "e2-small"<br> source_image = reverse(split("/", module.gce-container.source_image))[0]<br> metadata = merge(var.additional_metadata, { "gce-container-declaration" = module.gce-container.metadata_value })<br> tags = [<br> "container-vm-test-mig"<br> ]<br> labels = {<br> "container-vm" = module.gce-container.vm_container_label<br> }<br>}<br><br>module "mig" {<br> source = "terraform-google-modules/vm/google//modules/mig"<br> version = "~> 10.1"<br> project_id = var.project_id<br><br> region = var.location<br> instance_template = module.mig_template.self_link<br> hostname = "${var.name}"<br> target_size = "1"<br><br> autoscaling_enabled = "true"<br> min_replicas = "1"<br> max_replicas = "1"<br> named_ports = [{<br> name = var.lb_proto<br> port = var.lb_port<br> }]<br><br> health_check_name = "${var.name}-http-healthcheck"<br> health_check = {<br> type = "http"<br> initial_delay_sec = 10<br> check_interval_sec = 2<br> healthy_threshold = 1<br> timeout_sec = 1<br> unhealthy_threshold = 1<br> port = 8080<br> response = ""<br> proxy_header = "NONE"<br> request = ""<br> request_path = "/"<br> host = ""<br> enable_logging = true<br> }<br>}<br><br>// mig.tf<br>module "mig-l4" {<br> for_each = var.locations<br> source = "./mig"<br> project_id = var.project_id<br> location = each.key<br> network_id = data.google_compute_network.lb_network.id<br> subnetwork_id = data.google_compute_subnetwork.lb_subnetwork[each.key].id<br> name = "failover-l4-${each.key}"<br> image = var.image<br>}</pre>
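<p>The for_each loops above and below iterate over a var.locations collection whose definition is not shown in the article. A minimal sketch, assuming the two regions used later in the examples, could look like this:</p><pre>// Assumed definition of the regions the MIGs and load balancers are deployed to.<br>variable "locations" {<br> type = set(string)<br> default = ["europe-west3", "us-central1"]<br>}</pre>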
= "./mig"<br> project_id = var.project_id<br> location = each.key<br> network_id = data.google_compute_network.lb_network.id<br> subnetwork_id = data.google_compute_subnetwork.lb_subnetwork[each.key].id<br> name = "failover-l4-${each.key}"<br> image = var.image<br>}</pre><p><a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/l4-rilb-mig.tf">Then</a>, let’s define two Cross-regional Internal Network Passthrough Load Balancers (L4 ILBs), each in respective region:</p><pre>// modules/l4rilb/l4-rilb.tf<br>locals {<br> named_ports = [{<br> name = var.lb_proto<br> port = var.lb_port<br> }]<br> health_check = {<br> type = var.lb_proto<br> check_interval_sec = 1<br> healthy_threshold = 4<br> timeout_sec = 1<br> unhealthy_threshold = 5<br> response = ""<br> proxy_header = "NONE"<br> port = var.lb_port<br> port_name = "health-check-port"<br> request = ""<br> request_path = "/"<br> host = "1.2.3.4"<br> enable_log = false<br> }<br>}<br><br>module "l4rilb" {<br> source = "GoogleCloudPlatform/lb-internal/google"<br> project = var.project_id<br> region = var.location<br> name = "${var.lb_name}"<br> ports = [local.named_ports[0].port]<br> source_tags = ["allow-group1"]<br> target_tags = ["container-vm-test-mig"]<br> health_check = local.health_check<br> global_access = true<br><br> backends = [<br> {<br> group = var.mig_instance_group<br> description = ""<br> failover = false<br> },<br> ]<br>}<br><br>// l4-rilb-mig.tf<br>module "l4-rilb" {<br> for_each = var.locations<br> source = "./modules/l4rilb"<br> project_id = var.project_id<br> location = each.key<br> lb_name = "l4-rilb-${each.key}"<br> mig_instance_group = module.mig-l4[each.key].instance_group<br> image = var.image<br> network_id = data.google_compute_network.lb_network.id<br> subnetwork_id = data.google_compute_subnetwork.lb_subnetwork[each.key].name<br><br> depends_on = [ <br> google_compute_subnetwork.proxy_subnetwork <br> ]<br>}</pre><p>And now let’s also <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/dns-l4-rilb-mig.tf">add</a> the global Cloud DNS record set configuration:</p><pre>// dns-l4-rilb-mig.tf<br>resource "google_dns_record_set" "a_l4_rilb_mig_hello" {<br> name = "l4-rilb-mig.${google_dns_managed_zone.hello_zone.dns_name}"<br> managed_zone = google_dns_managed_zone.hello_zone.name<br> type = "A"<br> ttl = 1<br><br> routing_policy {<br> dynamic "geo" {<br> for_each = var.locations<br> content {<br> location = geo.key<br> health_checked_targets {<br> internal_load_balancers {<br> ip_address = module.l4-rilb[geo.key].lb_ip_address<br> ip_protocol = "tcp"<br> load_balancer_type = "regionalL4ilb"<br> network_url = data.google_compute_network.lb_network.id<br> port = "8080"<br> region = geo.key<br> project = var.project_id<br> }<br> }<br> }<br> }<br> } <br>}</pre><p>After we apply the Terraform configuration to the target Google Cloud project:</p><pre>terraform init<br>terraform plan<br>terraform apply</pre><p>we get all solution infrastructure components including a test application running in the GCE VMs in two distinct Google Cloud regions needed to perform end-to-end testing.</p><p>Let’s see how the clients can now access our application.</p><p>For testing of continuous request flow we can use the <a href="https://fortio.org/">Fortio</a> tool, which is a common tool for testing service mesh application performance. 
<p>For testing a continuous request flow we can use <a href="https://fortio.org/">Fortio</a>, a load testing tool commonly used for measuring service mesh application performance. We will run it from a GCE VM attached to the same VPC where the load balancers are installed:</p><pre>gcloud compute ssh jumpbox<br><br>docker run fortio/fortio load --https-insecure -t 1m -qps 1 http://l4-rilb-mig.hello.zone:8080</pre><p>The results after a minute of execution should look similar to the following:</p><pre>IP addresses distribution:<br>10.156.0.11:8080: 1<br>Code 200 : 258 (100.0 %)<br>Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620<br>Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983<br>All done 258 calls (plus 4 warmup) 233.180 ms avg, 17.1 qps</pre><p>Note the IP address in the output: it belongs to the L4 internal regional load balancer in the nearest region, which is receiving all of the calls.</p><p>In a second console window, SSH into the VM of the GCE MIG in the nearest region and list the running containers:</p><pre>export MIG_VM=$(gcloud compute instances list --format="value[](name)" --filter="name~l4-europe-west3")<br>export MIG_VM_ZONE=$(gcloud compute instances list --format="value[](zone)" --filter="name=${MIG_VM}")<br><br>gcloud compute ssh --zone $MIG_VM_ZONE $MIG_VM --tunnel-through-iap --project $PROJECT_ID<br><br>docker ps</pre><p>Now let’s run the load test in the first console window again.</p><p>While the test is running, switch to the second console window and stop the application container (using the container ID reported by docker ps):</p><pre># set CONTAINER to the application container ID shown by docker ps<br>export CONTAINER=$(docker ps -q | head -n 1)<br>docker stop ${CONTAINER}</pre><p>Switch to the first console window and notice the failover happening. The output at the end of the execution should look like the following:</p><pre>IP addresses distribution:<br>10.156.0.11:8080: 16<br>10.199.0.48:8080: 4<br>Code -1 : 12 (10.0 %)<br>Code 200 : 108 (90.0 %)<br>Response Header Sizes : count 258 avg 390 +/- 0 min 390 max 390 sum 100620<br>Response Body/Total Sizes : count 258 avg 7759.624 +/- 1.497 min 7758 max 7763 sum 2001983<br>All done 120 calls (plus 4 warmup) 83.180 ms avg, 2.0 qps</pre><p>Please note that the service VM in the Managed Instance Group has been automatically restarted. This functionality is provided by the GCE Managed Instance Groups and implements another component of the application high availability posture mentioned at the beginning of the article: <strong><em>self-healing</em></strong>.</p><h3>Cloud Run Backends</h3><p>Let’s consider a second scenario and assume that our cloud-native application is implemented as a <a href="https://cloud.google.com/run">Cloud Run</a> service.</p><p>The Cloud Run based scenarios discussed in this article are relevant for <a href="https://cloud.google.com/functions/docs/concepts/version-comparison#new-in-2nd-gen">Cloud Functions</a> (2nd generation) application backends as well. Cloud Functions can be configured as load balancer backends similarly to the Cloud Run instances, using the same Serverless Network Endpoint Group resources.</p><p>The overall multi-region application deployment changes slightly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tBWHPTGg6H1Tne0vW6Ip0w.png" /><figcaption>Fig. 9: Internal DNS load balancing with cross-region application backends in Cloud Run</figcaption></figure>
<p>We <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/modules/cr/cr2.tf">start</a> with defining two regional Cloud Run services that allow invocations from unauthenticated clients:</p><pre>// modules/cr/cr2.tf<br>resource "google_cloud_run_v2_service" "cr_service" {<br> project = var.project_id<br> name = "cr2-service"<br> location = var.location<br> launch_stage = "GA"<br><br> ingress = "INGRESS_TRAFFIC_INTERNAL_LOAD_BALANCER"<br> custom_audiences = [ "cr-service" ]<br><br> template {<br> containers {<br> image = "gcr.io/cloudrun/hello" # public image for your service<br> }<br> }<br> traffic {<br> percent = 100<br> type = "TRAFFIC_TARGET_ALLOCATION_TYPE_LATEST"<br> }<br>}<br><br>resource "google_compute_region_network_endpoint_group" "cloudrun_v2_sneg" {<br> name = "cloudrun-sneg"<br> network_endpoint_type = "SERVERLESS"<br> region = var.location<br> cloud_run {<br> service = google_cloud_run_v2_service.cr_service.name<br> }<br>}<br><br>resource "google_cloud_run_v2_service_iam_member" "public-access" {<br> name = google_cloud_run_v2_service.cr_service.name<br> location = google_cloud_run_v2_service.cr_service.location<br> project = google_cloud_run_v2_service.cr_service.project<br> role = "roles/run.invoker"<br> member = "allUsers"<br>}<br><br>// cr.tf<br>module "cr-service" {<br> for_each = var.locations<br> source = "./modules/cr"<br> project_id = var.project_id<br> location = each.key<br> image = var.image<br>}</pre>
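<p>The load balancer module below consumes the Serverless NEG through a module output (sneg_id). That output is not shown in the article; assuming it simply exposes the NEG defined above, it could look like this:</p><pre>// modules/cr/outputs.tf (assumed): expose the Serverless NEG id for the load balancer module.<br>output "sneg_id" {<br> value = google_compute_region_network_endpoint_group.cloudrun_v2_sneg.id<br>}</pre>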
<p>And <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/l7-crilb-cr.tf">then</a> define the global Internal Cross-Region Application Load Balancer resources:</p><pre>// modules/l7crilb/l7-crilb.tf<br>resource "google_compute_global_forwarding_rule" "forwarding_rule" {<br> for_each = var.subnetwork_ids<br> project = var.project_id<br><br> name = "${var.lb_name}-${each.key}"<br><br> ip_protocol = "TCP"<br> load_balancing_scheme = "INTERNAL_MANAGED"<br> port_range = var.lb_port<br> target = google_compute_target_https_proxy.https_proxy.self_link<br> network = var.network_id<br> subnetwork = each.value<br>}<br><br>resource "google_compute_target_https_proxy" "https_proxy" {<br> project = var.project_id<br><br> name = "${var.lb_name}"<br> url_map = google_compute_url_map.url_map.self_link<br><br> certificate_manager_certificates = [<br> var.certificate_id<br> ]<br> lifecycle {<br> ignore_changes = [<br> certificate_manager_certificates<br> ]<br> }<br>}<br><br>resource "google_compute_url_map" "url_map" {<br> project = var.project_id<br><br> name = "${var.lb_name}"<br> default_service = google_compute_backend_service.backend_service.self_link<br>}<br><br>resource "google_compute_backend_service" "backend_service" {<br> project = var.project_id<br><br> load_balancing_scheme = "INTERNAL_MANAGED"<br> session_affinity = "NONE"<br><br> dynamic "backend" {<br> for_each = var.backend_group_ids<br> content {<br> group = backend.value<br> balancing_mode = var.balancing_mode<br> capacity_scaler = 1.0<br> }<br> }<br><br> name = "${var.lb_name}"<br> protocol = var.backend_protocol<br> timeout_sec = 30<br><br> // "A backend service cannot have a healthcheck with Serverless network endpoint group backends"<br> health_checks = var.is_sneg ? null : [google_compute_health_check.health_check.self_link]<br><br> outlier_detection {<br> base_ejection_time {<br> nanos = 0<br> seconds = 1<br> }<br> consecutive_errors = 3<br> enforcing_consecutive_errors = 100<br> interval {<br> nanos = 0<br> seconds = 1<br> }<br> max_ejection_percent = 50<br> }<br><br>}<br><br>resource "google_compute_health_check" "health_check" {<br> project = var.project_id<br><br> name = "${var.lb_name}"<br> http_health_check {<br> port_specification = "USE_SERVING_PORT"<br> }<br>}<br><br>// l7-crilb-cr.tf<br>module "l7-crilb-cr" {<br> source = "./modules/l7crilb"<br> project_id = var.project_id<br> lb_name = "l7-crilb-cr"<br><br> network_id = data.google_compute_network.lb_network.name<br> subnetwork_ids = { for k, v in data.google_compute_subnetwork.lb_subnetwork : k => v.id }<br> certificate_id = google_certificate_manager_certificate.ccm-cert.id<br> backend_group_ids = [ for k, v in module.cr-service : v.sneg_id ]<br> is_sneg = true<br>}</pre><p>Please note that all load balancer related resources in this case are global (not regional).</p>
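<p>The HTTPS proxy above references a Certificate Manager certificate (ccm-cert) whose definition is not shown here. A minimal sketch, assuming a self-managed certificate uploaded from local PEM files (the referenced example may configure this differently):</p><pre>// Assumed sketch of the referenced certificate resource.<br>resource "google_certificate_manager_certificate" "ccm-cert" {<br> name = "ccm-cert"<br> scope = "ALL_REGIONS" // cross-region internal ALBs typically require all-regions certificates<br> self_managed {<br> pem_certificate = file("${path.module}/certs/hello.zone.crt")<br> pem_private_key = file("${path.module}/certs/hello.zone.key")<br> }<br>}</pre>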
<p>In this demo case we need to <a href="https://github.com/GoogleCloudPlatform/professional-services/blob/main/examples/cloud-dns-load-balancing/dns-l7-crilb-cr.tf">define</a> the Cloud DNS resources as well:</p><pre>// dns-l7-crilb-cr.tf<br>resource "google_dns_record_set" "a_l7_crilb_cr_hello" {<br> name = "l7-crilb-cr.${google_dns_managed_zone.hello_zone.dns_name}"<br> managed_zone = google_dns_managed_zone.hello_zone.name<br> type = "A"<br> ttl = 1<br><br> routing_policy {<br> dynamic "geo" {<br> for_each = var.locations<br> content {<br> location = geo.key<br> health_checked_targets {<br> internal_load_balancers {<br> ip_address = module.l7-crilb-cr.lb_ip_address[geo.key]<br> ip_protocol = "tcp"<br> load_balancer_type = "globalL7ilb"<br> network_url = data.google_compute_network.lb_network.id<br> port = "443"<br> project = var.project_id<br> }<br> }<br> }<br> }<br> }<br>}</pre><p>For each region where the Cloud Run instance with our application is running we need to create a dedicated Cloud DNS routing policy.</p><p>Let’s now apply the Terraform configuration to the target Google Cloud project and see how the clients can access our Cloud Run application.</p><p>Similarly to the Network Passthrough load balancer case described in the previous section, we’ll call our application endpoint exposed by Cloud Run via the configured FQDN hostname:</p><pre>gcloud compute ssh jumpbox</pre><pre>docker run fortio/fortio load --https-insecure \<br> -t 5m -qps 1 https://l7-crilb-cr.hello.zone</pre><p>The results after a minute of execution should look similar to the following:</p><pre>IP addresses distribution:<br>10.156.0.55:443: 4<br>Code 200 : 8 (100.0 %)<br>Response Header Sizes : count 8 avg 216 +/- 0 min 216 max 216 sum 1728<br>Response Body/Total Sizes : count 8 avg 226 +/- 0 min 226 max 226 sum 1808<br>All done 8 calls (plus 4 warmup) 17.066 ms avg, 1.4 qps</pre><p>With our Fortio setup of one call per second, all calls have reached their destination.</p><p>The IP address that shows up in the output is the IP of the L7 internal cross-region load balancer in the nearest region, which is receiving all of our calls at the moment.</p><p>To simulate a Cloud Run backend outage while the Fortio test started in the previous step is still running, we can remove the backend in the nearest region from the load balancer backend service definition in the second console window, e.g.:</p><pre>gcloud compute backend-services remove-backend l7-crilb-cr \<br> --network-endpoint-group=cloudrun-sneg \<br> --network-endpoint-group-region=europe-west3 \<br> --global</pre><p>We can also check which regions the load balancer currently sends traffic to:</p><pre>gcloud compute backend-services list --filter="name:l7-crilb-cr"<br><br>NAME BACKENDS PROTOCOL<br>l7-crilb-cr us-central1/networkEndpointGroups/cloudrun-sneg HTTPS</pre><p>There is only one backend left, running in the remote region. Yet, the Fortio results in the first console session show no hiccup or interruption:</p><pre>IP addresses distribution:<br>10.156.0.55:443: 4<br>Code 200 : 300 (100.0 %)<br>Response Header Sizes : count 300 avg 216.33333 +/- 1.106 min 216 max 220 sum 64900<br>Response Body/Total Sizes : count 300 avg 226.33333 +/- 1.106 min 226 max 230 sum 67900<br>All done 300 calls (plus 4 warmup) 193.048 ms avg, 1.0 qps</pre><p>What we have seen so far was a failover on the backend side of the Internal Cross-Region Application Load Balancer. That is, the client application (Fortio) was still accessing the load balancer IP address in the nearest europe-west3 region. This can also be verified by running host l7-crilb-cr.hello.zone, which returns the internal load balancer IP address from the subnetwork in the europe-west3 region.</p><p>What would happen in case of a full local region outage?</p><p>The first use case discussed above (Network Passthrough Load Balancer with MIG backends) illustrates that case. The Cloud DNS L4 health checks for the Network Passthrough load balancer test the connection all the way through to the actual application process running in the GCE VMs (it is not possible to configure this type of load balancer with <a href="https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts">Serverless Network Endpoint Groups</a> backends for Cloud Run instances), and Cloud DNS flips the IP address for the application service hostname to the load balancer IP address in another region automatically.</p><p>Unfortunately, the <a href="https://cloud.google.com/dns/docs/zones/manage-routing-policies#health-checks">Cloud DNS health checks</a> for Application (L7) load balancers cannot detect an outage of the application backend service with that fidelity level yet. Regional and Cross-region Application load balancers are built on Envoy proxies internally, and for Envoy-based load balancers Cloud DNS only health checks the state and availability of the Envoy proxy instances, not the application backends themselves.</p><p>If an application running in Cloud Run experiences a malfunction (e.g. as a result of an internal program error) and returns 500 response codes, Cloud DNS won’t detect that and won’t switch the load balancer IPs for the application hostname. That situation would instead be detected by the <a href="https://cloud.google.com/load-balancing/docs/https/setting-up-global-traffic-mgmt#configure_outlier_detection">Outlier Detection</a> feature of the Internal Cross-region Application Load Balancer, and the load balancer will redirect traffic to the healthy backends based on the rate of successful calls towards each backend.</p>
<p>A missing load balancer backend is not considered an outage by the Cloud DNS health checks, though. When the Cross-region Application Load Balancer backend resources are not properly configured, when the load balancer has no backends at all, or when its backends are malfunctioning, Cloud DNS won’t take action and won’t flip the load balancer IPs automatically. The Cloud DNS health checks only verify the availability of the internal Google Cloud infrastructure (the Envoy proxies) supporting the Application (L7) load balancers.</p><p>Yet, in case of a full Google Cloud region outage, the load balancer infrastructure in that region would not be fully available, and the Cloud DNS health check would detect that and act as expected.</p><h3>Choosing an Option</h3><p>When designing a highly available application service distributed across multiple regions on Google Cloud, we need to take the current constraints of the Google Cloud services into account and pick a combination that supports the application requirements.</p><p>Here are the constraints and trade-offs to consider when picking the Google Cloud load balancer type for your distributed application.</p><p><strong>1. External vs Internal Load Balancers</strong></p><p>The Global External Application Load Balancer offers the tools for building a geographically distributed application service with the best availability guarantees.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-l8le_X_Hupbj5M4CIqaGw.png" /><figcaption>Fig. 10: External application load balancer with backends in Cloud Run</figcaption></figure><p>Instead of relying upon the DNS load balancing trick, it provides application endpoint availability to the clients via global anycast Virtual IP addresses and a smart global network infrastructure that routes client traffic to the infrastructure and services in the healthy regions.</p><p>The Google Cloud <em>internal</em> load balancer infrastructure is exposed via regional IP addresses instead and hence requires an additional mechanism, the Cloud DNS load balancing suggested in this article, to address the full regional outage scenario.</p><p><strong>2. Managed Instance Groups vs Cloud Run Backends</strong></p><p>The Network Passthrough (L4) Load Balancers cannot be configured with Cloud Run backends. They don’t support the Serverless Network Endpoint Groups (NEGs) required for Cloud Run and Cloud Functions based backends at the moment.</p><p>Hence, if you are building a multi-regional application service that should only be available in the <em>internal</em> company VPC network and would like to address the majority of possible outage scenarios (full region outage, individual regional Google Cloud service outage, application service malfunction), then the Network Passthrough (L4) Load Balancer with GCE Managed Instance Groups is the only option for the application backends. Please remember that GCE MIGs are also the mechanism behind GKE node pools. Hence, the GCE MIG backend option is applicable to Kubernetes workloads running in GKE as well.</p><p>An important consideration for Cloud Run backends in multiple regions is authentication.</p><p>In order to seamlessly continue serving authenticated Cloud Run clients, the Cloud Run instances in different regions must be configured with <a href="https://cloud.google.com/run/docs/configuring/custom-audiences">Custom Audiences</a>. In that way, the access token that a client passes along with an authenticated call can be validated and accepted by the Cloud Run backends in all regions. Please note that Custom Audiences are a feature available in Cloud Run instances of the 2nd generation; Cloud Run instances of the 1st generation can be used in the suggested multi-regional setup only when the application service does not need to authenticate clients.</p>
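<p>For illustration, a hypothetical sketch of the authenticated variant: every regional service keeps the same custom audience (cr-service, as in the module shown earlier), and the blanket allUsers binding is replaced with an invoker grant for a dedicated client service account (the account name is an assumption):</p><pre>// Hypothetical: grant roles/run.invoker to a client service account in every region<br>// instead of allUsers; clients then request ID tokens with audience "cr-service".<br>resource "google_cloud_run_v2_service_iam_member" "client_invoker" {<br> for_each = var.locations<br> project = var.project_id<br> location = each.key<br> name = "cr2-service" // service name from modules/cr/cr2.tf<br> role = "roles/run.invoker"<br> member = "serviceAccount:app-client@${var.project_id}.iam.gserviceaccount.com" // assumed client SA<br>}</pre>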
<p><strong>3. Network Passthrough (L4) vs Application (L7) Load Balancers</strong></p><p>The choice of load balancer type also depends on the functionality a load balancer can provide. With a Passthrough (L4) load balancer the application would need to implement the following tasks by itself (to name a few):</p><ul><li>terminate TLS connections</li><li>authenticate incoming calls</li><li>implement request routing</li></ul><p>The Application (L7) load balancers can help with that, but their internal versions address fewer failure scenarios than the Network Passthrough (L4) load balancer based solution because of the current feature level of the Cloud DNS health check mechanism. For example, Cloud DNS would not flip the application service IP address if the application is experiencing an internal malfunction (e.g. returning 50x error codes) or if the load balancer backend is unavailable or missing altogether.</p><p>This is not a problem with External Application (L7) load balancers, since they do not need the Cloud DNS load balancing solution to expose the application in a highly available way across multiple regions.</p><p>These “partial” or individual regional service infrastructure outage scenarios are, however, handled by the Internal Cross-Region Application Load Balancers on their backend side. In addition, the optional <a href="https://cloud.google.com/load-balancing/docs/https/setting-up-global-traffic-mgmt#configure_outlier_detection">Outlier Detection</a> load balancer configuration can help detect application level malfunctions, at the cost of losing a certain percentage of actual client requests during an outage.</p><h3>Conclusion</h3><p>Google Cloud goes beyond the usual redundant deployments and offers architects and developers the tools for building highly available application services across multiple geographic locations, including internal, security-restricted corporate use cases.</p><p>The choice of the particular combination of Google Cloud resources for improving multi-regional application availability depends on the individual application’s requirements and the features currently supported in Google Cloud services such as the network load balancers and Cloud DNS.</p><p>Enterprise security features in Google Cloud services get special attention and differentiate Google Cloud from other cloud hyperscalers. Please check one of my previous articles, <a href="https://medium.com/google-cloud/application-secrets-encryption-in-kubernetes-and-anthos-products-ae5de5905224">Application Secrets Encryption in Google Cloud Kubernetes products</a>, for an example of what is possible with Google Cloud products.</p><hr><p><a href="https://medium.com/google-cloud/multi-region-ha-in-google-cloud-823b9f706578">Multi-region HA in Google Cloud</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>