"title"=>"Connect to Non-PSC AlloyDB or Non-PSC Cloud SQL from a different VPC",
"summary"=>nil,
"content"=>"
Introduction
When it comes to managed RDBMS offerings, Google Cloud Platform (GCP) has two powerful options: Cloud SQL for managed relational databases and AlloyDB for high-performance PostgreSQL compatibility. Understanding how to connect to these databases is essential for any developer or administrator working with GCP.
This blog dives into private connectivity for AlloyDB, focusing on the benefits of Private Service Connect (PSC). Before we get into the specific steps of creating a PSC endpoint for non-PSC-enabled instances, let’s explore the scenarios where using PSC with AlloyDB proves advantageous. Understanding these use cases provides the foundation for the methods used in each situation.
Firstly, we’ll differentiate between PSA (Private Service Access) and PSC, highlighting the evolution of private connectivity options for AlloyDB. This clarifies why PSC is the preferred approach for modern deployments.
Note: For brevity, we’ll focus on AlloyDB, as the networking concepts apply similarly to Cloud SQL.
Networking Essentials for Connecting Applications to AlloyDB
Connecting your applications to AlloyDB involves careful consideration of your network architecture. Let’s break down the key scenarios and how you can establish secure and reliable connections. We will only discuss methods to access the database using Private IP.
Scenario 1: Application and AlloyDB in the Same Customer VPC Network
This is the most straightforward setup. You can use PSA to connect to your AlloyDB instance. Private Service Access is implemented as a VPC peering connection between your VPC network and the underlying Google Cloud VPC network where your AlloyDB instance resides. Any service in the Customer VPC network can use the private IP of AlloyDB to connect to it.
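For reference, PSA is typically configured once per VPC by allocating an IP range and creating the service networking peering. A minimal sketch (the range name, network, and project below are placeholders, not values used later in this post):
### Allocate an IP range for Google-managed services (PSA).
gcloud compute addresses create psa-range \\
--global --purpose=VPC_PEERING --prefix-length=24 \\
--network=my-vpc --project=my-project
### Peer the VPC with the service producer network.
gcloud services vpc-peerings connect \\
--service=servicenetworking.googleapis.com \\
--ranges=psa-range --network=my-vpc --project=my-project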
Scenario 2: Application and AlloyDB in a Shared VPC Network
Shared VPC networks are designed to enable resource sharing across projects within an organization. This is the most commonly used network architecture in larger, multi-project environments.
This simplifies the network architecture. Similar to the same-VPC scenario, PSA is configured, and services within the Customer Shared VPC network can use the private IP address, language connectors, or the AlloyDB Auth Proxy.
From the above illustration, we can see that GKE, which is in a separate customer project from AlloyDB, can access AlloyDB using its private IP (PSA) because both projects use the Shared VPC network, and the Shared VPC network is peered to the Google-managed AlloyDB VPC network.
Scenario 3: Application and AlloyDB in Different VPC Networks with VPC Peering
This is when you want to manage separate VPC networks for separate projects, or perhaps multiple Shared VPC networks.
For example, consider the diagram below.
How would GKE in the Customer GKE VPC connect to AlloyDB in the AlloyDB VPC (PSA configured)?
The first solution that comes to mind is VPC Network Peering.
Even if we peer the Customer GKE VPC network to the Customer AlloyDB VPC, which is already peered to the Google AlloyDB VPC through PSA, GKE pods in the Customer GKE VPC network won’t be able to reach the AlloyDB private IP, because VPC Network Peering isn’t transitive.
In such a setup (Fig 4), you can use the following ways to connect your AlloyDB instance to multiple VPCs using private IP:
- Connect using custom route advertisements
- Connect using an intermediate proxy (SOCKS5) or Connect using a TCP proxy such as simpleproxy
While setting up such a proxy is straightforward, managing a dedicated GCE VM within the customer’s AlloyDB project (VPC network) introduces additional operational overhead.
You can’t use a plain TCP proxy if you intend to use the AlloyDB Auth Proxy or language connectors for encryption and IAM authentication.
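For illustration, a minimal TCP forwarder on a GCE VM in the Customer AlloyDB VPC could look like the following sketch (using socat as one stand-in for simpleproxy; the AlloyDB private IP is a placeholder):
### Install socat and forward local port 5432 to the AlloyDB private IP.
sudo apt-get update && sudo apt-get install -y socat
### Clients in the peered VPC then connect to this VM's IP on port 5432.
socat TCP-LISTEN:5432,fork,reuseaddr TCP:10.77.0.5:5432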
Scenario 4: Application and AlloyDB in Different VPC Networks without VPC Peering
This scenario is the main motivation for this post, and PSC solves the challenge for us. PSC provides another connectivity option for AlloyDB users, with improvements over the legacy Service Networking (PSA) framework, such as:
- Ability to make direct connections from multiple projects easily into AlloyDB resources, enabling new architectures.
- Efficient use of IP address space, as a single IP address is required from a consumer VPC to connect to an AlloyDB instance. PSA requires a minimum of a /24 IP range.
- More secure as consumer and producer VPCs are isolated, and only inbound connectivity to AlloyDB is allowed. PSA requires bidirectional connectivity between the consumer VPC and AlloyDB by default, which is a blocker for some customer use cases.
Private Service Connect (PSC) for AlloyDB
Private Service Connect provides a powerful way to consume Google-managed services privately, even if they reside in a different project or network. PSC creates an internal DNS alias for your Cloud SQL or AlloyDB instance; your application can access the instance using this alias, and traffic is routed securely over Google’s private network.
In Fig 6, you can see that resources in the Customer GKE VPC network and the Customer AlloyDB VPC network can connect to AlloyDB using Private Service Connect endpoints (forwarding rules) in their respective VPC networks. These forwarding rules use the service attachment that is created as part of Private Service Connect. In the AlloyDB service attachment, we can whitelist the projects where our applications or clients reside.
This doesn’t require a Private Service Access setup in the Customer AlloyDB project. An AlloyDB instance can be created with either PSC or PSA enabled.
As of May 2024, both methods of private connectivity can’t be configured simultaneously, and you can’t switch an existing instance between PSC and PSA. The same is true for Cloud SQL.
You can also use Private Service Connect for AlloyDB when customer VPC networks don’t have PSA configured, i.e., in all the scenarios above.
You must create this endpoint in each customer VPC network where database access is needed.
Private Service Connect endpoints that are used to access services are regional resources. However, you can make an endpoint available in other regions by configuring global access.
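For example, global access is just a flag on the consumer-side forwarding rule; a sketch with placeholder names (the actual endpoint for this walkthrough is created later):
### PSC endpoint reachable from clients in any region of the consumer VPC.
gcloud compute forwarding-rules create alloydb-psc-endpoint \\
--address=alloydb-psc-addr \\
--region=us-central1 \\
--network=my-vpc \\
--allow-psc-global-access \\
--target-service-attachment=SERVICE_ATTACHMENT_URI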
Can you enable PSC for an existing AlloyDB Instance?
So suppose you have an existing AlloyDB instance with PSA enabled, and an application in a separate VPC that is not peered to the Customer AlloyDB VPC network. If you want to connect to AlloyDB using private IP, what options do you have?
You could do what we did in Scenario 3, i.e., create VPC peering between the Customer GKE VPC and the Customer AlloyDB VPC and use a TCP proxy or custom route advertisements. But there is the overhead of managing routes or a GCE VM running the proxy.
A PSC-enabled AlloyDB instance would be a good fit for such a scenario. Since you can’t simply switch between PSA and PSC, you have the following options:
- You can create a DMS job to migrate data from the PSA-enabled AlloyDB instance to a PSC-enabled one. That means some extra work in setting up DMS, plus a small downtime.
- You can export all your data and import it into a new PSC-enabled instance, which also means downtime.
- You can create a PSC endpoint for a non-PSC (PSA-enabled) AlloyDB instance. That requires some work, but you will be able to use both the PSA and PSC endpoints to connect to your AlloyDB instance.
Create PSC endpoint for a Non-PSC (PSA enabled) AlloyDB Instance
In this section, we discuss the ways to create a PSC endpoint for a PSA-enabled AlloyDB instance.
Two Methods
- Method 1: Create a service attachment in the Customer AlloyDB VPC network, which already has PSA enabled. This service attachment uses a producer forwarding rule in the Customer AlloyDB VPC network whose backend is a target instance pointing to the VM where a TCP or SOCKS proxy is running (Fig 8). Then create a forwarding rule in the Customer GKE VPC network with the service attachment as its target. Applications deployed on GCE/GKE in the Customer GKE VPC network can connect to the AlloyDB instance using the private IP assigned to this outgoing forwarding rule.
- Method 2: Create a service attachment in the Customer AlloyDB VPC network, which already has PSA enabled. This service attachment uses a producer forwarding rule in the Customer AlloyDB VPC network whose backend service has zonal hybrid network endpoint groups (NEGs) as targets; the zonal NEG has the AlloyDB private IP and port as its endpoint. Then create a forwarding rule in the Customer GKE VPC network with the service attachment as its target. Applications deployed on GCE/GKE in the Customer GKE VPC network can connect to the AlloyDB instance using the private IP assigned to this outgoing forwarding rule.
Method 2 is the better option because:
- It uses fully managed services rather than a proxy running on a VM.
- You can use the AlloyDB Auth Proxy for encryption and IAM authentication.
Assumptions before Implementation
- You have an AlloyDB cluster with a primary instance in the Customer AlloyDB project, which has PSA enabled
- You have a Customer AlloyDB project with a Customer AlloyDB VPC network, and a Customer GKE/GCE project with a Customer GKE/GCE VPC network
- Your account has appropriate privileges to create and manage resources
Steps to implement Method 2
- Read in the environment variables
read -p "region : " REGION
read -p "projectid : " DB_PROJECT
read -p "GCE_SUBNET : " GCE_SUBNET
read -p "DB_VPC_NET : " DB_VPC_NET
read -p "CIDR_TCPNAT : " CIDR_TCPNAT
read -p "clientprojectid : " CLIENT_PROJECT
read -p "AlloyDB Cluster : " ADB_CLUSTER
read -p "AlloyDB Instance : " ADB_INSTANCE
read -p "CLIENT_VPC_NET : " CLIENT_VPC_NET
read -p "GCE_SUBNET_CLIENT : " GCE_SUBNET_CLIENT
read -p "Resrverip : " ADDR
read -p "PORT :" PORT
DB_VPC_NET — Customer AlloyDB VPC Network name
DB_PROJECT — Customer AlloyDB VPC Project ID
GCE_SUBNET — Subnet in Customer AlloyDB VPC Network
CIDR_TCPNAT — CIDR for PSC subnet in Customer AlloyDB VPC Network
CLIENT_PROJECT — Customer GKE/GCE VPC Project ID
ADB_CLUSTER — AlloyDB Cluster name
ADB_INSTANCE — AlloyDB Instance
REGION — Region in which your AlloyDB Instance is created
CLIENT_VPC_NET — Customer GKE/GCE VPC Network name
GCE_SUBNET_CLIENT — Subnet in Customer GKE/GCE VPC Network
ADDR — IP address to be used by outgoing Forwarding rule
PORT — PORT on which DB or Auth proxy is listening
- Authenticate and create a subnet for Private service connect in Customer AlloyDB VPC Network
# Authenticate
gcloud auth login
# Create a TCP NAT subnet.
gcloud compute networks subnets create dms-psc-nat-${REGION}-tcp \\
--network=${DB_VPC_NET} \\
--project=${DB_PROJECT} \\
--region=${REGION} \\
--range=${CIDR_TCPNAT} \\
--purpose=private-service-connect
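Note: the regional internal proxy load balancer configured below relies on Envoy proxies running in a proxy-only subnet. If the region doesn’t already have one in this VPC, you may need to create it first; a minimal sketch (the CIDR is a placeholder and must not overlap existing ranges):
### Create a proxy-only subnet for regional Envoy-based LBs (skip if one exists).
gcloud compute networks subnets create proxy-only-${REGION} \\
--network=${DB_VPC_NET} \\
--project=${DB_PROJECT} \\
--region=${REGION} \\
--range=10.129.0.0/23 \\
--purpose=REGIONAL_MANAGED_PROXY \\
--role=ACTIVE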
- Create a zonal network endpoint group of hybrid type and add an endpoint that is the private IP of the AlloyDB instance. The port can be either the AlloyDB port (5432) or the AlloyDB Auth Proxy port (5433).
- For Cloud SQL, the ports would be different; the Cloud SQL Auth Proxy uses 3307.
### create NEG
gcloud compute network-endpoint-groups create neg-$(date +%d%m%Y) --default-port=$PORT --network=${DB_VPC_NET} \\
--network-endpoint-type=non-gcp-private-ip-port \\
--project=${DB_PROJECT} \\
--zone=${REGION}-a
### get the private IP of the AlloyDB (or Cloud SQL) instance
DB_PRIVATE_IP=$(gcloud beta alloydb instances describe $ADB_INSTANCE --cluster=$ADB_CLUSTER --region=$REGION --format json --project=${DB_PROJECT} | jq -r .ipAddress)
### add the AlloyDB private IP and port as the NEG endpoint
gcloud compute network-endpoint-groups update neg-$(date +%d%m%Y) \\
--zone=${REGION}-a \\
--add-endpoint="ip=${DB_PRIVATE_IP},port=${PORT}" --project=${DB_PROJECT}
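Optionally, confirm the endpoint was registered before wiring up the load balancer:
### List endpoints in the NEG; expect the AlloyDB private IP and port.
gcloud compute network-endpoint-groups list-network-endpoints neg-$(date +%d%m%Y) \\
--zone=${REGION}-a --project=${DB_PROJECT}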
- Configure the load balancer: a backend service with the hybrid NEG as backend, a target TCP proxy, and a regional health check for the backends.
### Health check probes for hybrid NEG backends originate from Envoy proxies in the proxy-only subnet.
gcloud compute health-checks create tcp lb-hc-$(date +%d%m%Y) \\
--region=${REGION} \\
--use-serving-port --project=${DB_PROJECT}
##### Create a backend service.
gcloud compute backend-services create bs-lb-$(date +%d%m%Y) \\
--load-balancing-scheme=INTERNAL_MANAGED \\
--protocol=TCP \\
--region=${REGION} \\
--health-checks=lb-hc-$(date +%d%m%Y) \\
--health-checks-region=${REGION} --project=${DB_PROJECT}
## Add the hybrid NEG backend to the backend service.
gcloud compute backend-services add-backend bs-lb-$(date +%d%m%Y) \\
--network-endpoint-group=neg-$(date +%d%m%Y) \\
--network-endpoint-group-zone=${REGION}-a \\
--region=${REGION} \\
--balancing-mode=CONNECTION \\
--max-connections=100 --project=${DB_PROJECT}
### For MAX_CONNECTIONS, enter the maximum concurrent connections
### that the backend should handle.
###Create the target TCP proxy.
gcloud compute target-tcp-proxies create tcp-proxy-$(date +%d%m%Y) \\
--backend-service=bs-lb-$(date +%d%m%Y) \\
--region=${REGION} \\
--project=${DB_PROJECT}
- Create the forwarding rule whose target is the TCP proxy created in the previous step. The forwarding rule only forwards packets with a matching destination port.
## create the incoming forwarding rule which acts as the frontend for the LB
gcloud compute forwarding-rules create fr-psc-$(date +%d%m%Y) \\
--load-balancing-scheme=INTERNAL_MANAGED \\
--network=${DB_VPC_NET} \\
--subnet=${GCE_SUBNET} \\
--ports=$PORT \\
--region=${REGION} \\
--target-tcp-proxy=tcp-proxy-$(date +%d%m%Y) \\
--target-tcp-proxy-region=${REGION} --project=${DB_PROJECT}
- Create a service attachment in the Customer AlloyDB VPC network that points to the forwarding rule created in the previous step. We whitelist the Customer AlloyDB project and the Customer GCE/GKE project in the service attachment’s consumer accept list.
# Create a service attachment.
gcloud compute service-attachments create dms-psc-svc-att-${REGION} \\
--project=${DB_PROJECT} \\
--region=${REGION} \\
--producer-forwarding-rule=fr-psc-$(date +%d%m%Y) \\
--connection-preference=ACCEPT_MANUAL \\
--nat-subnets=dms-psc-nat-${REGION}-tcp \\
--consumer-accept-list=${DB_PROJECT}=2000,${CLIENT_PROJECT}=2000
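Optionally, describe the service attachment to confirm it exists and, later, to check the status of consumer endpoint connections:
### connectedEndpoints lists consumer forwarding rules and their status.
gcloud compute service-attachments describe dms-psc-svc-att-${REGION} \\
--region=${REGION} --project=${DB_PROJECT}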
- Create a firewall rule to allow ingress from the PSC NAT subnet
gcloud compute \\
--project=${DB_PROJECT} firewall-rules create fwr-dms-allow-psc-tcp \\
--direction=INGRESS \\
--priority=1000 \\
--network=${DB_VPC_NET} \\
--action=ALLOW \\
--rules=all \\
--source-ranges=${CIDR_TCPNAT} \\
--enable-logging
- Reserve an internal IP from the GCE_SUBNET_CLIENT subnet. This IP will be assigned to the forwarding rule, and clients in the Customer GCE/GKE VPC network will use it to connect to AlloyDB.
If you want to use the AlloyDB Auth Proxy or a language connector, this IP has to be the same as the PSA private IP. That means you need a subnet in the Customer GCE VPC network whose CIDR overlaps part of the Private Service Access allocated IP range in your Customer AlloyDB VPC network (see the sketch after the reservation command below).
### Reserve a Private IP address
gcloud compute addresses create addr-$(date +%d%m%Y) \\
--project=${CLIENT_PROJECT} \\
--region=${REGION} \\
--subnet=${GCE_SUBNET_CLIENT} \\
--addresses=${ADDR}
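As noted above, if you plan to use the AlloyDB Auth Proxy or a language connector, ADDR must equal the instance’s PSA private IP, which means GCE_SUBNET_CLIENT must overlap the PSA allocated range. A sketch of how you might look up that range and create such a subnet (the subnet name and CIDR below are placeholders):
### Find the PSA allocated range(s) in the Customer AlloyDB VPC.
gcloud compute addresses list --global --project=${DB_PROJECT} \\
--filter="purpose=VPC_PEERING"
### Create a client-side subnet overlapping part of that range,
### so that ADDR can equal the AlloyDB PSA private IP.
gcloud compute networks subnets create psa-overlap-subnet \\
--network=${CLIENT_VPC_NET} \\
--project=${CLIENT_PROJECT} \\
--region=${REGION} \\
--range=10.50.0.0/24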
- Create a forwarding rule in the Customer GCE/GKE VPC network with the service attachment in the Customer AlloyDB VPC network as its target
## create PSC endpoint (forwarding rule)
gcloud compute forwarding-rules create fr-client-$(date +%d%m%Y) \\
--address=addr-$(date +%d%m%Y) \\
--project=${CLIENT_PROJECT} \\
--region=${REGION} \\
--network=${CLIENT_VPC_NET} \\
--target-service-attachment=projects/${DB_PROJECT}/regions/${REGION}/serviceAttachments/dms-psc-svc-att-${REGION}
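You can verify that the endpoint was accepted by the service attachment before testing:
### Expect pscConnectionStatus: ACCEPTED
gcloud compute forwarding-rules describe fr-client-$(date +%d%m%Y) \\
--region=${REGION} --project=${CLIENT_PROJECT} \\
--format="value(pscConnectionStatus)"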
- To test connectivity, create a GCE VM in the Customer GCE/GKE project using GCE_SUBNET_CLIENT of the Customer GCE/GKE VPC network, and install postgresql-client
## create a Client VM
gcloud compute instances create instance-$(date +%d%m%Y) \\
--project=${CLIENT_PROJECT} \\
--zone=${REGION}-a \\
--image-family=debian-12 \\
--image-project=debian-cloud \\
--network-interface=network-tier=PREMIUM,stack-type=IPV4_ONLY,subnet=${GCE_SUBNET_CLIENT} \\
--metadata=startup-script='#! /bin/bash
apt-get update && apt-get install -y postgresql-client wget jq
wget https://storage.googleapis.com/alloydb-auth-proxy/v1.7.1/alloydb-auth-proxy.linux.amd64 -O alloydb-auth-proxy
chmod +x alloydb-auth-proxy
'
- On the GCE VM in Customer GKE/GCE VPC network
### If using the AlloyDB Auth Proxy, look up the instance URI and start the proxy
INST_URI=$(gcloud beta alloydb instances describe ${ADB_INSTANCE} --project=${DB_PROJECT} --cluster=${ADB_CLUSTER} --region=${REGION} --format json | jq -r .name)
./alloydb-auth-proxy ${INST_URI}
- If you are configuring this for Cloud SQL, refer to the public documentation to start the Cloud SQL Auth Proxy.
- On the GCE VM in Customer GKE/GCE VPC network
### If using AlloyDB Auth Proxy
psql -h 127.0.0.1 -U postgres postgres
### If using Private IP
psql -h <Private IP of forwarding rule in Customer GCE/GKE VPC network> -U postgres postgres
This method works well; I have tested it multiple times.
Performance testing is advised before production deployment.
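For a rough baseline, you can drive load from the client VM through the endpoint (or the local Auth Proxy listener) with pgbench; a sketch, with parameters to tune for your workload (on Debian, pgbench ships in the postgresql-contrib package):
### Initialize a pgbench schema, then run a short load test.
pgbench -i -h 127.0.0.1 -U postgres postgres
### 10 concurrent clients for 60 seconds.
pgbench -h 127.0.0.1 -U postgres -c 10 -T 60 postgres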
Other notable points
- Prefer managed PSC-enabled AlloyDB over such a manual solution.
- The NEG used is a zonal hybrid NEG, so in case of a failover you may have to add a new zonal NEG to the load balancer’s backend service.
- You can use the Database Migration Service (DMS) to migrate a PSA-enabled AlloyDB instance to a PSC-enabled one.
References
- Connect to a cluster from outside its VPC | AlloyDB for PostgreSQL | Google Cloud
- Connect your instance to multiple VPCs | Cloud SQL for MySQL | Google Cloud
- Connect to an instance using Private Service Connect | Cloud SQL for MySQL | Google Cloud
","author"=>"Harinderjit Singh",
"link"=>"https://medium.com/google-cloud/connect-to-non-psc-alloydb-or-non-psc-cloud-sql-from-a-different-vpc-3f8eeed51d2a?source=rss----e52cf94d98af---4",
"published_date"=>Tue, 14 May 2024 04:21:54.000000000 UTC +00:00,
"image_url"=>nil,
"feed_url"=>"https://medium.com/google-cloud/connect-to-non-psc-alloydb-or-non-psc-cloud-sql-from-a-different-vpc-3f8eeed51d2a?source=rss----e52cf94d98af---4",
"language"=>nil,
"active"=>true,
"ricc_source"=>"feedjira::v1",
"created_at"=>Tue, 14 May 2024 04:31:43.208693000 UTC +00:00,
"updated_at"=>Tue, 14 May 2024 04:31:43.208693000 UTC +00:00,
"newspaper"=>"Google Cloud - Medium",
"macro_region"=>"Blogs"}