<figure><img alt="" src="https://cdn-images-1.medium.com/max/430/1*wTbr8sMHT1BlgQDIINWymQ.jpeg" /><figcaption>DMS now performs Parallel Full Load and Parallel CDC from PostgreSQL</figcaption></figure>
<p><a href="https://cloud.google.com/database-migration">Database Migration Service</a> (DMS) makes it easier for you to migrate your data to Google Cloud. This service helps you lift and shift your MySQL and PostgreSQL workloads into Cloud SQL and AlloyDB for PostgreSQL. In addition, you can lift and modernize your Oracle workloads into Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL.</p>
<p>In this document we will discuss how you can optimize the DMS initial load and CDC when migrating to a Cloud SQL for PostgreSQL instance. The source can be either Oracle or PostgreSQL.</p>
<h4>The Cloud SQL for PostgreSQL Parameters</h4>
<p><strong><em>The suggested parameters need to be properly tested and verified against the chosen target Cloud SQL instance type. They should not be set in your actual production workload.</em></strong></p>
<p>1. <strong>max_wal_size</strong> = 20GB</p>
<p>This makes sure that database checkpoints happen only after 20GB worth of WAL data has been generated. If it takes longer than 5 minutes to generate 20GB of WAL, checkpoints will instead happen every 5 minutes, as per the checkpoint_timeout setting.</p>
<blockquote>During a DMS load with the default max_wal_size (1.5GB for Cloud SQL for PostgreSQL Enterprise edition and 5GB for Enterprise Plus edition), checkpoints happen every few seconds, which increases I/O and CPU usage. A higher value reduces the checkpoint frequency, which reduces the I/O footprint.</blockquote>
<p>Also monitor the <strong>“WALWrite”</strong> wait event in the Cloud SQL console’s <strong>System Insights</strong>; in this case the <strong>event_type</strong> will be <strong>“LWLock”</strong>. Be aware that <strong><em>WALWrite can be both an LWLock and an IO event_type.</em></strong> With frequent checkpoints it manifests as an LWLock event type, because the CKPT process waits for a lock on the WAL segments to write the checkpoint record (comparable to Oracle’s redo latches). Heavy commit activity manifests WALWrite as an IO wait event_type, where the WAL writer is busy writing changes from the WAL buffers to the WAL files.</p>
<p>There can also be waits on the <strong>“DataFileWrite”</strong> and <strong>“DataFileFlush”</strong> events during very frequent and aggressive checkpoints.</p>
<p>2. <strong>commit_delay</strong> = 1000 (start with this and go up to 50000)</p>
<p>commit_delay sets the <strong>delay in microseconds between a transaction commit and the WAL flush to disk</strong>. It improves transaction throughput by effectively batching commits during bulk inserts: the WAL flush, which by default happens on every transaction commit, is delayed by 1,000 microseconds, provided the load is high enough for more transactions to accumulate within that delay (which will be the case during the DMS initial load).</p>
<blockquote>Monitor the <strong>“WALSync”</strong> and <strong>“WALWrite”</strong> wait events in <strong>System Insights</strong>, which are IO wait event_types related to high commit rates, along with the <strong>‘Transaction count’</strong> metric in System Insights.</blockquote>
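<p>As a minimal sketch of how to sanity-check checkpoint and commit behaviour from the database side (in addition to System Insights), the queries below read the standard <strong>pg_settings</strong>, <strong>pg_stat_bgwriter</strong> and <strong>pg_stat_wal</strong> views on the target instance. pg_stat_wal exists from PostgreSQL 14 onwards and the exact column sets vary slightly across versions, so treat this as illustrative rather than definitive.</p>
<pre><code>-- Current values of the parameters discussed above.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('max_wal_size', 'checkpoint_timeout', 'commit_delay', 'wal_buffers');

-- Checkpoint activity: checkpoints_req growing much faster than
-- checkpoints_timed means checkpoints are being forced by WAL volume,
-- i.e. max_wal_size is still too small for the load.
SELECT checkpoints_timed, checkpoints_req,
       checkpoint_write_time, checkpoint_sync_time, buffers_checkpoint
FROM pg_stat_bgwriter;

-- WAL pressure (PostgreSQL 14+): a non-zero wal_buffers_full suggests
-- wal_buffers is too small; wal_sync reflects how often commits flush WAL.
SELECT wal_records, wal_bytes, wal_buffers_full, wal_write, wal_sync
FROM pg_stat_wal;</code></pre>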
<p>3. <strong>wal_buffers</strong> = 32–64 MB on 4 vCPU machines and 64–128 MB on 8–16 vCPU machines. It can be set as high as 256MB for targets with more vCPUs.</p>
<p>Small wal_buffers cause more frequent WAL flushes, so increasing the value helps during the initial load.</p>
<p>Again, monitor the wait events mentioned in (2) above.</p>
<p>4. <strong>Parallelism</strong>: since PostgreSQL <strong>does not support parallel DML</strong>, bulk inserts will not benefit from additional parallelism here.</p>
<p>5. <strong>autovacuum</strong>: turn it off for the duration of the migration.</p>
<p>After the initial load is complete, make sure autovacuum is turned back on, but only after running a manual vacuum.</p>
<p>Run a <strong><em>manual vacuum before releasing the database for actual production usage</em></strong>, and set the following to make the manual vacuum fast, as it will have a lot of work to do the first time (a sketch of these steps follows below):</p>
<p><strong>max_parallel_maintenance_workers = 4</strong> (set it to the number of vCPUs of the Cloud SQL instance)</p>
<p><strong>maintenance_work_mem = 10GB</strong></p>
<p>Note that the manual vacuum will take memory from maintenance_work_mem.</p>
<p><strong>Subsequently, to make autovacuum faster,</strong> set <strong><em>autovacuum_work_mem to 1GB; otherwise autovacuum workers will consume memory from maintenance_work_mem.</em></strong></p>
<p>From the Cloud SQL PostgreSQL database parameter perspective, we need to <strong>tune checkpoints and commits during the DMS initial load</strong> (and in general for any bulk load operation), as they significantly affect I/O and, to an extent, CPU.</p>
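<p>Here is a hedged sketch of the manual vacuum step from point 5, using standard PostgreSQL syntax. The values are the suggestions above and should be sized to your instance; autovacuum and autovacuum_work_mem are typically changed as instance-level database flags on Cloud SQL rather than inside a SQL session.</p>
<pre><code>-- Run once after the DMS initial load completes and before production cut-over.
-- Session-level settings so the manual vacuum can use parallel maintenance
-- workers and ample memory (values suggested above; adjust to your instance).
SET max_parallel_maintenance_workers = 4;   -- roughly the number of vCPUs
SET maintenance_work_mem = '10GB';

-- Vacuum the database and refresh planner statistics in one pass.
VACUUM (ANALYZE, VERBOSE);

-- Afterwards: re-enable the autovacuum flag on the instance and set
-- autovacuum_work_mem to about 1GB so autovacuum workers do not draw
-- from maintenance_work_mem.</code></pre>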
<blockquote>6. The recommendation below is very specific to the case where the source is PostgreSQL, as DMS now supports Parallel Full Load and Parallel CDC when migrating from PostgreSQL to Cloud SQL for PostgreSQL or AlloyDB: <a href="https://cloud.google.com/database-migration/docs/postgres/create-migration-job#specify-source-connection-profile-info">faster PostgreSQL migrations</a></blockquote>
<p>The following parameter settings help optimize the initial data copy and CDC when using PGMS (PostgreSQL Multiple Subscriptions).</p>
<p>In the source PostgreSQL database:</p>
<p><strong>max_replication_slots</strong>: set it to at least 20. It must be at least the number of subscriptions expected to connect, which is at most 10 subscriptions when DMS Parallelism is configured to Maximum (4 subscriptions per database), plus some reserve for table synchronization.</p>
<p><strong>max_wal_senders</strong>: set it to a higher value, preferably 20, and at least the same as max_replication_slots. This controls the maximum number of concurrent connections from the target Cloud SQL PostgreSQL instance. With DMS Parallelism configured to Maximum there can be 4 subscriptions created per database, with a maximum of 10 subscriptions for the PostgreSQL cluster.</p>
<p>In the target instance, assuming it has enough vCPU and memory available:</p>
<p><strong>max_worker_processes</strong>: set it to the number of vCPUs in the target.</p>
<p><strong>max_replication_slots</strong>: set it to 20. It must be at least the number of subscriptions that will be added to the subscriber, which can be up to 10, plus some reserve for table synchronization.</p>
<blockquote>Even with PGMS (PostgreSQL Multiple Subscriptions), when a subscription is initialized there can be only one synchronization worker per table, which means a single table cannot be copied in parallel. Tables are copied in parallel across the subscriptions/replication sets.</blockquote>
<blockquote>max_logical_replication_workers and max_sync_workers_per_subscription will not affect DMS Parallelism, as these parameters influence native logical replication and DMS uses pglogical.</blockquote>
<blockquote>7. This recommendation is very specific to migrations from an Oracle source that has many large LOB segments, and applies if your target Cloud SQL PostgreSQL or AlloyDB instance is on version 14 or above. To make the initial load up to 3x faster, change default_toast_compression on the target Cloud SQL PostgreSQL or AlloyDB instance to LZ4.</blockquote>
<p>CLOBs and BLOBs in Oracle are converted to TEXT and BYTEA respectively in PostgreSQL. If the LOBs are large, it is very likely that the tuple/row size exceeds 2KB, and such rows are spilled to TOAST segments (stored out-of-line) in PostgreSQL, in the pg_toast schema as pg_toast_&lt;OID of table&gt;. TOAST data is compressed and decompressed as it is inserted and queried. The default compression method PostgreSQL uses is PGLZ, which is CPU intensive and not as performant as LZ4, which is available from PostgreSQL 14 onwards. With LZ4, SELECT speed is close to that of uncompressed data, and data insertion is up to 80% faster compared to PGLZ.</p>
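<p>As a small illustration of the TOAST compression point, here is a sketch assuming a hypothetical table named <strong>docs</strong> with a BYTEA column migrated from an Oracle BLOB. The per-column COMPRESSION clause and pg_column_compression() are available from PostgreSQL 14 onwards; default_toast_compression can also be set per session or per database, but for a DMS migration the recommendation above is to change it at the instance level.</p>
<pre><code>-- Confirm the default used for newly stored TOAST values.
SHOW default_toast_compression;        -- expect 'lz4' after changing the flag

-- Hypothetical table migrated from Oracle: BLOB becomes BYTEA, CLOB becomes TEXT.
CREATE TABLE docs (
    id      bigint PRIMARY KEY,
    payload bytea COMPRESSION lz4       -- explicit per-column override
);

-- Check which compression method was actually used for stored values
-- (NULL means the value was too short to be compressed or TOASTed).
SELECT id, pg_column_compression(payload) AS compression
FROM docs
LIMIT 10;</code></pre>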
<h4>Target Cloud SQL PostgreSQL Instance Sizing and Storage</h4>
<p>The more resources you give to the target Cloud SQL instance, the better DMS will perform.</p>
<p>Three factors matter: <strong>network throughput, disk throughput and disk IOPS</strong>. Network throughput is limited by the number of vCPUs: we get <strong>250MBps of network throughput per vCPU</strong>, and <strong>disk throughput (0.48MBps per GB)</strong> is limited by network throughput. For <strong>disk IOPS</strong> we get <strong>30 IOPS/GB</strong>.</p>
<p>So the <strong><em>correct instance size, along with the storage size, will help you improve DMS initial load performance</em></strong>. In general DMS needs plenty of IOPS and decent disk throughput, and you can configure your disk size so that you use as much of the network throughput bandwidth as possible for disk throughput (most of the network bandwidth consumed is between the database VM and the underlying storage).</p>
<p>For example, a 4 vCPU Cloud SQL Enterprise instance gets 1000 MBps of network throughput. If you allocate a 600GB disk you get disk throughput close to 300 MBps (600 GB × 0.48 MBps/GB ≈ 288 MBps) and 18,000 IOPS (600 GB × 30 IOPS/GB). (This does not take your database size into account; you will of course need to allocate more storage than your database size.)</p>
<blockquote>So do not size the initial storage based on the source database size only; take into account the throughput and IOPS requirements of the workload.</blockquote>
<p><strong><em>You can always reduce storage later, either through a request to the Google Support team or via self-service storage shrink, which is currently in Preview. The target Cloud SQL instance can be downscaled before the application cut-over.</em></strong></p>
<h4>A Few More Tips</h4>
<p>Do not create a regional Cloud SQL instance for the duration of the migration. If you need High Availability, enable it after the migration is done and before the application cut-over.</p>
<p>Do not enable automated backups during the migration.</p>
<p>DMS does not create secondary indexes and constraints during the initial load; it creates them after the initial load completes and before CDC.</p>
<p>Install the <strong>pg_wait_sampling extension</strong>, which is helpful for diagnosing wait events related to slow PostgreSQL performance during the migration and even after production cut-over. Query <strong>pg_stat_bgwriter</strong> and <strong>pg_stat_wal</strong> for information on checkpoints and commits that can be used for further diagnosis (see the queries sketched after point 2 above). Enable log-based alerts and log-based metrics related to frequent checkpoints.</p>
<hr><p><a href="https://medium.com/google-cloud/cloudsql-for-postgresql-optimization-during-migration-using-database-migration-service-68ba35ec3040">Cloud SQL for PostgreSQL Optimization during Migration using Database Migration Service.</a> was originally published in <a href="https://medium.com/google-cloud">Google Cloud - Community</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>