An analytical query uses a single CPU core on a 32-core server: SELECT COUNT(*) FROM big_table takes 60 seconds. Goal: enable parallelism to use the available cores and bring the query toward a 5-second target.
Parallel query execution distributes work across multiple background worker processes.

Setup: (1) Enable parallelism: ALTER SYSTEM SET max_parallel_workers_per_gather = 8; ALTER SYSTEM SET max_parallel_workers = 32; ALTER SYSTEM SET max_worker_processes = 32; SELECT pg_reload_conf(). Note: changing max_worker_processes requires a server restart; the other two take effect on reload. (2) What the knobs mean: max_parallel_workers_per_gather = 8 (max workers per Gather node, i.e., per query; balance throughput against latency). max_parallel_workers = 32 (total parallel workers cluster-wide). max_worker_processes = 32 (hard limit on all background workers). (3) Lower cost thresholds so the planner chooses parallel plans more often: ALTER SYSTEM SET parallel_tuple_cost = 0.01; ALTER SYSTEM SET parallel_setup_cost = 500. (4) Verify parallelism is used: EXPLAIN SELECT COUNT(*) FROM big_table. The plan should contain a "Gather" node. (5) Measure: EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM big_table. Serial: 60s. With 8 workers: ~10-15s. Why not 60/8 = 7.5s? Overhead: worker startup, coordination, and tuple transfer to the leader. Typical speedup: 4-6x for 8 workers.

Tuning targets: (1) To hit the 5s target with 8 workers, the serial query would need to run in ~40s; at 60s, optimize the query itself first (e.g., with an index). (2) Check the plan: EXPLAIN SELECT COUNT(*) FROM big_table WHERE col > 100. If it shows a Seq Scan, add an index: CREATE INDEX idx_col ON big_table(col). Then parallelize. (3) Partition the table: PARTITION BY HASH(col). Each partition can be scanned by its own worker; scales better toward 32 workers. (4) Give workers memory for sorts/hashes: ALTER SYSTEM SET work_mem = '2GB'. Caution: work_mem applies per sort/hash operation per worker, so actual usage multiplies with worker count.

Monitoring: (1) Confirm parallelism: EXPLAIN (ANALYZE, VERBOSE) SELECT ... shows "Parallel Seq Scan", "Gather", etc. (2) Watch worker CPU with top during the query. Parallel workers are separate postgres processes; you should see several of them near 100% CPU if the query parallelized. (3) Measure speedup: track query time. Acceptable speedup for 8 workers: 3-5x (some overhead is unavoidable). If <2x, parallelism isn't worth it (overhead exceeds the benefit).
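The setup above can be condensed into one session; the commented plan shape is representative of what PostgreSQL emits for a parallel COUNT(*) (worker counts and timings will vary by hardware and data):

```sql
-- Enable and verify parallelism.
-- (max_worker_processes also needs raising, but that requires a restart, not just reload.)
ALTER SYSTEM SET max_parallel_workers_per_gather = 8;
ALTER SYSTEM SET max_parallel_workers = 32;
ALTER SYSTEM SET parallel_tuple_cost = 0.01;
ALTER SYSTEM SET parallel_setup_cost = 500;
SELECT pg_reload_conf();

EXPLAIN (ANALYZE, BUFFERS) SELECT COUNT(*) FROM big_table;
-- Representative plan shape:
-- Finalize Aggregate
--   ->  Gather  (Workers Planned: 8, Workers Launched: 8)
--         ->  Partial Aggregate
--               ->  Parallel Seq Scan on big_table
```

The Gather node is the handoff point: each worker computes a partial count, and the leader combines them in the finalize step.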
Follow-up: Explain the Gather node in parallel plans. Why does parallelism incur overhead?
Parallel query is enabled, but queries get slower: workers contend on locks and memory. Root cause: too many workers for the available resources. Fix: tune worker count and work_mem.
Excessive workers cause contention and slow queries down.

Root causes: (1) max_parallel_workers_per_gather too high (e.g., 32 workers on a 4-core box). (2) work_mem too small; workers spill to disk and fight for memory. (3) I/O bottleneck; parallelism doesn't help disk-bound queries.

Diagnose: (1) Check settings: SHOW max_parallel_workers_per_gather; SHOW max_worker_processes; SHOW work_mem. (2) Compare serial vs. parallel: disable parallelism for the session with SET max_parallel_workers_per_gather = 0, run EXPLAIN (ANALYZE) SELECT ..., and compare times. If the parallel run is slower, overhead dominates. (3) Watch worker CPU in top during the query. If workers sit at low CPU, they are likely waiting on locks or I/O.

Tuning: (1) Reduce workers: ALTER SYSTEM SET max_parallel_workers_per_gather = 4; SELECT pg_reload_conf(). Fewer workers = less contention. Re-test. (2) Increase work_mem: ALTER SYSTEM SET work_mem = '4GB'; SELECT pg_reload_conf(). More memory per worker = fewer spills (remember work_mem is per operation per worker, so total usage multiplies). (3) Check for an I/O bottleneck: iostat -x 1 5 during the query. If %util is near 100%, the disk is saturated and parallelism won't help; upgrade storage (SSD) or optimize the query (add an index). (4) Enable partitionwise aggregation (PG11+, partitioned tables only): ALTER SYSTEM SET enable_partitionwise_aggregate = on. Aggregates per partition, then combines; often faster. (5) Use partial indexes: CREATE INDEX idx_hot ON table(col) WHERE status = 'active'. Scans touch only active rows.

Optimization: (1) Measure speedup per worker: 1 worker = 60s; 2 workers = 45s (25% improvement); 4 workers = 35s (42%); 8 workers = 32s (47%). Diminishing returns; the sweet spot is around 4-8 workers. (2) For analytics sessions, raise the limit per session: SET max_parallel_workers_per_gather = 16 (PostgreSQL has no ALTER SESSION; use SET). (3) Keep small queries serial, since they aren't worth the overhead: SET min_parallel_table_scan_size = '1GB' (only scans over 1GB are considered for parallelism).

Recommendation: start with 4 workers and increase gradually while monitoring the improvement. Rule of thumb: 2-4 workers per CPU socket (on a 2-socket box, 4-8 workers).
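The serial-vs-parallel diagnosis above can be run as a minimal A/B test in one session (big_table is the running example from this section):

```sql
-- Serial baseline: disable parallel plans for this session only.
SET max_parallel_workers_per_gather = 0;
EXPLAIN (ANALYZE) SELECT COUNT(*) FROM big_table;

-- Modest parallelism: re-plan with up to 4 workers and compare timings.
SET max_parallel_workers_per_gather = 4;
EXPLAIN (ANALYZE) SELECT COUNT(*) FROM big_table;

-- Restore the session to its configured default.
RESET max_parallel_workers_per_gather;
```

If the 4-worker run is not meaningfully faster (under ~2x), the query is likely I/O-bound or too small to amortize worker startup, and reducing parallelism is the right call.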
Test and measure; parallelism is not always faster.
Follow-up: Explain when parallel queries are beneficial vs. harmful. How do I/O-bound and CPU-bound queries differ?
Mixed OLTP/OLAP workload: OLTP queries stall because parallel workers hog resources. Design resource isolation: prioritize OLTP, limit OLAP parallelism.
Mixed workload conflict: OLAP wants many workers; OLTP wants low latency. Resource contention causes OLTP latency spikes.

Solutions: (1) Separate resource pools (cgroups/namespaces): allocate CPU/memory limits per workload. Example: OLTP = 50% CPU, OLAP = 40%, system = 10%. (2) Session-level settings, tagged by application: (a) OLTP app: SET application_name = 'oltp_app'; SET max_parallel_workers_per_gather = 0; SET work_mem = '64MB'. (b) OLAP app: SET application_name = 'olap_app'; SET max_parallel_workers_per_gather = 16; SET work_mem = '2GB'. (PostgreSQL has no ALTER SESSION; use SET, or ALTER ROLE ... SET for login defaults.) (3) Query priorities: PostgreSQL has no built-in priority mechanism; approximate it with custom logic, e.g., classify OLTP (short) vs. OLAP (long) queries and disable parallelism for queries expected to run >10s (heuristic). (4) Read replicas: run OLAP on a standby, OLTP on the primary. Zero interference. (5) Time-based throttling: allow OLAP parallelism only in the maintenance window (night). During business hours (8-18), set max_parallel_workers_per_gather = 1; otherwise 16. (6) Monitoring and alerts: track OLTP p95 latency, e.g., SELECT percentile_cont(0.95) WITHIN GROUP (ORDER BY query_time) FROM oltp_queries (an app-side query log). Alert if >100ms; if the alert fires, reduce OLAP parallel workers.

Design for resilience: (1) Primary (OLTP): no parallelism, low latency. Config: max_parallel_workers_per_gather = 0 (disables parallel plans). (2) OLAP replica: parallelism enabled. Config: max_parallel_workers_per_gather = 16. (3) Monitoring: SELECT application_name, COUNT(*), AVG(query_time) FROM active_queries GROUP BY application_name (active_queries here is an app-side view; pg_stat_activity provides similar data). Alert if one workload dominates. (4) Auto-scaling: on Kubernetes/cloud, scale OLAP pod replicas on queue depth.

Recommendation: separate clusters (primary for OLTP, replica for OLAP). If a single cluster is required, limit OLAP parallelism during business hours. Bottom line: reserve resources for critical OLTP; give the remainder to OLAP.
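The per-application settings above are cleanest as role-level defaults, which new connections pick up automatically at login. A sketch, assuming roles oltp_app and olap_app already exist:

```sql
-- OLTP role: serial plans, modest per-operation memory.
ALTER ROLE oltp_app SET max_parallel_workers_per_gather = 0;
ALTER ROLE oltp_app SET work_mem = '64MB';

-- OLAP role: aggressive parallelism, large work_mem, runaway-query cap.
ALTER ROLE olap_app SET max_parallel_workers_per_gather = 16;
ALTER ROLE olap_app SET work_mem = '2GB';
ALTER ROLE olap_app SET statement_timeout = '30min';
```

This needs no application changes; existing sessions keep their old values until they reconnect. The statement_timeout cap is an extra safety net against OLAP queries that would otherwise hold workers for hours.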
Follow-up: Design resource governance for mixed workload. How to prioritize critical queries?
Parallel aggregation (COUNT, SUM) on 100M-row table slow: Planner chose inefficient parallel plan. Analyze and optimize; compare serial, parallel, partial aggregation.
Parallel aggregation (PG 9.6+) parallelizes grouping/aggregate work; partitionwise aggregation (PG11+) additionally aggregates per partition. Example: SELECT status, COUNT(*) FROM orders WHERE created_at >= '2026-01-01' GROUP BY status. Serial: one process groups and counts all 100M rows. Parallel: workers each produce partial counts per group; a finalize step combines them. The planner may still choose a serial plan if its estimated cost is lower (setup overhead). Encourage parallel plans: ALTER SYSTEM SET parallel_setup_cost = 100; and, for partitioned tables, ALTER SYSTEM SET enable_partitionwise_aggregate = on; SELECT pg_reload_conf(). Check the plan: EXPLAIN SELECT status, COUNT(*) FROM orders GROUP BY status. You should see a "Partial Aggregate" under a "Gather" node, with a "Finalize Aggregate" on top. Measure: serial 30s; parallel (4 workers) 12s; speedup ~2.5x.

Tuning: (1) If the plan stays serial: lower parallel_setup_cost/parallel_tuple_cost and check min_parallel_table_scan_size (enable_partitionwise_aggregate is a boolean and only affects partitioned tables). (2) Partition the table: PARTITION BY HASH(status). Aggregation runs per partition, then combines; naturally parallel. (3) Manual partial aggregation, combining per-group counts in two stages: WITH partial AS (SELECT status, COUNT(*) AS cnt FROM orders WHERE created_at >= '2026-01-01' GROUP BY status) SELECT status, SUM(cnt) FROM partial GROUP BY status. (The planner already performs this two-stage split internally for parallel plans; writing it manually mainly helps when combining pre-aggregated sources.) (4) Index: CREATE INDEX idx_status_date ON orders(status, created_at). An index scan can beat a seq scan even without parallelism.

Recommendation: for small tables, serial is often faster (setup overhead); for very large tables such as this 100M-row case, parallel usually wins. Test on actual data. If you need <10s latency, consider denormalization or materialized views rather than relying on parallelism alone.
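One possible plan shape for the two-stage parallel aggregation described above (the node names are real PostgreSQL plan nodes; worker counts and the choice of hash- vs. sort-based aggregation depend on the data and settings):

```sql
EXPLAIN SELECT status, COUNT(*) FROM orders GROUP BY status;
-- Finalize HashAggregate
--   Group Key: status
--   ->  Gather
--         Workers Planned: 4
--         ->  Partial HashAggregate
--               Group Key: status
--               ->  Parallel Seq Scan on orders
```

Each worker builds a partial hash table of per-status counts over its share of the rows; the leader gathers the partial results, and the finalize step sums the counts per group.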
Follow-up: Explain partial aggregates and the parallel aggregation pipeline. How is the final result combined?