
feat(config): auto-tune pool size for async workers#114

Merged
mhenrixon merged 3 commits into main from feature/async-pool-size-tuning on Apr 10, 2026

Conversation

@mhenrixon mhenrixon commented Apr 10, 2026

Summary

Auto-tune the PGMQ connection pool size for async workers. Fibers share connections (only one fiber runs at a time per reactor thread), so an async worker needs only 3 connections rather than one per fiber.

| Worker mode | Capacity | Pool connections |
| --- | --- | --- |
| Thread (50 threads) | 50 | 50 |
| Async (50 fibers) | 50 | 3 |
| Mixed (50 async + 5 thread) | 55 | 8 |

The constant ASYNC_POOL_CONNECTIONS = 3 covers one connection for the reactor's serial execution, one for polling, and one for headroom. This matches Solid Queue's approach, where Rails 7.2+ async workers use 3-5 connections regardless of fiber capacity.

Explicit pool_size still overrides auto-tuning for advanced cases.
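The sizing rule above can be sketched in plain Ruby. This is an illustrative reconstruction, not the gem's actual code: the worker-entry hash shape and method body are assumptions; only the names ASYNC_POOL_CONNECTIONS and sum_thread_counts come from the PR.

```ruby
# Fibers share connections, so async workers contribute a fixed number
# of connections (reactor + polling + headroom) instead of one per fiber.
ASYNC_POOL_CONNECTIONS = 3

# Hypothetical sketch: each entry is a hash like
# { execution_mode: :async, threads: 50 }.
def sum_thread_counts(workers)
  workers.sum do |entry|
    if %i[async fiber].include?(entry[:execution_mode])
      ASYNC_POOL_CONNECTIONS # fixed overhead regardless of fiber count
    else
      entry[:threads] # thread workers need one connection per thread
    end
  end
end

sum_thread_counts([{ execution_mode: :thread, threads: 50 }]) # => 50
sum_thread_counts([{ execution_mode: :async, threads: 50 }])  # => 3
```

Under these assumptions, the mixed case from the table (one async worker with 50 fibers plus one thread worker with 5 threads) sums to 3 + 5 = 8 connections.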

Test plan

  • Async worker uses 3 connections instead of N
  • Mixed async + thread workers sum correctly
  • :fiber alias works (normalizes to async)
  • All 157 configuration spec examples pass
  • RuboCop clean
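The :fiber alias item in the test plan boils down to a one-step normalization. A minimal sketch, assuming the alias maps directly to :async (the gem's actual ExecutionPools.normalize_mode may handle more cases):

```ruby
# Hypothetical stand-in for ExecutionPools.normalize_mode:
# :fiber is an alias that normalizes to :async; other modes pass through.
def normalize_mode(mode)
  mode == :fiber ? :async : mode
end

normalize_mode(:fiber)  # => :async
normalize_mode(:async)  # => :async
normalize_mode(:thread) # => :thread
```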

Summary by CodeRabbit

  • New Features

    • Optimized connection pool auto-tuning for async/fiber execution to reduce connection usage and improve resource efficiency
  • Tests

    • Added specs covering async and mixed-worker pool sizing scenarios
    • Hardened memory-leak detection tests with an extended warmup, two-pass measurement and explicit GC to improve reliability

Async workers use fibers that share connections (only one fiber runs
at a time per reactor thread), so they need far fewer database
connections than thread-based workers. The pool size auto-tuning now
contributes 3 connections per async worker (reactor + polling +
headroom) instead of N connections per N fibers.

Example impact:
  - Thread worker (50 threads): 50 connections
  - Async worker (50 fibers):    3 connections
  - Mixed (50 async + 5 thread): 8 connections

This matches Solid Queue's approach where Rails 7.2+ async workers
use 3-5 connections regardless of fiber capacity.
@mhenrixon mhenrixon self-assigned this Apr 10, 2026

coderabbitai Bot commented Apr 10, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Pro

Run ID: 1487f2cc-7137-4305-b10f-9e6121b3a0eb

📥 Commits

Reviewing files that changed from the base of the PR and between a631ab8 and 5beb1b1.

📒 Files selected for processing (3)
  • lib/pgbus/configuration.rb
  • spec/pgbus/allocation_budget_spec.rb
  • spec/pgbus/configuration_spec.rb

📝 Walkthrough

Introduces a fixed async connection constant and async-mode detection into pool-size auto-tuning. sum_thread_counts now counts async/fiber workers as the constant ASYNC_POOL_CONNECTIONS rather than their threads value. Specs were updated to cover async/fiber cases, and a memory-profiling test's warmup/report flow was adjusted.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Async pool configuration (`lib/pgbus/configuration.rb`) | Added `ASYNC_POOL_CONNECTIONS = 3` and an `async_execution_mode?` helper; `sum_thread_counts` now uses the constant for async/fiber workers instead of per-worker thread counts. |
| Configuration tests (`spec/pgbus/configuration_spec.rb`) | Added auto-tune specs for `execution_mode: :async`, mixed async/thread sets, the `:fiber` alias, and global async config, expecting resolved pool sizes that reflect the fixed async connection count. |
| Memory/leak test update (`spec/pgbus/allocation_budget_spec.rb`) | Reworked the leak-detection flow: extended warmup, a first MemoryProfiler pass to absorb one-time retained objects, explicit `GC.start`, then the measurement run (100 sends) asserting zero retained objects. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

performance

Poem

🐰 Hop hop, connections trim and lean,
Async fibers share the scene.
Three hops counted, threads step light,
Tests warmed up, GC set right.
I nibble code and dance with glee — hooray for pooled harmony! 🥕

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title directly and accurately summarizes the main change: auto-tuning connection pool size for async workers, the primary feature introduced across the modified configuration files. |


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lib/pgbus/configuration.rb`:
- Around lines 641-645: async_execution_mode?(entry) currently reads execution_mode only from the entry and returns false when it is missing, ignoring the global fallback. Change it to call execution_mode_for(entry) to obtain the effective mode (which honors the global config), then normalize via ExecutionPools.normalize_mode(...) and compare to :async, so async pool sizing uses the same logic as the rest of the configuration API. Ensure nil is still handled safely before normalization.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3d412c00-3055-4aff-9994-29c4eb499ee6

📥 Commits

Reviewing files that changed from the base of the PR and between ee37ad6 and a631ab8.

📒 Files selected for processing (2)
  • lib/pgbus/configuration.rb
  • spec/pgbus/configuration_spec.rb

Comment thread lib/pgbus/configuration.rb Outdated
@mhenrixon mhenrixon added the enhancement New feature or request label Apr 10, 2026
async_execution_mode? only checked per-entry execution_mode and
returned false when unset, ignoring the global config.execution_mode
fallback. A user setting config.execution_mode = :async globally
(without per-worker overrides) would still get N connections per
worker instead of ASYNC_POOL_CONNECTIONS (3).

Reuse execution_mode_for(entry) which already handles the global
fallback. Added test for the global path.
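A sketch of the corrected helper is below. The Config class, entry shape, and execution_mode_for body are illustrative assumptions modeled on the behavior the commit message describes, not code copied from the gem:

```ruby
class Config
  # Global default execution mode; nil means no global override was set.
  attr_accessor :execution_mode

  def initialize(execution_mode: nil)
    @execution_mode = execution_mode
  end

  # Effective mode for an entry: the per-entry setting wins, then the
  # global config, then the :thread default.
  def execution_mode_for(entry)
    entry[:execution_mode] || @execution_mode || :thread
  end

  # Async detection now goes through execution_mode_for, so a global
  # config.execution_mode = :async is honored even without a per-worker
  # override. The :fiber alias is treated as async.
  def async_execution_mode?(entry)
    %i[async fiber].include?(execution_mode_for(entry))
  end
end
```

With this shape, Config.new(execution_mode: :async).async_execution_mode?({}) returns true, which is exactly the global-fallback path the original helper missed.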
The allow_files filter still attributed gem-internal retained objects
to lib/pgbus/ paths on Ruby 3.3 (MemoryProfiler tracks allocation
source, not retention cause). Replace with a two-pass approach:

1. First pass: MemoryProfiler.report captures any one-time lazy
   initialization (gem globals, JSON caches, connection_pool singletons)
2. GC.start to collect the first-pass artifacts
3. Second pass: measures only steady-state behavior — should retain 0

This is immune to Ruby version differences in GC timing, gem loading
order, and MemoryProfiler attribution heuristics.
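The two-pass idea can be demonstrated with a dependency-free sketch. Here a toy work method stands in for Pgbus sends, and a plain array stands in for MemoryProfiler's retained-object report; the real spec uses MemoryProfiler.report and 100 sends per the commit message above.

```ruby
RETAINED = [] # simulates one-time lazy initialization (gem globals, caches)

def work(i)
  RETAINED << "warmed" if RETAINED.empty? # retained only on the first call
  i.to_s                                  # steady state retains nothing
end

# Stand-in for a MemoryProfiler pass: count objects retained during the block.
def retained_during
  before = RETAINED.size
  yield
  RETAINED.size - before
end

# Pass 1 absorbs the one-time retention.
FIRST_PASS = retained_during { 100.times { |i| work(i) } }
GC.start # collect first-pass artifacts before measuring
# Pass 2 measures only steady-state behavior.
SECOND_PASS = retained_during { 100.times { |i| work(i) } }

FIRST_PASS  # => 1
SECOND_PASS # => 0
```

The first pass reports one retained object (the lazy initialization); the second reports zero, which is the property the reworked spec asserts.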
@mhenrixon mhenrixon merged commit e292746 into main Apr 10, 2026
9 checks passed
@mhenrixon mhenrixon deleted the feature/async-pool-size-tuning branch April 10, 2026 09:57