🐛 Make deployment replicas configurable via Helm values #2371
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Changes from all commits
File filter
Filter by extension
Conversations
Jump to
Diff view
Diff view
There are no files selected for viewing
```diff
@@ -8,6 +8,7 @@ options:
   enabled: true
   deployment:
     image: quay.io/operator-framework/operator-controller:devel
+    replicas: 1
     extraArguments: []
   features:
     enabled: []
```

**Comment on lines 10 to 12:**
```diff
@@ -19,6 +20,7 @@ options:
   enabled: true
   deployment:
     image: quay.io/operator-framework/catalogd:devel
+    replicas: 1
```

Suggested change:

```diff
-    replicas: 1
+    replicas: 2
```
**Copilot (AI) commented on Mar 23, 2026:**
PR description says the chart will default both control-plane Deployments to 2 replicas to resolve the PDB deadlock, but the new replicas values added here default to 1. The rendered manifests/standard*.yaml and manifests/experimental.yaml also remain at 1 replica, so the deadlock scenario still exists by default. Either change these defaults to 2 (and regenerate manifests), or update the PR description/docs to clarify that HA requires an explicit values override (e.g., helm/high-availability.yaml).
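The explicit values override mentioned above (e.g. `helm/high-availability.yaml`) might look roughly like the sketch below. The exact key paths (`options.operatorController` / `options.catalogd`) are assumptions for illustration, not taken from the chart itself:

```yaml
# Hypothetical helm/high-availability.yaml override (key paths assumed).
# Bumps both control-plane Deployments to 2 replicas so the PDB can
# always tolerate one voluntary disruption.
options:
  operatorController:
    deployment:
      replicas: 2
  catalogd:
    deployment:
      replicas: 2
```

It would then be applied with something like `helm template . -f helm/high-availability.yaml`, leaving the chart's default at 1 replica for single-node setups.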
**Reviewer comment:**
Do we already have node anti affinity configured to make sure these replicas do not end up on the same node? If not, we need that as well (but only when replicas > 1).
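A soft (preferred) pod anti-affinity rule along those lines could be sketched as follows; the label selector value is an assumption about how the chart labels its pods:

```yaml
# Sketch: prefer spreading replicas across nodes, but still allow them
# to co-schedule on a single-node (e.g. kind) cluster because the rule
# is "preferred", not "required".
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: operator-controller  # assumed label
```

Using `requiredDuringSchedulingIgnoredDuringExecution` instead would make the second replica unschedulable on a single node, which is why the preferred form is the safer default here.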
**Reviewer comment:**
However, I will point out that this may cause an issue in our single-node kind experimental-e2e tests, where we run two replicas (precisely so that we validate that two replicas do not cause issues in the e2e tests).

**PR author reply:**
Good point! I added `podAntiAffinity` and used the `preferred` rule. Also, I created openshift/release#72395 to add an SNO upgrade test for the downstream OLMv1 and OLMv0; please take a look, thanks!
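As the reviewer suggested, the anti-affinity block only matters when more than one replica is requested. A hedged sketch of how a chart template could gate it on the replica count (the values path and label are hypothetical, and `gt`/`int` are standard Helm template functions):

```yaml
# Hypothetical Deployment template fragment: render the preferred
# anti-affinity rule only when replicas > 1.
spec:
  replicas: {{ .Values.options.operatorController.deployment.replicas }}
  template:
    spec:
      {{- if gt (int .Values.options.operatorController.deployment.replicas) 1 }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: operator-controller  # assumed label
      {{- end }}
```

This keeps single-replica renders identical to the current manifests while adding spreading behavior only for HA overrides.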