OCPBUGS-81452: Synchronize From Upstream Repositories (#696)
openshift-bot wants to merge 96 commits into openshift:main
Conversation
Bumps [go.opentelemetry.io/otel/sdk](https://github.com/open-telemetry/opentelemetry-go) from 1.40.0 to 1.43.0. - [Release notes](https://github.com/open-telemetry/opentelemetry-go/releases) - [Changelog](https://github.com/open-telemetry/opentelemetry-go/blob/main/CHANGELOG.md) - [Commits](open-telemetry/opentelemetry-go@v1.40.0...v1.43.0) --- updated-dependencies: - dependency-name: go.opentelemetry.io/otel/sdk dependency-version: 1.43.0 dependency-type: indirect ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.2 to 6.0.0. - [Release notes](https://github.com/codecov/codecov-action/releases) - [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md) - [Commits](codecov/codecov-action@v5.5.2...v6.0.0) --- updated-dependencies: - dependency-name: codecov/codecov-action dependency-version: 6.0.0 dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
…d (#2637) Boxcutter v0.13.1 includes the fix from package-operator/boxcutter#501 which ensures collision detection runs before revision linearity checks. This allows us to remove the foreignRevisionController workaround that was manually detecting ActionProgressed objects owned by foreign ClusterExtensions. Assisted-by: Claude
@openshift-bot: GitHub didn't allow me to request PR reviews from the following users: openshift/openshift-team-operator-framework. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@openshift-bot: This pull request references Jira Issue OCPBUGS-81452, which is invalid.
The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough: Updated dependencies; moved cert-manager.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
/re-title OCPBUGS-77972, OCPBUGS-81452: Synchronize From Upstream Repositories

/test openshift-e2e-aws

/retitle OCPBUGS-77972, OCPBUGS-81452: Synchronize From Upstream Repositories
@openshift-bot: This pull request references Jira Issue OCPBUGS-77972, which is invalid. The bug has been updated to refer to the pull request using the external bug tracker.
This pull request references Jira Issue OCPBUGS-81452, which is invalid. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/jira refresh
@camilamacedo86: This pull request references Jira Issue OCPBUGS-77972, which is valid; 3 validation(s) were run on this bug. Requesting review from QA contact.
This pull request references Jira Issue OCPBUGS-81452, which is invalid.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@openshift-ci-robot: GitHub didn't allow me to request PR reviews from the following users: kuiwang02. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/jira refresh
@camilamacedo86: This pull request references Jira Issue OCPBUGS-77972, which is valid; 3 validation(s) were run on this bug. Requesting review from QA contact.
This pull request references Jira Issue OCPBUGS-81452, which is valid. The bug has been moved to the POST state; 3 validation(s) were run on this bug.
No GitHub users were found matching the public email listed for the QA contact in Jira (bandrade@redhat.com), skipping review request.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@openshift-ci-robot: GitHub didn't allow me to request PR reviews from the following users: kuiwang02. Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
The test operator's httpd script uses python3's http.server which binds to 0.0.0.0 (IPv4 only) by default. On IPv6-only networks (e.g. metal-ipi-ovn-ipv6-techpreview), the startup/liveness/readiness probes connect to the pod's IPv6 address but nothing is listening, causing the operator pod to never become Ready and the OLMv1 ClusterExtension install test to time out. Adding --bind :: makes python3 http.server listen on all interfaces including IPv6, fixing the test on dual-stack and IPv6-only clusters. This resolves the 0% pass rate on: - periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ipi-ovn-ipv6-techpreview Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
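The commit's fix is a single flag on the probe web server's startup command. A minimal sketch (the port number is illustrative; the real invocation lives in the test operator's httpd script):

```shell
# Default: python3's http.server binds 0.0.0.0 (IPv4 only), so probes that
# connect to the pod's IPv6 address get connection refused:
#   python3 -m http.server 8080
#
# Fix from this PR: bind the IPv6 unspecified address, which listens on all
# interfaces including IPv6 (and, on typical Linux hosts where IPV6_V6ONLY
# is off, also accepts IPv4 connections):
#   python3 -m http.server 8080 --bind ::
#
# Sanity check that the local python3 supports the flag:
python3 -m http.server --help | grep -- '--bind'
```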
…g for CE install tests With BoxcutterRuntime, Installed=True is only set after all availability probes pass, which can take longer on TechPreview clusters (IPv6, multi-arch). Increases install-specific timeout from 5m to 10m and logs condition state on each poll to aid debugging flaky failures.
Add 7 Ginkgo tests under [sig-olmv1][OCPFeatureGate:NewOLMDeploymentConfig] covering the spec.config.inline.deploymentConfig feature: Positive tests (verify applied customisations): - environment variables - resource requirements - tolerations - node selector - annotations on deployment and pod template Negative tests (verify terminal validation errors): - invalid deploymentConfig.env type (string instead of array) - unknown field inside deploymentConfig (additionalProperties:false) Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> Signed-off-by: Todd Short <tshort@redhat.com>
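A hypothetical manifest illustrating the customisations these tests exercise. Field names follow the commit message; the surrounding ClusterExtension schema details (source, namespace, values) are assumptions for illustration, not copied from the API:

```yaml
# Hypothetical sketch — only spec.config.inline.deploymentConfig fields are
# taken from the commit message; everything else is illustrative.
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: example-extension
spec:
  config:
    inline:
      deploymentConfig:
        env:                        # environment variables
        - name: LOG_LEVEL
          value: debug
        resources:                  # resource requirements
          limits:
            memory: 256Mi
        tolerations:
        - key: example.io/taint
          operator: Exists
        nodeSelector:
          kubernetes.io/arch: amd64
        annotations:                # applied to deployment and pod template
          example.io/owner: qe
```

The negative tests invert this: setting `env` to a string, or adding an unknown field under `deploymentConfig`, should be rejected as a terminal validation error (`additionalProperties: false`).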
Signed-off-by: Todd Short <tshort@redhat.com>
…64 support Signed-off-by: Daniel Franz <dfranz@redhat.com>
Signed-off-by: Todd Short <tshort@redhat.com>
…t in OTE tests Update all remaining references to ClusterExtensionRevision in openshift/tests-extension to use ClusterObjectSet, matching the upstream rename in operator-framework/operator-controller#2589. Files updated: - test/qe/specs/olmv1_ce.go: RBAC resource names and comments - test/olmv1-preflight.go: scenario constants, test names, RBAC rules - .openshift-tests-extension/openshift_payload_olmv1.json: test name - pkg/bindata/qe/bindata.go: embedded RBAC templates - test/qe/testdata/olm/sa-nginx-limited-boxcutter.yaml: RBAC resources - test/qe/testdata/olm/sa-nginx-insufficient-operand-rbac-boxcutter.yaml: RBAC resources Signed-off-by: Camila Macedo <cmacedo@redhat.com> Made-with: Cursor
…s ClusterObjectSet The upstream rename of ClusterExtensionRevision to ClusterObjectSet (operator-framework/operator-controller#2589) breaks the incompatible operator detection in cluster-olm-operator. The cluster-olm-operator binary still reads ClusterExtensionRevision resources to find operators with olm.maxOpenShiftVersion, so after the rename it never detects incompatible operators and InstalledOLMOperatorsUpgradeable stays True. Skip this test when NewOLMBoxCutterRuntime feature gate is enabled until cluster-olm-operator is updated to read ClusterObjectSet. Signed-off-by: Camila Macedo <cmacedo@redhat.com> Made-with: Cursor
Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
[APPROVALNOTIFIER] This PR is APPROVED. Approval requirements bypassed by manually added approval. This pull request has been approved by: openshift-bot. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Actionable comments posted: 2
🧹 Nitpick comments (1)
internal/shared/util/tlsprofiles/mozilla_data.go (1)
90-97: Derive the minimum TLS version explicitly. Using `cfg.TLSVersions[0]` makes `MinVersion` depend on upstream array order. If Mozilla reorders that list, this changes the effective security policy silently. Parse all entries and pick the lowest supported version instead.

Proposed fix:

```diff
 if len(cfg.TLSVersions) == 0 {
 	panic(fmt.Sprintf("tlsprofiles: profile %q has no tls_versions in embedded mozilla_data.json", name))
 }
 var version tlsVersion
-if err := version.Set(cfg.TLSVersions[0]); err != nil {
-	panic(fmt.Sprintf("tlsprofiles: profile %q has unrecognized tls_versions[0] %q: %v", name, cfg.TLSVersions[0], err))
+for i, raw := range cfg.TLSVersions {
+	var parsed tlsVersion
+	if err := parsed.Set(raw); err != nil {
+		panic(fmt.Sprintf("tlsprofiles: profile %q has unrecognized tls_versions[%d] %q: %v", name, i, raw, err))
+	}
+	if i == 0 || uint16(parsed) < uint16(version) {
+		version = parsed
+	}
 }
```
⛔ Files ignored due to path filters (106): `go.sum` files (excluded by `!**/*.sum`) and vendored dependencies under `vendor/` and `openshift/tests-extension/vendor/` (excluded by `!vendor/**` and `!**/vendor/**`).
📒 Files selected for processing (15)
- .bingo/Variables.mk
- .bingo/gojq.mod
- .bingo/variables.env
- Makefile
- go.mod
- hack/tools/update-tls-profiles.sh
- internal/operator-controller/applier/phase.go
- internal/operator-controller/applier/phase_test.go
- internal/operator-controller/controllers/clusterobjectset_controller.go
- internal/operator-controller/controllers/clusterobjectset_controller_test.go
- internal/operator-controller/controllers/revision_engine_factory.go
- internal/shared/util/tlsprofiles/mozilla_data.go
- internal/shared/util/tlsprofiles/mozilla_data.json
- internal/shared/util/tlsprofiles/tlsprofiles_test.go
- openshift/tests-extension/go.mod
💤 Files with no reviewable changes (3)
- .bingo/gojq.mod
- .bingo/variables.env
- .bingo/Variables.mk
✅ Files skipped from review due to trivial changes (3)
- internal/shared/util/tlsprofiles/tlsprofiles_test.go
- internal/shared/util/tlsprofiles/mozilla_data.json
- openshift/tests-extension/go.mod
🚧 Files skipped from review as they are similar to previous changes (5)
- internal/operator-controller/controllers/revision_engine_factory.go
- Makefile
- internal/operator-controller/applier/phase.go
- internal/operator-controller/controllers/clusterobjectset_controller_test.go
- go.mod
hack/tools/update-tls-profiles.sh:

```diff
 OUTPUT=internal/shared/util/tlsprofiles/mozilla_data.json
 INPUT=https://ssl-config.mozilla.org/guidelines/latest.json

+TMPFILE="$(mktemp)"
+trap 'rm -rf "$TMPFILE"' EXIT

-if ! curl -L -s -f "${INPUT}" -o "${OUTPUT}"; then
+if ! curl -L -s -f "${INPUT}" > "${TMPFILE}"; then
   echo "ERROR: Failed to download ${INPUT} (HTTP error or connection failure)" >&2
   exit 1
 fi
```
Write mozilla_data.json atomically.
`curl -o "${OUTPUT}"` truncates the checked-in file before the transfer completes. If the download fails mid-stream, internal/shared/util/tlsprofiles/mozilla_data.json can be left corrupt, and internal/shared/util/tlsprofiles/mozilla_data.go will panic during init() on the next build/test.

Proposed fix:

```diff
 OUTPUT=internal/shared/util/tlsprofiles/mozilla_data.json
 INPUT=https://ssl-config.mozilla.org/guidelines/latest.json
+tmp="$(mktemp "${OUTPUT}.tmp.XXXXXX")"
+trap 'rm -f "${tmp}"' EXIT
-if ! curl -L -s -f "${INPUT}" -o "${OUTPUT}"; then
+if ! curl -L -s -f "${INPUT}" -o "${tmp}"; then
   echo "ERROR: Failed to download ${INPUT} (HTTP error or connection failure)" >&2
   exit 1
 fi
+
+mv "${tmp}" "${OUTPUT}"
+trap - EXIT
```

📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```shell
OUTPUT=internal/shared/util/tlsprofiles/mozilla_data.json
INPUT=https://ssl-config.mozilla.org/guidelines/latest.json
tmp="$(mktemp "${OUTPUT}.tmp.XXXXXX")"
trap 'rm -f "${tmp}"' EXIT
if ! curl -L -s -f "${INPUT}" -o "${tmp}"; then
  echo "ERROR: Failed to download ${INPUT} (HTTP error or connection failure)" >&2
  exit 1
fi
mv "${tmp}" "${OUTPUT}"
trap - EXIT
```
internal/shared/util/tlsprofiles/mozilla_data.go:

```go
func parseProfile(name string, cfg mozillaConfiguration) (tlsProfile, []string, []string) {
	var skippedC, skippedK []string
	var cipherNums []uint16
	for _, c := range append(cfg.Ciphersuites, cfg.Ciphers.IANA...) {
		id := cipherSuiteId(c)
		if id == 0 {
			skippedC = append(skippedC, c)
			continue
		}
		cipherNums = append(cipherNums, id)
	}

	var curveNums []tls.CurveID
	for _, c := range cfg.TLSCurves {
		id := curveId(c)
		if id == 0 {
			skippedK = append(skippedK, c)
			continue
		}
		curveNums = append(curveNums, id)
	}

	if len(cfg.TLSVersions) == 0 {
		panic(fmt.Sprintf("tlsprofiles: profile %q has no tls_versions in embedded mozilla_data.json", name))
	}

	var version tlsVersion
	if err := version.Set(cfg.TLSVersions[0]); err != nil {
		panic(fmt.Sprintf("tlsprofiles: profile %q has unrecognized tls_versions[0] %q: %v", name, cfg.TLSVersions[0], err))
	}

	return tlsProfile{
		ciphers:       cipherSlice{cipherNums: cipherNums},
		curves:        curveSlice{curveNums: curveNums},
		minTLSVersion: version,
	}, skippedC, skippedK
}
```
Fail closed on unsupported ciphers and curves.
Lines 73-76 and 83-86 silently drop names that Go cannot resolve. internal/shared/util/tlsprofiles/tlsprofiles_test.go:170-186 catches that only when tests run; the production code would otherwise ship a weaker/incomplete TLS profile.

Proposed fix:

```diff
-func parseProfile(name string, cfg mozillaConfiguration) (tlsProfile, []string, []string) {
-	var skippedC, skippedK []string
+func parseProfile(name string, cfg mozillaConfiguration) (tlsProfile, error) {
 	var cipherNums []uint16
 	for _, c := range append(cfg.Ciphersuites, cfg.Ciphers.IANA...) {
 		id := cipherSuiteId(c)
 		if id == 0 {
-			skippedC = append(skippedC, c)
-			continue
+			return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q contains unsupported cipher %q", name, c)
 		}
 		cipherNums = append(cipherNums, id)
 	}
 	var curveNums []tls.CurveID
 	for _, c := range cfg.TLSCurves {
 		id := curveId(c)
 		if id == 0 {
-			skippedK = append(skippedK, c)
-			continue
+			return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q contains unsupported curve %q", name, c)
 		}
 		curveNums = append(curveNums, id)
 	}
 	if len(cfg.TLSVersions) == 0 {
-		panic(fmt.Sprintf("tlsprofiles: profile %q has no tls_versions in embedded mozilla_data.json", name))
+		return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q has no tls_versions in embedded mozilla_data.json", name)
 	}
 	var version tlsVersion
 	if err := version.Set(cfg.TLSVersions[0]); err != nil {
-		panic(fmt.Sprintf("tlsprofiles: profile %q has unrecognized tls_versions[0] %q: %v", name, cfg.TLSVersions[0], err))
+		return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q has unrecognized tls_versions[0] %q: %w", name, cfg.TLSVersions[0], err)
 	}
 	return tlsProfile{
 		ciphers:       cipherSlice{cipherNums: cipherNums},
 		curves:        curveSlice{curveNums: curveNums},
 		minTLSVersion: version,
-	}, skippedC, skippedK
+	}, nil
 }
```

Also update line 55 to handle the returned error and panic there if needed.
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
func parseProfile(name string, cfg mozillaConfiguration) (tlsProfile, error) {
	var cipherNums []uint16
	for _, c := range append(cfg.Ciphersuites, cfg.Ciphers.IANA...) {
		id := cipherSuiteId(c)
		if id == 0 {
			return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q contains unsupported cipher %q", name, c)
		}
		cipherNums = append(cipherNums, id)
	}
	var curveNums []tls.CurveID
	for _, c := range cfg.TLSCurves {
		id := curveId(c)
		if id == 0 {
			return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q contains unsupported curve %q", name, c)
		}
		curveNums = append(curveNums, id)
	}
	if len(cfg.TLSVersions) == 0 {
		return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q has no tls_versions in embedded mozilla_data.json", name)
	}
	var version tlsVersion
	if err := version.Set(cfg.TLSVersions[0]); err != nil {
		return tlsProfile{}, fmt.Errorf("tlsprofiles: profile %q has unrecognized tls_versions[0] %q: %w", name, cfg.TLSVersions[0], err)
	}
	return tlsProfile{
		ciphers:       cipherSlice{cipherNums: cipherNums},
		curves:        curveSlice{curveNums: curveNums},
		minTLSVersion: version,
	}, nil
}
```
@openshift-bot: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
JIRA Tickets:
The downstream repository has been updated with the following upstream commits:
The vendor/ directory has been updated and the following commits were carried: @catalogd-update
This pull request is expected to merge without any human intervention. If tests are failing here, changes must land upstream to fix any issues so that future downstreaming efforts succeed.
/cc @openshift/openshift-team-operator-framework