
[nr-k8s-otel-collector] make collector observability resource attributes configurable #2133

Open

achang209 wants to merge 2 commits into newrelic:master from achang209:feat/collector-observability-resource-attributes

Conversation


achang209 commented Feb 26, 2026

What this PR does / why we need it:

Exposes service::telemetry::resource as a configurable Helm value under collectorObservability.resource, allowing users to enrich internal otelcol_* metrics with custom resource attributes.

Without this, internal collector metrics arrive in New Relic with no cluster or environment context, so in multi-cluster deployments it is impossible to tell which cluster a collector's internal metrics came from. The resource/newrelic processor correctly enriches pipeline metrics with k8s.cluster.name, but service::telemetry exports directly via OTLP and bypasses all pipeline processors, so it never receives that enrichment.

This PR adds a resource block under service::telemetry in both the daemonset and deployment configmaps. k8s.cluster.name is automatically populated from the existing newrelic.common.cluster helper, following the same pattern already used in the resource/newrelic processor. Additional arbitrary attributes can be passed via collectorObservability.resource for things like env.

This also fixes a minor inconsistency where values.yaml documented the scrape interval field as scrapeIntervalMs but the configmap template referenced it as scrapeIntervalSeconds.
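As a rough sketch of the intended usage (the `env` attribute and the interval value below are illustrative; only `collectorObservability.resource` and the automatic `k8s.cluster.name` wiring come from this PR):

```yaml
# values.yaml (user-supplied); attribute names and interval are examples
collectorObservability:
  enabled: true
  scrapeIntervalSeconds: 30
  resource:
    env: production
```

With the chart's cluster value set to, say, `my-cluster`, the rendered collector config would then contain:

```yaml
service:
  telemetry:
    resource:
      k8s.cluster.name: my-cluster
      env: production
```

so every internal otelcol_* metric exported over OTLP carries those attributes.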

Which issue this PR fixes

(optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged)

  • fixes #

Special notes for your reviewer:

  • The resource block is placed at service::telemetry::resource per the OTel collector internal telemetry docs, which applies resource attributes across all internal telemetry signals (metrics, logs, and traces)
  • The resource block is only rendered when collectorObservability.enabled is true, so there is no impact to users who have not enabled collector observability
  • k8s.cluster.name is always set automatically and does not need to be specified by the user
  • The change is identical in both daemonset-configmap.yaml and deployment-configmap.yaml (see the template sketch after this list)
  • The scrapeIntervalMs → scrapeIntervalSeconds rename in values.yaml is a documentation fix only: the configmap template already used scrapeIntervalSeconds, so behavior is unchanged
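For reviewers skimming without the diff, the rendered block presumably looks something like the sketch below (the exact template in the PR may differ; `newrelic.common.cluster` is the chart's existing helper referenced above):

```yaml
# daemonset-configmap.yaml / deployment-configmap.yaml — a sketch, not the literal diff
service:
  telemetry:
    {{- if .Values.collectorObservability.enabled }}
    resource:
      # Always set from the chart's cluster value, mirroring the resource/newrelic processor
      k8s.cluster.name: {{ include "newrelic.common.cluster" . }}
      # Merge any user-supplied attributes (e.g. env) from collectorObservability.resource
      {{- with .Values.collectorObservability.resource }}
      {{- toYaml . | nindent 6 }}
      {{- end }}
    {{- end }}
```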

Checklist

[Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]

  • Chart Version bumped
  • Variables are documented in the README.md
  • Title of the PR starts with chart name (e.g. [mychartname])

Release Notes to Publish (nr-k8s-otel-collector)

If this PR contains changes in nr-k8s-otel-collector, please complete the following section. All other charts should ignore this section.

🚀 What's Changed

  • Added collectorObservability.resource to allow users to enrich internal collector metrics with custom resource attributes. k8s.cluster.name is automatically populated from the chart's cluster value, enabling multi-cluster observability out of the box. Additional attributes such as env can be passed via collectorObservability.resource.
  • Fixed a documentation inconsistency where the scrape interval field was referenced as scrapeIntervalMs in values.yaml but scrapeIntervalSeconds in the configmap template.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@achang209
Author

@dbudziwojskiNR @Philip-R-Beckwith would love feedback on this at your earliest convenience. Thanks!
