
refactor(metrics): add dedicated range tombstone block counters #36

Merged
polaz merged 9 commits into main from feat/#13-refactor-make-blocktype-match-exhaustive-in-tableu on Mar 21, 2026

Conversation

@polaz
Member

@polaz polaz commented Mar 20, 2026

Summary

  • Split BlockType::RangeTombstone out of Data|Meta match arms in load_block() into dedicated branches
  • Add three new counters to Metrics: range_tombstone_block_load_cached, range_tombstone_block_load_io, range_tombstone_block_io_requested
  • Add accessor methods and update aggregate methods (block_io, block_load_io_count, block_load_cached_count) to include range tombstone counters

Closes #13
Closes #34
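The match-arm split described above can be sketched as follows. This is a hypothetical reconstruction, not the crate's actual `load_block()` body: the enum variants and counter names come from the PR text, while the helper function and its signature are assumed for illustration.

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

enum BlockType {
    Data,
    Meta,
    RangeTombstone,
}

#[derive(Default)]
struct Metrics {
    data_block_load_io: AtomicU64,
    range_tombstone_block_load_io: AtomicU64,
    range_tombstone_block_io_requested: AtomicU64,
}

// Hypothetical helper standing in for the metrics accounting inside load_block().
fn record_block_io(metrics: &Metrics, block_type: &BlockType, bytes: u64) {
    // Exhaustive match: a future BlockType variant fails to compile here
    // instead of silently falling into a catch-all arm.
    match block_type {
        BlockType::Data | BlockType::Meta => {
            metrics.data_block_load_io.fetch_add(1, Relaxed);
        }
        BlockType::RangeTombstone => {
            metrics.range_tombstone_block_load_io.fetch_add(1, Relaxed);
            metrics
                .range_tombstone_block_io_requested
                .fetch_add(bytes, Relaxed);
        }
    }
}
```

Because the match has no wildcard arm, the "compile-time exhaustiveness" item in the test plan falls out for free: adding a fourth `BlockType` variant makes this match a build error until it is handled.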

Test plan

  • cargo check --all-features passes
  • cargo test --all-features — 27/27 tests pass
  • Verify exhaustive match catches future BlockType additions at compile time

Summary by CodeRabbit

  • New Features

    • Added dedicated observability for range-tombstone block operations: separate counters for disk/OS loads, cached loads, requested bytes, and a cache-hit rate. Overall block I/O and load summaries now include range-tombstone metrics for complete block-type reporting.
  • Tests

    • Added a unit test validating zero-initialization and correct metric behavior across cache-miss and cache-hit paths for range-tombstone blocks.

- Add range_tombstone_block_load_io, _load_cached, _io_requested
  fields to Metrics struct
- Add accessor methods: range_tombstone_block_io(),
  range_tombstone_block_load_count(),
  range_tombstone_block_load_cached_count(),
  range_tombstone_block_cache_hit_rate()
- Update aggregate methods (block_io, block_load_io_count,
  block_load_cached_count) to include range tombstone counters
- Split RangeTombstone out of Data|Meta match arms in load_block()

Closes #13
Closes #34
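The accessors and aggregates listed in the commit message might look roughly like this. Field and method names are taken from the PR text; the surrounding struct is a simplified stand-in, not the crate's real `Metrics` type.

```rust
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

#[derive(Default)]
struct Metrics {
    data_block_load_io: AtomicU64,
    range_tombstone_block_load_io: AtomicU64,
    range_tombstone_block_load_cached: AtomicU64,
}

impl Metrics {
    // Total RT block loads, whether served from the block cache or from IO.
    fn range_tombstone_block_load_count(&self) -> u64 {
        self.range_tombstone_block_load_io.load(Relaxed)
            + self.range_tombstone_block_load_cached.load(Relaxed)
    }

    // Fraction of RT block loads served from the block cache.
    fn range_tombstone_block_cache_hit_rate(&self) -> f64 {
        let cached = self.range_tombstone_block_load_cached.load(Relaxed);
        let total = self.range_tombstone_block_load_count();
        if total == 0 {
            0.0
        } else {
            cached as f64 / total as f64
        }
    }

    // Aggregate across block types now includes the RT counter.
    fn block_load_io_count(&self) -> u64 {
        self.data_block_load_io.load(Relaxed)
            + self.range_tombstone_block_load_io.load(Relaxed)
    }
}
```

The zero-total guard in the hit-rate accessor is an assumption about edge-case handling; the real implementation may define the empty case differently.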
Copilot AI review requested due to automatic review settings March 20, 2026 22:36
@coderabbitai

coderabbitai Bot commented Mar 20, 2026

Warning

Rate limit exceeded

@polaz has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 35 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0d5d1752-f2b6-4bb2-95e8-6c0b6a073d70

📥 Commits

Reviewing files that changed from the base of the PR and between 9a6dc2f and e19b507.

📒 Files selected for processing (1)
  • src/table/tests.rs
📝 Walkthrough

Walkthrough

Adds dedicated range-tombstone metrics and accessors, updates aggregate block-level metric methods to include range-tombstone counts/bytes, splits metrics accounting in load_block() for BlockType::RangeTombstone vs Data|Meta, and adds a metrics-gated unit test validating RT counters on miss/hit flows.

Changes

Cohort / File(s): Summary

  • Metrics struct and accessors (src/metrics.rs): Added atomics: range_tombstone_block_load_io, range_tombstone_block_load_cached, range_tombstone_block_io_requested. Added accessors: range_tombstone_block_io(), range_tombstone_block_load_count(), range_tombstone_block_load_cached_count(), range_tombstone_block_cache_hit_rate(). Updated block_io(), block_load_io_count(), and block_load_cached_count() to include RT counters. Adjusted doc comments to refer to "served from block cache".
  • Block loading metrics logic (src/table/util.rs): load_block() now increments RT-specific counters for BlockType::RangeTombstone and data-specific counters only for `Data | Meta`.
  • Table recovery instrumentation (src/table/mod.rs): During Table::recover, added metrics instrumentation (cfg-gated) to increment range_tombstone_block_load_io and range_tombstone_block_io_requested when a RangeTombstone block is read prior to decoding.
  • Unit tests (src/table/tests.rs): Added a #[cfg(feature = "metrics")] test load_block_range_tombstone_metrics asserting RT-specific IO and cached counters change appropriately across a cache-miss then cache-hit, while data-block counters remain unchanged.

Sequence Diagram(s)

sequenceDiagram
  participant Table as Table::recover / load_block
  participant BlockIO as Block::from_file
  participant Metrics as Metrics
  participant Cache as Cache

  Table->>BlockIO: read range-tombstone block
  BlockIO-->>Table: block(bytes, type=RangeTombstone)
  Table->>Metrics: increment range_tombstone_block_load_io (cfg(feature="metrics"))
  Table->>Metrics: add range_tombstone_block_io_requested(bytes) (cfg(feature="metrics"))
  Table->>Cache: populate block cache
  Note right of Cache: subsequent load_block() call
  Table->>Cache: try load from cache
  Cache-->>Table: hit
  Table->>Metrics: increment range_tombstone_block_load_cached (cfg(feature="metrics"))
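The miss-then-hit flow in the diagram can be simulated with a toy cache. This is illustrative only: the real test exercises the crate's Table, Cache, and Metrics types, which are stubbed out here.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering::Relaxed};

#[derive(Default)]
struct RtMetrics {
    load_io: AtomicU64,     // stand-in for range_tombstone_block_load_io
    load_cached: AtomicU64, // stand-in for range_tombstone_block_load_cached
}

// Toy stand-in for load_block() on an RT block: a miss "reads from disk"
// and populates the cache; a hit is served from the cache.
fn load_rt_block(
    cache: &mut HashMap<u64, Vec<u8>>,
    offset: u64,
    metrics: &RtMetrics,
) -> Vec<u8> {
    if let Some(block) = cache.get(&offset) {
        metrics.load_cached.fetch_add(1, Relaxed);
        return block.clone();
    }
    metrics.load_io.fetch_add(1, Relaxed);
    let block = vec![0u8; 64]; // pretend disk read
    cache.insert(offset, block.clone());
    block
}
```

Two calls with the same offset should leave exactly one IO increment and one cached increment, which mirrors the assertions the PR's unit test makes on the real counters.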

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐇
I nibble bytes and count each hop,
Miss then hit — I never stop.
Tombstones tallied, caches met,
Small hops, tidy stats I set.
Hooray for neat accounting, a bunny's pep!

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title clearly summarizes the main change: adding dedicated range tombstone block counters to the metrics system.
  • Linked Issues Check ✅ Passed: The PR fully addresses both linked issues #13 and #34 by splitting BlockType::RangeTombstone handling into explicit branches and adding the three required dedicated metrics counters with proper aggregation.
  • Out of Scope Changes Check ✅ Passed: All changes are directly aligned with the linked issues: metrics-specific counter additions, match arm refactoring for exhaustiveness, and related test coverage without unrelated modifications.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch feat/#13-refactor-make-blocktype-match-exhaustive-in-tableu

Comment @coderabbitai help to get the list of available commands and usage tips.

@codecov

codecov Bot commented Mar 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

📢 Thoughts on this report? Let us know!


Copilot AI left a comment


Pull request overview

This PR refactors table block-load metrics to track BlockType::RangeTombstone separately from Data|Meta, improving observability for delete-heavy workloads and making BlockType matching more compile-time-exhaustive.

Changes:

  • Split BlockType::RangeTombstone into dedicated match arms in load_block() (cache-hit and I/O paths).
  • Add new Metrics counters for range tombstone block cache hits, disk/page-cache reads, and requested bytes.
  • Update aggregate/accessor methods (block_io, load counts, cache hit rates) to include the new range tombstone counters.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

  • src/table/util.rs: Routes range tombstone block cache/I/O events to dedicated metrics counters via explicit BlockType match arms.
  • src/metrics.rs: Adds new atomic counters and updates aggregate/accessor methods to include range tombstone metrics.

Comment thread src/table/util.rs
Comment thread src/metrics.rs
polaz added 3 commits March 21, 2026 01:00
- Test default zero values, load count summation, cache hit rate
- Test aggregate methods (block_io, block_load_io_count,
  block_load_cached_count) include RT counters
- Add NOTE comments in load_block() explaining that RT blocks
  currently bypass this path (loaded eagerly at table recovery)

Closes #13
Closes #34
…it paths

Write a table with range tombstones, then call load_block() directly
with BlockType::RangeTombstone and a fresh cache to exercise both the
IO (miss) and cached (hit) metrics arms in util.rs.

Closes #13
Closes #34
@polaz polaz requested a review from Copilot March 21, 2026 07:31
@coderabbitai

coderabbitai Bot commented Mar 21, 2026

@polaz Sure, I'll re-review the changes in PR #36 right away!


🧠 Learnings used
Learnt from: polaz
Repo: structured-world/lsm-tree PR: 21
File: src/table/writer/mod.rs:437-457
Timestamp: 2026-03-20T12:29:43.719Z
Learning: In `structured-world/lsm-tree` (`src/table/writer/mod.rs`), the synthetic weak-tombstone sentinel written for RT-only SSTs (when `item_count == 0` but `range_tombstones` is non-empty) intentionally increments `meta.tombstone_count`, `meta.weak_tombstone_count`, `meta.item_count`, and `meta.key_count`. All on-disk metadata counters must match actual block contents for recovery validation consistency (established in commit 7452427a for item_count/key_count; same reasoning applies to tombstone counters). The resulting +1 inflation on GC heuristics is harmless — it overestimates reclaimable space and triggers slightly earlier checks. Do not flag this as a metadata accuracy bug in PR `#21` or future PRs.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 0
File: :0-0
Timestamp: 2026-03-20T08:17:06.289Z
Learning: In `structured-world/lsm-tree`, the switch from `Mutex<IntervalTree>` to `RwLock<IntervalTree>` (or a freeze-on-seal immutable structure) for memtable range-tombstone storage in `src/memtable/mod.rs` is intentionally deferred from PR `#21` to a separate issue (`#30`). Do not flag the existing `Mutex<IntervalTree>` usage in `src/memtable/mod.rs` as a concurrency concern in the context of PR `#21`; it is tracked in issue `#30`.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 0
File: :0-0
Timestamp: 2026-03-20T08:17:39.835Z
Learning: In `structured-world/lsm-tree`, the `Memtable` range-tombstone storage uses `Mutex<IntervalTree>` (introduced in PR `#21`). Switching to `RwLock<IntervalTree>` or a freeze-on-seal immutable structure is a deliberate future improvement tracked in issue `#31` — do not flag the `Mutex` usage as a blocking concern in PR `#21` or future PRs that reference this design.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 0
File: :0-0
Timestamp: 2026-03-20T08:17:39.835Z
Learning: In `structured-world/lsm-tree`, the `Memtable` range-tombstone storage uses `Mutex<IntervalTree>` (introduced in PR `#21`). Switching to `RwLock<IntervalTree>` or a freeze-on-seal immutable structure is a deliberate future improvement tracked in a separate issue — do not flag the `Mutex` usage as a blocking concern in PR `#21` or future PRs that reference this design.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 21
File: src/table/multi_writer.rs:129-206
Timestamp: 2026-03-20T11:14:02.461Z
Learning: In `structured-world/lsm-tree`, the `write_rts_to_writer` function in `src/table/multi_writer.rs` has a known limitation when `clip_range_tombstones=true` (compaction mode): RT fragments that don't intersect any output writer's KV span are silently dropped, which can lose delete-only gaps (e.g. an RT covering `[m,p)` while all output tables cover `[a,l]` and `[q,z]`). This limitation is intentionally deferred to issue `#32` and is documented as a Known Limitation in PR `#21`. Do not flag this as a blocking concern in PR `#21` or any PR until `#32` is addressed.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 12
File: src/table/block/mod.rs:109-119
Timestamp: 2026-03-15T16:10:30.764Z
Learning: In `structured-world/lsm-tree`, pre-allocation bounds checks on `uncompressed_length`, `data_length`, `handle.size()`, and `real_val_len` (256 MiB cap, returning `DecompressedSizeTooLarge`) are intentionally deferred to PR `#7` (branch `feat/#258-security-validate-uncompressedlength-before-decomp`, closes issues `#258` and `#257`). Do not flag missing size-cap guards in `Block::from_reader`, `Block::from_file`, or `blob_file::Reader::get` as unaddressed — they are handled in that separate security PR.

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/active_tombstone_set.rs : `src/active_tombstone_set.rs` — Tracks active range tombstones during compaction

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/range_tombstone.rs : `src/range_tombstone.rs` — Range tombstone data model and serialization

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/range_tombstone_filter.rs : `src/range_tombstone_filter.rs` — MVCC-aware range tombstone filtering for iterators

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/table/block/**/*.rs : `src/table/block/` — On-disk block format (header + compressed payload)
✅ Actions performed

Full review triggered.


Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

Comment thread src/table/util.rs Outdated
Comment thread src/metrics.rs Outdated
polaz added 4 commits March 21, 2026 09:44
…disk"

All *_load_cached_count() methods return block-cache hits, not disk
reads. The pre-existing docstrings incorrectly said "from disk or OS
page cache" — corrected to "served from block cache".
Table recovery loads range tombstone blocks via Block::from_file(),
bypassing load_block(). Increment range_tombstone_block_load_io and
range_tombstone_block_io_requested directly at the recovery call site
so that real-world RT block reads are reflected in metrics.

Remove stale NOTE comments from load_block() match arms and update
test assertions to account for the recovery-time increment.
…bleu' of github.com:structured-world/lsm-tree into feat/#13-refactor-make-blocktype-match-exhaustive-in-tableu
@polaz polaz requested a review from Copilot March 21, 2026 08:00
@polaz
Member Author

polaz commented Mar 21, 2026

@coderabbitai re-review

@coderabbitai

coderabbitai Bot commented Mar 21, 2026

@polaz Sure, I'll re-review PR #36 right away!


🧠 Learnings used
Learnt from: polaz
Repo: structured-world/lsm-tree PR: 21
File: src/table/writer/mod.rs:437-457
Timestamp: 2026-03-20T12:29:43.719Z
Learning: In `structured-world/lsm-tree` (`src/table/writer/mod.rs`), the synthetic weak-tombstone sentinel written for RT-only SSTs (when `item_count == 0` but `range_tombstones` is non-empty) intentionally increments `meta.tombstone_count`, `meta.weak_tombstone_count`, `meta.item_count`, and `meta.key_count`. All on-disk metadata counters must match actual block contents for recovery validation consistency (established in commit 7452427a for item_count/key_count; same reasoning applies to tombstone counters). The resulting +1 inflation on GC heuristics is harmless — it overestimates reclaimable space and triggers slightly earlier checks. Do not flag this as a metadata accuracy bug in PR `#21` or future PRs.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 21
File: src/table/multi_writer.rs:129-206
Timestamp: 2026-03-20T11:14:02.461Z
Learning: In `structured-world/lsm-tree`, the `write_rts_to_writer` function in `src/table/multi_writer.rs` has a known limitation when `clip_range_tombstones=true` (compaction mode): RT fragments that don't intersect any output writer's KV span are silently dropped, which can lose delete-only gaps (e.g. an RT covering `[m,p)` while all output tables cover `[a,l]` and `[q,z]`). This limitation is intentionally deferred to issue `#32` and is documented as a Known Limitation in PR `#21`. Do not flag this as a blocking concern in PR `#21` or any PR until `#32` is addressed.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 0
File: :0-0
Timestamp: 2026-03-20T08:17:06.289Z
Learning: In `structured-world/lsm-tree`, the switch from `Mutex<IntervalTree>` to `RwLock<IntervalTree>` (or a freeze-on-seal immutable structure) for memtable range-tombstone storage in `src/memtable/mod.rs` is intentionally deferred from PR `#21` to a separate issue (`#30`). Do not flag the existing `Mutex<IntervalTree>` usage in `src/memtable/mod.rs` as a concurrency concern in the context of PR `#21`; it is tracked in issue `#30`.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 0
File: :0-0
Timestamp: 2026-03-20T08:17:39.835Z
Learning: In `structured-world/lsm-tree`, the `Memtable` range-tombstone storage uses `Mutex<IntervalTree>` (introduced in PR `#21`). Switching to `RwLock<IntervalTree>` or a freeze-on-seal immutable structure is a deliberate future improvement tracked in a separate issue — do not flag the `Mutex` usage as a blocking concern in PR `#21` or future PRs that reference this design.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 0
File: :0-0
Timestamp: 2026-03-20T08:17:39.835Z
Learning: In `structured-world/lsm-tree`, the `Memtable` range-tombstone storage uses `Mutex<IntervalTree>` (introduced in PR `#21`). Switching to `RwLock<IntervalTree>` or a freeze-on-seal immutable structure is a deliberate future improvement tracked in issue `#31` — do not flag the `Mutex` usage as a blocking concern in PR `#21` or future PRs that reference this design.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 21
File: src/table/writer/mod.rs:394-463
Timestamp: 2026-03-20T20:05:18.309Z
Learning: In `structured-world/lsm-tree` (`src/table/writer/mod.rs`), the `key#min`/`key#max` widening for RT bounds in `Writer::finish()` is intentionally limited to the `item_count == 0` (RT-only) branch. For mixed KV+RT tables, `MultiWriter::write_rts_to_writer` (lines 167–199 of `src/table/multi_writer.rs`) already widens `meta.first_key`/`meta.last_key` conservatively in flush mode (`clip=false`). In compaction mode (`clip=true`), RTs are intersected with the KV key range so no widening is needed. Do not flag the absence of unconditional widening in `Writer::finish()` as a bug in PR `#21` or future PRs.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 12
File: src/table/block/mod.rs:109-119
Timestamp: 2026-03-15T16:10:30.764Z
Learning: In `structured-world/lsm-tree`, pre-allocation bounds checks on `uncompressed_length`, `data_length`, `handle.size()`, and `real_val_len` (256 MiB cap, returning `DecompressedSizeTooLarge`) are intentionally deferred to PR `#7` (branch `feat/#258-security-validate-uncompressedlength-before-decomp`, closes issues `#258` and `#257`). Do not flag missing size-cap guards in `Block::from_reader`, `Block::from_file`, or `blob_file::Reader::get` as unaddressed — they are handled in that separate security PR.

Learnt from: polaz
Repo: structured-world/lsm-tree PR: 23
File: .github/instructions/rust.instructions.md:0-0
Timestamp: 2026-03-17T00:34:54.145Z
Learning: In `structured-world/lsm-tree`, the lint suppression policy distinguishes between legacy and new code:
- Existing/upstream `#[allow(clippy::...)]` in **unchanged** code: treat as legacy — do **not** flag it in reviews.
- New code introduced in a PR **must** use `#[expect(clippy::...)]` with a justification reason.
This avoids churning upstream code while enforcing the `#[expect]` standard for all new contributions.

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to **/*.rs : Before flagging any potential issue, trace the call chain visible in the PR diff. Check the caller to see if it handles the case, verify type-level guarantees, and read adjacent comments about deliberate design decisions.

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/instructions/rust.instructions.md:0-0
Timestamp: 2026-03-20T22:04:21.226Z
Learning: Applies to **/*.rs : Flag incorrect merge semantics: tombstones not propagated to lower levels, point deletes applied out of order (Tier 1 — MUST flag)

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/active_tombstone_set.rs : `src/active_tombstone_set.rs` — Tracks active range tombstones during compaction

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/range_tombstone.rs : `src/range_tombstone.rs` — Range tombstone data model and serialization

Learnt from: CR
Repo: structured-world/lsm-tree PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-03-20T22:03:44.692Z
Learning: Applies to src/range_tombstone_filter.rs : `src/range_tombstone_filter.rs` — MVCC-aware range tombstone filtering for iterators
✅ Actions performed

Full review triggered.


Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

Comment thread src/table/tests.rs
Comment thread src/table/tests.rs Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/table/mod.rs`:
- Around line 582-589: The startup recover() path is incorrectly recording its
eager RT read into
range_tombstone_block_load_io/range_tombstone_block_io_requested even though it
bypasses load_block() and the block cache; change the metric to a
recovery-specific counter instead: add new metrics (e.g.
range_tombstone_recovery_block_load_io and
range_tombstone_recovery_block_io_requested) and update the block in question to
increment those instead of
range_tombstone_block_load_io/range_tombstone_block_io_requested so
load_block(), block_cache_hit_rate(), and range_tombstone_block_cache_hit_rate()
remain semantically correct; reference recover(), the current increments of
range_tombstone_block_load_io and range_tombstone_block_io_requested, and
load_block() when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 28c20380-2460-401c-a770-79db4ce617c2

📥 Commits

Reviewing files that changed from the base of the PR and between c9ead5b and 9a6dc2f.

📒 Files selected for processing (4)
  • src/metrics.rs
  • src/table/mod.rs
  • src/table/tests.rs
  • src/table/util.rs

Comment thread src/table/mod.rs Outdated
Recovery reads RT blocks via Block::from_file(), bypassing the block
cache. Counting these as range_tombstone_block_load_io skews
block_cache_hit_rate() (unpaired IO without cached counterpart).

Revert the recovery-site increment so RT counters track only
load_block() calls — consistent with filter/index recovery which
also bypass metrics. Add #[expect(clippy::unwrap_used)] on
writer.finish() in the test.
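The skew this revert fixes can be shown with toy numbers. The formula below is the usual cached-over-total ratio implied by the commit message; the concrete values are illustrative, not taken from the PR.

```rust
// Hit rate over RT block loads: cached hits divided by all loads.
// An eager recovery read counted as IO can never gain a cached
// counterpart, so it permanently drags this ratio down.
fn rt_cache_hit_rate(cached: u64, io: u64) -> f64 {
    let total = cached + io;
    if total == 0 {
        0.0
    } else {
        cached as f64 / total as f64
    }
}
```

With one load_block() miss and one hit, the rate is 0.5. If the recovery-time Block::from_file() read also bumped the IO counter, the same workload would report 1/3, understating cache effectiveness for no actionable reason.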

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated no new comments.

@polaz polaz merged commit c3f3232 into main Mar 21, 2026
31 checks passed


Development

Successfully merging this pull request may close these issues.

  • feat(metrics): add dedicated counters for range tombstone block I/O
  • refactor: make BlockType match exhaustive in table/util.rs metrics
