perf: small-buffer inline cache for replace_rec_fn #13378

Draft

Kha wants to merge 7 commits into leanprover:master from Kha:push-kwnquwmslonl

Conversation


@Kha Kha commented Apr 12, 2026

This PR replaces the `std::unordered_map`-based cache in `replace_rec_fn` with a small-buffer cache that stores its first 16 entries inline in uninitialized stack storage and only allocates a real hash map for the rare large traversal. Instrumentation across a full `leanchecker --fresh Init.Data.List.Lemmas` run shows that 87% of `replace_rec_fn` instances hold at most 15 entries and only 0.21% exceed 128, with a mean cache size of just 9 entries spread across ~950k instances. At that scale a hash map is the wrong data structure: its per-instance bucket-array allocation and per-entry node allocation dwarf the cost of a linear scan over a handful of entries.

The new structure pays nothing for entries that are never inserted, performs no allocation at all on the common path, and falls back to the original `unordered_map` once the inline buffer fills. Combined with the existing `is_likely_unshared` filter, lookups on the common path are just a tight scan over a stack-resident array.

On `leanchecker --fresh Init.Data.List.Lemmas` this shaves `17.10 G -> 16.18 G` instructions (~5.4%) and `1.74s -> 1.62s` wall-clock (~6.7%) compared to the previous baseline. It supersedes the prior `try_emplace` and `reserve(128)` micro-optimizations on the same cache, both of which are no longer needed since the hash map is no longer on the hot path.
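The small-buffer idea can be sketched as follows. All names and simplifications here are mine, not the actual kernel code: the real cache keys on `(expr, offset)` pairs and keeps its inline entries in genuinely uninitialized storage, while this sketch uses plain `int` keys and a default-constructed array for brevity.

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>

// Sketch of a small-buffer cache: the first SmallSize entries live
// inline and are searched by linear scan; a heap-allocated hash map is
// created only if the inline buffer overflows.
template <typename K, typename V, unsigned SmallSize = 16>
class small_cache {
    std::pair<K, V> m_inline[SmallSize];        // inline storage, linear scan
    unsigned        m_size = 0;                 // entries used in m_inline
    std::unordered_map<K, V> *m_big = nullptr;  // allocated lazily on overflow

public:
    small_cache() = default;
    small_cache(small_cache const &) = delete;  // keep the sketch simple
    ~small_cache() { delete m_big; }

    V const *find(K const &k) const {
        for (unsigned i = 0; i < m_size; i++)
            if (m_inline[i].first == k)
                return &m_inline[i].second;
        if (m_big) {
            auto it = m_big->find(k);
            if (it != m_big->end())
                return &it->second;
        }
        return nullptr;
    }

    void insert(K const &k, V const &v) {
        if (m_size < SmallSize) {
            m_inline[m_size++] = {k, v};        // common path: no allocation
        } else {
            if (!m_big)
                m_big = new std::unordered_map<K, V>();
            (*m_big)[k] = v;                    // rare path: fall back to hash map
        }
    }
};
```

With the measured mean of ~9 entries, most instances never touch the heap at all, and the linear scan over a stack-resident array beats the hash map's pointer chasing.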


Kha commented Apr 12, 2026

!bench


leanprover-radar commented Apr 12, 2026

Benchmark results for 9398a0b against 82bb27f are in. Significant changes detected! @Kha

  • build//instructions: -111.4G (-0.92%)

Large changes (11✅, 5🟥)

  • build/module/Std.Data.HashMap.RawLemmas//instructions: -5.1G (-4.06%)
  • 🟥 elab/big_do//instructions: +452.6M (+1.91%)
  • elab/big_match//instructions: -274.8M (-2.37%)
  • 🟥 elab/big_omega//instructions: +501.1M (+2.13%)
  • 🟥 elab/big_omega_MT//instructions: +500.8M (+2.11%)
  • 🟥 elab/big_struct//instructions: +131.4M (+4.77%)
  • 🟥 elab/big_struct_dep//instructions: +1.5G (+10.63%)
  • elab/cbv_divisors//instructions: -2.3G (-4.36%)
  • elab/cbv_leroy//instructions: -2.0G (-4.20%)
  • elab/grind_bitvec2//instructions: -4.0G (-2.08%)
  • elab/grind_ring_5//instructions: -673.1M (-7.61%)
  • elab/reduceMatch//instructions: -574.9M (-4.44%)
  • misc/import Init.Data.BitVec.Lemmas//instructions: -3.7G (-2.97%)
  • misc/leanchecker --fresh Init//instructions: -24.3G (-6.46%)
  • misc/leanchecker --fresh Init//task-clock: -2s (-9.86%)
  • misc/leanchecker --fresh Init//wall-clock: -2s (-9.86%)

Medium changes (28✅, 3🟥)

Too many entries to display here. View the full report on radar instead.

Small changes (895✅, 29🟥)

Too many entries to display here. View the full report on radar instead.

Kha added 2 commits April 12, 2026 16:50
This PR turns `replace_rec_fn` into a class template parameterized on the
functor type and exposes it from `kernel/replace_fn.h`, alongside templated
`replace<F>(...)` overloads that monomorphize on the functor at the call
site. Hot callers like `instantiate`, `instantiate_rev`, and the C-exported
`lean_expr_instantiate*_core` variants in `kernel/instantiate.cpp` all pass
their substitution closure as a stateless lambda; previously each call into
the closure went through `std::function`'s indirect dispatch on every
recursion step, blocking inlining of the loose-bvar-range early return.
With this change the closure body inlines into `replace_rec_fn::apply` and
the `std::function` indirection is gone.

The legacy `replace(expr, std::function<...>, bool)` overload is preserved
as a thin out-of-line wrapper that instantiates the template once with
`std::function` as the functor type, so external callers continue to work
unchanged.

On `leanchecker --fresh Init.Data.List.Lemmas` this shaves
`17.10 G -> 16.50 G` instructions (~3.5%) and `1.74s -> 1.69s` wall-clock
(~3%). All existing callers benefit automatically since they were already
passing lambdas directly; no caller-side changes were required.
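The dispatch difference can be sketched as follows. All names are illustrative stand-ins for the real kernel API: the "expression" is just an `int` and the recursive walk is a loop, but the overload structure mirrors the description above, with a templated entry point that monomorphizes on the functor and a legacy `std::function` wrapper.

```cpp
#include <cassert>
#include <functional>

// Class template parameterized on the functor type: the closure body
// can inline into the traversal, with no indirect dispatch.
template <typename F>
struct replace_rec_fn_sketch {
    F m_f;
    explicit replace_rec_fn_sketch(F const &f) : m_f(f) {}
    int apply(int x, int n) {   // stand-in for the recursive walk
        for (int i = 0; i < n; i++)
            x = m_f(x);         // direct, inlinable call
        return x;
    }
};

// Templated overload: monomorphizes on the functor at the call site.
template <typename F>
int replace_sketch(int x, int n, F const &f) {
    return replace_rec_fn_sketch<F>(f).apply(x, n);
}

// Legacy overload preserved as a thin wrapper that instantiates the
// template once with std::function as the functor type.
int replace_sketch(int x, int n, std::function<int(int)> const &f) {
    return replace_rec_fn_sketch<std::function<int(int)>>(f).apply(x, n);
}
```

A lambda argument binds to the template overload (an exact match beats the user-defined conversion to `std::function`), so existing lambda-passing callers get the monomorphized path automatically, while `std::function` callers keep working through the wrapper.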
This PR skips inserting an entry into `replace_rec_fn`'s cache when the
result equals the input, i.e. when the user functor decided that the
subterm is unchanged. The dominant kernel callers of `replace` are the
`instantiate*` family in `kernel/instantiate.cpp`, and their substitution
closure short-circuits to `some_expr(m)` whenever
`offset >= get_loose_bvar_range(m)`. That predicate is true for the vast
majority of subterms in any non-trivial expression, so the cache was being
filled almost exclusively with `(node, offset) -> node` self-mappings.
Each such entry costs an insert and pollutes the cache for actually-useful
entries, while a re-visit can recompute the same answer by calling the
functor again at O(1) cost (the loose-bvar range is memoized on the expr
node).

On `leanchecker --fresh Init.Data.List.Lemmas` this shaves
`17.10 G -> 16.55 G` instructions (~3.2%) and `1.74s -> 1.61s` wall-clock
(~7.5%).
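A minimal sketch of the identity-skip idea (names are hypothetical; the real cache keys on `(node, offset)` pairs rather than bare pointers):

```cpp
#include <cassert>
#include <unordered_map>

// Sketch: skip the cache insertion when the rewriter returned the input
// unchanged. Identity mappings are O(1) to recompute via the memoized
// loose-bvar-range check, so caching them only wastes space.
struct cache_sketch {
    std::unordered_map<void const *, void const *> m_cache;

    void const *save_result(void const *e, void const *r) {
        if (r != e)          // skip (node -> node) self-mappings
            m_cache[e] = r;
        return r;
    }
};
```

As the next commit in the stack explains, this check turned out to be too coarse and was later refined, but the sketch shows what the cache was being filled with before.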
@github-actions github-actions bot added the toolchain-available label (A toolchain is available for this PR, at leanprover/lean4-pr-releases:pr-release-NNNN) Apr 12, 2026
@mathlib-lean-pr-testing

Mathlib CI status (docs):

  • ❗ Mathlib CI can not be attempted yet, as the nightly-testing-2026-04-09 tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto nightly-with-mathlib, Mathlib CI should run now. You can force Mathlib CI using the force-mathlib-ci label. (2026-04-12 17:42:11)

@leanprover-bot

Reference manual CI status:

  • ❗ Reference manual CI can not be attempted yet, as the nightly-testing-2026-04-09 tag does not exist there yet. We will retry when you push more commits. If you rebase your branch onto nightly-with-manual, reference manual CI should run now. You can force reference manual CI using the force-manual-ci label. (2026-04-12 17:42:13)

Kha added 3 commits April 12, 2026 17:54
This PR replaces the `r.raw() != e.raw()` skip-when-result-is-input check
in `replace_rec_fn::save_result` (introduced earlier in this stack) with
an opt-in pre-cache `skip` predicate that callers can supply alongside
the rewriter functor. The skip predicate is invoked *before* the cache
lookup; if it returns true, the input is returned unchanged with no
cache touch.

The earlier identity-skip approach blew up catastrophically on workloads
like `Init.WFExtrinsicFix` (~3-4x instruction count), because it dropped
caching for the *recursion-identity* case as well as the cheap
`m_f`-early-return case. In the recursion-identity case the cache had
been remembering "we already descended into this shared subterm and
proved it's unchanged" — without that memo, every revisit redid the
full recursive descent and the work compounded exponentially.

The skip predicate fixes this by separating the two cases: only the
cheap "no work needed for this subtree" guard goes into `skip`, and the
ordinary cache logic handles every other path unchanged. For
`instantiate*`/`lift_loose_bvars`/`lower_loose_bvars`, the predicate is
the existing memoized loose-bvar-range check that was previously the
first line of `m_f`. It's strictly cheaper than the cache lookup it
replaces.

Measurements vs the `xkn` baseline on `leanchecker --fresh`:

|                       | baseline | this PR |
|-----------------------|----------|---------|
| Init.WFExtrinsicFix   | 9.32 G   | 9.07 G  |
| Init.Data.List.Lemmas | 17.10 G  | 16.61 G |

Both workloads improve modestly, with no regression on either.
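The separation of the two cases can be sketched as follows. All names are illustrative (the "expression" is an `int`, and the skip predicate stands in for the memoized loose-bvar-range check); the point is only the control flow: the guard runs before any cache touch, and every non-skipped path keeps the ordinary memoization.

```cpp
#include <cassert>
#include <unordered_map>

// Sketch of the opt-in pre-cache skip predicate: if Skip says no work
// is needed for this subtree, return the input unchanged without
// touching the cache. Everything else goes through the normal cache.
template <typename Skip, typename F>
struct replacer_sketch {
    Skip m_skip;
    F    m_f;
    std::unordered_map<int, int> m_cache;

    int apply(int e) {
        if (m_skip(e))               // cheap guard: no lookup, no insert
            return e;
        auto it = m_cache.find(e);   // ordinary cache logic, including
        if (it != m_cache.end())     // memoized recursion-identity results
            return it->second;
        int r = m_f(e);
        m_cache[e] = r;
        return r;
    }
};
```

Because the skip predicate fires only on the cheap "no work needed" case, shared subterms that required a full descent still get cached, which avoids the exponential re-descent seen with the earlier blanket identity skip.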
@Kha Kha force-pushed the push-kwnquwmslonl branch from 9398a0b to ecc6f08 on April 12, 2026 at 20:59