Transfer Learning Decorator #790
* Provides a single source of truth for defining the TL logic
* Enables TL for non-TL presets by applying the decorator
Pull request overview
This PR introduces a class-decorator-based mechanism to enable BayBE’s default transfer learning behavior for Gaussian Process kernel factories (automatically composing a base kernel with the default task kernel when a task parameter is present), and refactors several kernel presets to use it.
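For orientation, such a base-times-task composition conventionally looks like the following generic GPyTorch sketch. This is illustrative only, not the PR's actual code; the kernel choice, task count, and column layout are assumptions:

```python
from gpytorch.kernels import IndexKernel, MaternKernel

# ICM-style composition: a numerical kernel on the feature columns times a
# task kernel on the task-index column (assumed to be the last column here).
n_features = 3
base_kernel = MaternKernel(nu=2.5, active_dims=tuple(range(n_features)))
task_kernel = IndexKernel(num_tasks=2, rank=1, active_dims=(n_features,))

# k((x, t), (x', t')) = k_base(x, x') * k_task(t, t')
kernel = base_kernel * task_kernel
```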
Changes:
- Added `_enable_transfer_learning` decorator in the GP kernel factory component layer.
- Updated preset kernel factories (EDBO, CHEN, Smoothed EDBO, BayBE) to rely on the decorator rather than per-preset task-exclusion selectors / manual orchestration.
- Adjusted kernel-factory tests and internal factory naming to reflect the new split between numerical-only and task-aware factories.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
Summary per file:

| File | Description |
|---|---|
| `baybe/surrogates/gaussian_process/components/kernel.py` | Adds `_enable_transfer_learning` and updates ICM defaults to use internal numerical/task factories. |
| `baybe/surrogates/gaussian_process/presets/edbo.py` | Applies the transfer-learning decorator and removes the preset's default task-excluding selector. |
| `baybe/surrogates/gaussian_process/presets/edbo_smoothed.py` | Splits the numerical kernel factory from the transfer-learning-enabled alias. |
| `baybe/surrogates/gaussian_process/presets/chen.py` | Applies the transfer-learning decorator and removes the preset's default task-excluding selector. |
| `baybe/surrogates/gaussian_process/presets/baybe.py` | Refactors the default kernel factory wiring to reuse Smoothed EDBO and the internal task factory. |
| `tests/test_kernel_factories.py` | Updates imports/usages to the new internal factory names. |
Comments suppressed due to low confidence (1)
baybe/surrogates/gaussian_process/presets/edbo_smoothed.py:49
`_SmoothedEDBONumericalKernelFactory._make` uses `effective_dims = train_x.shape[-1]` and passes `parameter_names=self.get_parameter_names(searchspace)` (which is `None` unless a selector is provided). Under transfer-learning/multitask search spaces, this will count the task index dimension in `effective_dims` and make the base kernel operate on the task column as well, altering the smoothed prior interpolation. The numerical kernel used as the ICM base kernel should exclude `TaskParameter` by default, and the dimension logic should be computed from the selected active dims/parameter names (not the raw `train_x` width).
```python
@define
class _SmoothedEDBONumericalKernelFactory(_PureKernelFactory):
    """A factory providing the core numerical kernel for the smoothed EDBO preset."""

    _uses_parameter_names: ClassVar[bool] = True

    # See base class.
    @override
    def _make(
        self, searchspace: SearchSpace, train_x: Tensor, train_y: Tensor
    ) -> Kernel:
        effective_dims = train_x.shape[-1]
```
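A minimal sketch of the suggested direction, assuming each `TaskParameter` occupies exactly one column of `train_x`; the helper name `_effective_numerical_dims` is hypothetical:

```python
from baybe.parameters import TaskParameter
from baybe.searchspace import SearchSpace
from torch import Tensor


def _effective_numerical_dims(searchspace: SearchSpace, train_x: Tensor) -> int:
    """Hypothetical helper: count base-kernel dims excluding task columns."""
    # Assumption: each TaskParameter contributes exactly one column to train_x.
    n_task_dims = sum(
        isinstance(p, TaskParameter) for p in searchspace.parameters
    )
    return train_x.shape[-1] - n_task_dims
```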
`_enable_transfer_learning` now accepts an optional `name` parameter so that the dynamically created class can have the correct `__name__` when the function is called directly (rather than used as a decorator). This fixes serialization for `SmoothedEDBOKernelFactory`, which was previously serialized as `_SmoothedEDBONumericalKernelFactory`.
Simple aliases like `BayBEKernelFactory = SmoothedEDBOKernelFactory` cause the serialized type name to be that of the underlying class, which means the identity is lost on deserialization. Using thin subclasses ensures each factory has its own stable `__name__`.
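A self-contained illustration of that distinction (schematic; the real factories carry more machinery):

```python
class SmoothedEDBOKernelFactory:
    """Stand-in for the real factory class."""


# A plain alias binds a second name to the *same* class object, so any
# serialization keyed on type(obj).__name__ records the original name.
AliasFactory = SmoothedEDBOKernelFactory
assert AliasFactory().__class__.__name__ == "SmoothedEDBOKernelFactory"


class BayBEKernelFactory(SmoothedEDBOKernelFactory):
    """Thin subclass: a distinct class object with its own stable __name__."""


assert BayBEKernelFactory().__class__.__name__ == "BayBEKernelFactory"
```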
@copilot: please review
I've completed a comprehensive code review of the transfer learning decorator implementation. Here are my findings:

✅ Strengths

🔍 Observations

✨ Code Quality

The implementation is well-structured, properly documented, and follows BayBE's patterns. The decorator successfully abstracts transfer learning while maintaining backward compatibility and correct serialization behavior. No issues found. The implementation is production-ready.
When used as a decorator (`@_enable_transfer_learning`), modify the class in-place instead of creating a subclass with the same `__name__`. The previous approach left two concrete classes with identical names in the subclass registry, causing `find_subclass` to resolve to the `@define`-processed intermediate (without the TL wrapper) during deserialization. When called with an explicit `name` argument (for cases like `SmoothedEDBOKernelFactory`, where the original class is reused elsewhere), the subclass approach is preserved, since the distinct name avoids any collision.
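A minimal sketch of that split, with hypothetical names throughout (`_wrap_with_task_kernel` stands in for the actual kernel-composition logic):

```python
from collections.abc import Callable


def _wrap_with_task_kernel(make: Callable) -> Callable:
    """Hypothetical stand-in for composing the base kernel with the task kernel."""

    def wrapped(self, searchspace, train_x, train_y):
        base = make(self, searchspace, train_x, train_y)
        return base  # the real wrapper would multiply in the default task kernel

    return wrapped


def _enable_transfer_learning(cls=None, *, name: str | None = None):
    """Sketch: in-place patching for decorator use, named subclass otherwise."""

    def wrap(target):
        if name is None:
            # Decorator usage: patch in-place so the subclass registry
            # contains exactly one concrete class per __name__.
            target._make = _wrap_with_task_kernel(target._make)
            return target
        # Explicit-name usage: a distinct subclass cannot collide with the
        # original name, so creating a new class is safe here.
        return type(name, (target,), {"__module__": target.__module__})

    return wrap if cls is None else wrap(cls)
```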
The Protocol metaclass (`_ProtocolMeta`) defaults `__module__` to `'abc'` when creating classes via the 3-arg `type()`. Set it explicitly from the parent class so that `SmoothedEDBOKernelFactory` correctly reports its module as `baybe.surrogates.gaussian_process.presets.edbo_smoothed`.
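A quick demonstration of the effect (the exact default can depend on the Python version; the PR observed `'abc'`):

```python
from typing import Protocol


class _Base(Protocol):
    """Stand-in for the protocol-based factory base class."""


# 3-arg type() delegates class creation to the metaclass machinery, whose
# frame determines the default __module__ ('abc' in the PR's environment).
Broken = type("Broken", (_Base,), {})
print(Broken.__module__)  # e.g. 'abc', not this module

# Setting __module__ explicitly from the parent restores the expected value.
Fixed = type("Fixed", (_Base,), {"__module__": _Base.__module__})
print(Fixed.__module__)  # same module as _Base
```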
| """Construct the kernel.""" | ||
|
|
||
|
|
||
| def _enable_transfer_learning( |
@Scienfitz: in principle ready and working. However, I have to admit that this was significantly more painful than anticipated, with many footguns along the way. So I'm open to a very harsh review and a complete change of direction, if you prefer and have an alternative/simpler idea.
But I hope you get my intent here: I think we need some mechanism that lets us say "fill this preset with our default approach for a certain aspect that the preset does not specify", where the filling happens without copying code, since the BayBE defaults are expected to evolve. So we need something like a single source of truth. That said, maybe you have a smarter idea.
DevPR, parent is #745
Last piece of the puzzle:
Presets (from papers, packages, etc.) can dictate certain aspects of the GP model while not saying anything about other aspects. For example, both EDBO and CHEN focus on the kernel priors but don't consider transfer learning at all. This is a general issue and can also cover other things like multi-fidelity, etc.
For these cases, we want to follow the approach "if not defined, use the BayBE default mechanism/setting". However, this requires abstracting these settings/mechanisms into reusable structures. This PR takes care of this step for transfer learning (currently the only mechanism that needs to be ported) in the form of a class decorator. Because other mechanisms will follow in the future and their extent isn't yet fully clear (e.g. multi-fidelity or transfer learning via mean injection), we keep this decorator private for now. A possible future extension of the decorator could have the form `@enable_mechanism(transfer_learning=True, multi_fidelity=True)`, which would then accept any existing GP component and make the necessary adjustments, as sketched below.
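Purely to visualize that idea, such a parameterized decorator could look roughly like this (nothing here exists in BayBE yet; all names are illustrative stubs):

```python
def _enable_transfer_learning(cls):
    """Illustrative stub for the existing transfer-learning mechanism."""
    return cls


def _enable_multi_fidelity(cls):
    """Illustrative stub; no such mechanism exists yet."""
    return cls


def enable_mechanism(*, transfer_learning: bool = False, multi_fidelity: bool = False):
    """Fill unspecified preset aspects with the BayBE default mechanisms."""

    def wrap(component_cls):
        if transfer_learning:
            component_cls = _enable_transfer_learning(component_cls)
        if multi_fidelity:
            component_cls = _enable_multi_fidelity(component_cls)
        return component_cls

    return wrap
```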