Releases: hyperactive-project/Hyperactive
v4.0.0
v3.2.4
Changes from v3.0.0 -> v3.2.4:
- Decouple the number of runs from the number of active processes (thanks to PartiallyTyped). This reduces memory load when the number of jobs is huge
- New feature: The progress board enables the user to monitor the optimization progress during the run.
- Display trend of best score
- Plot parameters and score in parallel coordinates
- Generate filter file to define an upper and/or lower bound for all parameters and the score in the parallel coordinate plot
- List parameters of 5 best scores
- add Python 3.8 to tests
- add warning if search-space values are not lists
- improve stability of result-methods
- add tests for hyperactive-memory + search spaces
v2.3.0
- add Tree-structured optimization algorithm (idea from Hyperopt)
- add Decision-tree optimization algorithm (idea from sklearn)
- enable new optimization parameters for bayes-opt:
- max_sample_size: maximum number of samples the gaussian-process regressor trains on. Sampling is done by random choice.
- skip_retrain: occasionally skips retraining the gaussian-process regressor during the optimization run. This effectively returns multiple predictions for the next sample (which should be spaced apart from one another)
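The max_sample_size idea above can be sketched in a few lines. This is an illustrative reimplementation, not Hyperactive's actual code; the function name `subsample` is hypothetical:

```python
import random

def subsample(X, y, max_sample_size, seed=0):
    """Cap the training set for the surrogate model: if there are more
    than max_sample_size evaluated positions, keep a random subset."""
    if len(X) <= max_sample_size:
        return X, y
    rng = random.Random(seed)
    idx = rng.sample(range(len(X)), max_sample_size)
    return [X[i] for i in idx], [y[i] for i in idx]
```

Bounding the training set this way keeps the cubic-cost gaussian-process fit cheap even after many thousands of evaluations.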
v2.1.0
- first stable implementation of "long-term-memory" to save/load search positions/parameters and results
- enable warm start of sequence-based optimizers (bayesian optimization, ...) with results from the "long-term-memory"
- enable the usage of gaussian-process regressors other than sklearn's. A GPR class (from GPy, GPflow, ...) can be passed to the "optimizer"-kwarg
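A minimal sketch of the "long-term-memory" concept, assuming a simple JSON file as the storage backend (the function names and file format are illustrative, not Hyperactive's implementation):

```python
import json

def save_memory(path, params_list, scores):
    """Persist evaluated parameter dicts and their scores to disk."""
    with open(path, "w") as f:
        json.dump({"params": params_list, "scores": scores}, f)

def load_memory(path):
    """Load previous results, e.g. to warm-start a sequence-based optimizer."""
    with open(path) as f:
        data = json.load(f)
    return data["params"], data["scores"]
```

A warm start would feed the loaded (params, score) pairs to the surrogate model before the first new evaluation, so the optimizer does not start from scratch.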
v2.0.0
API change to improve usability. The class now accepts the training data, while the "search"-method accepts the search_config and other optimization-run-specific arguments like n_iter, n_jobs and optimizer.
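The new split can be illustrated with a toy class (this is a sketch of the API shape described above, not Hyperactive's real signatures; `ToyHyperactive` and the tuple-shaped `search_config` are assumptions):

```python
import random

class ToyHyperactive:
    def __init__(self, X, y):
        # training data is bound once, at construction
        self.X, self.y = X, y

    def search(self, search_config, n_iter=10, n_jobs=1,
               optimizer="RandomSearch", seed=0):
        """Run-specific settings live here. This toy version just
        evaluates n_iter random candidates and returns the best."""
        objective, space = search_config  # (callable, dict of lists)
        rng = random.Random(seed)
        best_score, best_params = float("-inf"), None
        for _ in range(n_iter):
            params = {k: rng.choice(v) for k, v in space.items()}
            score = objective(params, self.X, self.y)
            if score > best_score:
                best_score, best_params = score, params
        return best_params, best_score
```

The design lets one object with fixed data launch several searches with different budgets or optimizers.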
v1.1.1
- small API change
- extend progress bar information
- re-enable multiprocessing for the new API
v1.0.0
- new API that creates the model from a function and the search space from a dict
- enables more flexible usage (e.g. free use of framework, ensembles, nn-structure)
- 100% test coverage
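The "model by function, search space by dict" style can be sketched as follows (names are illustrative, and a trivial exhaustive loop stands in for the optimizer):

```python
def model(params):
    # any framework could run inside this function (sklearn, keras, ...);
    # here a plain formula returns the score to maximize
    x = params["x"]
    return -(x - 3) ** 2

search_space = {"x": list(range(10))}

# stand-in for the optimizer: evaluate every candidate
best = max(search_space["x"], key=lambda x: model({"x": x}))
```

Because the model is just a function, the user is free to build ensembles, custom neural-network structures, or anything else inside it.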
v0.4.2
- performance fixes for bayesian optimization and parallel tempering
- better default parameters for most optimizers
- better implementation for metrics
- add support for catboost
- integration of meta-learn code into hyperactive
- cleanup to avoid duplicated code
v0.4.1.2
- k-fold-cross validation works with keras models
- a cv of < 1 trains the model on a fraction of the training data and tests on the rest
- better testing and code-coverage
- fix of score and predict method
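The "cv < 1" behaviour above amounts to a single fractional train/test split rather than k folds. A sketch, assuming the split is taken from the front of the data (the function name is hypothetical):

```python
def fractional_split(X, y, cv):
    """For 0 < cv < 1, train on the first cv-fraction of the data
    and test on the remainder."""
    n_train = int(len(X) * cv)
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```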
v0.4.0
- improvement of optimizer class structure
- lower memory usage
- add testing of optimization process
- a lot of cleanup and several bug fixes (mostly parallel tempering)