Release 0.9.4: Python 3.12/3.13 support, standard logging, initial population seeding, and expanded model coverage (#41)
* Next version with this revision will be 0.9.4
* Modified: improved example
* Modified: default search spaces
* Added initial and default individual options
* Fixed MLflow test
* Improved plots to reduce size
* Updated Sphinx gallery examples
* Made plots responsive
* Added default parameters, pytest tests, and Sphinx gallery examples for CatBoost, LightGBM, and other scikit-learn ML models
* Added the n_trials_ and optimization_time_ attributes and a disable_file_output parameter to reduce overhead when fast execution is needed
* Added disable_file_output to reduce overhead in experiments
* Added optimization_time_ and n_trials_ to the Sphinx gallery examples
* Fixed parallelization by using joblib instead of multiprocessing
* Added: tests of ML models
* Updated and fixed: MLflow integration improved and documented, Phase 1 of the params work
* Updated documentation conf.py to resolve MLflow problems and fix SciPy
* Modified: version retrieved from function
* Added: default configuration files for scikit-learn ML models
* Updated README with the new features and a new example
* Fixed test
* Added tests for initial params in the genetic population and for disabling file output
* Removed TODO
* Improved .gitignore
* The library now follows the standard Python library logging pattern
* Added verbose option
* Added Python 3.12 support
* Added Python 3.13 support
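The new ``verbose`` option ties into the standard-logging change above. As a minimal sketch of how such a flag could map onto standard logging levels (the helper name and exact mapping are illustrative assumptions, not mloptimizer's API):

```python
import logging

# Hypothetical helper (not part of mloptimizer's public API) mapping
# a verbose flag to standard logging levels, per the docs below:
# 0 = silent, 1 = INFO (optimization lifecycle), 2 = DEBUG (detailed).
def verbosity_to_level(verbose: int) -> int:
    if verbose <= 0:
        return logging.CRITICAL + 1  # above every level: effectively silent
    if verbose == 1:
        return logging.INFO
    return logging.DEBUG
```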
docs/sections/Advanced/index.rst (+3 lines)
@@ -9,6 +9,7 @@ The advanced customization options in `mloptimizer` enable fine-tuning of the op
    score_functions
    reproducibility
    parallel
+   logging
@@ -20,4 +21,6 @@ Overview of Customization Options
 - **Parallel Processing**: Accelerate optimization by distributing computations across multiple cores. Parallel processing can significantly reduce runtime, especially for complex models or extensive hyperparameter spaces.
+
+- **Logging Configuration**: Configure logging output to monitor optimization progress, save logs to files, or integrate with your existing logging setup. mloptimizer follows the standard Python library logging pattern for maximum flexibility.

 Each section provides detailed guidance on implementing these advanced options.
docs/sections/Introduction/overview.rst (+18 lines)
@@ -131,6 +131,24 @@ Setting the same `seed` value across multiple runs will produce identical result
 On macOS with newer processor architectures (e.g., M1 or M2 chips), users may experience occasional reproducibility issues due to hardware-related differences in random number generation and floating-point calculations. To ensure consistency across runs, we recommend running `mloptimizer` within a Docker container configured for reproducible behavior. This approach helps isolate the environment and improves reproducibility on macOS hardware.

+Logging and Verbosity
+---------------------
+
+By default, `mloptimizer` runs silently without logging output. To enable logging, use the ``verbose`` parameter:
+
+.. code-block:: python
+
+    # Silent (default)
+    opt = GeneticSearch(..., verbose=0)
+
+    # Info level - shows optimization lifecycle
+    opt = GeneticSearch(..., verbose=1)
+
+    # Debug level - shows detailed info
+    opt = GeneticSearch(..., verbose=2)
+
+For more advanced logging configuration, see the :doc:`../Advanced/logging` section.
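Because the library follows the standard Python logging pattern, output can also be captured with a plain ``logging`` handler. A minimal sketch, assuming the library logs under a logger named ``mloptimizer`` (the logger name is an assumption; verify it against the library's source):

```python
import logging

# Attach a handler to the library's logger. The name "mloptimizer" is
# an assumption: libraries following the standard pattern typically log
# under their package name via logging.getLogger(__name__).
logger = logging.getLogger("mloptimizer")
logger.setLevel(logging.INFO)

# Console output with timestamps and level names
console = logging.StreamHandler()
console.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
)
logger.addHandler(console)

logger.info("logging configured")  # emitted through the handler above
```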
+MLflow is an open-source platform for managing the machine learning lifecycle, including experiment tracking, model versioning, and deployment. The `mloptimizer` library integrates seamlessly with MLflow to provide comprehensive tracking of genetic algorithm optimization runs, enabling you to monitor evolution progress, compare hyperparameter configurations, and analyze results.
+
+.. toctree::
+   :hidden:
+
+   mlflow_basics
+   mlflow_viewing
+   mlflow_remote
+
+Overview of MLflow Features
+----------------------------
+
+- **Experiment Tracking**: Automatically log all optimization runs with their configurations, metrics, and results. Track generation-level metrics to visualize how fitness evolves across generations.
+
+- **Result Visualization**: Use the MLflow UI to interactively explore runs, compare different optimization strategies, and analyze hyperparameter impact on model performance.
+
+- **Remote Tracking**: Configure MLflow to use remote tracking servers for team collaboration and centralized experiment management. Share optimization results across your organization.
+
+Each section provides detailed guidance on using MLflow with mloptimizer.
+
+Key Benefits
+------------
+
+**Generation-Level Tracking**
+    Every generation's best, average, and worst fitness scores are logged, allowing you to visualize the evolution of your population over time.
+
+**Comprehensive Metadata**
+    Dataset characteristics, optimization configuration, early stopping information, and timing metrics are automatically recorded.
+
+**Flexible Storage**
+    Use local file-based storage for quick experiments or configure remote MLflow servers with database backends for production deployments.
+
+**Seamless Integration**
+    Simply add ``use_mlflow=True`` to your ``GeneticSearch`` configuration - no additional code required.
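The generation-level tracking described above records best, average, and worst fitness per generation. As an illustrative sketch of such a per-generation summary (the ``generation_metrics`` helper and metric names are assumptions for illustration, not mloptimizer's actual API):

```python
import statistics

# Illustrative helper (not mloptimizer's API): summarize a population's
# fitness scores into the best/average/worst values that generation-level
# tracking would log for one generation.
def generation_metrics(fitnesses):
    return {
        "best_fitness": max(fitnesses),
        "avg_fitness": statistics.fmean(fitnesses),
        "worst_fitness": min(fitnesses),
    }
```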