Update dependency optuna to ==4.5.* - autoclosed #421
Closed
This PR contains the following updates:
optuna: ==4.3.* -> ==4.5.*

Release Notes
optuna/optuna (optuna)
v4.5.0 (Compare Source)
This is the release note of v4.5.0.
Highlights
GPSampler for constrained multi-objective optimization

GPSampler is now able to handle multiple objectives and constraints simultaneously using the newly introduced constrained LogEHVI acquisition function. The figures below show the difference between GPSampler (LogEHVI, unconstrained) and GPSampler (constrained LogEHVI, new feature). The 3-dimensional version of the C2DTLZ2 benchmark problem we used is a problem where some areas of the Pareto front of the original DTLZ2 problem are made infeasible by constraints; therefore, the Pareto front can be obtained even when constraints are not taken into account. Experimental results show that both LogEHVI and constrained LogEHVI can approximate the Pareto front, but the latter produces significantly fewer infeasible solutions, demonstrating its efficiency.
Significant speedup of TPESampler

TPESampler is significantly faster (about 5x, as listed in the table below), enabling a larger number of trials in each study. The speedup was achieved through a series of enhancements in constant factors. The following table shows the speed comparison of TPESampler between v4.4.0 and v4.5.0. The experiments were conducted using multivariate=True on a search space with 3 continuous parameters and 3 numerical discrete parameters. Each row shows the runtime for each number of objectives, and each column shows the runtime for each number of trials to be evaluated. Each runtime is shown along with the standard error over 3 random seeds, and the numbers in parentheses give the speedup factor relative to v4.4.0; for example, (5.1x) means the runtime of v4.5.0 is 5.1 times faster than that of v4.4.0.

[Table: runtimes by n_objectives / n_trials; not reproduced in this extract.]
Significant speedup of plot_hypervolume_history

plot_hypervolume_history is essential for assessing the performance of multi-objective optimization, but it was unbearably slow when a large number of trials were evaluated on a many-objective problem (more than three objectives). v4.5.0 addresses this issue by updating the hypervolume incrementally instead of calculating each hypervolume from scratch. The following figure shows the elapsed times of the hypervolume history plot in Optuna v4.4.0 and v4.5.0 on a four-objective problem. The x-axis represents the number of trials and the y-axis the elapsed time for each setup; the blue and red lines are the results of v4.4.0 and v4.5.0, respectively.
CmaEsSampler now supports 1D search spaces

Up until Optuna v4.4, CmaEsSampler could not handle a one-dimensional search space and fell back to random search. Optuna v4.5 now allows the CMA-ES algorithm to be used for one-dimensional spaces.

The optunahub library is available on conda-forge
Now, you can install the optunahub library via conda-forge as follows.
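The install command itself was not preserved in this extract; the standard conda-forge form would be:

```
conda install -c conda-forge optunahub
```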
New Features
- ConstrainedLogEHVI (#6198)
- GPSampler (#6224)
- CmaEsSampler (#6228)

Enhancements
- optuna._lightgbm_tuner module (optuna/optuna-integration#233, thanks @milkcoffeen!)
- qehvi_candidates_func (optuna/optuna-integration#242, thanks @LukeGT!)
- tell_with_warning to avoid unnecessary get_trial call (#6133)
- scipy-stubs in the type-check dependencies (#6174, thanks @jorenham!)
- GPSampler falls back to RandomSampler (#6179, thanks @sisird864!)
- GPSampler due to L-BFGS in SciPy v1.15 (#6191)
- ndtri_exp (#6194)
- TPESampler (#6200)
- _log_gauss_mass evaluations (#6202)
- _BatchedTruncNormDistributions by vectorization (#6220)
- is_pareto_front and using simple Python loops (#6223)
- ndtri_exp (#6229)
- plot_hypervolume_history (#6232)
- lru_cache to skip HSSP (#6240, thanks @fusawa-yugo!)

Bug Fixes
- GPSampler (#6181)
- TPESampler with multivariate and constant_liar (#6189)

Installation
Documentation
- GPSampler as a sampler that supports constraints (#6176, thanks @1kastner!)
- README.md (#6222, thanks @muhammadibrahim313!)

Examples
Tests
- test_log_completed_trial_skip_storage_access (#6208)

Code Fixes
- KernelParamsTensor towards cleaner GP-related modules (#6152)
- KernelParamsTensor to GPRegressor (#6153)
- v3.0.0.d.py (#6154, thanks @dross20!)
- optuna/_imports.py (#6167, thanks @AdrianStrymer!)
- optuna.artifacts._download.py (#6177, thanks @dross20!)
- is_categorical to search space (#6182)
- optuna.artifacts._list_artifact_meta.py (#6187, thanks @dross20!)
- GPSampler (#6195)
- SearchSpace in GP (#6197)
- _truncnorm (#6201)
- TYPE_CHECKING in optuna/_gp/acqf.py to avoid circular imports (#6204, thanks @CarvedCoder!)
- TYPE_CHECKING in optuna/_gp/optim_mixed.py to avoid circular imports (#6205, thanks @Subodh-12!)
- GPSampler (#6213)
- BaseGASampler (#6219)
- torch.newaxis with None for old PyTorch (#6237)

Continuous Integration
- README for blackdoc==0.3.10 (#6150)
- pytest-xdist to speed up the CI (#6170)
- test_get_timeline_plot_with_killed_running_trials (#6210)
- test_experimental (#6211)
- xfail (#6217)
- GPSampler (#6235)

Other
- README (optuna/optuna-integration#234, thanks @ParagEkbote!)
- 4.5.0.dev (optuna/optuna-integration#237)
- README (#6159)

Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@1kastner, @AdrianStrymer, @CarvedCoder, @Greesb, @HideakiImamura, @LukeGT, @ParagEkbote, @Subodh-12, @c-bata, @contramundum53, @dhyeyinf, @dross20, @fusawa-yugo, @gen740, @hvy, @jorenham, @kAIto47802, @ktns, @milkcoffeen, @muhammadibrahim313, @nabenabe0928, @not522, @nzw0301, @sawa3030, @sisird864, @toshihikoyanase, @unKnownNG, @vcovo, @y0z
v4.4.0 (Compare Source)
This is the release note of v4.4.0.
Highlights
In addition to new features, bug fixes, and improvements in documentation and testing, version 4.4 introduces a new tool called the Optuna MCP Server.
Optuna MCP Server
The Optuna MCP server can be accessed by any MCP client via uv. For instance, with Claude Desktop, simply add the following configuration to your MCP server settings file. Other LLM clients such as VS Code or Cline can be used similarly. You can also access it via Docker. If you want to persist the results, use the --storage option. For details, please refer to the repository.
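The settings file contents referred to above were not captured in this extract. As a sketch, a typical Claude Desktop entry might look like the following; the server name and exact command/args are assumptions that should be checked against the optuna-mcp repository:

```json
{
  "mcpServers": {
    "optuna": {
      "command": "uvx",
      "args": ["optuna-mcp"]
    }
  }
}
```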
Gaussian Process-Based Multi-objective Optimization
Optuna’s GPSampler, introduced in version 3.6, offers superior speed and performance compared to existing Bayesian optimization frameworks, particularly when handling objective functions with discrete variables. In Optuna v4.4, we have extended this GPSampler to support multi-objective optimization problems. The applications of multi-objective optimization are broad, and the new multi-objective capabilities introduced in this GPSampler are expected to find applications in fields such as material design, experimental design problems, and high-cost hyperparameter optimization.
GPSampler can be easily integrated into your program and performs well against the existing BoTorchSampler. We encourage you to try it out with your multi-objective optimization problems.
New Features in OptunaHub
During the development period of Optuna v4.4, several new features were also introduced to OptunaHub, the feature-sharing platform for Optuna:
Breaking Changes
- consider_prior Behavior and Remove Support for False (#6007)
- restart_strategy and inc_popsize to simplify CmaEsSampler (#6025)
- TPESampler keyword-only (#6041)

New Features
- AcquisitionFuncParams for LogEHVI (#6052)
- GPSampler (#6069)
- n_recent_trials to plot_timeline (#6110, thanks @msdsm!)

Enhancements
- TYPE_CHECKING of samplers/_gp/sampler.py (#6059)
- _tell_with_warning (#6079)
- _compute_3d for hypervolume computation (#6112, thanks @shmurai!)
- plot_hypervolume_history (#6115, thanks @shmurai!)
- convert_positional_args (#6117, thanks @shmurai!)
- Study.best_trial performance by avoiding unnecessary deep copy (#6119, thanks @msdsm!)
- assume_pareto for hv calculation in _calculate_weights_below_for_multi_objective (#6129)

Bug Fixes
- request.values in OptunaStorageProxyService (#6044, thanks @hitsgub!)
- BruteForceSampler for HyperbandPruner (#6107)

Documentation
- optuna.pruners.MedianPruner and optuna.pruners.PatientPruner (#6055, thanks @ParagEkbote!)
- GPSampler (#6081)
- _get_best_trial to follow coding conventions (#6122)

Examples
- RAEDME.md (optuna/optuna-examples#323)
- tensorflow and numpy (optuna/optuna-examples#324)

Tests
- test_base_gasampler.py (#6104)
- n_recent_trials of plot_timeline (follow-up #6110) (#6116)
- test_study.py by removing redundancy (#6120)

Code Fixes
- optuna/_experimental.py (#6045, thanks @ParagEkbote!)
- optuna/importance/_base.py (#6046, thanks @ParagEkbote!)
- optuna/_convert_positional_args.py (#6050, thanks @ParagEkbote!)
- optuna/_deprecated.py (#6051, thanks @ParagEkbote!)
- optuna/_gp/gp.py (#6053, thanks @ParagEkbote!)
- eta in sbx (#6056, thanks @hrntsm!)
- CmaEsAttrKeys and _attr_keys for Simplification (#6068)
- np.isnan with math.isnan (#6080)
- _tell_with_warning (#6082)
- optuna/distributions.py (#6086, thanks @AdrianStrymer!)
- optuna/_gp/gp.py (#6090, thanks @Samarthi!)
- ExperimentalWarning if heartbeat is enabled (#6106, thanks @lan496!)
- optuna/visualization/_terminator_improvement.py (#6139, thanks @Prashantdhaka23!)
- optim_mixed.py (#6140, thanks @Ajay-Satish-01!)
- test_trial.py (#6141, thanks @saishreyakumar!)
- optuna/storages/_rdb/models.py for consistency among the codebase (#6143, thanks @Shubham05122002!)

Continuous Integration
- wandb (optuna/optuna-integration#228)
- checks-optional CI on the fork repositories (#6103)
- blackdoc (#6145)

Other
Thanks to All the Contributors!
This release was made possible by the authors and the people who participated in the reviews and discussions.
@AdrianStrymer, @Ajay-Satish-01, @Alnusjaponica, @Copilot, @HideakiImamura, @ParagEkbote, @Prashantdhaka23, @Samarthi, @Shubham05122002, @SubhadityaMukherjee, @c-bata, @contramundum53, @copilot-pull-request-reviewer[bot], @fusawa-yugo, @gen740, @himkt, @hitsgub, @hrntsm, @kAIto47802, @lan496, @leevers, @milkcoffeen, @msdsm, @nabenabe0928, @not522, @nzw0301, @saishreyakumar, @sawa3030, @shmurai, @toshihikoyanase, @y0z
Configuration
📅 Schedule: Branch creation - Between 12:00 AM and 03:59 AM, only on Monday ( * 0-3 * * 1 ) (UTC), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.