
@renovate renovate bot commented Aug 25, 2025

This PR contains the following updates:

| Package | Change |
| --- | --- |
| optuna | `==4.3.*` -> `==4.5.*` |

Release Notes

optuna/optuna (optuna)

v4.5.0

Compare Source

This is the release note of v4.5.0.

Highlights

GPSampler for constrained multi-objective optimization

GPSampler is now able to handle multiple objectives and constraints simultaneously using the newly introduced constrained LogEHVI acquisition function.

The figures below compare GPSampler with the unconstrained LogEHVI acquisition function against GPSampler with the new constrained LogEHVI. We used the 3-dimensional version of the C2DTLZ2 benchmark problem, in which constraints make some areas of the Pareto front of the original DTLZ2 problem infeasible; the Pareto front can therefore still be obtained even when the constraints are ignored. The experimental results show that both LogEHVI and constrained LogEHVI can approximate the Pareto front, but the latter yields significantly fewer infeasible solutions, demonstrating its efficiency.

Figure: Pareto fronts obtained by Optuna v4.4 (LogEHVI) and Optuna v4.5 (constrained LogEHVI).

Significant speedup of TPESampler

TPESampler is now significantly faster, by roughly 5x as listed in the table below. This enables a larger number of trials in each study. The speedup was achieved through a series of constant-factor enhancements.

The following table compares the speed of TPESampler between v4.4.0 and v4.5.0. The experiments were conducted with multivariate=True on a search space with 3 continuous parameters and 3 numerical discrete parameters. Each row corresponds to a number of objectives and each column to a number of trials. Each runtime is shown with the standard error over 3 random seeds, and the number in parentheses is the speedup factor relative to v4.4.0; for example, (5.1x) means v4.5.0 is 5.1 times faster than v4.4.0.

| n_objectives / n_trials | 500 | 1000 | 1500 | 2000 |
| --- | --- | --- | --- | --- |
| 1 | 1.4 $\pm$ 0.03 (5.1x) | 3.9 $\pm$ 0.07 (5.3x) | 7.3 $\pm$ 0.09 (5.4x) | 11.9 $\pm$ 0.10 (5.4x) |
| 2 | 1.8 $\pm$ 0.01 (4.7x) | 4.7 $\pm$ 0.02 (4.8x) | 8.7 $\pm$ 0.03 (4.8x) | 13.9 $\pm$ 0.04 (4.9x) |
| 3 | 2.0 $\pm$ 0.01 (4.2x) | 5.4 $\pm$ 0.03 (4.4x) | 10.0 $\pm$ 0.03 (4.6x) | 15.9 $\pm$ 0.03 (4.7x) |
| 4 | 4.2 $\pm$ 0.11 (3.2x) | 12.1 $\pm$ 0.14 (3.9x) | 20.9 $\pm$ 0.23 (4.2x) | 31.3 $\pm$ 0.05 (4.4x) |
| 5 | 12.1 $\pm$ 0.59 (4.7x) | 30.8 $\pm$ 0.16 (5.8x) | 50.7 $\pm$ 0.46 (6.5x) | 72.8 $\pm$ 1.13 (7.1x) |

Significant speedup of plot_hypervolume_history

plot_hypervolume_history is essential for assessing the performance of multi-objective optimization, but it was prohibitively slow when a large number of trials were evaluated on a many-objective problem (more than 3 objectives). v4.5.0 addresses this by updating the hypervolume incrementally instead of recomputing each hypervolume from scratch.

The following figure shows the elapsed time of the hypervolume history plot in Optuna v4.4.0 and v4.5.0 on a four-objective problem. The x-axis represents the number of trials and the y-axis the elapsed time for each setup; the blue and red lines show the results of v4.4.0 and v4.5.0, respectively.

Speedup of plot_hypervolume_history

CmaEsSampler now supports 1D search space

Up until Optuna v4.4, CmaEsSampler could not handle one-dimensional search spaces and fell back to random search. Optuna v4.5 allows the CMA-ES algorithm to be used for one-dimensional search spaces as well.

The optunahub library is available on conda-forge

Now you can install the optunahub library via conda-forge as follows:

conda install conda-forge::optunahub

New Features

  • Add ConstrainedLogEHVI (#​6198)
  • Add support for constrained multi-objective optimization in GPSampler (#​6224)
  • Support 1D Search Spaces in CmaEsSampler (#​6228)

Enhancements

  • Move optuna._lightgbm_tuner module (optuna/optuna-integration#233, thanks @​milkcoffeen!)
  • Fix numerical issue warning on qehvi_candidates_func (optuna/optuna-integration#242, thanks @​LukeGT!)
  • Calculate hypervolume in HSSP using sum of contributions (#​6130)
  • Use hypervolume difference as upperbound of contribs in HSSP (#​6131)
  • Refactor tell_with_warning to avoid unnecessary get_trial call (#​6133)
  • Print fully qualified name of experimental function by default (#​6162, thanks @​ktns!)
  • Include scipy-stubs in the type-check dependencies (#​6174, thanks @​jorenham!)
  • Warn when GPSampler falls back to RandomSampler (#​6179, thanks @​sisird864!)
  • Handle slowdown of GPSampler due to L-BFGS in SciPy v1.15 (#​6191)
  • Use the Newton method instead of bisect in ndtri_exp (#​6194)
  • Speed up erf for TPESampler (#​6200)
  • Avoid duplications in _log_gauss_mass evaluations (#​6202)
  • Remove unnecessary NumPy usage (#​6215)
  • Use subset comparator to judge if trials are included in search space (#​6218)
  • Speed up log pdf in _BatchedTruncNormDistributions by vectorization (#​6220)
  • Speed up WFG by skipping is_pareto_front and using simple Python loops (#​6223)
  • Vectorize ndtri_exp (#​6229)
  • Speed up plot_hypervolume_history (#​6232)
  • Speed up HSSP 4D+ by using a decremental approach (#​6234)
  • Use lru_cache to skip HSSP (#​6240, thanks @​fusawa-yugo!)
  • Add hypervolume computation for a zero size array (#​6245)

Bug Fixes

  • Fix: Resolve PG17 incompatibility for ENUMS in CASE statements (#​6099, thanks @​vcovo!)
  • Fix ill-combination of journal and gRPC (#​6175)
  • Fix a bug in constrained GPSampler (#​6181)
  • Fix TPESampler with multivariate and constant_liar (#​6189)

Installation

  • Remove version constraint for torch with Python 3.13 (#​6233)

Documentation

Examples

Tests

  • Fix test_log_completed_trial_skip_storage_access (#​6208)

Code Fixes

Continuous Integration

Other

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@​1kastner, @​AdrianStrymer, @​CarvedCoder, @​Greesb, @​HideakiImamura, @​LukeGT, @​ParagEkbote, @​Subodh-12, @​c-bata, @​contramundum53, @​dhyeyinf, @​dross20, @​fusawa-yugo, @​gen740, @​hvy, @​jorenham, @​kAIto47802, @​ktns, @​milkcoffeen, @​muhammadibrahim313, @​nabenabe0928, @​not522, @​nzw0301, @​sawa3030, @​sisird864, @​toshihikoyanase, @​unKnownNG, @​vcovo, @​y0z

v4.4.0

Compare Source

This is the release note of v4.4.0.

Highlights

In addition to new features, bug fixes, and improvements in documentation and testing, version 4.4 introduces a new tool called the Optuna MCP Server.

Optuna MCP Server

The Optuna MCP server can be accessed by any MCP client via uv. For instance, with Claude Desktop, simply add the following configuration to your MCP server settings file. Other LLM clients such as VS Code or Cline can be used similarly, and you can also access it via Docker. If you want to persist the results, you can use the --storage option. For details, please refer to the repository.

{
  "mcpServers": {
    … (Other MCP Servers' settings)
    "Optuna": {
      "command": "uvx",
      "args": [
        "optuna-mcp"
      ]
    }
  }
}


Gaussian Process-Based Multi-objective Optimization

Optuna’s GPSampler, introduced in version 3.6, offers superior speed and performance compared to existing Bayesian optimization frameworks, particularly when handling objective functions with discrete variables. In Optuna v4.4, we have extended this GPSampler to support multi-objective optimization problems. The applications of multi-objective optimization are broad, and the new multi-objective capabilities introduced in this GPSampler are expected to find applications in fields such as material design, experimental design problems, and high-cost hyperparameter optimization.

GPSampler can be easily integrated into your program and performs well against the existing BoTorchSampler. We encourage you to try it out with your multi-objective optimization problems.

sampler = optuna.samplers.GPSampler()
study = optuna.create_study(directions=["minimize", "minimize"], sampler=sampler)


New Features in OptunaHub

During the development period of Optuna v4.4, several new features were also introduced to OptunaHub, the feature-sharing platform for Optuna:

  • Vizier sampler performance
  • TPE acquisition visualizer

Breaking Changes

  • Update consider_prior Behavior and Remove Support for False (#​6007)
  • Remove restart_strategy and inc_popsize to simplify CmaEsSampler (#​6025)
  • Make all arguments of TPESampler keyword-only (#​6041)

New Features

  • Add a module to preprocess solutions for hypervolume improvement calculation (#​6039)
  • Add AcquisitionFuncParams for LogEHVI (#​6052)
  • Support Multi-Objective Optimization GPSampler (#​6069)
  • Add n_recent_trials to plot_timeline (#​6110, thanks @​msdsm!)

Enhancements

  • Adapt TYPE_CHECKING of samplers/_gp/sampler.py (#​6059)
  • Avoid deepcopy in _tell_with_warning (#​6079)
  • Add _compute_3d for hypervolume computation (#​6112, thanks @​shmurai!)
  • Improve performance of plot_hypervolume_history (#​6115, thanks @​shmurai!)
  • add deprecated/removed version specification to calls of convert_positional_args (#​6117, thanks @​shmurai!)
  • Optimize Study.best_trial performance by avoiding unnecessary deep copy (#​6119, thanks @​msdsm!)
  • Refactor and speed up HV3D (#​6124)
  • Add assume_pareto for hv calculation in _calculate_weights_below_for_multi_objective (#​6129)

Bug Fixes

Documentation

Examples

Tests

  • Add float precision tests for storages (#​6040)
  • Refactor test_base_gasampler.py (#​6104)
  • chore: run tests for importance only with in-memory (#​6109)
  • Improve test cases for n_recent_trials of plot_timeline (follow-up #​6110) (#​6116)
  • Performance optimization for test_study.py by removing redundancy (#​6120)

Code Fixes

Continuous Integration

Other

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

@​AdrianStrymer, @​Ajay-Satish-01, @​Alnusjaponica, @​Copilot, @​HideakiImamura, @​ParagEkbote, @​Prashantdhaka23, @​Samarthi, @​Shubham05122002, @​SubhadityaMukherjee, @​c-bata, @​contramundum53, @​copilot-pull-request-reviewer[bot], @​fusawa-yugo, @​gen740, @​himkt, @​hitsgub, @​hrntsm, @​kAIto47802, @​lan496, @​leevers, @​milkcoffeen, @​msdsm, @​nabenabe0928, @​not522, @​nzw0301, @​saishreyakumar, @​sawa3030, @​shmurai, @​toshihikoyanase, @​y0z


Configuration

📅 Schedule: Branch creation - Between 12:00 AM and 03:59 AM, only on Monday ( * 0-3 * * 1 ) (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot changed the title from "Update dependency optuna to ==4.5.*" to "Update dependency optuna to ==4.5.* - autoclosed" Aug 26, 2025
@renovate renovate bot closed this Aug 26, 2025
@renovate renovate bot deleted the renovate/optuna-4.x branch August 26, 2025 06:24