Commit def42e3

SebastianM-C and Claude committed
fix deprecations
Co-authored-by: Claude <noreply@anthropic.com>
1 parent d6cea4a commit def42e3

16 files changed: +49 −51 lines changed


docs/src/API/ad.md

Lines changed: 2 additions & 2 deletions
@@ -7,7 +7,7 @@ The choices for the auto-AD fill-ins with quick descriptions are:
 - `AutoTracker()`: Like ReverseDiff but GPU-compatible
 - `AutoZygote()`: The fastest choice for non-mutating array-based (BLAS) functions
 - `AutoFiniteDiff()`: Finite differencing, not optimal but always applicable
-- `AutoModelingToolkit()`: The fastest choice for large scalar optimizations
+- `AutoSymbolics()`: The fastest choice for large scalar optimizations
 - `AutoEnzyme()`: Highly performant AD choice for type stable and optimized code
 - `AutoMooncake()`: Like Zygote and ReverseDiff, but supports GPU and mutating code

@@ -21,7 +21,7 @@ OptimizationBase.AutoFiniteDiff
 OptimizationBase.AutoReverseDiff
 OptimizationBase.AutoZygote
 OptimizationBase.AutoTracker
-OptimizationBase.AutoModelingToolkit
+OptimizationBase.AutoSymbolics
 OptimizationBase.AutoEnzyme
 ADTypes.AutoMooncake
 ```
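
The renamed entry drops into the same slot of `OptimizationFunction` as any other adtype in this list. A minimal sketch, assuming Optimization.jl is installed and using an arbitrary Rosenbrock objective (not part of the diff above):

```julia
# Minimal sketch: AutoSymbolics() goes exactly where AutoModelingToolkit() used to.
using Optimization

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2

optf_zyg = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
optf_sym = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())  # formerly AutoModelingToolkit()
```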

docs/src/API/modelingtoolkit.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ optimization of code. Optimizers can better interface with the extra
 symbolic information provided by the system.
 
 There are two ways that the user interacts with ModelingToolkit.jl.
-One can use `OptimizationFunction` with `AutoModelingToolkit` for
+One can use `OptimizationFunction` with `AutoSymbolics` for
 automatically transforming numerical codes into symbolic codes. See
 the [OptimizationFunction documentation](@ref optfunction) for more
 details.
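
A sketch of that first interaction mode, mirroring the Optim example elsewhere in this commit; it assumes Optimization.jl, OptimizationOptimJL.jl, and ModelingToolkit.jl are installed:

```julia
# The plain numeric objective is traced into symbolic code via the adtype;
# ModelingToolkit is loaded here so the symbolic extension is available.
using Optimization, OptimizationOptimJL, ModelingToolkit

rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]

f = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
prob = OptimizationProblem(f, x0, p)
sol = solve(prob, Optim.Newton())  # Newton can use the symbolically generated Hessian
```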

docs/src/examples/rosenbrock.md

Lines changed: 4 additions & 4 deletions
@@ -5,7 +5,7 @@ flexibility of Optimization.jl. This is a gauntlet of many solvers to get a feel
 for common workflows of the package and give copy-pastable starting points.
 
 !!! note
-
+
 This example uses many different solvers of Optimization.jl. Each solver
 subpackage needs to be installed separate. For example, for the details on
 the installation and usage of OptimizationOptimJL.jl package, see the

@@ -14,12 +14,12 @@ for common workflows of the package and give copy-pastable starting points.
 The objective of this exercise is to determine the $(x, y)$ value pair that minimizes the result of a Rosenbrock function $f$ with some parameter values $a$ and $b$. The Rosenbrock function is useful for testing because it is known *a priori* to have a global minimum at $(a, a^2)$.
 ```math
 f(x,\,y;\,a,\,b) = \left(a - x\right)^2 + b \left(y - x^2\right)^2
-```
+```
 
 The Optimization.jl interface expects functions to be defined with a vector of optimization arguments $\bar{x}$ and a vector of parameters $\bar{p}$, i.e.:
 ```math
 f(\bar{x},\,\bar{p}) = \left(p_1 - x_1\right)^2 + p_2 \left(x_2 - x_1^2\right)^2
-```
+```
 
 Parameters $a$ and $b$ are captured in a vector $\bar{p}$ and assigned some arbitrary values to produce a particular Rosenbrock function to be minimized.
 ```math

@@ -164,7 +164,7 @@ sol = solve(prob, CMAEvolutionStrategyOpt())
 
 ```@example rosenbrock
 using OptimizationNLopt, ModelingToolkit
-optf = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
+optf = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
 prob = OptimizationProblem(optf, x0, _p)
 
 sol = solve(prob, Opt(:LN_BOBYQA, 2))
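
To make the updated NLopt snippet copy-pastable on its own, here is a hedged reassembly; the `rosenbrock`, `x0`, and `_p` definitions are assumptions mirroring the earlier part of that example page:

```julia
# Standalone version of the snippet above; the objective and data definitions
# are assumed, matching the usual Rosenbrock setup in these docs.
using Optimization, OptimizationNLopt, ModelingToolkit

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
_p = [1.0, 100.0]

optf = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
prob = OptimizationProblem(optf, x0, _p)
sol = solve(prob, Opt(:LN_BOBYQA, 2))  # NLopt's derivative-free BOBYQA in dimension 2
```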

docs/src/optimization_packages/mathoptinterface.md

Lines changed: 2 additions & 2 deletions
@@ -20,7 +20,7 @@ the `maxtime` common keyword argument.
 `OptimizationMOI` supports an argument `mtkize` which takes a boolean (default to `false`)
 that allows automatic symbolic expression generation, this allows using any AD backend with
 solvers or interfaces such as AmplNLWriter that require the expression graph of the objective
-and constraints. This always happens automatically in the case of the `AutoModelingToolkit`
+and constraints. This always happens automatically in the case of the `AutoSymbolics`
 `adtype`.
 
 An optimizer which supports the `MathOptInterface` API can be called

@@ -94,7 +94,7 @@ The following shows how to use integer linear programming within `Optimization`.
 [Juniper documentation](https://github.com/lanl-ansi/Juniper.jl) for more
 detail.
 - The integer domain is inferred based on the bounds of the variable:
-
+
   + Setting the lower bound to zero and the upper bound to one corresponds to `MOI.ZeroOne()` or a binary decision variable
   + Providing other or no bounds corresponds to `MOI.Integer()`
 
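
A sketch of that bound-based inference, loosely following the integer-programming tests touched by this commit; the coefficient data, the capacity bound, and the Juniper/Ipopt pairing are illustrative assumptions:

```julia
# lb = 0, ub = 1 together with int = true marks variables as binary (MOI.ZeroOne());
# other or absent bounds with int = true yield general integers (MOI.Integer()).
using Optimization, OptimizationMOI, LinearAlgebra
import Juniper, Ipopt

v = [1.0, 2.0, 4.0, 3.0]   # hypothetical objective coefficients
w = [5.0, 4.0, 3.0, 2.0]   # hypothetical constraint weights
u0 = zeros(4)

optfun = OptimizationFunction((u, p) -> -v'u, Optimization.AutoForwardDiff();
    cons = (res, u, p) -> res .= [w'u])
optprob = OptimizationProblem(optfun, u0; lb = zero.(u0), ub = one.(u0),
    int = ones(Bool, length(u0)), lcons = [-Inf], ucons = [8.0])

nl = OptimizationMOI.MOI.OptimizerWithAttributes(Ipopt.Optimizer, "print_level" => 0)
sol = solve(optprob, OptimizationMOI.MOI.OptimizerWithAttributes(Juniper.Optimizer, "nl_solver" => nl))
```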

docs/src/optimization_packages/optim.md

Lines changed: 1 addition & 1 deletion
@@ -340,7 +340,7 @@ using Optimization, OptimizationOptimJL, ModelingToolkit
 rosenbrock(x, p) = (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
 x0 = zeros(2)
 p = [1.0, 100.0]
-f = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
+f = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics())
 prob = Optimization.OptimizationProblem(f, x0, p)
 sol = solve(prob, Optim.Newton())
 ```

docs/src/tutorials/constraints.md

Lines changed: 1 addition & 1 deletion
@@ -81,7 +81,7 @@ x_1 * x_2 = 0.5
 ```
 
 ```@example constraints
-optprob = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit(), cons = cons)
+optprob = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics(), cons = cons)
 prob = OptimizationProblem(optprob, x0, _p, lcons = [1.0, 0.5], ucons = [1.0, 0.5])
 ```
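
The `cons` passed above is defined earlier in that tutorial; a sketch consistent with the hunk header (x_1 * x_2 = 0.5), the unit-circle constraint, and `lcons = ucons = [1.0, 0.5]` would be:

```julia
# Assumed in-place constraint function matching lcons = ucons = [1.0, 0.5]:
# first residual is the circle constraint, second the product constraint.
cons(res, x, p) = (res .= [x[1]^2 + x[2]^2, x[1] * x[2]])
```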

docs/src/tutorials/linearandinteger.md

Lines changed: 1 addition & 1 deletion
@@ -91,7 +91,7 @@ objective = (u, p) -> (v = p[1:5]; dot(v, u))
 
 cons = (res, u, p) -> (w = p[6:10]; res .= [sum(w[i] * u[i]^2 for i in 1:5)])
 
-optf = OptimizationFunction(objective, Optimization.AutoModelingToolkit(), cons = cons)
+optf = OptimizationFunction(objective, Optimization.AutoSymbolics(), cons = cons)
 optprob = OptimizationProblem(optf,
     zeros(5),
     vcat(v, w);
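
The `v` and `w` packed into `vcat(v, w)` are coefficient vectors defined earlier on that tutorial page; the sketch below fills them with placeholder values purely to show how the pieces compose (the constraint bounds are also placeholders):

```julia
# Placeholder data: v weights the linear objective, w weights the quadratic constraint.
using Optimization, LinearAlgebra

v = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical
w = fill(0.5, 5)                # hypothetical

objective = (u, p) -> (v = p[1:5]; dot(v, u))
cons = (res, u, p) -> (w = p[6:10]; res .= [sum(w[i] * u[i]^2 for i in 1:5)])

optf = OptimizationFunction(objective, Optimization.AutoSymbolics(), cons = cons)
optprob = OptimizationProblem(optf, zeros(5), vcat(v, w);
    lcons = [0.0], ucons = [4.0])   # placeholder constraint bounds
```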

lib/OptimizationBase/ext/OptimizationMTKExt.jl

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ module OptimizationMTKExt
 import OptimizationBase, OptimizationBase.ArrayInterface
 import SciMLBase
 import SciMLBase: OptimizationFunction
-import OptimizationBase.ADTypes: AutoModelingToolkit, AutoSymbolics, AutoSparse
+import OptimizationBase.ADTypes: AutoSymbolics, AutoSparse
 using ModelingToolkit
 
 function OptimizationBase.instantiate_function(

lib/OptimizationMOI/src/moi.jl

Lines changed: 6 additions & 6 deletions
@@ -16,14 +16,14 @@ function MOIOptimizationCache(prob::OptimizationProblem, opt; kwargs...)
     f = prob.f
     reinit_cache = OptimizationBase.ReInitCache(prob.u0, prob.p)
     if isnothing(f.sys)
-        if f.adtype isa OptimizationBase.AutoModelingToolkit
+        if f.adtype isa OptimizationBase.AutoSymbolics
             num_cons = prob.ucons === nothing ? 0 : length(prob.ucons)
             f = OptimizationBase.instantiate_function(prob.f,
                 reinit_cache,
                 prob.f.adtype,
                 num_cons)
         else
-            throw(ArgumentError("Expected an `OptimizationProblem` that was setup via an `OptimizationSystem`, or AutoModelingToolkit ad choice"))
+            throw(ArgumentError("Expected an `OptimizationProblem` that was setup via an `OptimizationSystem`, or AutoSymbolics ad choice"))
         end
     end
 

@@ -35,16 +35,16 @@ function MOIOptimizationCache(prob::OptimizationProblem, opt; kwargs...)
     cons_expr = Vector{Expr}(undef, length(cons))
     Threads.@sync for i in eachindex(cons)
         Threads.@spawn if prob.lcons[i] == prob.ucons[i] == 0
-            cons_expr[i] = Expr(:call, :(==),
+            cons_expr[i] = Expr(:call, :(==),
                 repl_getindex!(convert_to_expr(f.cons_expr[i],
                     expr_map;
-                    expand_expr = false)), 0)
+                    expand_expr = false)), 0)
         else
             # MTK canonicalizes the expression form
-            cons_expr[i] = Expr(:call, :(<=),
+            cons_expr[i] = Expr(:call, :(<=),
                 repl_getindex!(convert_to_expr(f.cons_expr[i],
                     expr_map;
-                    expand_expr = false)), 0)
+                    expand_expr = false)), 0)
         end
     end
 
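
From the user's side, the branch above means the expression-graph cache path only accepts problems carrying symbolic information, either from an `OptimizationSystem` or from the `AutoSymbolics` adtype. A sketch of the adtype route, mirroring the AmplNLWriter test in this commit (the objective, constraints, and bounds come from that test; the rest of the setup is assumed):

```julia
# The cache path needs expression graphs; AutoSymbolics supplies them for a plain
# Julia objective (ModelingToolkit is loaded so the symbolic extension is active).
using Optimization, OptimizationMOI, ModelingToolkit
import AmplNLWriter, Ipopt_jll

rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
cons(res, x, p) = (res .= [x[1]^2 + x[2]^2, x[1] * x[2]])

optprob = OptimizationFunction(rosenbrock, Optimization.AutoSymbolics(); cons = cons)
prob = OptimizationProblem(optprob, zeros(2), [1.0, 100.0],
    lcons = [1.0, 0.5], ucons = [1.0, 0.5])
sol = solve(prob, AmplNLWriter.Optimizer(Ipopt_jll.amplexe))  # avoids the ArgumentError shown above
```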

lib/OptimizationMOI/test/runtests.jl

Lines changed: 10 additions & 12 deletions
@@ -37,7 +37,7 @@ end
 _p = [1.0, 100.0]
 cons_circ = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
 optprob = OptimizationFunction(
-    rosenbrock, OptimizationBase.AutoZygote();
+    rosenbrock, AutoZygote();
     cons = cons_circ)
 prob = OptimizationProblem(optprob, x0, _p, ucons = [Inf], lcons = [0.0])
 evaluator = init(prob, Ipopt.Optimizer()).evaluator

@@ -63,7 +63,7 @@ end
 _p = [1.0, 100.0]
 l1 = rosenbrock(x0, _p)
 
-optprob = OptimizationFunction((x, p) -> -rosenbrock(x, p), OptimizationBase.AutoZygote())
+optprob = OptimizationFunction((x, p) -> -rosenbrock(x, p), AutoZygote())
 prob = OptimizationProblem(optprob, x0, _p; sense = OptimizationBase.MaxSense)
 
 callback = function (state, l)

@@ -79,7 +79,7 @@ end
 sol = solve!(cache)
 @test 10 * sol.objective < l1
 
-optprob = OptimizationFunction(rosenbrock, OptimizationBase.AutoZygote())
+optprob = OptimizationFunction(rosenbrock, AutoZygote())
 prob = OptimizationProblem(optprob, x0, _p; sense = OptimizationBase.MinSense)
 
 opt = Ipopt.Optimizer()

@@ -126,7 +126,7 @@ end
 
 cons_circ = (res, x, p) -> res .= [x[1]^2 + x[2]^2]
 optprob = OptimizationFunction(
-    rosenbrock, OptimizationBase.AutoModelingToolkit(true, true);
+    rosenbrock, AutoSparse(AutoSymbolics());
     cons = cons_circ)
 prob = OptimizationProblem(optprob, x0, _p, ucons = [Inf], lcons = [0.0])
 

@@ -141,10 +141,8 @@ end
 
 @testset "backends" begin
     backends = (
-        OptimizationBase.AutoModelingToolkit(false, false),
-        OptimizationBase.AutoModelingToolkit(true, false),
-        OptimizationBase.AutoModelingToolkit(false, true),
-        OptimizationBase.AutoModelingToolkit(true, true))
+        AutoSymbolics(),
+        AutoSparse(AutoSymbolics()))
     for backend in backends
         @testset "$backend" begin
             _test_sparse_derivatives_hs071(backend, Ipopt.Optimizer())

@@ -167,7 +165,7 @@ end
 u0 = [0.0, 0.0, 0.0, 1.0]
 
 optfun = OptimizationFunction((u, p) -> -v'u, cons = (res, u, p) -> res .= w'u,
-    OptimizationBase.AutoForwardDiff())
+    AutoForwardDiff())
 
 optprob = OptimizationProblem(optfun, u0; lb = zero.(u0), ub = one.(u0),
     int = ones(Bool, length(u0)),

@@ -185,7 +183,7 @@ end
 u0 = [1.0]
 
 optfun = OptimizationFunction((u, p) -> sum(abs2, x * u[1] .- y),
-    OptimizationBase.AutoForwardDiff())
+    AutoForwardDiff())
 
 optprob = OptimizationProblem(optfun, u0; lb = one.(u0), ub = 6.0 .* u0,
     int = ones(Bool, length(u0)))

@@ -264,7 +262,7 @@ end
 
 cons(res, x, p) = (res .= [x[1]^2 + x[2]^2, x[1] * x[2]])
 
-optprob = OptimizationFunction(rosenbrock, OptimizationBase.AutoModelingToolkit();
+optprob = OptimizationFunction(rosenbrock, AutoSymbolics();
     cons = cons)
 prob = OptimizationProblem(optprob, x0, _p, lcons = [1.0, 0.5], ucons = [1.0, 0.5])
 sol = solve(prob, AmplNLWriter.Optimizer(Ipopt_jll.amplexe))

@@ -285,7 +283,7 @@ end
 end
 lag_hess_prototype = sparse([1 1; 0 1])
 
-optprob = OptimizationFunction(rosenbrock, OptimizationBase.AutoForwardDiff();
+optprob = OptimizationFunction(rosenbrock, AutoForwardDiff();
     cons = cons, lag_h = lagh, lag_hess_prototype)
 prob = OptimizationProblem(optprob, x0, _p, lcons = [1.0, 0.5], ucons = [1.0, 0.5])
 sol = solve(prob, Ipopt.Optimizer())
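
The backend swaps in these tests follow one pattern: the deprecated `AutoModelingToolkit(...)` constructor is replaced by `AutoSymbolics()` for dense symbolic derivatives and by `AutoSparse(AutoSymbolics())` when sparsity is wanted. A short sketch of the mapping (the dense/sparse reading of the old boolean flags is inferred from the test change, not restated from a docstring):

```julia
# Migration pattern used throughout this commit (both types come from ADTypes.jl):
using ADTypes: AutoSymbolics, AutoSparse

dense_backend  = AutoSymbolics()              # replaces AutoModelingToolkit(false, false)
sparse_backend = AutoSparse(AutoSymbolics())  # replaces AutoModelingToolkit(true, true)
```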
