Allocation-free access to evaluator #520
Open
+440
−411
I did this some months ago in an attempt to see if we could get allocation-free iterations after jump-dev/MathOptInterface.jl#2740.
Before this PR, it relies on calling `MOI.eval_objective(model, x)`, which accesses `model.nlp_data.evaluator`. This is type-unstable since the type of `evaluator` is not concrete: https://github.com/jump-dev/MathOptInterface.jl/blob/034a5272dad29a233783cc49ed7e15195450dd96/src/nlp.jl#L176
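A minimal sketch of the underlying issue (these types are illustrative stand-ins, not the actual MOI definitions): a field declared with an abstract type forces a dynamic lookup on every access, while a parameterized field is fully inferred.

```julia
# Illustrative only: stand-ins for the MOI structures.
abstract type AbstractEvaluator end

struct MyEvaluator <: AbstractEvaluator end
objective_value(::MyEvaluator, x) = sum(abs2, x)

struct UnstableModel
    evaluator::AbstractEvaluator  # not concrete: inference stops here
end

struct StableModel{E<:AbstractEvaluator}
    evaluator::E                  # concrete once E is known
end

eval_obj(model, x) = objective_value(model.evaluator, x)
```

Calling `eval_obj` on an `UnstableModel` dispatches dynamically (and allocates) on every call, whereas the parameterized version is statically dispatched.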
So the fix in this PR is to give Ipopt closures that capture the evaluator, not the model.
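A minimal sketch of the idea (names are illustrative, not the actual Ipopt.jl or MOI code): constructing the closure behind a function barrier lets it capture the evaluator at its concrete type.

```julia
# Illustrative sketch, not the actual PR code.
struct Evaluator end
eval_objective(::Evaluator, x) = sum(abs2, x)  # stand-in for MOI.eval_objective

mutable struct ToyModel
    evaluator::Any  # abstract field, like `model.nlp_data.evaluator`
end

# Function barrier: inside `make_objective` the evaluator has a concrete
# type, so the returned closure captures it concretely and calls through
# it are statically dispatched.
make_objective(evaluator) = x -> eval_objective(evaluator, x)

model = ToyModel(Evaluator())
f = make_objective(model.evaluator)  # captures the evaluator, not the model
```

Each call `f(x)` now goes straight to the specialized method, with no dynamic field lookup on the model.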
Even after this, we are still not allocation-free, because there is a type instability in accessing
Ipopt.jl/src/C_wrapper.jl
Lines 18 to 21 in 077843b
We can add the types of these functions as parameters to `IpoptProblem`, but then we need to modify this type-assert:
Ipopt.jl/src/C_wrapper.jl
Line 35 in 077843b
and I'm not sure how to make it work with `@ccallable`:
Ipopt.jl/src/C_wrapper.jl
Lines 190 to 191 in 077843b
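The parameterization could look like this hypothetical sketch (`TypedProblem` and its fields are illustrative, not the actual `IpoptProblem` definition). It also shows the pointer round trip where the type-assert can only name the abstract `UnionAll`, so the concrete callback types are lost again at that boundary:

```julia
# Hypothetical sketch; not Ipopt.jl's actual `IpoptProblem`.
mutable struct TypedProblem{F<:Function,G<:Function}
    eval_f::F       # concrete closure types instead of `::Function` fields
    eval_grad_f::G
end

# C callbacks receive the problem back as a raw pointer. The assert below
# can only name the UnionAll `TypedProblem`, so `prob.eval_f` is still
# abstractly typed here unless the callback is specialized per `{F,G}`.
function objective_callback(ptr::Ptr{Cvoid}, x)
    prob = unsafe_pointer_to_objref(ptr)::TypedProblem  # abstract assert
    return prob.eval_f(x)  # still dynamically dispatched
end
```

The caller must keep the problem object rooted (e.g. with `GC.@preserve`) while the pointer from `pointer_from_objref` is in use.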
As discussed during the JuMP-dev hackathon, maybe #514 could help.
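On the callback-registration concern, one possibly related tool (whether it fits the existing setup is an open question) is that `@cfunction` accepts a runtime closure via `$` interpolation, producing a pointer tied to that specific closure instance. A self-contained demonstration:

```julia
# Standalone demonstration of `@cfunction` with a runtime closure via `$`.
scale = let a = 2.0
    x -> a * x  # closure over `a`
end
cf = @cfunction($scale, Cdouble, (Cdouble,))

# Call it back through its C-callable pointer, as a C library would.
# `cf` must stay rooted while the pointer is in use.
result = GC.@preserve cf begin
    ptr = Base.unsafe_convert(Ptr{Cvoid}, cf)
    ccall(ptr, Cdouble, (Cdouble,), 3.0)
end
```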