Commit 6817db6 (parent: 009d4e4)

Update Documenter version, fix piracy (#88)
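The commit does two things: it moves the documentation build to Documenter 1 (adding compat bounds and passing the citations plugin through the new `plugins` keyword of `makedocs`), and it removes type piracy from the Zygote autodiff glue. Previously `src/utilities/ad.jl` added a method to `ProximalCore.gradient!` for arbitrary callables; it now defines a `ZygoteFunction` wrapper type so that the method dispatches on a type the package owns, and the docs examples and tests wrap their cost functions accordingly.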

8 files changed: +29 −12 lines

docs/Project.toml (2 additions, 0 deletions)

```diff
@@ -9,4 +9,6 @@ ProximalCore = "dc4f5ac2-75d1-4f31-931e-60435d74994b"
 ProximalOperators = "a725b495-10eb-56fe-b38b-717eba820537"
 
 [compat]
+Documenter = "1"
+DocumenterCitations = "1.2"
 ProximalOperators = "0.15"
```
docs/make.jl (2 additions, 1 deletion)

```diff
@@ -22,7 +22,6 @@ for directory in literate_directories
 end
 
 makedocs(
-    bib,
     modules=[ProximalAlgorithms, ProximalCore],
     sitename="ProximalAlgorithms.jl",
     pages=[
@@ -38,6 +37,8 @@ makedocs(
         ],
         "Bibliography" => "bibliography.md",
     ],
+    plugins=[bib],
+    checkdocs=:exported,
 )
 
 deploydocs(
```
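As background for this hunk: in Documenter 1, plugin objects such as the DocumenterCitations bibliography are passed to `makedocs` through the `plugins` keyword rather than positionally, which is what the diff above adopts. A minimal sketch of that pattern, with an illustrative site name and bibliography path (the real values live in `docs/make.jl`):

```julia
using Documenter, DocumenterCitations

# Illustrative path; the actual .bib file location is set in docs/make.jl.
bib = CitationBibliography("references.bib")

makedocs(
    sitename = "Example.jl",
    plugins = [bib],        # Documenter >= 1: plugins go in this keyword
    checkdocs = :exported,  # only flag missing docstrings of exported names
)
```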

docs/src/examples/sparse_linear_regression.jl (5 additions, 3 deletions)

```diff
@@ -51,7 +51,11 @@ end
 
 mean_squared_error(label, output) = mean((output .- label) .^ 2) / 2
 
-training_loss(wb) = mean_squared_error(training_label, standardized_linear_model(wb, training_input))
+using ProximalAlgorithms
+
+training_loss = ProximalAlgorithms.ZygoteFunction(
+    wb -> mean_squared_error(training_label, standardized_linear_model(wb, training_input))
+)
 
 # As regularization we will use the L1 norm, implemented in [ProximalOperators](https://github.com/JuliaFirstOrder/ProximalOperators.jl):
 
@@ -64,8 +68,6 @@ reg = ProximalOperators.NormL1(1)
 # Therefore we construct the algorithm, then apply it to our problem by providing a starting point,
 # and the objective terms `f=training_loss` (smooth) and `g=reg` (non smooth).
 
-using ProximalAlgorithms
-
 ffb = ProximalAlgorithms.FastForwardBackward()
 solution, iterations = ffb(x0=zeros(n_features + 1), f=training_loss, g=reg)
 
```
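This hunk sets the pattern repeated in the guide files and tests below: `using ProximalAlgorithms` moves up to where the cost is defined, since the `ZygoteFunction` wrapper is needed at that point, and the plain function definition becomes a wrapped anonymous function. Call sites such as `ffb(x0=..., f=training_loss, g=reg)` are untouched, because `ZygoteFunction` forwards calls to the wrapped function (see `src/utilities/ad.jl` below).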
docs/src/guide/custom_objectives.jl (5 additions, 3 deletions)

```diff
@@ -31,7 +31,11 @@
 #
 # Let's try to minimize the celebrated Rosenbrock function, but constrained to the unit norm ball. The cost function is
 
-rosenbrock2D(x) = 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
+using ProximalAlgorithms
+
+rosenbrock2D = ProximalAlgorithms.ZygoteFunction(
+    x -> 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
+)
 
 # To enforce the constraint, we define the indicator of the unit ball, together with its proximal mapping:
 # this is simply projection onto the unit norm ball, so it is sufficient to normalize any given point that lies
@@ -55,8 +59,6 @@ end
 
 # We can now minimize the function, for which we will use [`PANOC`](@ref), which is a Newton-type method:
 
-using ProximalAlgorithms
-
 panoc = ProximalAlgorithms.PANOC()
 solution, iterations = panoc(x0=-ones(2), f=rosenbrock2D, g=IndUnitBall())
 
```
docs/src/guide/getting_started.jl (3 additions, 1 deletion)

```diff
@@ -54,7 +54,9 @@ using LinearAlgebra
 using ProximalOperators
 using ProximalAlgorithms
 
-quadratic_cost(x) = dot([3.4 1.2; 1.2 4.5] * x, x) / 2 + dot([-2.3, 9.9], x)
+quadratic_cost = ProximalAlgorithms.ZygoteFunction(
+    x -> dot([3.4 1.2; 1.2 4.5] * x, x) / 2 + dot([-2.3, 9.9], x)
+)
 box_indicator = ProximalOperators.IndBox(0, 1)
 
 ffb = ProximalAlgorithms.FastForwardBackward(maxit=1000, tol=1e-5, verbose=true)
```

docs/src/guide/implemented_algorithms.md (1 addition, 1 deletion)

````diff
@@ -72,7 +72,7 @@ For this reason, specific algorithms by the name of "primal-dual" splitting sche
 Algorithm | Assumptions | Oracle | Implementation | References
 ----------|-------------|--------|----------------|-----------
 Chambolle-Pock | ``f\equiv 0``, ``g, h`` convex, ``L`` linear operator | ``\operatorname{prox}_{\gamma g}``, ``\operatorname{prox}_{\gamma h}``, ``L``, ``L^*`` | [`ChambollePock`](@ref) | [Chambolle2011](@cite)
-Vu-Condat | ``f`` convex and smooth, ``g, h`` convex, ``L`` linear operator | ``\nabla f``, ``\operatorname{prox}_{\gamma g}``, ``\operatorname{prox}_{\gamma h}``, ``L``, ``L^*`` | [`VuCodat`](@ref) | [Vu2013](@cite), [Condat2013](@cite)
+Vu-Condat | ``f`` convex and smooth, ``g, h`` convex, ``L`` linear operator | ``\nabla f``, ``\operatorname{prox}_{\gamma g}``, ``\operatorname{prox}_{\gamma h}``, ``L``, ``L^*`` | [`VuCondat`](@ref) | [Vu2013](@cite), [Condat2013](@cite)
 AFBA | ``f`` convex and smooth, ``g, h`` convex, ``L`` linear operator | ``\nabla f``, ``\operatorname{prox}_{\gamma g}``, ``\operatorname{prox}_{\gamma h}``, ``L``, ``L^*`` | [`AFBA`](@ref) | [Latafat2017](@cite)
 
 ```@docs
````
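The only change here fixes the `VuCodat` typo in the implementation column so that the [`VuCondat`](@ref) cross-reference actually resolves; with Documenter 1, an unresolvable `@ref` link fails the docs build by default.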

src/utilities/ad.jl (8 additions, 2 deletions)

```diff
@@ -1,8 +1,14 @@
 using Zygote: pullback
 using ProximalCore
 
-function ProximalCore.gradient!(grad, f, x)
-    fx, pb = pullback(f, x)
+struct ZygoteFunction{F}
+    f::F
+end
+
+(f::ZygoteFunction)(x) = f.f(x)
+
+function ProximalCore.gradient!(grad, f::ZygoteFunction, x)
+    fx, pb = pullback(f.f, x)
     grad .= pb(one(fx))[1]
     return fx
 end
```
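This is the heart of the piracy fix: `ProximalCore.gradient!` previously gained a method for arbitrary `f`, i.e. a method on a function and argument types that this package does not own; now dispatch goes through the package-owned `ZygoteFunction` wrapper, which stays callable so wrapped objectives can still be evaluated directly. A small usage sketch under the definitions above, assuming `ProximalCore` is loadable in the current environment:

```julia
using ProximalAlgorithms
using ProximalCore

# Wrap a plain callable so gradient! can dispatch on an owned type.
f = ProximalAlgorithms.ZygoteFunction(x -> sum(abs2, x) / 2)

x = [1.0, 2.0, 3.0]
grad = similar(x)
fx = ProximalCore.gradient!(grad, f, x)  # fills grad via Zygote's pullback

f(x)  # 7.0: the wrapper forwards calls to the underlying function
grad  # [1.0, 2.0, 3.0]: the gradient of sum(abs2, x)/2 is x itself
```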

test/utilities/test_ad.jl (3 additions, 1 deletion)

```diff
@@ -12,7 +12,9 @@ using ProximalAlgorithms
     -1.0 -1.0 -1.0 1.0 3.0
 ]
 b = T[1.0, 2.0, 3.0, 4.0]
-f(x) = R(1/2) * norm(A * x - b, 2)^2
+f = ProximalAlgorithms.ZygoteFunction(
+    x -> R(1/2) * norm(A * x - b, 2)^2
+)
 Lf = opnorm(A)^2
 m, n = size(A)
 
```
