let optimizer = SGD<Model, Float>(learningRate: 0.02)
var classifier = Model()
let context = Context(learningPhase: .training)
let x: Tensor<Float> = ...
let y: Tensor<Float> = ...
```

One way to define a training epoch is to use the `Differentiable.gradient(in:)` method:

```swift
for _ in 0..<1000 {
    // Differentiate the loss with respect to the classifier's parameters.
    let 𝛁model = classifier.gradient { classifier -> Tensor<Float> in
        let ŷ = classifier.applied(to: x, in: context)
        let loss = softmaxCrossEntropy(logits: ŷ, labels: y)
        print("Loss: \(loss)")
        return loss
    }
    // Move the parameters along the gradient to reduce the loss.
    optimizer.update(&classifier.allDifferentiableVariables, along: 𝛁model)
}
```

Another way is to make use of methods on `Differentiable` or `Layer` that produce a backpropagation function. This allows you to compose your derivative computation with great flexibility.

```swift
for _ in 0..<1000 {
    let (ŷ, backprop) = classifier.appliedForBackpropagation(to: x, in: context)