
Commit ec2dd47

Update readme to reflect API improvements.
1 parent 1f09d1f


1 file changed: README.md (5 additions, 4 deletions)
@@ -26,7 +26,7 @@ let hiddenSize: Int = 10
 struct Model: Layer {
     var layer1 = Dense<Float>(inputSize: 4, outputSize: hiddenSize, activation: relu)
     var layer2 = Dense<Float>(inputSize: hiddenSize, outputSize: hiddenSize, activation: relu)
-    var layer3 = Dense<Float>(inputSize: hiddenSize, outputSize: 3, activation: {$0})
+    var layer3 = Dense<Float>(inputSize: hiddenSize, outputSize: 3, activation: identity)

     @differentiable(wrt: (self, input))
     func applied(to input: Tensor<Float>) -> Tensor<Float> {
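The change in this hunk is purely cosmetic: `identity` and the removed `{$0}` closure both return the pre-activation output unchanged, so `layer3` still emits raw logits for `softmaxCrossEntropy(logits:labels:)`. A minimal sketch of an equivalent activation, assuming the library's `identity` behaves this way (the name `passthrough` is illustrative, not an API in this repository):

```swift
import TensorFlow

// Illustrative stand-in for the library-provided `identity` activation:
// it returns its input unchanged, exactly like the removed `{$0}` closure.
@differentiable
func passthrough(_ x: Tensor<Float>) -> Tensor<Float> {
    return x
}

// Hypothetical usage mirroring the changed line above:
// var layer3 = Dense<Float>(inputSize: hiddenSize, outputSize: 3, activation: passthrough)
```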
@@ -42,6 +42,7 @@ struct Model: Layer {
 ```swift
 let optimizer = SGD<Model, Float>(learningRate: 0.02)
 var classifier = Model()
+let context = Context(learningPhase: .training)
 let x: Tensor<Float> = ...
 let y: Tensor<Float> = ...
 ```
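The added `context` value is what the updated call sites below thread into the forward pass; it carries the learning phase so that phase-dependent layers (dropout, batch normalization, and the like) can behave differently during training and inference. A short sketch of an evaluation-time counterpart, assuming `LearningPhase` also exposes an `.inference` case:

```swift
// Assumed inference-phase counterpart of the training context added above,
// for running the trained classifier with phase-dependent layers in inference mode.
let inferenceContext = Context(learningPhase: .inference)
let predictions = classifier.applied(to: x, in: inferenceContext)
```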
@@ -53,7 +54,7 @@ One way to define a training epoch is to use the [`Differentiable.gradient(in:)`
 ```swift
 for _ in 0..<1000 {
     let 𝛁model = classifier.gradient { classifier -> Tensor<Float> in
-        let ŷ = classifier.applied(to: x)
+        let ŷ = classifier.applied(to: x, in: context)
         let loss = softmaxCrossEntropy(logits: ŷ, labels: y)
         print("Loss: \(loss)")
         return loss
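This hunk ends at `return loss`; the closing braces and the step that applies `𝛁model` are unchanged by the commit and therefore elided from the diff. A sketch of how such an epoch could be completed, assuming an SGD update method over the model's differentiable variables (the `optimizer.update(...)` line is an assumption, not something shown in this commit):

```swift
for _ in 0..<1000 {
    let 𝛁model = classifier.gradient { classifier -> Tensor<Float> in
        let ŷ = classifier.applied(to: x, in: context)
        let loss = softmaxCrossEntropy(logits: ŷ, labels: y)
        print("Loss: \(loss)")
        return loss
    }
    // Assumed update step: apply the computed gradient to the classifier's parameters.
    optimizer.update(&classifier.allDifferentiableVariables, along: 𝛁model)
}
```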
@@ -62,11 +63,11 @@ for _ in 0..<1000 {
 }
 ```

-Another way is to make use of methods on `Differentiable` or `Layer` that produce a pullback (i.e. a backpropagation function). Pullbacks allow you to compose your derivative computation with great flexibility.
+Another way is to make use of methods on `Differentiable` or `Layer` that produce a backpropagation function. This allows you to compose your derivative computation with great flexibility.

 ```swift
 for _ in 0..<1000 {
-    let (ŷ, backprop) = classifier.valueWithPullback(at: x)
+    let (ŷ, backprop) = classifier.appliedForBackpropagation(to: x, in: context)
     let (loss, 𝛁ŷ) = ŷ.valueWithGradient { ŷ in softmaxCrossEntropy(logits: ŷ, labels: y) }
     print("Model output: \(ŷ), Loss: \(loss)")
     let 𝛁model = backprop(𝛁ŷ)
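The rewritten prose claims the backpropagation-function style composes flexibly; one concrete illustration (purely illustrative, not part of this commit) is transforming the upstream loss gradient before pulling it back through the model:

```swift
// Continuing from the classifier, optimizer, context, x, and y defined in the
// README snippets above. The 0.5 scaling is illustrative only: it shows that the
// upstream gradient can be modified before being backpropagated through the model.
let (ŷ, backprop) = classifier.appliedForBackpropagation(to: x, in: context)
let (loss, 𝛁ŷ) = ŷ.valueWithGradient { ŷ in softmaxCrossEntropy(logits: ŷ, labels: y) }
let 𝛁model = backprop(𝛁ŷ * 0.5)
print("Loss: \(loss)")
```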
