Commit f04ad6c

Minor changes

1 parent 50c84c3 · commit f04ad6c

File tree

1 file changed: +3 −1 lines changed


src/ppo_with_pytorch.py

Lines changed: 3 additions & 1 deletion
@@ -193,7 +193,9 @@ def sd_map(f: Callable[..., torch.Tensor], *sds) -> StepData:
   return StepData(**items)

 def eval_unroll(agent, env, length):
-  """Return number of episodes and average reward for a single unroll."""
+  """
+  Return number of episodes and average reward for a single unroll.
+  """
   observation = env.reset()
   episodes = torch.zeros((), device=agent.device)
   episode_reward = torch.zeros((), device=agent.device)
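
For context, the hunk above shows only the head of eval_unroll. Below is a minimal sketch of how such an unroll loop could accumulate episode counts and average reward; the agent.get_action and env.step interfaces used here are assumptions for illustration, not the file's actual API beyond the lines shown in the diff.

import torch


def eval_unroll_sketch(agent, env, length):
  """Return number of episodes and average reward for a single unroll (sketch)."""
  observation = env.reset()
  episodes = torch.zeros((), device=agent.device)
  episode_reward = torch.zeros((), device=agent.device)
  for _ in range(length):
    # Assumed interface: the agent maps a batch of observations to actions,
    # and the (vectorized) env returns per-env reward and done tensors.
    action = agent.get_action(observation)
    observation, reward, done, _ = env.step(action)
    episodes += torch.sum(done)          # count episode terminations
    episode_reward += torch.sum(reward)  # accumulate reward across all envs
  # Average reward per completed episode.
  return episodes, episode_reward / episodes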
