This repository was archived by the owner on Apr 8, 2025. It is now read-only.
CONTRIBUTING.md (1 addition, 1 deletion)

@@ -82,4 +82,4 @@ This repository is organized as follows:
```
└── unit_tests.py # Main entry for unit tests, regarding `./models/` and `./utils/`.
```
- To develop a new project (or a new approach), a `runner` (which defines the training procedure, e.g., the data forwarding flow, the optimizing order of various loss terms, how the model(s) should be updated, etc.), a `loss` (which defines the computation of each loss term), and a `config` (which collects all configurations used in the project) are necessary. You may also need to design your own `dataset` (including data `transformation`), `model` structure, evaluation `metric`, `augmentation` pipeline, and running `controller` if they are not supported yet. **NOTE:** All these modules are almost independent of each other. Hence, once a new feature (e.g., `dataset`, `model`, `metric`, `augmentation`, or `controller`) is developed, it can be shared to others with minor effort. It you are interested in sharing your work, we really appreciate your contribution to these modules.
+ To develop a new project (or a new approach), a `runner` (which defines the training procedure, e.g., the data forwarding flow, the optimizing order of various loss terms, how the model(s) should be updated, etc.), a `loss` (which defines the computation of each loss term), and a `config` (which collects all configurations used in the project) are necessary. You may also need to design your own `dataset` (including data `transformation`), `model` structure, evaluation `metric`, `augmentation` pipeline, and running `controller` if they are not supported yet. **NOTE:** All these modules are almost independent from each other. Hence, once a new feature (e.g., `dataset`, `model`, `metric`, `augmentation`, or `controller`) is developed, it can be shared with others with minor effort. If you are interested in sharing your work, we really appreciate your contribution to these modules.
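The repository's actual `runner`/`loss`/`config` interfaces are not shown in this diff, so the sketch below is purely illustrative: the class names, the dict-based config, and the toy one-parameter model are all hypothetical. It only demonstrates how the three required pieces divide responsibilities: the loss computes one term, the config collects settings, and the runner owns the forward pass and the update.

```python
class MSELoss:
    """A `loss` module: defines how a single loss term is computed."""
    def __call__(self, predictions, targets):
        return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)


class SimpleRunner:
    """A `runner`: defines the training procedure (forward, loss, update)."""
    def __init__(self, config, loss_fn):
        self.lr = config["lr"]    # the `config` collects all settings
        self.loss_fn = loss_fn
        self.weight = 0.0         # toy one-parameter "model": y = weight * x

    def train_step(self, inputs, targets):
        preds = [self.weight * x for x in inputs]   # data forwarding flow
        loss = self.loss_fn(preds, targets)
        # Gradient of the MSE term w.r.t. the single weight.
        grad = sum(2 * (p - t) * x
                   for p, t, x in zip(preds, targets, inputs)) / len(inputs)
        self.weight -= self.lr * grad               # model update
        return loss


config = {"lr": 0.1}
runner = SimpleRunner(config, MSELoss())
for _ in range(50):
    loss = runner.train_step([1.0, 2.0], [2.0, 4.0])  # optimum: weight = 2.0
```

Because each piece talks to the others only through plain call interfaces, swapping in a different loss or config leaves the runner untouched, which is what makes these modules easy to share.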
README.md (2 additions, 2 deletions)

@@ -68,12 +68,12 @@ Please find more training demos under `./scripts/training_demos/`.
## Inspect Training Results
- Besides using TensorBoard to track the training process, the raw results (e.g., training losses and running time) are saved in JSON format. They can be easily inspected with the following script
+ Besides using TensorBoard to track the training process, the raw results (e.g., training losses and running time) are saved in [JSON Lines](https://jsonlines.org/) format. They can be easily inspected with the following script
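A JSON Lines file stores one JSON object per line, so it is parsed line by line rather than as a single document. The snippet below is a generic illustration of that, not the repository's actual inspection script: the field names (`step`, `loss`, `time`) and the file location are made up for the example.

```python
import json
import os
import tempfile

# Fabricated sample records; a real log's schema depends on the training run.
sample = [
    {"step": 1, "loss": 0.9, "time": 0.12},
    {"step": 2, "loss": 0.7, "time": 0.11},
]

# Write the sample in JSON Lines format: one JSON object per line.
path = os.path.join(tempfile.mkdtemp(), "log.jsonl")
with open(path, "w") as f:
    for record in sample:
        f.write(json.dumps(record) + "\n")

# Inspect: parse each line independently (json.load() on the whole
# file would fail, since the file is not one single JSON document).
with open(path) as f:
    records = [json.loads(line) for line in f]
losses = [r["loss"] for r in records]
```

The per-line layout is what makes these logs appendable during training: each step can write its record without rewriting or re-parsing the rest of the file.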