This repository was archived by the owner on Nov 9, 2022. It is now read-only.

Commit ddd46b7: Better README
Parent: 805af37

2 files changed: 21 additions, 21 deletions

README.md (21 additions, 21 deletions)
@@ -7,38 +7,38 @@ This repository contains a description of the DroneDeploy Segmentation Dataset a
 
 ### Quickstart
 
+Follow these steps to train a model and run inference end-to-end:
+
 ```
-# clone this repository
 git clone https://github.com/dronedeploy/dd-ml-segmentation-benchmark.git
-cd dd-ml-segmentation-benchmark/
-# install requirements
+cd dd-ml-segmentation-benchmark
 pip3 install -r requirements.txt
-# optional: log in to W&B to see your training metrics,
-# track your experiments, and submit your models to the benchmark
-wandb login
-# train Fastai model
-python3 main.py
-# train Keras model
-python3 main_keras.py
 
-```
-
-### Training
+# optional: log in to W&B to track your experiments
+wandb login
 
-To start training a model on a small sample dataset run the following; once working, you should use the *full dataset* by changing `main.py`
+# train a Keras model
+python3 main_keras.py
 
-```
-pip3 install -r requirements.txt
+# train a Fastai model
 python3 main.py
 ```
 
-This will download the sample dataset and begin training a model. You can monitor training performance on [Weights and Biases](https://www.wandb.com/). Once training is complete, inference will be performed on all test scenes and a number of prediction images with names like `123123_ABCABC-prediction.png` will be created. After the images are created they will be scored. Here's what the prediction looks like, not bad for 50 lines of code, but there is a lot of room for improvement:
+This will download the sample dataset and begin training a model. You can monitor training performance on [Weights and Biases](https://www.wandb.com/). Once training is complete, inference will be performed on all test scenes and a number of prediction images with names like `123123_ABCABC-prediction.png` will be created in the `wandb` directory. After the images are created they will be scored. Here's what a prediction looks like, not bad for 50 lines of code, but there is a lot of room for improvement:
 
 ![Example](https://github.com/dronedeploy/dd-ml-segmentation-benchmark/raw/master/img/out.gif)
 
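For a quick look at that output, here is a minimal sketch of finding and opening the generated predictions. It assumes only what the paragraph above states (files matching `*-prediction.png` under the `wandb` directory) and that Pillow is installed:

```python
# Minimal sketch: locate and open the generated prediction images.
# Assumes predictions matching "*-prediction.png" land under wandb/,
# as described above, and that Pillow is available.
from glob import glob

from PIL import Image

predictions = sorted(glob("wandb/**/*-prediction.png", recursive=True))
print(f"Found {len(predictions)} prediction images")

if predictions:
    Image.open(predictions[0]).show()  # opens in the system image viewer
```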

 ### Dataset Details
 
-The *full dataset* can be downloaded by changing a line in `main.py`; this is the dataset that should be used for benchmarking. The dataset comprises a number of aerial scenes captured from drones. Each scene has a ground resolution of 10 cm per pixel. For each scene there is a corresponding "image", "elevation" and "label". The image is an RGB tif, the elevation is a single-channel floating-point tif and the label is a PNG with 7 colors representing the 7 classes. Please see `index.csv` - inside the downloaded dataset - for a description of the quality of each labelled image and the distribution of the labels. To use the dataset you can split it into smaller chips (see `images2chips.py`). Here is an example of one of the labelled scenes:
+The dataset comprises a number of aerial scenes captured from drones. Each scene has a ground resolution of 10 cm per pixel. For each scene there is a corresponding "image", "elevation" and "label". These are located in the `images`, `elevation` and `labels` directories.
+
+The images are RGB tifs, the elevations are single-channel floating-point tifs (where each pixel value represents elevation in meters), and the labels are PNGs with 7 colors representing the 7 classes (documented below).
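To make that layout concrete, here is a hedged sketch of loading one scene's triplet with Pillow and NumPy. The `-ortho` and `-elev` file suffixes are illustrative assumptions (only the `-label.png` suffix appears in this README), and the scene id is borrowed from the example image further down:

```python
# Illustrative sketch: load one scene's image/elevation/label triplet.
# The "-ortho"/"-elev" suffixes are assumptions for illustration;
# "-label.png" matches the example scene shown below. Pillow is assumed
# to be able to read the single-channel float tif (mode "F").
import numpy as np
from PIL import Image

scene_id = "15efe45820_D95DF0B1F4INSPIRE"  # id taken from the example scene below

image = np.array(Image.open(f"images/{scene_id}-ortho.tif"))        # H x W x 3, RGB
elevation = np.array(Image.open(f"elevation/{scene_id}-elev.tif"))  # H x W, float meters
label = np.array(Image.open(f"labels/{scene_id}-label.png"))        # H x W x 3, 7 label colors

print(image.shape, elevation.shape, label.shape)
```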
+In addition, please see `index.csv`, inside the downloaded dataset folder, for a description of the quality of each labelled image and the distribution of the labels.
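A quick, hedged way to inspect that file (pandas is assumed to be available; no column names are assumed, this only prints whatever ships with the download):

```python
# Peek at index.csv inside the downloaded dataset folder.
# Column names are not assumed; this just prints what is there.
import pandas as pd

index = pd.read_csv("index.csv")
print(index.head())
print(index.columns.tolist())
```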
+To use a dataset for training, it first needs to be converted to chips (see `images2chips.py`). This will create an `images-chips` and a `label-chips` directory, each containing a number of `300x300` (by default) RGB images. The `label-chips` are also RGB but have very low pixel intensities `[0 .. 7]`, so they will appear black at first glance. You can use the `color2class` and `category2mask` functions to switch between the two label representations.
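As a hedged illustration of the round-trip those helpers perform (not the repository's actual implementation), the sketch below maps between the two representations. `PALETTE` uses placeholder colors only; the real 7-color table is documented further down the README:

```python
# Sketch of the idea behind color2class / category2mask: convert between
# RGB label images and integer class masks. PALETTE is a placeholder;
# use the repository's documented 7-color table in practice.
import numpy as np

PALETTE = {  # class index -> (R, G, B); placeholder values only
    0: (255, 0, 0),
    1: (0, 255, 0),
}

def color2class(label_rgb: np.ndarray) -> np.ndarray:
    """RGB label image (H x W x 3) -> class-index mask (H x W)."""
    mask = np.zeros(label_rgb.shape[:2], dtype=np.uint8)
    for cls, color in PALETTE.items():
        mask[np.all(label_rgb == color, axis=-1)] = cls
    return mask

def category2mask(classes: np.ndarray) -> np.ndarray:
    """Class-index mask (H x W) -> RGB label image (H x W x 3)."""
    out = np.zeros((*classes.shape, 3), dtype=np.uint8)
    for cls, color in PALETTE.items():
        out[classes == cls] = color
    return out
```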
+Here is an example of one of the labelled scenes:
 
 ![Example](https://github.com/dronedeploy/dd-ml-segmentation-benchmark/raw/master/img/15efe45820_D95DF0B1F4INSPIRE-label.png)
 

@@ -62,8 +62,8 @@ Color (Blue, Green, Red) to Class Name:
 ----
 The sample implementation is very basic and there is immediate opportunity to experiment with:
 - Data augmentation (`dataloader.py`)
-- Hyper-parameters (`train.py`)
-- Post-processing (`inference.py`)
+- Hyper-parameters (`train_*.py`)
+- Post-processing (`inference_*.py`)
 - Chip size (`images2chips.py`)
-- Model architecture (`train.py`)
+- Model architecture (`train_*.py`)
 - Elevation tiles are not currently used at all (`images2chips.py`)
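As one concrete starting point for the data-augmentation item in the list above, here is a hypothetical sketch of a paired flip that keeps an image chip and its label chip aligned; it is not the repository's `dataloader.py` code:

```python
# Hypothetical augmentation sketch: apply the same random flip to an
# image chip and its label chip so they stay spatially aligned.
# Shown only to make the suggestion concrete, not the repo's code.
import random

import numpy as np

def random_flip(image: np.ndarray, label: np.ndarray):
    """Flip a (chip, label) pair together, 50% chance per axis."""
    if random.random() < 0.5:
        image, label = image[:, ::-1], label[:, ::-1]  # horizontal flip
    if random.random() < 0.5:
        image, label = image[::-1, :], label[::-1, :]  # vertical flip
    return np.ascontiguousarray(image), np.ascontiguousarray(label)
```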
File renamed without changes.
