
Commit f385ced

🧑‍💻 pre-commit autoupdate (#902)

* 🧑‍💻 pre-commit autoupdate updates:
  - [github.com/executablebooks/mdformat: 0.7.19 → 0.7.21](hukkin/mdformat@0.7.19...0.7.21)
  - [github.com/astral-sh/ruff-pre-commit: v0.8.2 → v0.8.6](astral-sh/ruff-pre-commit@v0.8.2...v0.8.6)
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

1 parent 0ddd2b1 · commit f385ced

12 files changed (+1022 −1022 lines)

.pre-commit-config.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -23,7 +23,7 @@ repos:
           - mdformat-black
           - mdformat-myst
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.19
+    rev: 0.7.21
     hooks:
       - id: mdformat
         # Optionally add plugins
@@ -60,7 +60,7 @@ repos:
       - id: rst-inline-touching-normal # Detect mistake of inline code touching normal text in rst.
   - repo: https://github.com/astral-sh/ruff-pre-commit
     # Ruff version.
-    rev: v0.8.2
+    rev: v0.8.6
     hooks:
       - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
```
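The two `rev:` bumps above are what `pre-commit autoupdate` produces: it rewrites each hook repository's `rev:` to that repo's latest tag. A minimal sketch of the affected portion of `.pre-commit-config.yaml` after the update (trimmed for illustration; the real file pins more repos, hooks, and mdformat plugins):

```yaml
# Sketch of the updated config, showing only the two repos touched by this
# commit. Hook lists are abbreviated relative to the actual file.
repos:
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.21          # bumped from 0.7.19 by `pre-commit autoupdate`
    hooks:
      - id: mdformat
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.6          # bumped from v0.8.2 by `pre-commit autoupdate`
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
```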

benchmarks/annotation_store.ipynb

Lines changed: 1 addition & 1 deletion

```diff
@@ -500,7 +500,7 @@
 "# Part 1: Small Scale Benchmarking of Annotation Storage\n",
 "\n",
 "Using the already defined data generation functions (`cell_polygon` and\n",
-"`cell_grid`), we create some simple artificial cell boundaries by\n",
+"`cell_grid`), we create some simple artificial cell boundaries by\n",
 "creating a circle of points, adding some noise, scaling to introduce\n",
 "eccentricity, and then rotating. We use 20 points per cell, which is a\n",
 "reasonably high value for cell annotation. However, this can be\n",
```

examples/02-stain-normalization.ipynb

Lines changed: 1 addition & 1 deletion

```diff
@@ -46,7 +46,7 @@
 "\n",
 "1. Load a sample WSI.\n",
 "1. Extract a square patch.\n",
-"1. Stain-normalize the tile using various built-in methods.\n",
+"1. Stain-normalize the tile using various built-in methods.\n",
 "1. Stain-normalize with a user-defined stain matrix.<br/>\n",
 "\n",
 "During the above steps, we will be using functions from our `stainnorm` module [here](https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/tiatoolbox/tools/stainnorm.py). This demo assumes some understanding of the functions in the `wsireader` module (for example by going through the demo [here](https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/examples/01-wsi-reading.ipynb)).\n",
```

examples/04-patch-extraction.ipynb

Lines changed: 984 additions & 984 deletions
Large diffs are not rendered by default.

examples/05-patch-prediction.ipynb

Lines changed: 16 additions & 16 deletions

```diff
@@ -118,7 +118,7 @@
 ]
 },
 "source": [
-"**\[essential\]** Please install the following package, which is required in this notebook.\n",
+"**[essential]** Please install the following package, which is required in this notebook.\n",
 "\n"
 ]
 },
@@ -470,14 +470,14 @@
 "As you can see for this patch dataset, we have 9 classes/labels with IDs 0-8 and associated class names. describing the dominant tissue type in the patch:\n",
 "\n",
 "- BACK ⟶ Background (empty glass region)\n",
-"- LYM ⟶ Lymphocytes\n",
+"- LYM ⟶ Lymphocytes\n",
 "- NORM ⟶ Normal colon mucosa\n",
-"- DEB ⟶ Debris\n",
-"- MUS ⟶ Smooth muscle\n",
-"- STR ⟶ Cancer-associated stroma\n",
-"- ADI ⟶ Adipose\n",
-"- MUC ⟶ Mucus\n",
-"- TUM ⟶ Colorectal adenocarcinoma epithelium\n",
+"- DEB ⟶ Debris\n",
+"- MUS ⟶ Smooth muscle\n",
+"- STR ⟶ Cancer-associated stroma\n",
+"- ADI ⟶ Adipose\n",
+"- MUC ⟶ Mucus\n",
+"- TUM ⟶ Colorectal adenocarcinoma epithelium\n",
 "\n",
 "It is easy to use this code for your dataset - just ensure that your dataset is arranged like this example (images of different classes are placed into different subfolders), and set the right image extension in the `image_ext` variable.\n",
 "\n"
@@ -532,7 +532,7 @@
 "\n",
 "- `model`: Use an externally defined PyTorch model for prediction, with weights already loaded. This is useful when you want to use your own pretrained model on your own data. The only constraint is that the input model should follow `tiatoolbox.models.abc.ModelABC` class structure. For more information on this matter, please refer to our [example notebook on advanced model techniques](https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/examples/07-advanced-modeling.ipynb).\n",
 "- `pretrained_model `: This argument has already been discussed above. With it, you can tell tiatoolbox to use one of its pretrained models for the prediction task. A complete list of pretrained models can be found [here](https://tia-toolbox.readthedocs.io/en/latest/usage.html?highlight=pretrained%20models#tiatoolbox.models.architecture.get_pretrained_model). If both `model` and `pretrained_model` arguments are used, then `pretrained_model` is ignored. In this example, we used `resnet18-kather100K,` which means that the model architecture is an 18 layer ResNet, trained on the Kather100k dataset.\n",
-"- `pretrained_weight`: When using a `pretrained_model`, the corresponding pretrained weights will also be downloaded by default. You can override the default with your own set of weights via the `pretrained_weight` argument.\n",
+"- `pretrained_weight`: When using a `pretrained_model`, the corresponding pretrained weights will also be downloaded by default. You can override the default with your own set of weights via the `pretrained_weight` argument.\n",
 "- `batch_size`: Number of images fed into the model each time. Higher values for this parameter require a larger (GPU) memory capacity.\n",
 "\n",
 "The second line in the snippet above calls the `predict` method to apply the CNN on the input patches and get the results. Here are some important `predict` input arguments and their descriptions:\n",
@@ -541,7 +541,7 @@
 "- `imgs`: List of inputs. When using `patch` mode, the input must be a list of images OR a list of image file paths, OR a Numpy array corresponding to an image list. However, for the `tile` and `wsi` modes, the `imgs` argument should be a list of paths to the input tiles or WSIs.\n",
 "- `return_probabilities`: set to *__True__* to get per class probabilities alongside predicted labels of input patches. If you wish to merge the predictions to generate prediction maps for `tile` or `wsi` modes, you can set `return_probabilities=True`.\n",
 "\n",
-"In the `patch` prediction mode, the `predict` method returns an output dictionary that contains the `predictions` (predicted labels) and `probabilities` (probability that a certain patch belongs to a certain class).\n",
+"In the `patch` prediction mode, the `predict` method returns an output dictionary that contains the `predictions` (predicted labels) and `probabilities` (probability that a certain patch belongs to a certain class).\n",
 "\n",
 "The cell below uses common python tools to visualize the patch classification results in terms of classification accuracy and confusion matrix.\n",
 "\n"
@@ -732,9 +732,9 @@
 "\n",
 "- `mode='tile'`: the type of image input. We use `tile` since our input is a large image tile.\n",
 "- `imgs`: in tile mode, the input is *required* to be a list of file paths.\n",
-"- `save_dir`: Output directory when processing multiple tiles. We explained before why this is necessary when we are working with multiple big tiles.\n",
-"- `patch_size`: This parameter sets the size of patches (in \[W, H\] format) to be extracted from the input files, and for which labels will be predicted.\n",
-"- `stride_size`: The stride (in \[W, H\] format) to consider when extracting patches from the tile. Using a stride smaller than the patch size results in overlapping between consecutive patches.\n",
+"- `save_dir`: Output directory when processing multiple tiles. We explained before why this is necessary when we are working with multiple big tiles.\n",
+"- `patch_size`: This parameter sets the size of patches (in [W, H] format) to be extracted from the input files, and for which labels will be predicted.\n",
+"- `stride_size`: The stride (in [W, H] format) to consider when extracting patches from the tile. Using a stride smaller than the patch size results in overlapping between consecutive patches.\n",
 "- `labels` (optional): List of labels with the same size as `imgs` that refers to the label of each input tile (not to be confused with the prediction of each patch).\n",
 "\n",
 "In this example, we input only one tile. Therefore the toolbox does not save the output as files and instead returns a list that contains an output dictionary with the following keys:\n",
@@ -815,7 +815,7 @@
 "id": "TocLP9Bcr4A4"
 },
 "source": [
-"Here, we show a prediction map where each colour denotes a different predicted category. We overlay the prediction map on the original image. To generate this prediction map, we utilize the `merge_predictions` method from the `PatchPredictor` class which accepts as arguments the path of the original image, `predictor` outputs, `mode` (set to `tile` or `wsi`), `tile_resolution` (at which tiles were originally extracted) and `resolution` (at which the prediction map is generated), and outputs the \"Prediction map\", in which regions have indexed values based on their classes.\n",
+"Here, we show a prediction map where each colour denotes a different predicted category. We overlay the prediction map on the original image. To generate this prediction map, we utilize the `merge_predictions` method from the `PatchPredictor` class which accepts as arguments the path of the original image, `predictor` outputs, `mode` (set to `tile` or `wsi`), `tile_resolution` (at which tiles were originally extracted) and `resolution` (at which the prediction map is generated), and outputs the \"Prediction map\", in which regions have indexed values based on their classes.\n",
 "\n",
 "To visualize the prediction map as an overlay on the input image, we use the `overlay_prediction_mask` function from the `tiatoolbox.utils.visualization` module. It accepts as arguments the original image, the prediction map, the `alpha` parameter which specifies the blending ratio of overlay and original image, and the `label_info` dictionary which contains names and desired colours for different classes. Below we generate an example of an acceptable `label_info` dictionary and show how it can be used with `overlay_patch_prediction`.\n",
 "\n"
@@ -898,7 +898,7 @@
 "source": [
 "## Get predictions for patches within a WSI\n",
 "\n",
-"We demonstrate how to obtain predictions for all patches within a whole-slide image. As in previous sections, we will use `PatchPredictor` and its `predict` method, but this time we set the `mode` to `'wsi'`. We also introduce `IOPatchPredictorConfig`, a class that specifies the configuration of image reading and prediction writing for the model prediction engine.\n",
+"We demonstrate how to obtain predictions for all patches within a whole-slide image. As in previous sections, we will use `PatchPredictor` and its `predict` method, but this time we set the `mode` to `'wsi'`. We also introduce `IOPatchPredictorConfig`, a class that specifies the configuration of image reading and prediction writing for the model prediction engine.\n",
 "\n"
 ]
 },
@@ -981,7 +981,7 @@
 "- `mode`: set to 'wsi' when analysing whole slide images.\n",
 "- `ioconfig`: set the IO configuration information using the `IOPatchPredictorConfig` class.\n",
 "- `resolution` and `unit` (not shown above): These arguments specify the level or micron-per-pixel resolution of the WSI levels from which we plan to extract patches and can be used instead of `ioconfig`. Here we specify the WSI's level as `'baseline'`, which is equivalent to level 0. In general, this is the level of greatest resolution. In this particular case, the image has only one level. More information can be found in the [documentation](https://tia-toolbox.readthedocs.io/en/latest/usage.html?highlight=WSIReader.read_rect#tiatoolbox.wsicore.wsireader.WSIReader.read_rect).\n",
-"- `masks`: A list of paths corresponding to the masks of WSIs in the `imgs` list. These masks specify the regions in the original WSIs from which we want to extract patches. If the mask of a particular WSI is specified as `None`, then the labels for all patches of that WSI (even background regions) would be predicted. This could cause unnecessary computation.\n",
+"- `masks`: A list of paths corresponding to the masks of WSIs in the `imgs` list. These masks specify the regions in the original WSIs from which we want to extract patches. If the mask of a particular WSI is specified as `None`, then the labels for all patches of that WSI (even background regions) would be predicted. This could cause unnecessary computation.\n",
 "- `merge_predictions`: You can set this parameter to `True` if you wish to generate a 2D map of patch classification results. However, for big WSIs you might need a large amount of memory available to do this on the file. An alternative (default) solution is to set `merge_predictions=False`, and then generate the 2D prediction maps using `merge_predictions` function as you will see later on.\n",
 "\n",
 "We see how the prediction model works on our whole-slide images by visualizing the `wsi_output`. We first need to merge patch prediction outputs and then visualize them as an overlay on the original image. As before, the `merge_predictions` method is used to merge the patch predictions. Here we set the parameters `resolution=1.25, units='power'` to generate the prediction map at 1.25x magnification. If you would like to have higher/lower resolution (bigger/smaller) prediction maps, you need to change these parameters accordingly. When the predictions are merged, use the `overlay_patch_prediction` function to overlay the prediction map on the WSI thumbnail, which should be extracted at the same resolution used for prediction merging. Below you can see the result:\n",
```

examples/06-semantic-segmentation.ipynb

Lines changed: 1 addition & 1 deletion

```diff
@@ -310,7 +310,7 @@
 "\n",
 "### Inference on tiles\n",
 "\n",
-"Much similar to the patch classifier functionality of the tiatoolbox, the semantic segmentation module works both on image tiles and structured WSIs. First, we need to create an instance of the `SemanticSegmentor` class which controls the whole process of semantic segmentation task and then use it to do prediction on the input image(s):\n",
+"Much similar to the patch classifier functionality of the tiatoolbox, the semantic segmentation module works both on image tiles and structured WSIs. First, we need to create an instance of the `SemanticSegmentor` class which controls the whole process of semantic segmentation task and then use it to do prediction on the input image(s):\n",
 "\n"
 ]
 },
```
