---
output: github_document
---

# Nested Cross-Validation: Comparing Methods and Implementations
### (In-progress)
Nested cross-validation has become a recommended technique for situations in which the size of our dataset is insufficient to simultaneously handle hyperparameter tuning and algorithm comparison. Examples of such situations include: proof of concept, start-ups, medical studies, time series, etc. Using standard methods such as k-fold cross-validation in these cases may result in significant increases in optimization bias. Nested cross-validation has been shown to produce low-bias, out-of-sample error estimates even with datasets that have only hundreds of rows, and therefore gives a better judgement of generalization performance.

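For anyone unfamiliar with the structure, here is a minimal sketch of a nested resampling setup using {rsample} (a toy example of my own, not code from either implementation; the fold counts are arbitrary):

```r
# Nested resampling: each outer fold gets its own set of inner folds.
# The inner folds tune hyperparameters; the outer folds estimate
# out-of-sample error for the tuned model.
library(rsample)

resamples <- nested_cv(
  mtcars,
  outside = vfold_cv(v = 5),  # outer loop
  inside  = vfold_cv(v = 5)   # inner loop
)
resamples
```
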
The primary issue with this technique is that it is computationally very expensive, with potentially tens of thousands of models being trained during the process. While researching this technique, I found two slightly different methods of performing nested cross-validation — one authored by [Sebastian Raschka](https://github.com/rasbt/stat479-machine-learning-fs19/blob/master/11_eval4-algo/code/11-eval4-algo__nested-cv_verbose1.ipynb) and the other by [Max Kuhn and Kjell Johnson](https://tidymodels.github.io/rsample/articles/Applications/Nested_Resampling.html).

I'll be examining two aspects of nested cross-validation:
1. Duration: Which packages and functions give us the fastest implementation of each method?

2. Performance: First, develop a testing framework. Then, using a generated dataset, determine how many repeats, given the number of samples, we should expect to need in order to obtain a reasonably accurate out-of-sample error estimate.

With regard to the question of speed, I'll be testing implementations of both methods from various packages, which include {tune}, {mlr3}, {h2o}, and {sklearn}.

## Duration Experiment
Experiment details:
* Random Forest and Elastic Net Regression algorithms
* Both with 100x2 hyperparameter grids
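
To make the grid sizes concrete, here's a sketch of how 100x2 grids (100 candidates over 2 hyperparameters per algorithm) might be built with {dials}; the parameter choices and ranges are my assumptions, not necessarily the experiment's settings:

```r
# 100-row grids over 2 hyperparameters per algorithm (assumed parameters/ranges)
library(dials)

# Elastic net: penalty (lambda) and mixture (alpha)
glmnet_grid <- grid_regular(penalty(), mixture(), levels = c(10, 10))

# Random forest: predictors sampled per split and minimum node size
rf_grid <- grid_regular(mtry(range = c(2, 12)), min_n(), levels = c(10, 10))

nrow(glmnet_grid)  # 100
```
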
Various elements of the technique can be altered to improve performance. These include:

3. Inner-Loop CV strategy
4. Grid search strategy

These elements also affect the run times. Both methods will be using the same size grids, but Kuhn-Johnson uses repeats and more folds in the outer and inner loops, while Raschka's trains an extra model over the entire training set at the end. Using Kuhn-Johnson, 50,000 models will be trained for each algorithm; using Raschka's, 1,001 models.
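
The arithmetic behind those totals is sketched below; the fold and repeat settings are my assumptions, chosen so that the counts reproduce the figures quoted above:

```r
# Back-of-the-envelope model counts (assumed fold/repeat settings)
grid_size <- 100

# Kuhn-Johnson: 10 outer folds x 5 repeats, each outer fold with 10 inner folds
kj_models <- grid_size * 10 * 10 * 5
kj_models       # 50000

# Raschka: 5 outer folds x 2 inner folds, plus 1 final model refit
# on the entire training set
raschka_models <- grid_size * 2 * 5 + 1
raschka_models  # 1001
```
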
MLflow was used to keep track of the duration (seconds) of each run along with the implementation and method used. I've used "implementation" to describe the various changes in coding structure that accompany using each package's functions. A couple of examples: when using {reticulate}, the Python for-loop is replaced with a while-loop and the `iter_next` function, and {mlr3} relies entirely on R's R6 object-oriented programming system.
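
Here's a minimal sketch of that kind of tracking with the {mlflow} package (assumed usage with placeholder labels, not the experiment's actual logging code):

```r
# Log the method, implementation, and run duration (seconds) to MLflow
library(mlflow)

with(mlflow_start_run(), {
  mlflow_log_param("method", "kuhn-johnson")   # placeholder labels
  mlflow_log_param("implementation", "mlr3")

  start <- Sys.time()
  # ... run the nested cross-validation here ...
  duration <- as.numeric(difftime(Sys.time(), start, units = "secs"))

  mlflow_log_metric("duration", duration)
})
```
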
```{r}
durations
```
## Performance Experiment
Experiment details:
* The fastest implementation of each method will be used to run a nested cross-validation with different sizes of data, ranging from 100 to 5000 observations, and different numbers of repeats of the outer-loop CV strategy.

* The chosen algorithm and hyperparameters will be used to predict on a 100K row simulated dataset, and the mean absolute error will be calculated for each combination of repeat, data size, and method.
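
A sketch of that scoring step (the object names and prediction call are placeholders, not the repo's code):

```r
# Score the chosen model on the large simulated holdout with mean absolute error.
# `final_model` is the model selected by nested cross-validation;
# `sim_100k` is the 100K-row simulated dataset with outcome column `y`.
preds <- predict(final_model, newdata = sim_100k)
mae   <- mean(abs(sim_100k$y - preds))
mae
```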
0 commit comments