README.Rmd: 29 additions & 5 deletions
@@ -110,9 +110,10 @@ Experiment details:

* The fastest implementation of each method will be used to run a nested cross-validation with data sizes ranging from 100 to 5000 observations and different numbers of repeats of the outer-loop CV strategy (see the resampling sketch below).
* The {mlr3} implementation was the fastest for Raschka's method, but the Ranger-Kuhn-Johnson implementation was close. To simplify, I'll be using Ranger-Kuhn-Johnson for both methods.
-* The chosen algorithm and hyperparameters will used to predict on a 100K row simulated dataset and the mean absolute error will be calculated for each combination of repeat, data size, and method.
-* Runtimes began to explode after n = 800 for my 8 vcpu, 16 GB RAM desktop, therefore I ran this experiment using AWS instances: a r5.2xlarge for the Elastic Net and a r5.24xlarge for Random Forest.
-* I'll be transitioning from imperative scripts to a functional approach, because I'm iterating through different numbers of repeats and sample sizes. Given the long runtimes and impermanent nature of my internet connection, it would also be nice to cache each iteration as it finishes. The [{drake}](https://github.com/ropensci/drake) package is superb on both counts, so I'm using it to orchestrate.
+* The chosen algorithm and hyperparameters will be used to predict on a 100K row simulated dataset.
+* The percent error between the average mean absolute error (MAE) across the outer-loop folds and the MAE of the predictions on this 100K dataset will be calculated for each combination of repeat, data size, and method (see the percent-error sketch below).
+* To make this experiment manageable in terms of runtimes, I'm using AWS instances: an r5.2xlarge for the Elastic Net and an r5.24xlarge for Random Forest.
+* Iterating through different numbers of repeats, sample sizes, and methods makes a functional approach more appropriate than running imperative scripts. Given the long runtimes and the impermanent nature of my internet connection, it's also nice to cache each iteration as it finishes. The [{drake}](https://github.com/ropensci/drake) package is superb on both counts, so I'm using it to orchestrate (see the plan sketch below).
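
A minimal sketch of the nested resampling scaffolding referenced above, assuming the {rsample} package and a Kuhn-Johnson-style setup (repeated v-fold CV outside, bootstraps inside). `sim_data()` and all fold/repeat/size values are illustrative assumptions, not the original code:

```r
library(rsample)

# Hypothetical stand-in for the simulated-data generator
sim_data <- function(n) {
  x <- rnorm(n)
  data.frame(x = x, y = 2 * x + rnorm(n))
}

# Kuhn-Johnson-style nesting: repeated v-fold CV on the outside for
# performance estimation, bootstraps on the inside for tuning.
# v, repeats, times, and n = 800 are illustrative values only.
resamples <- nested_cv(
  sim_data(800),
  outside = vfold_cv(v = 10, repeats = 2),
  inside  = bootstraps(times = 25)
)
```

Each row of `resamples` pairs one outer split with its own set of inner bootstrap splits, which is what keeps the outer-loop MAE untouched by tuning.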
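For the percent-error comparison, here is a minimal sketch; the exact formula (absolute difference relative to the hold-out MAE) is my assumption about the comparison, and both inputs are hypothetical:

```r
# outer_fold_mae: vector of MAEs from the outer-loop folds
# mae_100k: MAE of the chosen model's predictions on the 100K-row dataset
percent_error <- function(outer_fold_mae, mae_100k) {
  100 * abs(mean(outer_fold_mae) - mae_100k) / mae_100k
}

percent_error(outer_fold_mae = c(2.10, 1.95, 2.25), mae_100k = 2.05)
#> [1] 2.439024
```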
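And a minimal sketch of the {drake} plan; `run_ncv()` is a hypothetical wrapper around one nested-CV run, and the grids of repeats and sample sizes are illustrative:

```r
library(drake)

# cross() expands every repeats/n combination into its own target,
# so each combination becomes a separately cached unit of work.
plan <- drake_plan(
  result = target(
    run_ncv(repeats, n),
    transform = cross(
      repeats = c(1, 3, 5),
      n       = c(100, 800, 2000, 5000)
    )
  )
)

# Each target is cached in .drake/ as it finishes, so a dropped connection
# only costs the in-flight target; rerunning make(plan) resumes from there.
make(plan)
```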