Parameter Optimization in Rapidminer 5.0
In several of my video tutorials I assign different parameters to my learning model “on the fly.” Of course, the question any astute reader/viewer should ask is, “why did you choose those parameters instead of another combination?”
That’s a great question, and the answer is, “well, I just chose those parameters to illustrate my point for the video.” While this answer is not at all satisfying to the astute reader/viewer, it does lead us to the most important question of all: “what are the right parameters to choose?”
This question can be answered very well by using Rapidminer’s Parameter Optimization operator in your initial data discovery phase. This operator allows you to choose some or all of the parameters in your experiment and iterate over different values for them to meet some specific requirement on your part (e.g. performance).
For example, if you were using the Neural Net operator and didn’t know what to set your learning rate and momentum parameters to in order to get the best classification accuracy, you would use the Parameter Optimization operator to iterate over different combinations of those parameters and find the combination that yields the best accuracy.
Once the Parameter Optimization operator determines those values, you can input them into your experiment and truly optimize your model for performance! See below for an actual output from a parameter optimization model I’m working on. You can see that Rapidminer indicated that a momentum of 0.3 and a learning rate of 0.6 were the best parameter settings to maximize the accuracy rate and minimize the classification error.
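Under the hood, the basic version of this operator is just an exhaustive grid search: try every combination of candidate values, score each one, and keep the best. Here is a minimal Python sketch of that idea. The `toy_score` function is purely hypothetical (it stands in for training and evaluating a neural net), and the candidate value lists are assumptions for illustration; a real run would train a model at each grid point.

```python
from itertools import product

def grid_search(train_and_score, param_grid):
    """Exhaustively try every parameter combination and return the best.

    train_and_score: callable(dict) -> accuracy. A hypothetical stand-in
        for training a model (e.g. a neural net) with those parameters
        and measuring its performance.
    param_grid: dict mapping parameter name -> list of candidate values.
    """
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    # itertools.product enumerates the full Cartesian grid of values.
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical scoring function: accuracy peaks at momentum=0.3,
# learning_rate=0.6 (mimicking the result described above).
def toy_score(p):
    return 1.0 - abs(p["momentum"] - 0.3) - abs(p["learning_rate"] - 0.6)

grid = {
    "momentum": [0.1, 0.2, 0.3, 0.4],
    "learning_rate": [0.2, 0.4, 0.6, 0.8],
}
best, score = grid_search(toy_score, grid)
print(best)  # the grid point with the highest toy accuracy
```

Note how quickly the grid grows: 4 momentum values times 4 learning rates is already 16 full training runs, which is exactly why the resource warning below matters.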
While this operator is a fantastic feature (they have evolutionary optimizers too!) for us data modelers, it’s a massive drain on computing resources. I would advise anyone using this operator to have a very powerful server or computer, with oodles of memory, to run your iterations.