Isolation Forests in H2O.ai

A new feature has been added to H2O-3 open source: isolation forests. I’ve always been a fan of understanding outliers and love using One-Class SVMs as a method, but isolation forests appear to be better at finding outliers in most cases.
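If you want to try it, here is a minimal sketch of training an isolation forest in H2O-3 from Python. The dataset path and hyperparameter values are placeholders of my own, not something from the H2O documentation.

```python
# Minimal sketch: isolation forest in H2O-3 (file path and settings are placeholders)
import h2o
from h2o.estimators.isolation_forest import H2OIsolationForestEstimator

h2o.init()
df = h2o.import_file("my_data.csv")  # hypothetical dataset of numeric features

# Isolation forests are unsupervised, so no response column is specified.
iso = H2OIsolationForestEstimator(ntrees=100, sample_size=256, seed=1234)
iso.train(training_frame=df)

# Each row gets an anomaly score and a mean path length; rows that are easier
# to isolate (shorter average paths) are more likely to be outliers.
scores = iso.predict(df)
print(scores.head())
```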

What is Reusable Holdout?

Overfitting and introducing bias during model training is always a big topic in data science. Typically you train a model with Cross Validation by building it on k-1 folds and testing it on the remaining fold. That remaining fold is the holdout set, and this works very well if, and only if, the trained model is independent of the holdout set. That assumption holds at first, but you can begin to leak information into the model as the test fold rotates: the observations in the first test fold will be different from those in the second, BUT some observations from the first test fold will now sit in the training set. This creates an opportunity to ‘leak’ information into the model.
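To make that fold rotation concrete, here is a small illustration (my own, not from the original post) of how the same observations move between the test fold and the training folds across k-fold splits:

```python
# Ten toy observations split into 5 folds: an observation that serves as test
# data in one split becomes training data in the others.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(-1, 1)
kf = KFold(n_splits=5, shuffle=False)

for i, (train_idx, test_idx) in enumerate(kf.split(X)):
    print(f"fold {i}: train={train_idx.tolist()} test={test_idx.tolist()}")
```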

Ideally, the holdout score gives an accurate estimate of the true performance of the model on the underlying distribution from which the data were drawn. However, this is only the case when the model is independent of the holdout data! In contrast, in a competition the model generally incorporates previously observed feedback from the holdout set. Competitors work adaptively and iteratively with the feedback they receive. An improved score for one submission might convince the team to tweak their current approach, while a lower score might cause them to try out a different strategy. But the moment a team modifies their model based on a previously observed holdout score, they create a dependency between the model and the holdout data that invalidates the assumption of the classic holdout method. As a result, competitors may begin to overfit to the holdout data that supports the leaderboard. This means that their score on the public leaderboard continues to improve, while the true performance of the model does not. In fact, unreliable leaderboards are a widely observed phenomenon in machine learning competitions. (via Moritz Hardt)

What is Reusable Holdout?

Reusable Holdout is a tweak to Cross Validation: you keep using the same holdout test set over and over again during training. You select this holdout set before Cross Validation and then use it for every iteration of model building you do, while a mediating algorithm controls the feedback you get from it so that information does not leak back into your model building.

Rather than limiting the analyst, our approach provides means of reliably verifying the results of an arbitrary adaptive data analysis. The key tool for doing so is what we call the reusable holdout method. As with the classic holdout method discussed above, the analyst is given unfettered access to the training data. What changes is that there is a new algorithm in charge of evaluating statistics on the holdout set. This algorithm ensures that the holdout set maintains the essential guarantees of fresh data over the course of many estimation steps.
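The algorithm the authors propose is called Thresholdout. Here is a minimal sketch of the idea, with simplified parameter names and noise scales of my own choosing rather than the paper’s exact specification:

```python
# Sketch of the Thresholdout idea: only reveal the (noisy) holdout estimate when
# it disagrees with the training estimate by more than a noisy threshold.
import numpy as np

def thresholdout(train_values, holdout_values, threshold=0.04, sigma=0.01, rng=None):
    """Estimate the mean of a statistic (e.g. per-row 0/1 accuracy of a model).

    If the training and holdout means agree within a noisy threshold, the
    training mean is returned and nothing about the holdout set is revealed.
    Otherwise, the holdout mean is returned with Laplace noise added, limiting
    how much information each query leaks.
    """
    rng = rng or np.random.default_rng()
    train_mean = float(np.mean(train_values))
    holdout_mean = float(np.mean(holdout_values))
    noisy_threshold = threshold + rng.laplace(scale=2 * sigma)
    if abs(train_mean - holdout_mean) > noisy_threshold:
        return holdout_mean + rng.laplace(scale=sigma)
    return train_mean
```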


If you want to dig deeper into this topic, check out this research paper here.

Matrix Factorization for Missing Value Imputation

I stumbled across an interesting Reddit post about using matrix factorization (MF) for imputing missing values.

The original poster was trying to solve a complex time series that had missing values. The solution was to use matrix factorization to impute those missing values.

Since I had never heard of that application before, I got curious and searched the web for more information. I came across this post, which uses matrix factorization and Python to impute missing values.

In a nutshell:

Recommendations can be generated by a wide range of algorithms. While user-based or item-based collaborative filtering methods are simple and intuitive, matrix factorization techniques are usually more effective because they allow us to discover the latent features underlying the interactions between users and items. Of course, matrix factorization is simply a mathematical tool for playing around with matrices, and is therefore applicable in many scenarios where one would like to find out something hidden under the data.

The author uses a movie rating example, where you have users and their ratings for different movies. Of course, a table like this will have many missing ratings. When you look at the table, it looks just like a matrix that’s waiting to be factorized!

In a recommendation system such as Netflix or MovieLens, there is a group of users and a set of items (movies for the above two systems). Given that each users have rated some items in the system, we would like to predict how the users would rate the items that they have not yet rated, such that we can make recommendations to the users. In this case, all the information we have about the existing ratings can be represented in a matrix. Assume now we have 5 users and 10 items, and ratings are integers ranging from 1 to 5, the matrix may look something like this (a hyphen means that the user has not yet rated the movie):

[Figure: Matrix Factorization of Movie Ratings]

After applying MF, you get these imputed results:

[Figure: Matrix Factorization of Movie Ratings Results]

Of course, I skipped over the discussion of regularization and the Python code, but you can read about that here.
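For flavor, here is a compact sketch of the idea in the spirit of that post: plain stochastic gradient descent on the observed entries only, with a made-up rating matrix and no regularization.

```python
# Sketch: matrix factorization via SGD on observed ratings (0 marks "missing").
import numpy as np

R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                 # number of latent features
rng = np.random.default_rng(0)
P = rng.random((n_users, k))          # user factors
Q = rng.random((n_items, k))          # item factors
alpha = 0.01                          # learning rate

for _ in range(5000):
    for u in range(n_users):
        for i in range(n_items):
            if R[u, i] > 0:           # update only where a rating exists
                err = R[u, i] - P[u] @ Q[i]
                pu, qi = P[u].copy(), Q[i].copy()
                P[u] += alpha * err * qi
                Q[i] += alpha * err * pu

# The full reconstruction now supplies estimates for the missing cells.
print(np.round(P @ Q.T, 2))
```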

Going back to the original Reddit post, I was intrigued that this imputation method is available in H2O.ai’s open source offering. It’s called ‘Generalized Low Rank Models’ and not only helps with dimensionality reduction but also imputes missing values. I need to look into it more, because I know there’s a better way than just replacing missing values with the average.
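As a starting point, here is a hedged sketch of what GLRM-based imputation might look like in H2O’s Python API; the file, rank, and loss settings are illustrative guesses, and the reconstruction step may differ slightly between H2O versions.

```python
# Sketch: imputing missing values with H2O's Generalized Low Rank Models (GLRM).
import h2o
from h2o.estimators.glrm import H2OGeneralizedLowRankEstimator

h2o.init()
df = h2o.import_file("data_with_missing_values.csv")  # hypothetical dataset

glrm = H2OGeneralizedLowRankEstimator(k=3, loss="Quadratic", max_iterations=100)
glrm.train(training_frame=df)

# predict() returns the low-rank reconstruction of the frame, which provides
# values for the cells that were originally missing.
reconstructed = glrm.predict(df)
print(reconstructed.head())
```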