H2O.ai

This week (10/3/18) I traveled to Mountain View, the ground zero of this awesome AI startup. I’m in sales engineering training and learning all the ins and outs of Driverless AI. Even better, I get a crash course in H2O-3, the open source platform and libraries.

In my first two weeks I’ve done the following:

  1. Spun up Driverless AI in Docker and learned some Docker commands
  2. Kicked off Driverless AI jobs via the Python client
  3. Installed and started H2O’s Sparkling Water and a one-node Spark cluster
  4. Scored a Driverless AI MOJO file on Spark via Sparkling Water
  5. Spun up Driverless AI on AWS multiple times
  6. Made a presentation and demo in NYC
  7. Did two demos and troubleshot some AWS issues
  8. Started meeting the development team in Mountain View
  9. Met new sales and marketing colleagues

It feels like I’m drinking from a firehose, but that’s OK; it’s the journey that motivates me.

Always be learning. Always.

NYC AI Event 2018

Not even two weeks in at H2O.ai and I’m already giving presentations. Boy, did I miss this!

I was invited to give a short talk and demo of Driverless AI at the NYC AI gathering in midtown. It was hosted by one of IBM’s partners (Lighthouse), and they invited H2O.ai, IBM, and NVIDIA to talk about where ‘AI’ is headed.

The speakers were fantastic and I learned a lot. I didn’t know that IBM and NVIDIA have delivered (installed?) the world’s fastest supercomputer for the US government. It had something like 500 GPUs (don’t quote me on it, but it was a lot) and around 10,000 cores on POWER9 boxes. Really impressive.

All in all, everything went off without a problem and I got to meet a lot of great people.

Matrix Factorization for Missing Value Imputation

I stumbled across an interesting Reddit post about using matrix factorization (MF) for imputing missing values.

The original poster was trying to solve a complex time series problem that had missing values. The suggested solution was to use matrix factorization to impute those missing values.

Since I had never heard of that application before, I got curious and searched the web for information. I came across this post on using matrix factorization and Python to impute missing values.

In a nutshell:

Recommendations can be generated by a wide range of algorithms. While user-based or item-based collaborative filtering methods are simple and intuitive, matrix factorization techniques are usually more effective because they allow us to discover the latent features underlying the interactions between users and items. Of course, matrix factorization is simply a mathematical tool for playing around with matrices, and is therefore applicable in many scenarios where one would like to find out something hidden under the data.

The author uses a movie rating example, where you have users and their different ratings for movies. Of course, a table like this will have many missing ratings. When you look at the table, it looks just like a matrix that’s waiting to be completed!

In a recommendation system such as Netflix or MovieLens, there is a group of users and a set of items (movies for the above two systems). Given that each users have rated some items in the system, we would like to predict how the users would rate the items that they have not yet rated, such that we can make recommendations to the users. In this case, all the information we have about the existing ratings can be represented in a matrix. Assume now we have 5 users and 10 items, and ratings are integers ranging from 1 to 5, the matrix may look something like this (a hyphen means that the user has not yet rated the movie):

Matrix Factorization of Movie Ratings

After applying MF, you get these imputed results:

Matrix Factorization of Movie Ratings Results

Of course, I skipped over the discussion of regularization and the Python code, but you can read about that here.
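To make the idea concrete, here is a minimal sketch of MF-based imputation in the same spirit as that post: factor the ratings matrix into two low-rank matrices with gradient descent over the observed entries only, then read the missing cells off the reconstruction. The toy ratings and all parameter values below are made up for illustration, not taken from the original article.

```python
import numpy as np

def mf_impute(R, k=2, steps=5000, lr=0.002, reg=0.02, seed=0):
    """Factor R ~= P @ Q.T using SGD on the observed entries only,
    then fill the missing (NaN) cells from the reconstruction."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.random((n_users, k))   # latent user factors
    Q = rng.random((n_items, k))   # latent item factors
    observed = ~np.isnan(R)
    for _ in range(steps):
        for u, i in zip(*np.where(observed)):
            err = R[u, i] - P[u] @ Q[i]
            # regularized gradient steps on both factor rows
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    R_hat = P @ Q.T
    # keep the known ratings, impute only the missing cells
    return np.where(observed, R, R_hat)

# Hypothetical 5-user x 4-movie ratings; NaN means "not yet rated"
R = np.array([
    [5, 3, np.nan, 1],
    [4, np.nan, np.nan, 1],
    [1, 1, np.nan, 5],
    [1, np.nan, np.nan, 4],
    [np.nan, 1, 5, 4],
], dtype=float)

completed = mf_impute(R)
print(np.round(completed, 2))  # no NaNs remain; known ratings unchanged
```

The key point is that the loss is computed only over cells a user actually rated, so the latent factors generalize to the blank cells instead of memorizing placeholder values.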

Going back to the original Reddit post, I was intrigued to learn that this imputation method is available in H2O.ai’s open source offering. It’s called ‘Generalized Low Rank Models’ (GLRM), and it not only helps with dimensionality reduction BUT it also imputes missing values. I must check it out more, because I know there’s a better way than just replacing missing values with the average.