Tag Machine Learning

Posts: 17

Introduction to Driverless AI from H2O.ai

I finally posted a new video on my YouTube channel after a year of no activity. It felt good, and it's part of the 'content refresh' project I'm working on. In this video I give an introduction to Driverless AI and its EDA capabilities. The forthcoming videos will cover training, testing, diagnostics, machine learning interpretability, and much more. Please drop a comment or question on the channel if you have any.


Building a Machine Learning Framework from scratch

Great article by Florian Cäsar on how his team developed a new machine learning framework. From scratch. In 491 steps!

He sums up the entire process in this great quote:

| *From images, text files, or your cat videos, bits are fed to the
  data pipeline that transforms them into usable data chunks and in
  turn to data sets,*
| *which are then fed in small pieces to a trainer that manages all
  the training and passes it right on to the underlying neural
  network,*
| *which consists of many underlying neural network layers connected
  through an arbitrarily linear or funky architecture,*
| *which consist of many underlying neurons that form the smallest
  computational unit and are nudged in the right direction according
  to the trainer's optimiser,
| *which takes the network and the transient training data in the
  shape of layer buffers, marks the parameters it can improve, runs
  every layer, and calculates a 'how well did we do' score based on
  the calculated and correct answers from the supplied small pieces
  of the given dataset according to the optimisers settings*
| *which computes the gradient of every parameter with respect to
  the score and then nudges the individual neurons correspondingly,*
| *which then is run again and again until the optimiser reports
  results that are good enough as set in a rich criteria and hook
  system,*
| *which is based on global and local nested
  parameter-identifier-registries that contain the shared parameters
  and distribute them safely to all workers*
| *which are the actual workhorses of the training process that do
  as their operator says using individual and separate mathematical
  backends*
| *which use the layer-defined placeholder computation graphs and
  put in the raw data and then execute it on their computational
  backend,*
| *which are all also managed by the operator that distributes the
  workers' work as needed and configured and also functions as a
  coordinator to the owning trainer,*
| *which connects the network, the optimiser, the operator, the
  initialisers*
| *which tell the trainer with which distribution to initialise what
  parameters, which work similar to hooks that act as a bridge
  between them all and communicate with external things using the
  Sigma environment,*
| *which is the container and laid-back manager to everything that
  also supplies and runs these external things called monitors*
| *which can be truly anything that makes use of the training data
  and*
| *which finally display the learned funny cat image*
| *… from the hooks from the workers from their operator from
  its assigned network from its dozens of layers from its millions
  of individual neurons derived from some data records from data
  chunks from data sets from data extractors.*

In other words, they created a new piece of software called Sigma.Core.
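That run-on quote is basically a training loop. Here's a minimal sketch of that loop in plain NumPy; to be clear, this is my own toy code, not Sigma.Core's API: data chunks feed a trainer, the trainer runs the network's layers, the optimiser computes gradients and nudges the parameters, and a stop-criterion 'hook' ends training once the 'how well did we do' score is good enough.

```python
import numpy as np

# Toy sketch of the loop the quote describes (my own NumPy code, not Sigma.Core):
# data chunks -> trainer -> network of layers -> optimiser -> stop-criterion hook.
rng = np.random.default_rng(0)

# "Data pipeline": raw records turned into a data set the trainer can chunk up.
X = rng.normal(size=(256, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # label: same-sign inputs

# "Network": one hidden tanh layer and a sigmoid output layer.
W1, b1 = rng.normal(scale=0.5, size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p

# "Trainer": feed small pieces of the data set to the network, let the
# "optimiser" (plain SGD here) nudge the parameters, and let a "hook"
# stop training once the score is good enough.
lr, batch = 0.5, 32
for epoch in range(1000):
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        rows = order[start:start + batch]
        xb, yb = X[rows], y[rows]
        h, p = forward(xb)
        dlogits = (p - yb) / len(xb)            # gradient of cross-entropy loss
        dW2, db2 = h.T @ dlogits, dlogits.sum(0)
        dh = dlogits @ W2.T * (1 - h ** 2)
        dW1, db1 = xb.T @ dh, dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1          # "nudge the neurons"
        W2 -= lr * dW2; b2 -= lr * db2
    _, p_all = forward(X)
    accuracy = ((p_all > 0.5) == y).mean()
    if accuracy > 0.95:                         # the stop-criterion "hook"
        print(f"good enough at epoch {epoch}: accuracy {accuracy:.2f}")
        break
```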

Sigma.Core

Sigma.Core appears to be Windows-based machine learning software built around deep learning. Its feature list is small but impressive:

  • Supports different deep learning layers (e.g. dropout, recurrent, etc.)
  • Uses both linear and nonlinear networks
  • Four (4) different optimizers
  • Has hooks for storing and restoring checkpoints, plus CPU and runtime metrics
  • Runs on single and multiple CPUs, and on CUDA GPUs
  • Native Windows GUI
  • Functional automatic differentiation (see the short sketch below)
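That last bullet is the least self-explanatory, so here's a toy illustration. Sigma.Core is its own Windows project and this isn't its API; it's just a short Python sketch of what functional automatic differentiation means: every value carries its derivative along with it (forward-mode, via dual numbers).

```python
import math
from dataclasses import dataclass

# Toy forward-mode automatic differentiation with dual numbers (illustration
# only, not Sigma.Core code): each value carries its derivative with it.
@dataclass
class Dual:
    val: float   # f(x)
    dot: float   # f'(x), propagated alongside the value

    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

def tanh(d: Dual) -> Dual:
    t = math.tanh(d.val)
    return Dual(t, (1.0 - t * t) * d.dot)

def derivative(f, x):
    # Seed the input with derivative 1 and read the derivative off the output.
    return f(Dual(x, 1.0)).dot

# d/dx tanh(x * x) at x = 0.5 is 2x * (1 - tanh(x^2)^2), roughly 0.94
print(derivative(lambda x: tanh(x * x), 0.5))
```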

How long did it take?

According to Florian, it took about 700 hours of intro/research, 2,000 hours of development, and 2 souls sold to the devil. That's roughly 2,700 hours, or about 67 weeks at a standard 40-hour work week. Well over a full year of work for one person!


How StockTwits Uses Machine Learning

Fascinating behind-the-scenes interview with StockTwits' Senior Data Scientist Garrett Hoffman. He shares great tidbits on how StockTwits uses machine learning for sentiment analysis. I've summarized the highlights below:

  • Idea generation is a huge barrier for active trading
  • Next gen of traders uses social media to make decisions
  • Garrett solves data problems and builds features for the StockTwits platform
  • This includes: production data science, product analytics, and insights research
  • Understanding social dynamics makes for a better user experience
  • Focus is to understand social dynamics of StockTwits (ST) community
  • Focuses on what's happening inside the ST community
  • ST's market sentiment model helps users with decision making
  • Users 'tag' content for bullish or bearish classes
  • Only 20 to 30% of content is tagged
  • Using ST's market sentiment model increases coverage to 100% (a minimal sketch of this idea follows the list)
  • For Data Science work, Python Stack is used
  • Uses NumPy, SciPy, Pandas, and scikit-learn
  • Jupyter Notebooks for research and prototyping
  • Flask for API deployment
  • For deep learning, uses TensorFlow on AWS EC2 instances
  • Can spin up GPUs as needed
  • Deep Learning methods used are Recurrent Neural Nets, Word2Vec, and Autoencoders
  • Stays abreast of new machine learning techniques from blogs, conferences and Twitter
  • Follows Twitter accounts from Google, Spotify, Apple, and small tech companies
  • One area ST wants to improve on is DevOps around Data Science
  • Bridge the gap between research/prototype phase and embedding it into tech stack for deployment
  • Misconception that complex solutions are best
  • Complexity is ONLY ok if it leads to deeper insight
  • Simple solutions are best
  • Future long-term ideas: use AI around natural language
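
To make the coverage idea concrete, here's a minimal sketch, emphatically not StockTwits' production model: train a simple scikit-learn classifier on the minority of messages users tagged bullish or bearish, then use it to label the untagged majority. The messages and tags below are made up for illustration.

```python
# Minimal sketch (not StockTwits' production model): learn sentiment from the
# ~20-30% of user-tagged messages, then label the untagged majority.
# All messages and tags below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tagged_messages = [
    "loading up on $AAPL, breakout incoming",
    "this rally has legs, staying long",
    "$TSLA looks exhausted here, taking profits",
    "earnings miss, getting out before the drop",
]
tags = ["bullish", "bullish", "bearish", "bearish"]   # user-supplied labels

untagged_messages = [
    "chart setting up nicely, adding to my position",
    "no way this holds, shorting the bounce",
]

# TF-IDF bag-of-words feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tagged_messages, tags)

# Extend sentiment coverage from the tagged minority to everything else.
for message, label in zip(untagged_messages, model.predict(untagged_messages)):
    print(f"{label:8s} | {message}")
```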


The Rise of Machine Learning Trading

This is huge if you ask me. First it was robo-advisers, and now the machines are going to take over trading.

The machine learning agent found and exploited arbitrage opportunities in the presence of transaction costs in a simulated market proof of concept. via Newsweek

What I gleaned from this article is that the strategies are found using some sort of deep learning application. Namely, reinforcement learning.

Ritter said: "You can't be myopic and win these games. So reinforcement learning is a general framework for solving those types of problems where you have a delayed reward or you are trying to maximise a cumulative reward over time."
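
To see what "you can't be myopic" means, here's a toy calculation that has nothing to do with the paper's actual setup and uses made-up numbers: entering a trade costs a transaction fee right now, but the position is expected to earn a small edge in each later period. An agent that only looks at the immediate reward never trades; one that maximises cumulative reward over the horizon does.

```python
# Toy illustration of delayed reward (made-up numbers, not the paper's method).
TRANSACTION_COST = 0.10   # paid immediately when entering the trade
PER_PERIOD_EDGE = 0.05    # expected profit per period while holding
HORIZON = 5               # periods the position can be held

# Myopic agent: compares only immediate rewards (-0.10 to enter vs. 0.00 to
# stay flat), so it never enters and earns nothing.
myopic_total = 0.0

# Cumulative-reward agent: compares total reward over the whole horizon.
enter_total = -TRANSACTION_COST + PER_PERIOD_EDGE * HORIZON   # -0.10 + 0.25
farsighted_total = max(enter_total, 0.0)

print(f"myopic agent:     {myopic_total:+.2f}")
print(f"farsighted agent: {farsighted_total:+.2f}")   # +0.15
```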

I can hear the screams already: the machines are coming for our traders! First it was HFT and now this! Here's a link to the actual paper.


AlphaGo Zero learns on its own

The news dropped that Google's new implementation of AlphaGo, called AlphaGo Zero, was able to learn completely on its own. No training set was used first; rather, it generated its own training data by playing against itself, eventually surpassing the older AlphaGo.

Earlier versions of AlphaGo were taught to play the game using two methods. In the first, called supervised learning, researchers fed the program 100,000 top amateur Go games and taught it to imitate what it saw. In the second, called reinforcement learning, they had the program play itself and learn from the results.

AlphaGo Zero skipped the first step. The program began as a blank slate, knowing only the rules of Go, and played games against itself. At first, it placed stones randomly on the board. Over time it got better at evaluating board positions and identifying advantageous moves. It also learned many of the canonical elements of Go strategy and discovered new strategies all its own. via Quantamagazine
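
AlphaGo Zero's real recipe (deep networks guiding Monte Carlo tree search) is far beyond a blog snippet, but the "blank slate, learn only by playing yourself" idea can be shown on a toy game. Here's a sketch, my own code and nothing to do with DeepMind's, using tabular Q-learning on Nim: two players alternately take 1-3 stones, whoever takes the last stone wins, and the agent starts knowing nothing but the rules.

```python
import random
from collections import defaultdict

# Toy self-play learning (not AlphaGo Zero's algorithm): tabular Q-learning
# on Nim. The agent starts with zero knowledge and improves only by playing
# games against itself.
ALPHA, GAMMA, EPSILON = 0.5, 1.0, 0.1
Q = defaultdict(float)          # Q[(stones_left, take)] from the mover's view

def choose(stones, explore=True):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

def self_play_episode(start=10):
    stones, prev = start, None          # prev = (state, action) of last mover
    while stones > 0:
        action = choose(stones)
        if prev is not None:
            # Zero-sum update: whatever is good for the player to move now
            # is bad for the player who moved last.
            best_next = max(Q[(stones, a)] for a in (1, 2, 3) if a <= stones)
            s, a = prev
            Q[(s, a)] += ALPHA * (-GAMMA * best_next - Q[(s, a)])
        prev = (stones, action)
        stones -= action
    s, a = prev                              # the last mover took the final stone
    Q[(s, a)] += ALPHA * (1.0 - Q[(s, a)])   # and wins: reward +1

for _ in range(20_000):
    self_play_episode()

# Learned policy: leave the opponent on a multiple of 4 whenever possible.
for stones in range(1, 11):
    print(stones, "-> take", choose(stones, explore=False))
```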

Imagine if you took this deep learning technology and ran it on the quantum computer Google is developing. Amazing times we are living in.


Neural Market Trends is the online home of Thomas Ott.