Market Recap

Last week I posted my Death Cross & a Test article, and a few weeks before that, my article on How Passive Investing Saved my Life. It turns out some of my readers found them useful, so I decided to post a market recap for last week. I’m not sure how long I’ll keep these up, but it’s fun to write for the time being.

I don’t keep up on ALL the market gyrations, but I’m aware that earnings season is coming and several companies are already ‘warning’ of possible misses. While some of that’s due to the Trump administration not knowing what it’s doing (tariffs, threatening to fire Chairman Powell, etc.), some of it could be due to Bull Market exhaustion. Whatever the case, I expect plenty of volatility through the rest of January and even into February.

Market Rebound?

It looks like the S&P 500 (my main go-to chart) is firming up. I look at both the weekly and daily charts, and I noticed that the daily chart is rebounding nicely from the December lows. There appears to be some weakening in volume as it nears the 50 DMA. Strong volume and a cross above the 50 DMA line are really important here. This is a wait-and-see.

S&P500 Daily Chart through 2019-01-11

I’m not overly concerned right now, as the weekly chart of the S&P 500 is still very positive, but the price action is below the 50 WMA line. That’s not a good omen if the price action stalls below it, and we might see some stalling in the daily chart (above) as well. Still, those white hollow candles and the decent volume on the weekly chart make me want to take a wait-and-see position.

S&P500 Weekly Chart through 2019-01-11

I haven’t market timed or moved any money out of my retirement accounts. My mad money (aka the small stock investments in my Brokerage account) is not doing so well. Meh. That’ll be the topic of another post soon.

Despite all this, I’m still long, but I expect weakness going forward this year. That’s not bad; it lets me dollar-cost average better in my 401(k).

ETF Pick of the Week

This pick of the week is mostly me remembering something from my past trading days. I really like ETFs, and I plan on rotating many of my individual stock holdings into ETFs over the next few years. I find they’re easy for me to think about and keep track of.

This week’s ‘pick’ is EEM, the iShares MSCI Emerging Markets ETF. Let’s look at the daily and weekly charts.

EEM Daily Chart through 2019-01-11

EEM just plain sucks on the daily chart. It had a death cross many months ago and has been riding the 50 DMA down ever since. While the recent price action since early 2019 is nice, it hasn’t broken above the last high from early December. If it does, then yeah, I’d be watching it more closely. If it takes that high out along with the high from last October, while breaking above the 200 DMA, then I’d consider buying.

Still, there’s so much craziness going on right now with the Trump Tariffs, Brexit, and whatever else you want to blame.

EEM Weekly Chart through 2019-01-11

The weekly chart looks a bit better. The 50 WMA is still above the 200 WMA, and tests of the 200 WMA in late October 2018 and again in December 2018 held, so that’s positive, BUT the 50 WMA racing toward the 200 WMA is cause for concern. Weekly moving averages that make a ‘Death Cross’ are a big deal for me.

Famous Last Words

Still long and Bullish overall, but very cautious. I’m keeping an eye on the S&P 500 and might buy EEM for my long term portfolio if it perks up more and the charts keep going from the lower left to the upper right.

Now, go out and do something fun.

Functional Programming in Python

I’m spending time trying to understand the differences between writing classes and functions in Python. Which one is better, and why? From what I’m gathering, a lot of people are tired of writing classes in general. Classes are used in Object Oriented Programming (OOP), and some Python coders hate them because they require many lines of boilerplate when only a few lines really matter. So those programmers prefer functional programming (FP) in Python instead.

To that end, I’ve been watching videos on both OOP and FP and taking notes on them. Below is a great but also very deep video on functional programming in Python by Daniel Kirsch from PyData 2016. His presentation runs about 30 minutes, followed by a great Q&A session.

Functional Programming in Python

My notes from the above video are below:

  • First Class Functions
  • Higher Order Functions
  • Purity
  • Immutability (not going to talk about it)
  • Composition
  • Partial Application & Currying
  • Purity, a function without ‘side effects’
  • First Class Functions simply means that functions are values like any other
  • Can define with ‘def’ or lambda
  • Can use the name of functions as variables and do higher-order programming
  • Decorators “… provide a simple syntax for calling higher-order functions. By definition, a decorator is a function that takes another function and extends the behavior of the latter function without explicitly modifying it.”
  • Partial function applications – “The primary tool supplied by the Functools module is the class partial, which can be used to “wrap” a callable object with default arguments. Partial objects are similar to function objects with slight differences. Partial function application makes it easier to write and maintain the code.”
  • Partial functions are very powerful
  • “Currying transforms a function that takes multiple arguments into a chain of functions, each taking a single argument (partial application)” via Wikipedia
  • The important concept for Currying is closures, aka lexical scoping
  • Remembers the variables in the scope where it was defined
  • List comprehensions vs functional equivalents
  • Map function vs list comprehension
  • Filter function vs list comprehension
  • Reduce vs list comprehension
  • Why not write out the loop instead? Using Map/Filter/Reduce is cleaner
  • Function composition: i.e. run a filter and then map: map(f, filter(p, seq))
  • ‘import functools’ is very useful
  • Main takeaways: Functional Programming is possible in Python (to a degree)
  • Main takeaways: Small composable functions are good
  • Main takeaways: FP == Build general tools and compose them
  • Python is missing: more list functions
  • Python is missing: Nicer lambda syntax
  • Python is missing: Automatic currying, composition syntax
  • Python is missing: ADTs (sum types)
  • Python is missing: Pattern Matching
  • Some remedies for list functions
  • Links provided in the video @ 26:00
  • Suggests learning Haskell as a gateway to functional programming.
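To make a few of the bullets above concrete, here’s a minimal sketch of my own (a toy example, not from the talk) showing first-class functions and a decorator as a higher-order function:

```python
import functools

# A decorator is a higher-order function: it takes a function and
# returns a new function that extends the original's behavior
# without modifying it.
def shout(func):
    @functools.wraps(func)  # preserve the wrapped function's name/docstring
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name):
    return f"hello, {name}"

# First-class functions: they can be assigned to variables and
# passed around like any other value.
polite = greet
print(polite("dmitry"))  # HELLO, DMITRY
```

The `functools.wraps` call is optional, but without it the decorated function loses its original `__name__` and docstring, which makes debugging harder.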
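Partial application and currying-via-closures can also be sketched in a few lines (again my own toy example, assuming a made-up `power` function):

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# Partial application: fix some arguments now, supply the rest later.
square = partial(power, exponent=2)
cube = partial(power, exponent=3)
print(square(4), cube(2))  # 16 8

# Currying by hand, using a closure: the inner function "remembers"
# the variables from the scope where it was defined (lexical scoping).
def curried_power(base):
    def apply(exponent):
        return base ** exponent
    return apply

print(curried_power(2)(10))  # 1024
```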
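And the map/filter/reduce vs. list comprehension comparison, plus the `map(f, filter(p, seq))` composition pattern from the talk, looks roughly like this:

```python
from functools import reduce

seq = range(10)

# map/filter and their list-comprehension equivalents
doubled = list(map(lambda x: x * 2, seq))
doubled_lc = [x * 2 for x in seq]
evens = list(filter(lambda x: x % 2 == 0, seq))
evens_lc = [x for x in seq if x % 2 == 0]

# reduce folds a sequence down to a single value
total = reduce(lambda acc, x: acc + x, seq, 0)

# Function composition: run a filter, then map over the result.
doubled_evens = list(map(lambda x: x * 2, filter(lambda x: x % 2 == 0, seq)))
```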

Automatic Feature Engineering with Driverless AI

Dmitry Larko, Kaggle Grandmaster and Senior Data Scientist at H2O.ai, goes into depth on how to apply feature engineering in general and in Driverless AI. This video is over a year old, and the version of Driverless AI shown is in beta form. The current version is much more developed today.

This is by far one of the best videos I’ve seen on the topic of feature engineering, not because I work for H2O.ai, but because it approaches the concepts in an easy-to-understand manner. Plus, Dmitry does an awesome job of helping viewers understand with great examples.

The question and answer part is also very good, especially the discussion on overfitting. My notes from the video are below.

  • Feature engineering is extremely important in model building
  • “Coming up with features is difficult, time-consuming, requires expert knowledge. “Applied machine learning” is basically feature engineering” – Andrew Ng
  • Common Machine Learning workflow (see image below)
Feature Engineering, Driverless AI
  • What is feature engineering? Example uses Polar coordinate conversions for linear classifications
  • Creating a target variable is NOT feature engineering
  • Removing duplicates/Missing values/Scaling/Normalization/Feature Selection IS NOT feature engineering
  • Feature Selection should be done AFTER feature engineering
  • Feature Engineering Cycle: Dataset > Hypothesis Set > Validate Hypothesis > Apply Hypothesis > Dataset
  • Domain knowledge is key, so is prior experience
  • EDA / ML model feedback is important
  • Validation set: use cross validation. Be aware of data leakage
  • Target encoding is powerful but can introduce leakage when applied wrong
  • Feature engineering is hard and very very time consuming
  • Feature engineering gives you better, simpler models
  • Transform predictor/response variables toward a normal distribution in some situations, e.g. with a log transform
  • Feature Encoding turns categorical features into numerical features
  • Label encoding and one hot encoding
  • Label encoding is bad; it implies an order, which is not preferred
  • One hot encoding transforms into binary (dummy coding)
  • One hot encoding creates a very sparse data set
  • Columns BLOW UP in size with one hot encoding
  • You can do frequency encoding instead of one hot encoding
  • Frequency Encoding is robust but what about balanced data sets?
  • Then you do Target Mean encoding. The downfall is high-cardinality features; this can cause leakage!
  • To avoid leakage, you can use ‘leave one out’ schema
  • Apply Bayesian smoothing: calculate a weighted average with the mean of the training set
  • What about numerical features? Feature encoding using: Binning with quantiles / PCA and SVD / Clustering
  • Great, then how do you find feature interactions?
  • Apply domain knowledge / Apply genetic programming / ML model behavior (investigate model weights, etc.)
  • You could encode categorical features by stats (std dev, etc.)
  • Feature Extraction is the practice of pulling value out of composite features, like a zip code
  • Zip code can give you state and city information
  • You can extract day, week, holiday, etc. from date-time features
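To make the encoding bullets above concrete, here’s a tiny sketch of my own (a toy column, not from the video) contrasting one hot encoding with frequency encoding:

```python
from collections import Counter

colors = ["red", "blue", "red", "green", "red", "blue"]

# One hot encoding: one binary column per category. With
# high-cardinality features the column count blows up.
categories = sorted(set(colors))  # ["blue", "green", "red"]
one_hot = [[1 if c == cat else 0 for cat in categories] for c in colors]
# "red" -> [0, 0, 1]

# Frequency encoding: replace each category with how often it occurs.
# One numeric column, no blow-up.
counts = Counter(colors)
freq = [counts[c] / len(colors) for c in colors]
# "red" -> 0.5, "blue" -> 1/3, "green" -> 1/6
```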
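The target mean encoding notes (leakage, the leave-one-out schema, Bayesian smoothing) can be sketched on a toy binary-target dataset; the data and the smoothing weight below are my own assumptions, not from the video:

```python
from collections import defaultdict

# Toy data: (category, binary target)
rows = [("a", 1), ("a", 0), ("a", 1), ("b", 0), ("b", 0), ("c", 1)]

sums = defaultdict(float)
counts = defaultdict(int)
for cat, y in rows:
    sums[cat] += y
    counts[cat] += 1
global_mean = sum(y for _, y in rows) / len(rows)

# Plain target mean encoding: leaks each row's own target into its feature.
plain = {cat: sums[cat] / counts[cat] for cat in counts}

# Leave-one-out: exclude the current row's target to reduce leakage.
def loo_encode(cat, y):
    if counts[cat] <= 1:
        return global_mean
    return (sums[cat] - y) / (counts[cat] - 1)

# Smoothing: blend the category mean with the global mean, weighted
# by category count (a simple Bayesian-style weighted average).
def smoothed_encode(cat, weight=2.0):
    return (sums[cat] + weight * global_mean) / (counts[cat] + weight)
```

Rare categories (like "c" above, seen once) fall back toward the global mean, which is exactly the behavior you want to fight overfitting on high-cardinality features.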
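Finally, the numerical-feature bullets (binning with quantiles) and date-time extraction look roughly like this; the sample values are my own toy data:

```python
from datetime import datetime
from statistics import quantiles

values = [3, 7, 1, 9, 4, 12, 5, 8]

# Binning with quantiles: split a numeric feature into equal-sized buckets.
cuts = quantiles(values, n=4)  # the three quartile boundaries

def bin_of(x):
    # Count how many boundaries x exceeds: bucket index 0..3
    return sum(x > c for c in cuts)

binned = [bin_of(v) for v in values]

# Date-time extraction: pull day, month, weekday, etc. out of a timestamp.
ts = datetime(2019, 1, 11)
features = {
    "year": ts.year,
    "month": ts.month,
    "day": ts.day,
    "weekday": ts.weekday(),  # 0 = Monday, 4 = Friday
}
```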

Update: The documentation on the feature transformations applied is here. Check it out, it’s pretty intense.