TensorFlow and High Level APIs

I got a chance to watch this great presentation on the upcoming release of TensorFlow v2 by Martin Wicke. He goes over the big changes, and there are a lot of them, plus how you can upgrade your earlier versions of TensorFlow to the new one. Let's hope the new version is faster than before! My video notes are below:

  • Since its release, TensorFlow (TF) has grown into a vibrant community
  • The team learned a lot about how people use TF
  • Realized using TF can be painful
  • You can do everything in TF, but what is the best way?
  • The TF 2.0 alpha has just been released
  • Install it with 'pip install -U --pre tensorflow'
  • Adopted tf.keras as high-level API (SWEET!)
  • Includes eager execution by default
  • TF 2 is a major release that removes duplicate functionality, makes the APIs consistent, and keeps older TF code working through compatibility shims
  • New flexibilities: full low-level API, internal operations are accessible now (tf.raw_ops), and inheritable interfaces for variables, checkpoints, and layers
  • How do I upgrade to TensorFlow 2?
  • Google is starting the process of converting its own codebase, one of the largest ever
  • Will provide migration guides and best practices
  • Two scripts will be shipped: backward compatibility and a conversion script
  • The reorganization of the API causes a lot of function name changes
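As a rough sketch of how the conversion script mentioned above is used (assuming the TF 2.0 alpha is installed, which ships the tf_upgrade_v2 tool; the file and directory names here are made up for illustration):

```shell
# Upgrade a single TF 1.x script; the tool rewrites renamed symbols
# and reports anything it cannot convert automatically.
tf_upgrade_v2 --infile my_model.py --outfile my_model_v2.py

# Or convert a whole project tree at once, with a report of the changes.
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt
```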
Martin Wicke of Google shows the new TensorFlow v2 conversion script
  • Release candidate targeted for 'Spring 2019' (the timeline might be a bit flexible)
  • It's all on GitHub, with a project tracker
  • It needs user testing, so please go download it
  • Karmel Allison is an engineering manager for TF and will show off the high-level APIs
  • TF adopted Keras
  • Keras was implemented and optimized inside TF as tf.keras
  • Keras built from the ground up to be pythonic and simple
  • tf.keras was built for small models, whereas Google needs to build HUGE models
  • Focused on production-ready estimators
  • How do you bridge the gap between a simple API and a scalable one?
  • Debug with eager execution; it's easy to inspect NumPy arrays
  • TF also consolidated many APIs into Keras
  • There’s one set of Optimizers now, fully scalable
  • One set of Metrics and Losses now
  • One set of Layers
  • Took care of RNN layers in TF: there is one version of the GRU and LSTM layers, and it selects the right CPU/GPU implementation at runtime
  • Easier configurable data parsing now (WOW, I have to check this out)
  • TensorBoard is now integrated into Keras
  • TF distribute strategy for distributing work with Keras
  • Can add or change distribution strategy with a few lines of code
  • TF models can be exported to SavedModel using the Keras functions (and reloaded, too)
  • Coming soon: multi-node sync
  • Coming soon: TPUs
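Putting several of the notes above together, here is a minimal sketch of the tf.keras workflow, assuming TF 2.x is installed (the layer sizes, dummy data, and file path are made up for illustration):

```python
import numpy as np
import tensorflow as tf

# One consolidated set of layers, optimizers, losses, and metrics under tf.keras.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Eager execution is on by default, so predictions come back as NumPy arrays.
preds = model.predict(np.zeros((2, 4), dtype="float32"))
print(preds.shape)  # (2, 3)

# Export to the SavedModel format (the exact Keras save API varies slightly
# across TF versions; model.save is the usual entry point).
tf.saved_model.save(model, "/tmp/tf2_demo_savedmodel")
```

Per the distribution-strategy bullets, distributing this same model is a matter of wrapping the model-building lines in a strategy scope, e.g. `with tf.distribute.MirroredStrategy().scope():`.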

There's a lot in this 22-minute video about TensorFlow v2. Must watch.

Deep Learning Can Hallucinate

This is very interesting: a way to trip up computer vision systems by adding a strange image to a scene to fool classification models.

As the researchers write, the sticker “allows attackers to create a physical-world attack without prior knowledge of the lighting conditions, camera angle, type of classifier being attacked, or even the other items within the scene.” So, after such an image is generated, it could be “distributed across the Internet for other attackers to print out and use.” via The Verge

Google Duplex and Google Assistant

File this under damn cool! This feature alone would lead me to consider ditching my Alexa and iPhone for a Google Home and Android phone.

I'm so busy juggling work, life, and kids that just having a 'digital' assistant to make reservations and appointments and set my schedule would be amazing.

I'm really amazed at how well the assistant can navigate a case that isn't straightforward, like the second restaurant reservation booking. Of course, nothing will be perfect, but with deep learning and natural language processing (NLP) backed by the power of Google, it will become commonplace in a few years.

I think I need to buy some GOOG stock. 😉