One month in

By Will Angley
11 Jun 2017

I’ve spent the past month learning how to use Google’s natural language processing technologies and building a starter application for my team. Learning, like exercise, is hard if you ramp up suddenly, and getting up to speed quickly has left me a little sore too.

Language is statistical, so no machine and no person will understand it correctly all the time. But the tech I’m using – and that you can use through the Cloud Natural Language API – has been accurate often enough for me that I’ve only needed to think about it once, when analyzing old texts like Walt Whitman’s "Song of Myself."1 This makes me very happy indeed.
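
If you want to poke at it yourself, a minimal call is only a few lines. The Python client library has been revised more than once since 2017, so take this as a sketch against the current google-cloud-language package rather than exactly what I ran:

    # Minimal syntax analysis with the Cloud Natural Language API.
    # Assumes the google-cloud-language package is installed and
    # application default credentials are configured.
    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="I celebrate myself, and sing myself.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_syntax(request={"document": document})
    for token in response.tokens:
        print(token.text.content, token.lemma)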

I’ve started reading more research papers, and going to more talks about them. There’s a strong community around this within Google Research NYC, and I’m enjoying it thoroughly.

My favorite so far was Prof. Orabona’s talk on "Backprop without Learning Rates Through Coin Betting," which lets you automate tuning your neural network training in exchange for training possibly taking O(log(n)) more time. After trying to teach students how to do this tuning while substitute teaching an ML class – and never really feeling like I knew it myself – being able to replace all of it with a hundred lines of code is utterly amazing.
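
To give a feel for how small it is, here is a rough per-parameter sketch of the COCOB-Backprop update as I read it from the paper. This is my own NumPy illustration, not Prof. Orabona’s code; the variable names are mine, and α = 100 follows the paper’s default.

    import numpy as np

    class COCOBBackprop:
        """Sketch of the COCOB-Backprop update (Orabona & Tommasi).

        Each parameter plays a coin-betting game: the "wealth" won by
        betting against past gradients sets the step size, so there is
        no learning rate to tune.
        """

        def __init__(self, w_init, alpha=100.0, eps=1e-8):
            self.w_init = np.array(w_init, dtype=float)   # starting point
            self.w = self.w_init.copy()
            self.alpha = alpha
            self.scale = np.full_like(self.w, eps)        # running max |gradient|
            self.grad_sum = np.zeros_like(self.w)         # sum of gradients
            self.abs_grad_sum = np.zeros_like(self.w)     # sum of |gradients|
            self.reward = np.zeros_like(self.w)           # wealth won so far

        def step(self, grad):
            grad = np.asarray(grad, dtype=float)
            offset = self.w - self.w_init                 # current bet
            self.scale = np.maximum(self.scale, np.abs(grad))
            self.grad_sum += grad
            self.abs_grad_sum += np.abs(grad)
            # A bet pays off when the parameter has moved against the gradient.
            self.reward = np.maximum(self.reward - grad * offset, 0.0)
            # New bet: a fraction of (initial capital + winnings), placed in
            # the direction opposite the summed gradients.
            denom = self.scale * np.maximum(
                self.abs_grad_sum + self.scale, self.alpha * self.scale)
            self.w = self.w_init - self.grad_sum / denom * (self.scale + self.reward)
            return self.w

    # Toy usage: minimize f(w) = (w - 3)^2 with no step size chosen anywhere.
    opt = COCOBBackprop(w_init=[0.0])
    for _ in range(1000):
        opt.step(2 * (opt.w - 3.0))

I read the max(·, α · scale) term in the denominator as what keeps the earliest steps conservative while the running gradient statistics are still warming up.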

I’ve let some projects that I was working on before joining Research drop on the floor. I’m hoping to pick them up again soon after I complete some work for my first project review on Tuesday.

  1. Printed English used to contract word-final ed to ‘d – so formed becomes form’d in the first section of “Song of Myself” – but stopped doing so before the texts used to train Google’s computer models of English were written. You’ll want to work around this in your own code if you process texts that do this.
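
     If you hit this yourself, the workaround I’d reach for is a small normalization pass before sending text along. This is an illustrative sketch, not the code I actually used; the regex and the skip-list of modern contractions are assumptions and won’t catch every case:

         import re

         # Archaic word-final "'d" (form'd, judg'd, pleas'd) reads as "ed" in
         # modern spelling. Skip modern contractions like "I'd" / "she'd",
         # where 'd means "would" or "had". Illustrative, not exhaustive.
         MODERN_D_WORDS = {"i", "he", "she", "it", "we", "you", "they", "who", "that"}
         ARCHAIC_D = re.compile(r"\b([A-Za-z]+)['’]d\b")

         def expand_archaic_ed(text: str) -> str:
             def repl(m):
                 stem = m.group(1)
                 if stem.lower() in MODERN_D_WORDS:
                     return m.group(0)      # leave "I'd", "she'd", ... alone
                 return stem + "ed"         # form'd -> formed, judg'd -> judged
             return ARCHAIC_D.sub(repl, text)

         print(expand_archaic_ed("My tongue, every atom of my blood, form'd from this soil"))
         # -> "My tongue, every atom of my blood, formed from this soil"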