I missed the Research talks this week while working (perhaps a bit too hard, in hindsight) to meet a deadline, but I did learn about a paper that seems cool:
“Segmental Recurrent Neural Networks” by Lingpeng Kong, Chris Dyer, and Noah A. Smith
It describes a neural network architecture specialized for the segmented nature of language, whether in text, handwriting, or speech.
Specializing the structure of a neural network to its domain greatly improves performance elsewhere: convolutional networks do better at visual tasks, neuron for neuron, than fully connected networks. Could this architecture offer a similar advantage on linguistic tasks?
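From my first pass, the core idea seems to be scoring candidate segments of the input with an RNN and then searching over whole segmentations. Below is a toy sketch of that idea as I currently understand it, not the paper’s actual model (if I’m reading the abstract right, the real thing is a segmental CRF over bidirectional LSTM segment embeddings): every dimension, weight, and the linear scoring function here is made up for illustration.

```python
# Toy sketch: embed each candidate segment with a tiny RNN, score it,
# and use dynamic programming to pick the best-scoring segmentation.
# All sizes and weights are arbitrary; this is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D, H = 8, 16                       # input and hidden sizes (made up)
Wx = rng.normal(scale=0.1, size=(H, D))
Wh = rng.normal(scale=0.1, size=(H, H))
w_score = rng.normal(scale=0.1, size=H)

def embed_segment(xs):
    """Run a plain tanh RNN over one segment; the final state is its embedding."""
    h = np.zeros(H)
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def best_segmentation(xs, max_len=4):
    """DP over cut points: best[i] = best score of segmenting xs[:i]."""
    n = len(xs)
    best = [0.0] + [-np.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            s = best[j] + w_score @ embed_segment(xs[j:i])
            if s > best[i]:
                best[i], back[i] = s, j
    segments, i = [], n            # walk back pointers to recover segments
    while i > 0:
        segments.append((back[i], i))
        i = back[i]
    return best[n], segments[::-1]

xs = rng.normal(size=(10, D))      # a toy "sentence" of 10 feature vectors
score, segments = best_segmentation(xs)
print(score, segments)
```

The dynamic program is what keeps this tractable: instead of enumerating all 2^(n-1) segmentations, it reuses the best score for each prefix, the same trick that makes semi-Markov models practical.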
That said, I can’t say I understand this paper yet after half an hour of reading on my Pixel while waiting for dinner out tonight; it’s harder than the scientific papers I learned to read in college, and I don’t yet have much background in this field.
Google searches turn up advice on how to read a paper that seems worth trying when I pick this up again, tomorrow or Monday. I’m out of “reading scientific papers” tokens for the night.
I’m still surprised at how much of my day I spend looking at text-only terminals, and text-mostly web pages, fifty years after The Mother of All Demos showed how powerful graphical user interfaces can be.
Keeping a journal and looking back is often enough to surprise yourself. At least one thing seems clear two weeks in: I skim far more articles on the Internet than I save to Instapaper, and I save far more to Instapaper than seems worth talking about on the Internet a week later.
This feels like an opportunity to free up a substantial amount of time by building better habits here.