Monthly Archives: January 2018

Paper: A Generative Model for Parsing Natural Language to Meaning Representations (67/365)

Today’s paper once again constructs a semantic parser. The parser is trained on sentences paired with their meaning representations, but without any finer-grained labeling of the correspondence between individual words and meaning tokens. The meaning representation in this paper takes …
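
To make the training setup concrete, here is a toy Python sketch of learning with latent alignments: the model sums over every way of pairing words with meaning tokens, so no per-word labels are needed. The probability table and the one-to-one alignment assumption are invented for this illustration; the paper trains a proper structured generative model, not this flat toy.

```python
from itertools import permutations

def sentence_likelihood(words, meaning_tokens, p_word_given_token):
    """Marginal probability of the sentence, summed over all latent
    one-to-one alignments of words to meaning tokens (toy model)."""
    total = 0.0
    for alignment in permutations(meaning_tokens, len(words)):
        prob = 1.0
        for word, token in zip(words, alignment):
            # Unseen (word, token) pairs get a small smoothing probability.
            prob *= p_word_given_token.get((word, token), 1e-6)
        total += prob
    return total
```

Training then amounts to adjusting p_word_given_token to maximize this marginal likelihood, with the alignments never observed.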

Paper: PPDB: The Paraphrase Database (66/365)

In the last post, the authors made use of paraphrases. It turns out that there is in fact a paraphrase database, and the way it was created is quite interesting. It starts with the basic observation that, given translated texts from …
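
As a hedged sketch of the bilingual-pivoting idea behind the database: if two English phrases are each translated to the same foreign phrase somewhere in parallel text, they are likely paraphrases of one another. The phrase pairs below are invented for the illustration; the real pipeline also scores candidates with translation probabilities, roughly p(e2 | e1) = Σ_f p(e2 | f) · p(f | e1).

```python
from collections import defaultdict

def pivot_paraphrases(phrase_table):
    """phrase_table: iterable of (english_phrase, foreign_phrase) pairs
    extracted from word-aligned parallel text."""
    by_foreign = defaultdict(set)
    for english, foreign in phrase_table:
        by_foreign[foreign].add(english)
    # Any two English phrases sharing a foreign pivot are candidates.
    return {(e1, e2)
            for group in by_foreign.values()
            for e1 in group for e2 in group if e1 != e2}

pairs = [("thrown into jail", "festgenommen"), ("imprisoned", "festgenommen")]
print(pivot_paraphrases(pairs))
# {('thrown into jail', 'imprisoned'), ('imprisoned', 'thrown into jail')}
```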

Paper: Building a Semantic Parser Overnight (65/365)

In a previous post I talked about a paper that encoded questions as programs, which in turn defined procedures that executed in the environment of a battleship game. In other words, each question defines a program that defines the …
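
As a reminder of what "question as program" means, here is a minimal sketch: the question "how long is the blue ship?" becomes a function that can be run against any hypothetical board to yield an answer. The board encoding and function name are my own invention, not the paper's.

```python
# Hypothetical battleship board: tile coordinates -> ship color.
board = {
    (0, 0): "blue", (0, 1): "blue", (0, 2): "blue",
    (2, 3): "red", (2, 4): "red",
}

def how_long(color, board):
    """The meaning of 'how long is the <color> ship?' as an executable program."""
    return sum(1 for tile_color in board.values() if tile_color == color)

print(how_long("blue", board))  # -> 3
```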

Paper: Concepts in a Probabilistic Language of Thought (64/365)

In a previous post, I described a paper that expressed questions as LISP expressions. The authors of that paper take their input from today’s paper by Goodman et al. I think it’s clear to everyone that concepts, as humans …
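
To give a flavor of the idea, here is a rough Python sketch of a concept as a probabilistic program (the paper uses Church, a probabilistic LISP). The "heavy" concept and its vague threshold are made up for this illustration; the point is only that a concept is a stochastic function whose graded membership emerges from sampling.

```python
import random

def heavy(weight_kg, threshold_mean=10.0, threshold_sd=2.0):
    """Sample a vague threshold, then decide membership."""
    threshold = random.gauss(threshold_mean, threshold_sd)
    return weight_kg > threshold

# Estimate P(an 11 kg object counts as 'heavy') by repeated sampling.
samples = [heavy(11.0) for _ in range(10_000)]
print(sum(samples) / len(samples))  # roughly 0.69
```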

Paper: Attention Is All You Need (63/365)

I can’t believe, after my gripe yesterday, that I have picked today’s paper. It claims that a feed-forward network with self-attention trains faster and performs better than recurrent or convolutional neural networks on translation tasks. I have …
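
For reference, the core operation of the paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. A self-contained NumPy version, leaving out the multi-head projections and masking:

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V  # weighted average of the value vectors

Q = np.random.randn(4, 8)  # 4 query positions, d_k = 8
K = np.random.randn(6, 8)  # 6 key positions
V = np.random.randn(6, 8)
print(attention(Q, K, V).shape)  # (4, 8)
```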

Paper: Question Asking as Program Generation (62/365)

I am very bored by language models that just pump the training data through some architecture and then predict the word that follows a partially revealed sentence. I mean, I don’t learn anything from them apart from a lot of technical wizardry. …
