Pushdown Layers: Encoding Recursive Structure in Transformer Language Models

Stanford, MIT
EMNLP 2023

Abstract

Recursion is a prominent feature of human language and is fundamentally challenging for self-attention due to the lack of an explicit recursive-state tracking mechanism. Consequently, Transformer language models poorly capture long-tail recursive structure and exhibit sample-inefficient syntactic generalization. This work introduces Pushdown Layers, a new self-attention layer that models recursive state via a stack tape tracking the estimated depth of every token in an incremental parse of the observed prefix. Transformer LMs with Pushdown Layers are syntactic language models that autoregressively and synchronously update this stack tape as they predict new tokens, in turn using the stack tape to softly modulate attention over tokens, for instance learning to "skip" over closed constituents. When trained on a corpus of strings annotated with silver constituency parses, Transformers equipped with Pushdown Layers achieve dramatically better and 3-5x more sample-efficient syntactic generalization, while maintaining similar perplexities. Pushdown Layers are a drop-in replacement for standard self-attention; we illustrate this by finetuning GPT2-medium with Pushdown Layers on an automatically parsed WikiText-103, which leads to improvements on several GLUE text classification tasks.

[Figure: Overview of Pushdown Layers]

Left: The stack tape (in grey) featurizes the contents of an explicit stack representing the incremental parse, as estimated token depths. These depths map onto depth embeddings (in blue) that are added to token keys before computing attention scores, softly biasing attention towards recursive syntactic computation. The stack is updated synchronously with each newly predicted word by an attachment head, which uses attention to select a constituent for the new word to reduce with.
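For intuition, here is a minimal single-head sketch (in PyTorch, not the released implementation) of this depth-biased attention: each stack-tape depth indexes a learned embedding that is added to the token's key before the usual scaled dot-product attention. Names such as PushdownAttentionSketch and max_depth are illustrative assumptions.

# Minimal sketch of depth-biased self-attention, assuming a single head and an
# integer stack tape of per-token depths. Not the authors' implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class PushdownAttentionSketch(nn.Module):
    def __init__(self, d_model: int, max_depth: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.depth_emb = nn.Embedding(max_depth, d_model)  # one learned vector per stack depth

    def forward(self, x, depths, causal_mask):
        # x: (batch, seq, d_model); depths: (batch, seq) integer stack-tape entries
        # causal_mask: (batch, seq, seq) bool, True where attention is allowed
        q = self.q_proj(x)
        k = self.k_proj(x) + self.depth_emb(depths)  # depth embedding added to each token's key
        v = self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        scores = scores.masked_fill(~causal_mask, float("-inf"))
        return F.softmax(scores, dim=-1) @ v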

Right: Illustration of how the parse [[The dog] [is happy]] is built as a unique sequence of stack-tape updates in a Pushdown LM. As the word happy is predicted, the attachment head uses attention to choose a constituent (bolded) from the current incremental parse. Attachment decisions target a constituent by attending to its rightmost token; the constituent's other tokens cannot be attended to (shown as dashed lines). These attachment decisions are then used to update the depth values in the tape.
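The stack-tape bookkeeping can be sketched with a toy data structure. This is one simplified reading of the caption, not the paper's exact update rule: StackTape and push_token are hypothetical names, and the attachment choices at the bottom are chosen for illustration.

# Simplified sketch of stack-tape updates (assumptions, not the paper's exact
# bookkeeping): each token gets an integer depth; an attachment decision either
# shifts the new token as a fresh constituent or merges it with the constituent
# whose rightmost token was selected, bumping the depth of every merged token.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StackTape:
    depths: List[int] = field(default_factory=list)       # one depth entry per token seen so far
    stack: List[List[int]] = field(default_factory=list)  # open constituents as lists of token indices

    def push_token(self, attach_to: Optional[int]) -> None:
        """Add the next token; attach_to is the rightmost token index of the
        constituent it reduces with, or None to shift without reducing."""
        t = len(self.depths)
        self.depths.append(0)
        if attach_to is None:
            self.stack.append([t])            # open a new singleton constituent
            return
        merged = [t]
        # Pop constituents until reaching the one ending at attach_to,
        # folding them (plus the new token) into a single constituent.
        while self.stack:
            top = self.stack.pop()
            merged = top + merged
            if top[-1] == attach_to:
                break
        for i in merged:
            self.depths[i] += 1               # every merged token now sits one level deeper
        self.stack.append(merged)

# Replaying the figure's example (attachment choices here are illustrative):
tape = StackTape()
tape.push_token(None)   # "The"
tape.push_token(0)      # "dog" reduces with [The]  -> [The dog]
tape.push_token(None)   # "is"  shifts
tape.push_token(2)      # "happy" reduces with [is] -> [is happy]
print(tape.depths)      # [1, 1, 1, 1]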

BibTeX

@inproceedings{murty2023pushdown,
  title     = {Pushdown Layers: Encoding Recursive Structure in Transformer Language Models},
  author    = {Murty, Shikhar and Sharma, Pratyusha and Andreas, Jacob and Manning, Christopher},
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
  month     = {December},
  year      = {2023}
}