Shikhar Murty

Stanford, CA


I am a 5th-year CS PhD student in the Stanford NLP Group, advised by Chris Manning.

I’m interested in building systems that learn rich structures from limited data and generalize beyond what they were trained on. These days, I’m thinking a lot about long-horizon planning and reasoning in language models.

I interned at Google DeepMind in Summer 2023, where I worked with Mandar Joshi, Kenton Lee, and Pete Shaw. In Summer 2021, I interned with Marco Ribeiro and Scott Lundberg at Microsoft Research.

News

February, 2024 Talks in NYC (NYU / Columbia / Cornell) on “Improving the Structure and Interpretation of Language in Modern Sequence Models”
September, 2023 Talks at AI2 + UW titled “Understanding and Improving Generalization in Transformers”
May, 2023 Talk at MIT BCS titled “Transformers, Tree Structures and Generalization”
May, 2022 Talk about Language Patching at the NL Supervision Workshop at ACL 2022 in Dublin, in person!
February, 2022 Talk about using Language Supervision in ML at Apple Siri

Selected publications

2023

  1. Pushdown Layers: Encoding Recursive Structure in Transformer Language Models
    Shikhar Murty, Pratyusha Sharma, Jacob Andreas, and Christopher D. Manning
    In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Dec 2023
  2. Characterizing intrinsic compositionality in transformers with Tree Projections
    Shikhar Murty, Pratyusha Sharma, Jacob Andreas, and Christopher D. Manning
    In The Eleventh International Conference on Learning Representations, May 2023
  3. Grokking of Hierarchical Structure in Vanilla Transformers
    Shikhar Murty, Pratyusha Sharma, Jacob Andreas, and Christopher D. Manning
    In Annual Meeting of the Association for Computational Linguistics, Jul 2023

2022

  1. Fixing Model Bugs with Natural Language Patches
    Shikhar Murty, Christopher D. Manning, Scott M. Lundberg, and Marco Tulio Ribeiro
    In Conference on Empirical Methods in Natural Language Processing, Dec 2022

2020

  1. ExpBERT: Representation Engineering with Natural Language Explanations
    Shikhar Murty, Pang Wei Koh, and Percy Liang
    In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Jul 2020

2019

  1. Systematic Generalization: What Is Required and Can It Be Learned?
    Dzmitry Bahdanau*, Shikhar Murty*, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries, and Aaron Courville
    In International Conference on Learning Representations, May 2019