For an up-to-date list, please refer to my Google Scholar page.

(*=equal contribution)


2021

  1. arXiv
    Attention-guided generative models for extractive question answering
    Xu, Peng*, Liang, Davis*, Huang, Zhiheng, and Xiang, Bing
    arXiv preprint arXiv:2110.06393, 2021
  2. arXiv
    Multiplicative Position-aware Transformer Models for Language Understanding
    Huang, Zhiheng, Liang, Davis, Xu, Peng, and Xiang, Bing
    arXiv preprint arXiv:2109.12788, 2021


2020

  1. arXiv
    Embedding-based Zero-shot Retrieval through Query Generation
    Liang, Davis*, Xu, Peng*, Shakeri, Siamak, Santos, Cicero Nogueira dos, Nallapati, Ramesh, Huang, Zhiheng, and Xiang, Bing
    arXiv preprint arXiv:2009.10270, 2020
  2. ACL
    Masked language model scoring
    Salazar, Julian, Liang, Davis, Nguyen, Toan Q, and Kirchhoff, Katrin
    ACL 2020
  3. EMNLP Findings
    Improve transformer models with better relative position embeddings
    Huang, Zhiheng, Liang, Davis, Xu, Peng, and Xiang, Bing
    EMNLP Findings 2020
  4. Resistance AI
    Decoding and Diversity in Machine Translation
    Roberts, Nicholas, Liang, Davis, Neubig, Graham, and Lipton, Zachary C
    NeurIPS Resistance AI Workshop 2020
  5. arXiv
    TRANS-BLSTM: Transformer with bidirectional LSTM for language understanding
    Huang, Zhiheng, Xu, Peng, Liang, Davis, Mishra, Ajay, and Xiang, Bing
    arXiv preprint arXiv:2003.07000, 2020



2018

  1. IDLT
    Invariant representation learning for robust deep networks
    Salazar, Julian, Liang, Davis, Huang, Zhiheng, and Lipton, Zachary C
    In Workshop on Integration of Deep Learning Theories, NeurIPS 2018


2017

  1. IJCNLP
    Deep automated multi-task learning
    Liang, Davis, and Shu, Yan
    IJCNLP 2017