My name is Yuchen Lu. I am currently a Ph.D. candidate at the Mila lab of the University of Montreal, supervised by Prof. Aaron Courville. Before that, I received my undergraduate degree at UIUC, working with Prof. Jian Peng. I was also an undergrad at Shanghai Jiao Tong University.

My fundamental research interest is language learning as systematic generalization. Humans are able to generate unseen novel utterances from a limited sample of data, while current machine learning approaches fall short. Building an intelligent agent that can acquire language as efficiently as humans is an important next step, as we are seeing diminishing returns from increasing the model size of language models. I believe there are two main missing pieces of the puzzle:

  1. Humans learn language in an embodied environment and acquire it as a tool to influence the world around them. We should model situated language learning beyond learning from a static corpus.

  2. Language evolves and adapts through an iterated transmission process, so that it becomes structured and easy to acquire for later generations. We should model this cultural-evolution aspect of language in our approaches to language learning.

I enjoy seeing the impact of my research. Recently, our research team partnered with WebDip to successfully develop an AI player for the board game Diplomacy, which was covered in one of the most popular podcasts in the community.

I also co-founded Tuninsight, an award-winning Montreal-based start-up.

My email is luyuchen [DOT] paul [AT] gmail [DOT] com.

Interests

  • Natural Language Processing
  • Emergent Communication
  • Embodied Language Learning

Education

  • BSc in CS, 2015-2017

    University of Illinois, Urbana-Champaign

  • BSc in ECE, 2013-2015

    Shanghai Jiao Tong University

Recent News

[05/17/2021] I will join Facebook as a research intern this summer on the topic of multi-modal pretraining

[01/12/2021] Our paper on Iterated Learning for Emergent Systematicity in VQA is accepted at ICLR 2021 (Oral)

[01/12/2021] New paper on Unsupervised Task Decomposition is accepted at ICLR 2021

Experience

Research Intern

Facebook

May 2021 – Aug 2021, Montreal, Canada
I focus on the problem of large-scale multimodal pretraining.

Research Intern (Canceled due to COVID)

MIT-IBM Watson AI Lab

Jun 2020 – Sep 2020, Boston, US
Hosted by Chuang Gan. I studied the problem of unsupervised task decomposition from unstructured demonstrations. Work accepted at ICLR 2021.

Research Intern

Horizon Robotics

May 2017 – Aug 2017, Beijing, China
Hosted by Heng Luo, I researched adversarial examples and the label leaking effect.

Slides

Paper Presentation: Re-evaluate Evaluation

Reinforcement Learning and Control as Probabilistic Inference

Iterated Learning for Deep Learning