I am a postdoctoral researcher at the University of Virginia, working with Prof. Tom Hartvigsen.
My research focuses on enabling language models to learn continuously and adapt beyond their initial pretraining. I develop methods in model editing and continual learning that allow models to incorporate new knowledge, correct errors, and evolve over time—without forgetting what they already know.
I earned my Ph.D. in Computer Science and Engineering from IIIT Delhi, where I was advised by Prof. Vikram Goyal and Prof. Tanmoy Chakraborty.
Selected Publications
- Lifelong Model Editing with Graph-Based External Memory
  Yash Kumar Atri, Ahmed Alaa, Thomas Hartvigsen
  ACL Findings 2025
- Continually Self-Improving Language Models for Bariatric Surgery Question Answering
  Yash Kumar Atri, Thomas H. Shin, Thomas Hartvigsen
  Preprint 2025
- Promoting Topic Coherence and Inter-Document Consorts in Multi-Document Summarization via Simplicial Complex and Sheaf Graph
  Yash Kumar Atri, Arun Iyer, Tanmoy Chakraborty, Vikram Goyal
  EMNLP (Main) 2023
Full list: yashkumaratri.com • Google Scholar
Education
Ph.D. in Computer Science and Engineering, IIIT Delhi
Advisors: Prof. Vikram Goyal, Prof. Tanmoy Chakraborty
Thesis: Advancing Text Summarization with Conscience, Comprehension, and Multimodality

Advisor: Dr. Amit Kumar
Thesis: Machine Translation in Indian Languages
Experience
Postdoctoral Researcher, University of Virginia
Advised by Prof. Tom Hartvigsen.
Research on lifelong model editing and continual learning for language models.
Awards
- DAAD AInet Fellow 2024 – Postdoc-NET-AI, Germany
- Travel Grants – Microsoft, iHub-Anubhuti, ACM-India (EMNLP 2023, KDD 2023)
Professional Service
Reviewing
- Area Chair: ARR 2025
- Reviewer: ICLR 2025, 2024; ARR 2025, 2024; TCSS 2025, 2024; TASL 2024; Knowledge-Based Systems (KBS) 2024; ASONAM 2024; ICON 2025, 2024; BDA 2024, 2023; EMNLP 2023; ACL 2023
Organizing
- Organizer: BDA 2023, ICON 2023
- Workshop Chair: ACSS (2020–2022), COFAD 2020
Teaching
- Teaching Assistant: CSE557 (W2020, W2021), CSE506 (S2020, S2021) — IIIT Delhi
- Guest Lecturer: Tutorial on AI-Driven Mental Health Counseling — ICON 2023