Chao Zhang

James Edenfield Associate Professor
School of Computational Science and Engineering
College of Computing
Georgia Institute of Technology

Office: CODA E1358B
Address: 756 W Peachtree St NW, Atlanta, GA 30308
Email: chaozhang@gatech.edu

Research

My research lies in the areas of data science, machine learning, and AI. My goal is to build advanced large language models and AI agents for complex task-solving and decision-making. My technical efforts center on addressing key challenges in data efficiency, computation efficiency, and model robustness. On the application front, I am deeply interested in harnessing foundation models to advance AI for science.

Currently, I am working on the following themes:

  1. LLM Post-Training Data and Algorithms for Agents – Developing efficient post-training methods for adapting and fine-tuning LLMs to create intelligent agents that can learn from experience, interact with environments, and improve their reasoning and agentic capabilities.
  2. AI Safety and Trustworthy AI – Facilitating responsible and reliable deployment of AI systems through critical techniques including uncertainty quantification, enhancing LLM factuality, improving LLM alignment, and developing robust methods for building trustworthy AI agents.
  3. Datasets, Evaluation, and Benchmarks – Creating comprehensive evaluation frameworks, datasets, and benchmarks for assessing AI system performance, particularly focusing on agent capabilities, reasoning abilities, and real-world task completion across diverse domains.
  4. AI for Scientific Discovery – Developing foundation models and AI agents to accelerate scientific discovery, with applications in diverse fields such as time series analysis, material science, biomedical and life sciences.

Acknowledgment: My work has been generously supported by research funding and gifts from NSF (IIS CAREER-2144338, IIS-2106961, IIS-2008334), ONR MURI, Kolon, Home Depot, ADP, and Adobe. My work has also been recognized by an NSF CAREER Award, a Facebook Faculty Award, an Amazon AWS Machine Learning Research Award, a Google Faculty Research Award, a Kolon Faculty Fellowship, an ACM SIGKDD Dissertation Runner-up Award, and several paper awards from IMWUT (UbiComp), ECML/PKDD, and ML4H.

1. Post-Training for LLM Agents

We investigate how to improve LLM reasoning and agentic abilities by enabling models to evolve through interaction with external environments and learn from the resulting feedback. Our goal is to adapt LLMs more effectively and strengthen their reasoning and agentic capabilities through efficient post-training data collection and algorithmic innovations.

2. Safe and Trustworthy AI

3. Datasets, Evaluation, and Benchmarks

4. AI for Scientific Discovery

We aim to leverage AI and foundation models to advance scientific discovery. We develop domain-specific foundation models and LLM agents for different scientific domains. On the application side, we collaborate with domain experts to advance scientific discovery in material design, biomedical and life sciences, and urban science.

5. Previous Projects

This section includes our previous work on weak supervision, active learning, and uncertainty quantification for language models and traditional machine learning tasks.

Awards

  • 2024 GaTech CoC Outstanding Junior Faculty Award
  • 2022 NSF CAREER Award
  • 2022 ML4H Outstanding Paper Award
  • 2021 Facebook Faculty Research Award
  • 2021 Kolon Faculty Fellowship
  • 2020 Amazon AWS Machine Learning Research Award
  • 2020 Google Faculty Research Award
  • 2019 ACM SIGKDD Dissertation Award Runner-up
  • 2018 ACM IMWUT Distinguished Paper Award
  • 2015 ECML/PKDD Best Student Paper Runner-up Award
  • 2013 Chiang Chen Overseas Graduate Fellowship

Publications

(* denotes equal contribution)

2025

2024

2023

2022

2021

2020

2019

2018

Earlier

Students

Prospective students: I am always looking for strong and motivated students to join our group. If you are interested in working with me, you can either email me or fill out this form.

Current:

  • Agam A. Shah: Ph.D. Student in ML (co-advised with Sudheer Chava)
  • Changhao Li: Ph.D. Student in CSE (co-advised with Bo Dai)
  • Haorui Wang: Ph.D. Student in CSE
  • Haotian Sun: Ph.D. Student in ML (co-advised with Bo Dai)
  • Jing Peng: Ph.D. Student in CSE
  • Kuan Wang: Ph.D. Student in CSE
  • Rui Feng: Ph.D. Student in CS
  • Rushi Qiang: Ph.D. Student in CSE (co-advised with Bo Dai)
  • Yanbin Yin: Ph.D. Student in CS

Alumni:

  • Yuchen Zhuang: Ph.D. in ML (–> Research Scientist @ Google DeepMind)
  • Yinghao Li: Ph.D. in ML (–> Research Scientist @ Amazon AWS)
  • Rongzhi Zhang: Ph.D. in ML (–> Research Scientist @ Amazon Rufus Team)
  • Yue Yu: Ph.D., 2024 (–> Research Scientist @ Meta GenAI Team)
  • Lingkai Kong: Ph.D., 2024 (–> Postdoctoral Fellow @ Harvard)
  • Yanbo Xu: Ph.D., 2023 (–> Research Scientist @ Microsoft Research)
  • Binghong Chen: Ph.D., 2023 (–> Quant @ Citadel Capital, co-advised with Prof. Le Song)
  • Pranav Shetty: Ph.D., 2023 (–> Research Scientist @ JP Morgan Chase, JP Morgan AI Ph.D. Fellowship, co-advised with Prof. Rampi Ramprasad)
  • Yi Rong: Visiting Ph.D. Student
  • Vidit Jain: M.S. Student in CS
  • Mukund Rungta: M.S. Student in CS
  • Junyang Zhang: M.S. Student in CS
  • Piyush Patil: M.S. Student in CS
  • Mengyang Liu: M.S. Student in CSE
  • Isaac Rehg: M.S. in CS
  • Wendi Ren: M.S. in CSE
  • Ruijia Wang: M.S. in CSE
  • Jacob Wessell: M.S. in CS
  • Wenhao Mu: M.S. in CS
  • Shangqing Xu: M.S. in CS