
Ph.D. Student
Computer Science Department
University of Virginia
Email: [email protected] or [email protected]
Now I'm working on:
Twitter | LinkedIn | CV
GitHub | Google Scholar
Hi! 👋 I'm Xinyu (/ʃɪn.juː/) Zhu, a first-year computer science Ph.D. student at the University of Virginia, advised by Prof. Yu Meng. I am broadly interested in NLP and ML, especially in solving complex reasoning problems with LLMs, improving generation quality, and reducing hallucination in LLMs. My long-term research goal is to enable human-expert-level reasoning, decision-making, and cognitive intelligence for neural models and systems. I received my master's degree at Tsinghua University, advised by Prof. Yujiu Yang.
✨News
- [06/2025] Our AdaDecode is accepted at ICML 2025! 🎉
- [05/2025] I moved to Cupertino and started my internship at Apple AIML, looking forward to meeting new friends in the Bay Area!
- [03/2025] One survey on the honesty of LLMs is accepted at TMLR! 🎉
- [01/2025] One paper on evaluating LMMs' reasoning via chart-to-code generation is accepted at ICLR 2025! 🎉
- [10/2024] Check out our comprehensive survey on the honesty of LLMs! Also check out our accompanying GitHub repo! PRs welcome!
- [09/2024] One paper on improving MoE models' reasoning via self-contrast has been accepted at NeurIPS 2024. Congrats to all the co-authors! 🎉 See you in Vancouver!
- Old news
👨‍🎓Education
- University of Virginia 2024 -- Present
Ph.D. in Computer Science
- Tsinghua University 2021 -- 2024
Master's in Electronic and Information Engineering
- Xidian University 2017 -- 2021
Bachelor's in Electronic Science and Technology
📄Selected Works (* indicates equal contribution)

The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning
Preprint
Xinyu Zhu, Mengzhou Xia, Zhepei Wei, Wei-Lin Chen, Danqi Chen, Yu Meng
[paper][code][X thread]

Unchosen Experts Can Contribute Too: Unleashing MoE Models' Power by Self-Contrast
NeurIPS 2024
Chufan Shi, Cheng Yang*, Xinyu Zhu*, Jiahao Wang*, Taiqiang Wu, Siheng Li, Deng Cai, Yujiu Yang, Yu Meng*
[paper][code]