Short Bio

I am Zhicheng YANG, a Ph.D. student advised by Prof. Jing Tang and Prof. Yiwei Wang. Before that, I obtained my Master’s degree at the HCP Lab and my Bachelor’s degree in Computer Science and Technology at SYSU, where I was fortunate to be advised by Prof. Xiaodan Liang and to conduct research in NLP.

Research Interests

I have a broad interest in utilizing Large Models (LMs) for complex reasoning tasks.

Trained on text that would take a human roughly 10^4 years to read, LLMs surpass human capability in writing, translation, and breadth of knowledge. However, their mathematical ability remains at the level of primary and secondary school students. To close this puzzling yet consequential gap, I am interested in the following topics:

  • Automatic Data Synthesis
  • Self-Evolution Framework
  • Learning Efficiency (performance improvement / data quantity; a rough formalization is sketched below)
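
To make the last ratio concrete, here is a minimal LaTeX sketch (my own illustrative formalization, not a standard metric; the symbols are assumed placeholders):

  % Learning efficiency: benchmark performance gained per training example.
  % Illustrative only; P_before, P_after, and the dataset D are assumed
  % placeholders, not quantities reported in my papers.
  \[
    \eta = \frac{P_{\mathrm{after}} - P_{\mathrm{before}}}{|\mathcal{D}|}
  \]

Under this reading, a data-synthesis pipeline improves not by producing more data but by raising η, i.e., more reasoning gain per synthesized example.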

Preprints

Benchmarking LLMs for Optimization Modeling and Enhancing Reasoning via Reverse Socratic Synthesis
Zhicheng Yang, Yinya Huang, Wei Shi, Liang Feng, Linqi Song, Yiwei Wang, Xiaodan Liang, Jing Tang
[Paper] [Code]

AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations
Zhicheng Yang, Yinya Huang, Jing Xiong, Liang Feng, Xiaodan Liang, Yiwei Wang, Jing Tang
[Paper] [Code]

Recent Publications

LogicSolver: Towards Interpretable Math Word Problem Solving with Logical Prompt-enhanced Learning
Zhicheng Yang*, Jinghui Qin*, Jiaqi Chen, Liang Lin, Xiaodan Liang
Conference on Empirical Methods in Natural Language Processing, 2022. (Findings of EMNLP 2022)
[Paper] [Code]

Unbiased Math Word Problems Benchmark for Mitigating Solving Bias
Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Xiaodan Liang
Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2022. (Findings of NAACL 2022)
[Paper] [Code]

CLOMO: Counterfactual Logical Modification with Large Language Models
Yinya Huang, Ruixin Hong, Hongming Zhang, Wei Shao, Zhicheng Yang, Dong Yu, Changshui Zhang, Xiaodan Liang, Linqi Song
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 2024. (ACL 2024)
[Paper]

DQ-LoRe: Dual Queries with Low Rank Approximation Re-ranking for In-Context Learning
Jing Xiong, Zixuan Li, Chuanyang Zheng, Zhijiang Guo, Yichun Yin, Enze Xie, Zhicheng Yang, Qingxing Cao, Haiming Wang, Xiongwei Han, Jing Tang, Chengming Li, Xiaodan Liang
12th International Conference on Learning Representations, 2024. (ICLR 2024)
[Paper]

ATG: Benchmarking Automated Theorem Generation for Generative Language Models
Xiaohan Lin, Qingxing Cao, Yinya Huang, Zhicheng Yang, Zhengying Liu, Zhenguo Li, Xiaodan Liang
Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2024. (Findings of NAACL 2024)
[Paper]

Template-based Contrastive Distillation Pre-training for Math Word Problem Solving
Jinghui Qin*, Zhicheng Yang*, Jiaqi Chen, Xiaodan Liang, Liang Lin
IEEE Transactions on Neural Networks and Learning Systems, 2023. (TNNLS)
[Paper]

(* denotes co-first authors)

Education

  • 2020 — 2023: M.Phil. in Pattern Recognition and Intelligent Systems, School of Intelligent Systems Engineering, Sun Yat-sen University.
  • 2016 — 2020: B.Sc. in Computer Science and Engineering, School of Computer Science and Engineering, Sun Yat-sen University.

Honors and Awards

  • National First Prize, Contemporary Undergraduate Mathematical Contest in Modeling (CUMCM), China
  • First Prize Scholarship, Sun Yat-sen University
  • Second Prize Scholarship, Sun Yat-sen University

Experience

  • NLP Research Intern, Huawei Noah's Ark Lab
  • Recommender System Intern, ByteDance-Data-Douyin
  • NLP Research Intern, DMAI
