About

I am a senior undergraduate majoring in Computer Science at Shenzhen University. I am currently seeking PhD and research internship opportunities in efficient inference for large models. If you are interested, please feel free to email me.

Research Interests

I am interested in context compression for LLM-based agents and next-generation efficient model architectures.

My previous work focused on large model pre-training, efficient inference, and multimodal AI companions.

Publications

  1. Pretraining Context Compressor for Large Language Models with Embedding-Based Memory
    Yuhong Dai, Jianxun Lian, Yitian Huang, Wei Zhang, Mingyang Zhou, Mingqi Wu, Xing Xie, Hao Liao
    ACL 2025 (Main Conference)

Experience

StepFun
Beijing, China
Foundation Model Post-training Intern
Worked on reward modeling and RLHF pipelines for Step series models.
Amber Group
Hong Kong SAR
Research Intern
Trained LLM-based financial advisor models with RLHF.
Microsoft Research Asia
Beijing, China
Research Intern, Social Computing Group
Received the Star of Tomorrow Award. Conducted research and prototyping in social computing.
Tencent
Shenzhen, China
Software Engineering Intern
RoboMaster
China (Dongguan / Changsha / Shenzhen)
Vision Team Member
Developed perception and vision modules for autonomous robotic systems in RoboMaster competitions.