Large Language Models
Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts
Aug 12, 2025
On-Policy Optimization with Group Equivalent Preference for Multi-Programming Language Understanding
May 19, 2025
ToTRL: Unlock LLM Tree-of-Thoughts Reasoning Potential through Puzzles Solving
May 19, 2025
Efficient OpAmp Adaptation for Zoom Attention to Golden Contexts
May 15, 2025
Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks
Sep 20, 2024