This project introduces a deep-learning audio-to-notation system that automatically transcribes guzheng performances into Jianpu (numbered notation). Using Transformer sequence models operating on CQT spectral features, the system captures non-uniform tuning, glissandi, and overtones. The research covers baseline modeling, expressive-feature recognition, AI-supported tuning optimization, and real-time web-based score generation. The outcome supports the digital preservation and broader accessibility of Chinese traditional music.
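
As a rough illustration of the CQT-plus-Transformer pipeline named above, the sketch below extracts a log-magnitude constant-Q transform with librosa and passes the frames through a small encoder-only Transformer that tags each frame with a pitch class. It is a minimal sketch, not the project's actual architecture: all module names, hyperparameters, and the frame-level output head (in place of a full Jianpu sequence decoder) are illustrative assumptions.

```python
# Minimal, assumption-laden sketch of CQT feature extraction feeding a
# Transformer encoder; not the project's published model.
import librosa
import numpy as np
import torch
import torch.nn as nn

def extract_cqt(path, sr=22050, hop_length=512, n_bins=84):
    """Load audio and return a (frames, n_bins) log-magnitude CQT matrix."""
    y, sr = librosa.load(path, sr=sr)
    cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=hop_length,
                             n_bins=n_bins, bins_per_octave=12))
    return librosa.amplitude_to_db(cqt, ref=np.max).T  # time on axis 0

class FrameTranscriber(nn.Module):
    """Encoder-only Transformer that classifies each CQT frame into one of
    n_pitches labels (index 0 reserved for silence). A real system would
    decode full Jianpu token sequences instead of per-frame labels."""
    def __init__(self, n_bins=84, d_model=256, n_heads=4,
                 n_layers=4, n_pitches=89):
        super().__init__()
        self.proj = nn.Linear(n_bins, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_pitches)

    def forward(self, x):              # x: (batch, frames, n_bins)
        h = self.encoder(self.proj(x))
        return self.head(h)            # (batch, frames, n_pitches) logits

# Usage (hypothetical file name):
# feats = torch.tensor(extract_cqt("guzheng.wav"))[None].float()
# logits = FrameTranscriber()(feats)
```

The CQT is a natural fit here because its logarithmically spaced frequency bins align with musical pitch, which helps represent the guzheng's non-uniform tuning; the hop length, bin count, and model depth above are placeholder values one would tune experimentally.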