Curriculum vitae

A quick overview of my background and research

If we’re meeting for a collaboration or interview, this page gives a quick picture of what I’ve worked on: education, publications, research experience, and selected projects. The PDF version below includes the full details.

Download CV

Education

BSc in Information and Computing Science

Xi'an Jiaotong-Liverpool University · Sep 2022 - Jul 2026 (expected)

Current GPA: 3.63/4.00. Capstone project: Listening or Reading? An Empirical Study of Modality Importance Analysis Across AQA Question Types.

Publications

Listening or Reading? An Empirical Study of Modality Importance Analysis Across AQA Question Types

DCASE 2025 Workshop · 2025

Zeyu Yin, Yiqiang Cai, Pingsong Deng, Xinyang Lyu, Shengchen Li

ECHOTWIN-QA: A Dual-Tower BEATSBERT System for DCASE 2025 Task 5 Audio Question Answering

DCASE 2025 Challenge (Task 5) · 2025

Zeyu Yin, Ziyang Zhou, Yiqiang Cai, Shengchen Li, Xi Shao

ADAPTF-SEPNET: AudioSet-Driven Adaptive Pre-training of TF-SEPNet for Multi-device Acoustic Scene Classification

DCASE 2025 Challenge · 2025

Ziyang Zhou, Zeyu Yin, Yiqiang Cai, Shengchen Li, Xi Shao

EmoSound: A Multimodal AI Agent for Emotion-Aware Audio Accompaniment of Emoticons

BICS 2025 · 2025

Jianghui Sun, Haosen Shi, Zeyu Yin, Wansu Mo, Hongyi Ding, Yiming Hu, Xi Yang, Yuyao Yan

Research experience

Participant / Researcher, DCASE 2025 Challenge & Workshop

2025 · Barcelona, Spain

Participated in DCASE 2025 Task 5: Audio Question Answering and the DCASE 2025 Workshop. Developed an end-to-end AQA system and conducted experiments and ablation studies to analyze modality importance across question types.

Undergraduate Researcher, Summer Undergraduate Research Fellowship (SURF)

Summer 2024 · Suzhou, China

Worked on Expressive Timing Modelling in Performed Classical Piano Music.

Projects

Listening or Reading? An Empirical Study of Modality Importance Analysis Across AQA Question Types

Audio question answering · Modality weighting · Acoustic reasoning · DCASE 2025

A research case study investigating how audio QA systems trade off acoustic evidence against semantic and text-derived shortcuts across question types.

ECHOTWIN-QA: A Dual-Tower BEATSBERT System for DCASE 2025 Task 5 Audio Question Answering

Challenge system · BEATSBERT · PyTorch

A full end-to-end AQA system built for the DCASE 2025 challenge, including training, evaluation, ablations, and technical reporting.

ADAPTF-SEPNET: AudioSet-Driven Adaptive Pre-training of TF-SEPNet for Multi-device Acoustic Scene Classification

Acoustic scene classification · Challenge · Evaluation

A collaborative challenge project focused on model development, experimental evaluation, and analysis for acoustic scene classification.

EmoSound: A Multimodal AI Agent for Emotion-Aware Audio Accompaniment of Emoticons

Multimodal agent · BICS 2025 · Evaluation

A multimodal AI agent project where I implemented components for the agent and evaluation pipeline, and contributed to experiments and writing.

Skills

Programming, ML, web, and systems

Python, Java, JavaScript/TypeScript, SQL, Bash, PyTorch, Hugging Face, NestJS, Spring Boot, Node.js

Also experienced with scikit-learn, pandas, NumPy, MySQL, PostgreSQL, Linux, Docker, Git, Cloudflare Tunnel, SLURM, LaTeX, and Markdown.

Honors

University Academic Excellence Award

2023 · Full Scholarship

Recognized for outstanding academic performance; the award includes full scholarship support.

National Second Prize, Future Engineer Competition (Smart Invention)

2021, 2022

Won the national second prize twice, developing an Arduino-based intelligent system that incorporated AI.