WenetSpeech-Yue: A Large-Scale Cantonese Speech Corpus with Multi-dimensional Annotation

Longhao Li1, Zhao Guo1, Hongjie Chen2, Yuhang Dai1, Ziyu Zhang1, Hongfei Xue1, Tianlun Zuo1, Chengyou Wang1, Shuiyuan Wang1, Xin Xu3, Hui Bu3, Jie Li2, Jian Kang2, Binbin Zhang4, Ruibin Yuan5, Ziya Zhou5, Wei Xue5, Lei Xie1

1 ASLP, Northwestern Polytechnical University
2 Institute of Artificial Intelligence (TeleAI), China Telecom
3 Beijing AISHELL Technology Co., Ltd.
4 WeNet Open Source Community
5 Hong Kong University of Science and Technology

📑 Paper    |    🐙 GitHub    |    🤗 HuggingFace
🖥️ HuggingFace Space    |    🎤 Demo Page    |    💬 Contact Us



Abstract

The development of speech understanding and generation has been significantly accelerated by the availability of large-scale, high-quality speech datasets. Among speech tasks, automatic speech recognition (ASR) and text-to-speech (TTS) are regarded as the most established and fundamental. However, for Cantonese (Yue Chinese), spoken by approximately 84.9 million native speakers worldwide, limited annotated resources have hindered progress and resulted in suboptimal ASR and TTS performance. To address this challenge, we propose WenetSpeech-Pipe, an integrated pipeline for building large-scale speech corpora with multi-dimensional annotation tailored for speech understanding and generation. It comprises six modules: Audio Collection, Speaker Attributes Annotation, Speech Quality Annotation, Automatic Speech Recognition, Text Post-Processing, and Recognizer Output Voting, enabling rich and high-quality annotations. Based on this pipeline, we release WenetSpeech-Yue, the first large-scale Cantonese speech corpus with multi-dimensional annotation for ASR and TTS, covering 21,800 hours across 10 domains with annotations including ASR transcription, text confidence, speaker identity, age, gender, and speech quality scores, among others. We also release WSYue-eval, a comprehensive Cantonese benchmark with two components: WSYue-ASR-eval, a manually annotated set for evaluating ASR on short and long utterances, code-switching, and diverse acoustic conditions, and WSYue-TTS-eval, with base and coverage subsets for standard and generalization testing. Experimental results show that models trained on WenetSpeech-Yue achieve performance competitive with state-of-the-art (SOTA) Cantonese ASR and TTS systems, including commercial and LLM-based models, highlighting the value of our dataset and pipeline. The dataset, benchmark, and the ASR and TTS models built on WenetSpeech-Yue will be open-sourced. Demos can be found in the supplementary material.

WenetSpeech-Pipe

WenetSpeech-Pipe is an automated pipeline designed for building large-scale Cantonese datasets with multi-dimensional annotation. It consists of six components: (A) Audio Collection, (B) Speaker Attributes Annotation, (C) Speech Quality Annotation, (D) Automatic Speech Recognition, (E) Text Post-Processing, and (F) Recognizer Output Voting. The figure below provides an overview of WenetSpeech-Pipe.

[Figure: Overview of the WenetSpeech-Pipe pipeline]
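
Of the six components, stage (F) Recognizer Output Voting is the most algorithmic: it fuses the transcripts produced by multiple ASR systems in stage (D) into one transcription with a confidence estimate. The snippet below is only a minimal sketch of that idea, not the actual WenetSpeech-Pipe implementation: it uses the simplest possible fusion rule, keeping the hypothesis closest (by character edit distance) to the other systems' outputs and reusing its average similarity as a rough text-confidence score.

```python
# Minimal sketch of the recognizer-output-voting idea (stage F). This is NOT
# the WenetSpeech-Pipe implementation: it simply keeps the hypothesis closest
# (by character edit distance) to the other systems' outputs and reuses its
# average similarity as a rough text-confidence score.
from typing import List, Tuple


def edit_distance(a: str, b: str) -> int:
    """Character-level Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def vote(hypotheses: List[str]) -> Tuple[str, float]:
    """Pick the 'centroid' hypothesis and estimate a confidence in [0, 1]."""
    best_text, best_conf = hypotheses[0], 0.0
    for i, hyp in enumerate(hypotheses):
        others = [h for j, h in enumerate(hypotheses) if j != i]
        sims = [1 - edit_distance(hyp, o) / max(len(hyp), len(o), 1) for o in others]
        conf = sum(sims) / len(sims) if sims else 1.0
        if conf > best_conf:
            best_text, best_conf = hyp, conf
    return best_text, best_conf


# Example: three ASR systems disagree on a single character.
text, confidence = vote(["两只小企鹅都有嘢食", "两只小企鹅都有野食", "两只小企鹅都有嘢食"])
print(text, round(confidence, 3))
```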

WenetSpeech-Yue

[Figure: Overview of the WenetSpeech-Yue corpus]
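
As a concrete illustration of the multi-dimensional annotation described above (ASR transcription, text confidence, speaker identity, age, gender, and speech quality scores), a single corpus entry might look roughly like the record below. The field names, value formats, and file path are hypothetical placeholders, not the released metadata schema.

```python
# Hypothetical metadata record for one utterance. Field names and values are
# illustrative only; consult the released corpus for the actual schema.
example_entry = {
    "audio": "wav/example_0001.wav",   # placeholder path
    "domain": "Storytelling",          # one of the 10 domains
    "text": "两只小企鹅都有嘢食",        # final transcription after output voting
    "text_confidence": 0.93,           # confidence of the voted transcription
    "speaker": "SPK000123",            # speaker identity label
    "gender": "female",
    "age": "adult",
    "quality_score": 3.4,              # automatic speech quality estimate
}
```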

Data Samples

Storytelling
- 两只小企鹅都有嘢食
- 刘备仲马鞭一指蜀兵一齐掩杀过去打到吴兵大败唉刘备八路兵马以雷霆万钧之势啊杀到吴兵啊尸横遍野血流成河
- 第日长老就请咗成班盗木师傅嚟啦

Entertainment
- 叫做诶诶直入式你个脑部里边咧记得呢一个嘅以前香港有一个广告好出名嘅佢乜嘢都冇噶净系影住喺弥敦道佢哋间铺头嘅啫但系就不停有人嗌啦平平吧平吧
- 原来王力宏咧系佢家中里面咧成就最低个吓哇
- 大约系八分钟之后或者十分钟啦佢就会开始滚翻起

Drama
- 忽然从光线死角嘅阴影度窜出一只大猫
- 无论你提出任何嘅要求
- 跟住佢就坐低马双手运力

Vlog
- 今日我带大家去见识一位九零后嘅靓仔咧
- 咁咁多样材料咁我哋首先第一步处理咗一件
- 咁咧当然睇月路咧我哋要睇

Commentary
- 香港嘅消费市场从此不一样
- 啲点样对于佢哋嘅服务态度啊不透过呢一年左右嘅时间啦其实大家都静一静啦咁你就会见到香港嘅经济其实
- 二零零三年中央政府推出个人游计划挽救沙士后一蹶不振的香港经济

ASR Leaderboard

| Model | #Params (M) | In-House: Dialogue | In-House: Reading | Open-Source: yue | Open-Source: HK | Open-Source: MDCC | Open-Source: Daily_Use | Open-Source: Commands | WSYue-eval: Short | WSYue-eval: Long |
|---|---|---|---|---|---|---|---|---|---|---|
| **w/o LLM** | | | | | | | | | | |
| Conformer-Yue⭐ | 130 | 16.57 | 7.82 | 7.72 | 11.42 | 5.73 | 5.73 | 8.97 | 5.05 | 8.89 |
| Paraformer | 220 | 83.22 | 51.97 | 70.16 | 68.49 | 47.67 | 79.31 | 69.32 | 73.64 | 89.00 |
| SenseVoice-small | 234 | 21.08 | 6.52 | 8.05 | 7.34 | 6.34 | 5.74 | 6.65 | 6.69 | 9.95 |
| SenseVoice-s-Yue⭐ | 234 | 19.19 | 6.71 | 6.87 | 8.68 | 5.43 | 5.24 | 6.93 | 5.23 | 8.63 |
| Dolphin-small | 372 | 59.20 | 7.38 | 39.69 | 51.29 | 26.39 | 7.21 | 9.68 | 32.32 | 58.20 |
| TeleASR | 700 | 37.18 | 7.27 | 7.02 | 7.88 | 6.25 | 8.02 | 5.98 | 6.23 | 11.33 |
| Whisper-medium | 769 | 75.50 | 68.69 | 59.44 | 62.50 | 62.31 | 64.41 | 80.41 | 80.82 | 50.96 |
| Whisper-m-Yue⭐ | 769 | 18.69 | 6.86 | 6.86 | 11.03 | 5.49 | 4.70 | 8.51 | 5.05 | 8.05 |
| FireRedASR-AED-L | 1100 | 73.70 | 18.72 | 43.93 | 43.33 | 34.53 | 48.05 | 49.99 | 55.37 | 50.26 |
| Whisper-large-v3 | 1550 | 45.09 | 15.46 | 12.85 | 16.36 | 14.63 | 17.84 | 20.70 | 12.95 | 26.86 |
| **w/ LLM** | | | | | | | | | | |
| Qwen2.5-Omni-3B | 3000 | 72.01 | 7.49 | 12.59 | 11.75 | 38.91 | 10.59 | 25.78 | 67.95 | 88.46 |
| Kimi-Audio | 7000 | 68.65 | 24.34 | 40.90 | 38.72 | 30.72 | 44.29 | 45.54 | 50.86 | 33.49 |
| FireRedASR-LLM-L | 8300 | 73.70 | 18.72 | 43.93 | 43.33 | 34.53 | 48.05 | 49.99 | 49.87 | 45.92 |
| Conformer-LLM-Yue⭐ | 4200 | 17.22 | 6.21 | 6.23 | 9.52 | 4.35 | 4.57 | 6.98 | 4.73 | 7.91 |
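
Scores of this kind for Cantonese ASR are conventionally error rates computed from character-level edit distance (lower is better). The snippet below is a generic character error rate check using the jiwer library; it is not the official WSYue-eval scoring script, and the benchmark's own text normalization (punctuation, code-switching handling, etc.) is assumed to be applied beforehand.

```python
# Generic character-level error rate, as commonly used for Cantonese ASR.
# Not the official WSYue-eval scorer; requires `pip install jiwer`.
import jiwer

reference = "两只小企鹅都有嘢食"   # ground-truth transcription
hypothesis = "两只小企鹅都有野食"  # ASR output with one substituted character

# jiwer.cer splits both strings into characters and divides the edit
# distance by the reference length; scale by 100 to match the table.
cer = 100 * jiwer.cer(reference, hypothesis)
print(f"CER: {cer:.2f}%")  # one error over nine characters ≈ 11.11%
```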

TTS Demo

Interactive audio demos are available on the 🎤 Demo Page linked above.

TTS System Performance Comparison

Evaluation Overview

Objective and subjective evaluation results of different TTS systems on the WSYue-TTS-eval benchmark.
| Model | Base MER (%) | Base SIM | Coverage MER (%) | Coverage SIM | UTMOSv2 | I-MOS | S-MOS | A-MOS |
|---|---|---|---|---|---|---|---|---|
| Llasa-1B | 53.31 | 0.732 | 43.68 | 0.754 | 2.360 | 2.60 ± 1.01 | 3.05 ± 0.87 | 2.32 ± 0.98 |
| Step-Audio-TTS-3B | 27.79 | 0.762 | 24.25 | 0.781 | 2.496 | 3.22 ± 0.70 | 3.14 ± 0.58 | 2.82 ± 0.69 |
| CosyVoice2 | 14.38 | 0.812 | 13.74 | 0.826 | 2.989 | 3.72 ± 0.50 | 3.52 ± 0.36 | 3.22 ± 0.60 |
| Edge-TTS* | 8.30 | - | 9.27 | - | 2.997 | 4.12 ± 0.28 | - | 3.48 ± 0.56 |
| Llasa-1B-Yue | 10.89 | 0.762 | 12.78 | 0.772 | 2.696 | 4.30 ± 0.23 | 4.11 ± 0.37 | 4.34 ± 0.34 |

\* Commercial system with a single fixed speaker; speaker similarity is not evaluated.
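
For reference, the SIM column measures speaker similarity between synthesized and reference speech, commonly computed as the cosine similarity between speaker embeddings from a speaker verification model. The exact embedding model used by WSYue-TTS-eval is not specified here, so the sketch below uses random vectors as stand-ins for real embeddings.

```python
# Generic speaker-similarity (SIM) computation: cosine similarity between
# speaker embeddings. The embedding model used by the benchmark is not
# specified here; random vectors stand in for real embeddings.
import numpy as np


def speaker_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embedding vectors."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))


# Stand-in embeddings; in practice these come from a speaker encoder applied
# to the reference prompt and the synthesized utterance.
rng = np.random.default_rng(0)
ref_emb = rng.standard_normal(192)
syn_emb = ref_emb + 0.5 * rng.standard_normal(192)
print(f"SIM: {speaker_similarity(ref_emb, syn_emb):.3f}")
```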