Can YouTube Stream Recordings Improve Speech Recognition for Air Traffic Control?

Authors

N. Wüstenbecker, O. Ohneiser, M. Kleinert

DOI:

https://doi.org/10.59490/joas.2026.8477

Keywords:

Air Traffic Control, Automatic Speech Recognition, Public Dataset, Large Language Model

Abstract

Automatic speech recognition (ASR) for air traffic control (ATC) faces severe training data scarcity due to operational recording restrictions and the expense of domain-expert transcription. We address this limitation with an automated pipeline that extracts large-scale, high-quality training data from publicly available YouTube streams of virtual ATC simulator sessions on networks such as VATSIM and IVAO. Our approach systematically processes over 2,000 hours of content spanning 709 videos from virtual airports and airspaces in 17 countries on multiple continents, across operational domains (ground, tower, approach, en-route), and with diverse speaker accents. The pipeline employs speaker diarization for utterance segmentation, parallel transcription with three complementary ASR architectures that have distinct error characteristics, and large language model (LLM)-based transcript fusion that synthesizes improved pseudo-labels while filtering non-ATC content. Manual verification on a stratified 120-minute evaluation set shows a word error rate of 10.2% for controller speech and 18.3% for pilot speech, a 37% relative improvement over the best individual model that establishes pseudo-label quality sufficient for downstream model training. We demonstrate the feasibility of this approach by training a compact 115M-parameter ASR model exclusively on automatically generated transcripts, without any manually annotated operational data. On the operational ATCO2 benchmark it reaches a 21.1% word error rate, compared to 35.6% for published baselines trained on smaller manually transcribed datasets, despite the domain gap between virtual and operational ATC, while running inference approximately five times faster. These results demonstrate that large-scale, geographically and acoustically diverse pseudo-labeled data can effectively compensate for moderate label noise when training specialized-domain speech recognition systems.
We openly release the complete processing pipeline, curated video collection, and our trained model to enable reproducible research.
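The fusion step described in the abstract combines three parallel ASR hypotheses into a single improved pseudo-label using an LLM. As a minimal illustrative stand-in (not the paper's method), the same input/output shape can be sketched with a ROVER-style per-position majority vote over word tokens; the function name and the example hypotheses below are hypothetical:

```python
from collections import Counter
from itertools import zip_longest

def fuse_transcripts(hypotheses):
    """Fuse word-level hypotheses by per-position majority vote.

    The paper uses LLM-based fusion; this simple voting scheme only
    illustrates the idea of reconciling multiple ASR outputs with
    distinct error characteristics into one pseudo-label.
    """
    # Walk the hypotheses position by position, padding shorter ones.
    token_rows = zip_longest(*(h.split() for h in hypotheses), fillvalue="")
    fused = []
    for row in token_rows:
        # Keep the most frequent non-empty word at this position.
        word, _count = Counter(w for w in row if w).most_common(1)[0]
        fused.append(word)
    return " ".join(fused)

# Three hypothetical ASR outputs for one utterance (illustrative only):
hyps = [
    "speedbird one two three descend flight level one two zero",
    "speedbird one two three descent flight level one two zero",
    "speedbird one too three descend flight level one two zero",
]
print(fuse_transcripts(hyps))
# → speedbird one two three descend flight level one two zero
```

An LLM replaces the vote with context-aware synthesis, which additionally lets it normalize ATC phraseology and discard non-ATC content, something positional voting cannot do.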

Published

2026-03-20

How to Cite

Wüstenbecker, N., Ohneiser, O., & Kleinert, M. (2026). Can YouTube Stream Recordings Improve Speech Recognition for Air Traffic Control? Journal of Open Aviation Science, 4(2). https://doi.org/10.59490/joas.2026.8477