EWSLT 2026

Evaluation Workshop on Speech and Language Technologies

IIIT Hyderabad, India

December 2026 (Dates TBA)

Workshop Overview

Recent progress in speech and language technologies, increasingly driven by scale (larger models, larger datasets, and more compute), has led to impressive performance gains. However, these systems still exhibit significant performance gaps that are readily apparent to human users yet largely invisible to standard evaluation methods. As a result, evaluation has become a checklist exercise that reports incremental improvements on existing benchmarks rather than providing a meaningful assessment of what systems actually understand. This creates a growing gap between technological capability and our ability to measure, interpret, and ground that progress in principled ways.

The EWSLT workshop aims to address this gap by bringing together researchers and practitioners from across the globe who work on speech and language technologies, providing a common platform to discuss, debate, and develop innovative, effective, interpretable, and linguistically aware evaluation methodologies. This includes new standards, benchmarks, datasets, metrics, and shared challenges that encourage deeper and more reliable ways of understanding progress in speech and language technologies.

The workshop will include invited talks, paper presentations, panel discussions, and shared evaluation challenges focused on addressing the most pressing bottlenecks in the evaluation of modern speech and language technologies.

Important Dates

Paper submission deadline: TBA
Notification of acceptance: TBA
Camera-ready submission: TBA
Workshop date: December 2026

Call for Papers

We invite submissions describing novel standards, evaluation methods, benchmarks, datasets, and analysis of speech and language technologies. Submissions must follow the ACL formatting guidelines. Accepted papers will be presented at the workshop and included in the proceedings.

Topics of interest include (but are not limited to):

  • Evaluation of Automatic Speech Recognition (ASR)
  • Evaluation of Text-to-Speech (TTS)
  • Speech Translation evaluation
  • Evaluation of Speech-to-Speech systems
  • Speech LLM evaluation
  • Evaluation metrics beyond WER
  • Human vs. automatic evaluation methods
  • Low-resource and multilingual evaluation
  • Evaluation of expressive and prosodic speech
  • Bias and robustness in speech systems

Shared Tasks

EWSLT will host shared challenges focusing on benchmarking speech technologies.

Details about datasets and evaluation protocols will be released soon. Participants are encouraged to form cross-institutional teams to tackle these benchmarks.

Challenge Tracks
  • ASR Evaluation Challenge
  • TTS Naturalness and Prosody Evaluation
  • Speech Translation Benchmark
  • Speech-LLM Evaluation Challenge

Organizers

  • Language Technologies Research Center (LTRC),
    IIIT Hyderabad

Program Committee

Committee members TBA.

Contact

For all inquiries regarding submissions, shared tasks, or sponsorships for EWSLT 2026, please reach out to the organizing committee.

ewslt.iiit@gmail.com