At this week’s lab meeting, Jetic will present a recent OpenAI paper on speech recognition, “Robust Speech Recognition via Large-Scale Weak Supervision” (Whisper):
Abstract: We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
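If you would like to try the released models before the meeting, here is a minimal sketch of zero-shot transcription with the open-source whisper Python package. It assumes `pip install openai-whisper`, ffmpeg on your PATH, and a local audio file; the checkpoint name and file name are illustrative, not prescribed by the paper.

```python
# Minimal zero-shot transcription with a released Whisper checkpoint.
# Assumes: pip install openai-whisper, ffmpeg available, and a local audio.mp3.
import whisper

# "base" is one of the released model sizes; larger ones ("small", "medium",
# "large") trade speed for accuracy.
model = whisper.load_model("base")

# transcribe() handles audio loading and decoding; the spoken language is
# detected automatically unless you pass language="...".
result = model.transcribe("audio.mp3")
print(result["text"])
```

No fine-tuning is involved: the pretrained checkpoint is applied directly, which is the zero-shot setting the abstract refers to.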
Wednesday, 5 April at 12pm.
This will be a hybrid meeting at ASB 9921. The Zoom link will be posted on Zulip.