This project will research a new interaction paradigm for search engines in which all input and output is mediated via speech. While such information systems have long been important for the visually impaired, a renewed focus on speech is emerging, driven by the ever-growing sales of internet-enabled smartphones. These phones allow internet access in new contexts that require hands-free and eyes-free interaction; one example is searching for information while driving. Smartphones are also being adopted by a new and large population of users across the world, many of whom struggle with literacy and therefore also require access mediated by speech. Current search systems serve this mode of interaction poorly. Recent research has shown that one cannot simply 'bolt on' speech recognisers and screen readers to an existing system: a fundamental change to the way search is conducted is required.
Our project aim, then, is to research a new framework for effective information retrieval over a speech-only channel: Spoken Conversational Search (SCS), which provides a conversational approach to determining user information needs, presenting results, and enabling search reformulation.
- JASIST Special Issue on Conversational Approaches to Information Retrieval
- 1st International Workshop on Conversational Approaches to Information Retrieval (CAIR’17), co-located with SIGIR’17 (Tokyo, Japan)
- J.R. Trippas, D. Spina, L. Cavedon and M. Sanderson. "A Conversational Search Transcription Protocol and Analysis." In SIGIR 1st International Workshop on Conversational Approaches to Information Retrieval (CAIR'17), 2017.
- J.R. Trippas, D. Spina, L. Cavedon and M. Sanderson. "Crowdsourcing User Preferences and Query Judgments for Speech-Only Search." In SIGIR 1st International Workshop on Conversational Approaches to Information Retrieval (CAIR'17), 2017.
- D. Spina, J.R. Trippas, L. Cavedon and M. Sanderson. "Extracting Audio Summaries to Support Effective Spoken Document Search." Journal of the Association for Information Science and Technology (JASIST), 2017.
- J.R. Trippas, D. Spina, L. Cavedon and M. Sanderson. "How Do People Interact in Conversational Speech-Only Search Tasks: A Preliminary Analysis." In Proceedings of the 2nd ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR'17), 2017.
- A. Albahem, D. Spina, L. Cavedon and F. Scholer. "RMIT @ TREC 2016 Dynamic Domain: Exploiting Passage Representation for Retrieval and Relevance Feedback." In TREC 2016, 2016.
- D. Spina, J.R. Trippas, L. Cavedon and M. Sanderson. "SpeakerLDA: Discovering Topics in Transcribed Multi-Speaker Audio Contents." In Proceedings of the Third Edition Workshop on Speech, Language & Audio in Multimedia, pp. 7-10. ACM, 2015.
- J.R. Trippas, D. Spina, M. Sanderson and L. Cavedon. "Results Presentation Methods for a Spoken Conversational Search System." In Proceedings of the CIKM First International Workshop on Novel Web Search Interfaces and Systems, pp. 13-15. ACM, 2015.
- J.R. Trippas, D. Spina, M. Sanderson and L. Cavedon. "Towards Understanding the Impact of Length in Web Search Result Summaries over a Speech-only Communication Channel." In Proceedings of SIGIR'15, 2015.
- Dataset of spoken conversational search utterances: https://jtrippas.github.io/Spoken-Conversational-Search/
- Dataset for preference study on audio podcast summaries: http://damiano.github.io/podcastsummaries/
- Resources for conversational search: https://github.com/ConversationalSearch/ResourceGuide/wiki