This project will research a new interaction paradigm for search engines in which all input and output is mediated via speech. While such information systems have long been important for the visually impaired, a renewed focus on speech is emerging, driven by the ever-growing sales of internet-enabled smartphones. These phones allow internet access in new contexts that require hands- and eyes-free interaction, such as searching for information while driving. Smartphones are also being adopted by a new and large population of users across the world, many of whom struggle with literacy and therefore also require speech-mediated access. Current search systems serve this mode poorly: recent research has shown that one cannot simply ‘bolt on’ speech recognisers and screen readers to an existing system; a fundamental change to the way search is conducted is required.
Our Project Aim, then, is to research a new framework for effective information retrieval over a speech-only channel: Spoken Conversational Search (SCS), which provides a conversational approach to determining user information needs, presenting results, and enabling search reformulation.
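As an illustration only, the following minimal Python sketch shows one way the three SCS stages named above (eliciting the information need, presenting results over speech, and supporting reformulation) could be structured as a turn-taking loop. All names here (`SearchState`, `elicit_need`, `present_results`, `scs_loop`) and the toy retrieval backend are hypothetical assumptions for exposition, not part of any proposed or existing system.

```python
from dataclasses import dataclass, field

@dataclass
class SearchState:
    """Dialogue state carried across conversational turns (illustrative)."""
    query: str = ""
    results: list[str] = field(default_factory=list)
    turn: int = 0

def elicit_need(state: SearchState, utterance: str) -> SearchState:
    """Refine the evolving query from the user's spoken utterance (stubbed:
    real need elicitation would involve clarifying questions)."""
    state.query = (state.query + " " + utterance).strip()
    return state

def present_results(state: SearchState) -> str:
    """Render the top result as a short spoken summary, inviting
    reformulation when nothing useful was retrieved (stubbed)."""
    if not state.results:
        return "I found nothing yet. Could you tell me more?"
    return f"The top result is: {state.results[0]}. Shall I continue or refine?"

def scs_loop(utterances: list[str], search) -> None:
    """Drive one elicit -> search -> present -> reformulate cycle per turn."""
    state = SearchState()
    for utterance in utterances:
        state = elicit_need(state, utterance)
        state.results = search(state.query)  # any backend retrieval call
        state.turn += 1
        print(f"Turn {state.turn}: {present_results(state)}")

if __name__ == "__main__":
    # Toy retrieval backend: a single hand-built index entry.
    toy_index = {"weather melbourne": ["Melbourne: 18 degrees, light rain"]}
    scs_loop(["weather", "melbourne"], lambda q: toy_index.get(q, []))
```

The sketch deliberately keeps retrieval behind a single `search` callable, since the conversational loop, not the ranking backend, is what distinguishes SCS from conventional query-response search.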
- A. Albahem, D. Spina, L. Cavedon and F. Scholer. “RMIT @ TREC 2016 Dynamic Domain: Exploiting Passage Representation for Retrieval and Relevance Feedback.” In Proceedings of TREC 2016, 2016.
- D. Spina, J.R. Trippas, L. Cavedon and M. Sanderson. “SpeakerLDA: Discovering Topics in Transcribed Multi-Speaker Audio Contents.” In Proceedings of the Third Edition Workshop on Speech, Language & Audio in Multimedia, pp. 7-10. ACM, 2015.
- J.R. Trippas, D. Spina, M. Sanderson and L. Cavedon. “Results Presentation Methods for a Spoken Conversational Search System.” In Proceedings of the CIKM First International Workshop on Novel Web Search Interfaces and Systems, pp. 13-15. ACM, 2015.
- J.R. Trippas, D. Spina, M. Sanderson and L. Cavedon. “Towards Understanding the Impact of Length in Web Search Result Summaries over a Speech-only Communication Channel.” In Proceedings of SIGIR 2015. ACM, 2015.