Current projects


Mobile System for Rehabilitative Vocal Assistance of Surgical Aphonia (2014-2016)


Past projects



Speech Synthesis that Improves Through Adaptive Learning (2011-2014)


In order to be accepted by users, the voice of a spoken interaction system must be natural and appropriate for the content: using the same voice for every application is not acceptable. Yet creating a speech synthesiser for a new language or domain is prohibitively expensive, because current technology relies on labelled data and human expertise. Systems comprise rules, statistical models, and data, all requiring careful tuning by experienced engineers.

As a result, speech synthesis is available only from a small number of vendors, offering generic products not tailored to any application domain. Systems are not portable: creating a bespoke system for a specific application is hard, because it requires substantial effort to re-engineer every component. Take-up by potential end users is therefore limited, and the range of feasible applications is narrow. Synthesis is often deployed as an off-the-shelf component whose speaking style is inappropriate for applications such as dialogue, speech translation, games, personal assistants, communication aids, SMS-to-speech conversion, e-learning, and toys, among the many applications where a specific speaking style is important.

We are developing methods that enable speech synthesis systems to be constructed from audio and text data alone, and to continue learning after deployment. General-purpose or specialised systems for any domain or language will then become feasible. Our objectives are:

  • ADAPTABILITY: create highly portable and adaptable speech synthesis technology suitable for any domain or language
  • LEARNING FROM DATA AND INTERACTION: provide a complete, consistent framework in which every component of a speech synthesis system can be learned and improved
  • SPEAKING STYLE: enable the generation of natural, conversational, highly expressive synthetic speech which is appropriate to the wider context
  • DEMONSTRATION AND EVALUATION: automatic creation of a new speech synthesiser from scratch, and feedback-driven online learning, with perceptual evaluations
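To give a flavour of what feedback-driven online learning can mean in practice, here is a minimal, purely illustrative sketch (not the project's actual system): a synthesiser exposes a single control parameter, a speaking-rate multiplier, and nudges it towards settings that listeners rate highly. The class name, parameter, and update rule are all hypothetical simplifications.

```python
class AdaptiveSynthesiser:
    """Toy model of feedback-driven adaptation: one speaking-rate
    parameter is nudged towards settings listeners rate highly.

    This is an illustrative sketch only; a real system would adapt
    many acoustic-model parameters, not a single scalar.
    """

    def __init__(self, rate=1.0, step=0.1):
        self.rate = rate  # current speaking-rate multiplier
        self.step = step  # learning rate for feedback updates

    def update(self, tried_rate, rating):
        """Incorporate one piece of listener feedback.

        tried_rate: the rate used for the utterance just played.
        rating: listener preference in [0, 1]; higher means better.
        The deployed rate moves towards tried_rate in proportion
        to how strongly the listener preferred it.
        """
        self.rate += self.step * rating * (tried_rate - self.rate)
        return self.rate


if __name__ == "__main__":
    synth = AdaptiveSynthesiser()
    # Listeners consistently prefer slightly faster speech (rate 1.2),
    # so repeated positive feedback pulls the deployed rate towards it.
    for _ in range(50):
        synth.update(tried_rate=1.2, rating=1.0)
    print(round(synth.rate, 3))
```

After enough consistent feedback the deployed rate converges on the preferred setting; in a real deployment the ratings would be noisy perceptual judgements, and the update would be correspondingly more conservative.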


Sound2Sense (2007-2011)


S2S is an interdisciplinary EC-funded Marie Curie Research Training Network (MC-RTN) involving engineers, computer scientists, psychologists, and linguistic phoneticians.
We use a variety of approaches to investigate what types of information are available in the speech signal, and how listeners use that information when they are listening in their native language, or in a foreign language, or in a noisy place like a railway station, when it is hard to hear the speech. These three types of listening situation allow us to see how listeners actively use their knowledge, together with the speech they hear, to understand a message.
Recent research shows that quite fine phonetic detail in the speech signal can carry information crucial to successfully understanding every aspect of a message, from its formal linguistic content, such as words and grammar, to the interactional structure that keeps a conversation going. This is not the traditional view, and it challenges most models of speech processing, especially the central role they give to phonemes and syllables. In contrast, two of S2S's fundamental principles are that phonetic information is encoded in units of different lengths and degrees of complexity, and that any given sound in the signal fulfils multiple communicative functions simultaneously, with its fine detail indicating what those functions are.