Yanchao Yu

Dr Yanchao Yu

Lecturer

Biography

Yanchao Yu is a lecturer in the School of Computing at Edinburgh Napier University. He holds a BSc from Beijing Information Science and Technology University (China), an MSc in Software Engineering (Distinction) and a PhD in Artificial Intelligence from Heriot-Watt University (UK). After his PhD, he worked as a postdoctoral researcher at Heriot-Watt's Interaction Lab, investigating conversational AI methods for supporting elderly healthcare in hospitals, where he led the design and development of a multimodal conversational agent. In addition, Yanchao has extensive industrial experience, having worked as a Dialogue Modelling Scientist at the voice-AI company Voysis (acquired by Apple in early 2020) and as an NLP researcher at the conversational-AI company Alana.

Research Interests
Yanchao's research interests lie in exploring, designing and building smart agents/robots that help humans solve realistic, everyday tasks and problems through natural conversation. More specifically, he is interested in building data-driven (spoken) conversational systems, including Natural Language Understanding (NLU), Natural Language Generation (NLG) and Dialogue Management (DM). His recent research focuses on continuously teaching robots to proactively learn vision-language groundings through natural-language interaction with humans, using machine learning and deep learning techniques. He believes conversational AI will become part of people's daily lives (e.g. domestic, public and work tasks).

Esteem

Conference Organising Activity

  • Registration Chair
  • Co-Chair for SIGDIAL 2022 Special Session - Natural Language in Human Robot Interaction (NLiHRI)
  • Short Papers Chair
  • Provocations Chair

 


From documents to dialogue: Context matters in common sense-enhanced task-based dialogue grounded in documents

Journal Article
Strathearn, C., Gkatzia, D., & Yu, Y. (2025)
From documents to dialogue: Context matters in common sense-enhanced task-based dialogue grounded in documents. Expert Systems with Applications, 279, Article 127304. https://doi.org/10.1016/j.eswa.2025.127304
Humans can engage in a conversation to collaborate on multi-step tasks and divert briefly to complete essential sub-tasks, such as asking for confirmation or clarification, be...

How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction

Conference Proceeding
Orme, M., Yu, Y., & Tan, Z. (in press)
How Much do Robots Understand Rudeness? Challenges in Human-Robot Interaction.
This paper concerns the pressing need to understand and manage inappropriate language within the evolving human-robot interaction (HRI) landscape. As intelligent systems and r...

MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit

Presentation / Conference Contribution
Yu, Y., & Oduronbi, D. (2023, July)
MoDEsT: a Modular Dialogue Experiments and Demonstration Toolkit. Presented at CUI '23: ACM Conference on Conversational User Interfaces, Eindhoven, Netherlands
We present a modular dialogue experiments and demonstration toolkit (MoDEsT) that assists researchers in planning tailored conversational AI-related studies. The platform can:...

Combining Visual and Social Dialogue for Human-Robot Interaction

Presentation / Conference Contribution
Gunson, N., Hernandez Garcia, D., Part, J. L., Yu, Y., Sieińska, W., Dondrup, C., & Lemon, O. (2021, October)
Combining Visual and Social Dialogue for Human-Robot Interaction. Presented at 2021 International Conference on Multimodal Interaction, Montréal, QC, Canada
We will demonstrate a prototype multimodal conversational AI system that will act as a receptionist in a hospital waiting room, combining visually-grounded dialogue with socia...

Coronabot: A conversational AI system for tackling misinformation

Presentation / Conference Contribution
Gunson, N., Sieińska, W., Yu, Y., Hernandez Garcia, D., Part, J. L., Dondrup, C., & Lemon, O. (2021, September)
Coronabot: A conversational AI system for tackling misinformation. Presented at Conference on Information Technology for Social Good, Rome, Italy
Covid-19 has brought with it an onslaught of information for the public, some true and some false, across virtually every platform. For an individual, the task of sifting thro...

Towards visual dialogue for human-robot interaction

Presentation / Conference Contribution
Part, J. L., Hernández García, D., Yu, Y., Gunson, N., Dondrup, C., & Lemon, O. (2021, March)
Towards visual dialogue for human-robot interaction. Presented at HRI '21: ACM/IEEE International Conference on Human-Robot Interaction, Boulder, CO, USA
The goal of the EU H2020-ICT funded SPRING project is to develop a socially pertinent robot to carry out tasks in a gerontological healthcare unit. In this context, being able...

Optimising strategies for learning visually grounded word meanings through interaction

Thesis
Yu, Y. Optimising strategies for learning visually grounded word meanings through interaction. (Thesis)
http://researchrepository.napier.ac.uk/Output/3125858
Language Grounding is a fundamental problem in AI, regarding how symbols in Natural Language (e.g. words and phrases) refer to aspects of the physical environment (e.g. objec...

Incrementally learning semantic attributes through dialogue interaction

Presentation / Conference Contribution
Vanzo, A., Part, J. L., Yu, Y., Nardi, D., & Lemon, O. (2018, July)
Incrementally learning semantic attributes through dialogue interaction. Presented at AAMAS '18: 17th International Conference on Autonomous Agents and MultiAgent Systems, Stockholm, Sweden
Enabling a robot to properly interact with users plays a key role in the effective deployment of robotic platforms in domestic environments. Robots must be able to rely on int...

An Incremental Dialogue System for Learning Visually Grounded Word Meanings (demonstration system)

Presentation / Conference Contribution
Yu, Y., Eshghi, A., & Lemon, O. (2018, June)
An Incremental Dialogue System for Learning Visually Grounded Word Meanings (demonstration system). Poster presented at Workshop on Dialogue and Perception 2018, Gothenburg

Alana: Social dialogue using an ensemble model and a ranker trained on user feedback

Presentation / Conference Contribution
Papaioannou, I., Curry, A. C., Part, J. L., Shalyminov, I., Xu, X., Yu, Y., Dušek, O., Rieser, V., & Lemon, O. (2017, December)
Alana: Social dialogue using an ensemble model and a ranker trained on user feedback. Presented at Alexa Prize SocialBot Grand Challenge 1
We describe our Alexa prize system (called ‘Alana’) which consists of an ensemble of bots, combining rule-based and machine learning systems, and using a contextual ranking me...

Current Post Grad projects