UGIF: UI Grounded Instruction Following

Abstract

New smartphone users often struggle to engage with their devices and use only a limited set of features, such as calling and messaging. They are hesitant to explore the smartphone on their own and rely on experienced users to teach them how to use the phone, but experienced users are not always around to guide them. To help new users learn to use the phone independently, we propose a natural language-based instruction following agent that operates over the UI and shows the user how to perform various tasks. Common how-to questions, such as 'How to block calls from unknown numbers?', are documented on support sites as a sequence of natural language steps describing what the user should do. We parse these steps using Large Language Models (LLMs) and generate macros that can be executed on-device when the user asks a query. To evaluate this agent, we introduce UGIF-DataSet, a multilingual, multimodal UI-grounded dataset for step-by-step task completion on the smartphone. It contains 523 natural language instructions paired with sequences of multilingual UI screens and actions that show how to execute each task in eight languages. We compare the performance of several large language models, including PaLM and GPT-3, and find that the end-to-end task completion success rate is 48% for English UI but drops to 32% for non-English languages. We analyse the common failure modes of existing models on this task and point out areas for improvement.
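To make the parsing step concrete, below is a minimal sketch (not the authors' implementation) of how support-site steps might be turned into an executable macro. The prompt format, the `call_llm` stub, and the small action vocabulary (`TAP`, `TOGGLE`, `OPEN_APP`) are all assumptions for illustration; a real system would call an actual LLM API such as PaLM or GPT-3 and ground the targets in the live UI tree.

```python
# Sketch: parse natural language how-to steps into a UI macro with an LLM.
# `call_llm` is a hypothetical stand-in for any LLM API; the action
# vocabulary below is assumed, not taken from the paper.
from dataclasses import dataclass

PROMPT_TEMPLATE = """Convert the support-site steps into one UI action per line.
Allowed actions: TAP(<element text>), TOGGLE(<element text>), OPEN_APP(<app>).

Steps:
{steps}

Actions:
"""

@dataclass
class UIAction:
    op: str      # e.g. "TAP"
    target: str  # e.g. "Blocked numbers"

def call_llm(prompt: str) -> str:
    # Hypothetical LLM call; replace with a real model API.
    # Canned output matching the running example below.
    return ("OPEN_APP(Phone)\n"
            "TAP(Settings)\n"
            "TAP(Blocked numbers)\n"
            "TOGGLE(Unknown)")

def parse_macro(steps: str) -> list[UIAction]:
    """Ask the LLM to rewrite steps as actions, then parse each line."""
    raw = call_llm(PROMPT_TEMPLATE.format(steps=steps))
    actions = []
    for line in raw.strip().splitlines():
        op, _, rest = line.partition("(")
        actions.append(UIAction(op=op.strip(), target=rest.rstrip(")")))
    return actions

if __name__ == "__main__":
    steps = ("1. Open the Phone app. 2. Tap Settings. "
             "3. Tap Blocked numbers. 4. Turn on Unknown.")
    for action in parse_macro(steps):
        print(action)
```

Each parsed `UIAction` would then be matched against on-screen elements at execution time, which is where the multilingual UI screens in UGIF-DataSet come in: the same macro must be grounded against non-English element labels.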

Sagar Gubbi Venkatesh, Google

Sagar Gubbi is a postdoctoral researcher advised by Partha Talukdar at Google Research. His current research is on helping novice users learn how to navigate the Android UI by leveraging large language models. He has previously worked on low-power circuits and imitation learning for robots.