Unblind Your Apps: Predicting Natural-Language Labels for Mobile GUI Components by Deep Learning


Bibliographic Details
Published in: 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), pp. 322-334
Main Authors: Chen, Jieshan; Chen, Chunyang; Xing, Zhenchang; Xu, Xiwei; Zhu, Liming; Li, Guoqiang; Wang, Jinshui
Format: Conference Proceeding
Language: English
Published: ACM, 01.10.2020
ISSN: 1558-1225
DOI: 10.1145/3377811.3380327

Summary: According to the World Health Organization (WHO), approximately 1.3 billion people worldwide live with some form of vision impairment, of whom 36 million are blind. Because of this disability, integrating these users into society is a challenging problem. The recent rise of smartphones offers a new avenue, giving blind users convenient access to information and services for understanding the world. Users with vision impairment can use the screen reader embedded in mobile operating systems to read the content of each screen within an app, and use gestures to interact with the phone. The prerequisite for using screen readers, however, is that developers add natural-language labels to image-based components when developing the app. Unfortunately, more than 77% of apps have missing-label issues, according to our analysis of 10,408 Android apps. Most of these issues stem from developers' lack of awareness of, and knowledge about, this minority group. Even when developers do want to add labels to UI components, they may struggle to come up with concise and clear descriptions, as most of them have no visual impairment themselves. To overcome these challenges, we develop a deep-learning-based model, called LabelDroid, to automatically predict the labels of image-based buttons by learning from large-scale commercial apps in Google Play. Experimental results show that our model makes accurate predictions and that the generated labels are of higher quality than those written by real Android developers.
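The missing-label analysis the abstract describes can be approximated with a short script. The sketch below is illustrative only: the widget class names follow Android's accessibility conventions (image-based widgets whose `content-desc` attribute is empty are unreadable to a screen reader), but the sample screen data is hypothetical, not drawn from the paper's dataset of 10,408 apps.

```python
# Illustrative sketch: flag image-based GUI components that lack an
# accessibility label (Android's content-desc / contentDescription).
# The sample hierarchy below is hypothetical, not from the paper's dataset.

IMAGE_BASED_CLASSES = {"android.widget.ImageButton", "android.widget.ImageView"}

def missing_label_rate(widgets):
    """Fraction of image-based widgets whose content-desc is empty or absent."""
    image_widgets = [w for w in widgets if w["class"] in IMAGE_BASED_CLASSES]
    if not image_widgets:
        return 0.0
    missing = [w for w in image_widgets if not w.get("content-desc")]
    return len(missing) / len(image_widgets)

# Hypothetical screen: three image-based widgets, only one labelled.
screen = [
    {"class": "android.widget.ImageButton", "content-desc": "Search"},
    {"class": "android.widget.ImageButton", "content-desc": ""},
    {"class": "android.widget.ImageView",   "content-desc": None},
    # Text widgets are read aloud directly, so they are excluded here.
    {"class": "android.widget.TextView",    "content-desc": ""},
]

print(missing_label_rate(screen))  # 2 of the 3 image-based widgets are unlabelled
```

Scaled from one screen to a whole corpus of decompiled apps, the same per-widget check yields the kind of app-level coverage statistic the abstract reports.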