Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge

Pierre Dognin
Igor Melnyk
Youssef Mroueh
Inkit Padhi
Mattia Rigotti
Jarret Ross
Yair Schiff
Richard A. Young
Brian Belgodere

Abstract

Image captioning has recently demonstrated impressive progress, largely owing to the introduction of neural network algorithms trained on curated datasets such as MS-COCO. Work in this field is often motivated by the promise of deploying captioning systems in practical applications. However, the scarcity of data and contexts in many competition datasets limits the utility of systems trained on them as assistive technologies in real-world settings, such as helping visually impaired people navigate and accomplish everyday tasks. This gap motivated the introduction of the novel VizWiz dataset, which consists of images taken by visually impaired people and captions that contain useful, task-oriented information. In an attempt to help the machine learning and computer vision fields realize their promise of producing technologies with positive social impact, the curators of the VizWiz dataset host several competitions, including one for image captioning. This work details the theory and engineering behind our winning submission to the 2020 captioning competition. Our work provides a step towards improved assistive image captioning systems.


This article appears in the special track on AI & Society.
