
dc.contributor.advisor: Parikh, Devi
dc.contributor.advisor: Batra, Dhruv
dc.contributor.advisor: Lee, Stefan
dc.contributor.author: Chattopadhyay, Prithvijit
dc.date.accessioned: 2019-05-29T14:04:43Z
dc.date.available: 2019-05-29T14:04:43Z
dc.date.created: 2019-05
dc.date.issued: 2019-04-26
dc.date.submitted: May 2019
dc.identifier.uri: http://hdl.handle.net/1853/61308
dc.description.abstract: As AI continues to advance, human-AI teams are inevitable. However, progress in AI is routinely measured in isolation, without a human in the loop. It is crucial to benchmark progress in AI not just in isolation but also in terms of how it translates to helping humans perform certain tasks, i.e., the performance of human-AI teams. This thesis introduces a cooperative game, GuessWhich, to measure human-AI team performance in the specific context of the AI being a visual conversational agent. GuessWhich involves live interaction between the human and the AI. The AI, which we call Alice, is provided an image that is unseen by the human. Starting from a brief description of the image, the human questions Alice about this secret image in order to identify it from a fixed pool of images. We measure the performance of the human-Alice team by the number of guesses it takes the human to correctly identify the secret image after a fixed number of dialog rounds with Alice. We compare the performance of human-Alice teams for two versions of Alice. Our human studies reveal a counter-intuitive trend: although the AI literature shows that one version outperforms the other when paired with an AI questioner bot, we find that this improvement in AI-AI performance does not translate to improved human-AI performance. Since this implies a mismatch between benchmarking AI in isolation and benchmarking it in the context of human-AI teams, this thesis motivates the need to additionally evaluate AI in the latter setting so that progress in AI can be effectively leveraged for efficient human-AI teams.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Visual conversational agents
dc.subject: Visual dialog
dc.subject: Human-AI teams
dc.subject: Reinforcement learning
dc.subject: Machine learning
dc.subject: Computer vision
dc.subject: Artificial intelligence
dc.title: Evaluating visual conversational agents via cooperative human-AI games
dc.type: Thesis
dc.description.degree: M.S.
dc.contributor.department: Computer Science
thesis.degree.level: Masters
dc.date.updated: 2019-05-29T14:04:44Z
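
The GuessWhich evaluation protocol summarized in the abstract can be expressed as a short game loop. The Python below is a minimal sketch for illustration only: the interfaces (DummyAnswerer standing in for Alice, DummyQuestioner standing in for the human, a match-based ranking of the pool, and a default of 9 rounds) are assumptions made here, not the thesis's actual implementation.

    import random

    class DummyAnswerer:
        """Stand-in for Alice: answers questions about the secret image."""
        def answer(self, image, question, dialog):
            return f"an answer about {image}"

    class DummyQuestioner:
        """Stand-in for the human: asks questions and scores pool images."""
        def ask(self, dialog):
            return "is it outdoors?"
        def match(self, image, dialog):
            # A real questioner would score how well the image fits the dialog.
            return random.random()

    def guesses_to_find_secret(alice, questioner, secret_image, pool,
                               caption, num_rounds=9):
        """Run a fixed number of dialog rounds with Alice, then return the
        rank of the secret image in the pool, i.e. how many guesses the
        human-Alice team needs to identify it (lower is better)."""
        dialog = [caption]  # the human starts from a brief description
        for _ in range(num_rounds):
            q = questioner.ask(dialog)
            a = alice.answer(secret_image, q, dialog)
            dialog += [q, a]
        ranked = sorted(pool, key=lambda img: questioner.match(img, dialog),
                        reverse=True)
        return ranked.index(secret_image) + 1

    pool = [f"img_{i}" for i in range(50)]
    n = guesses_to_find_secret(DummyAnswerer(), DummyQuestioner(),
                               secret_image="img_7", pool=pool,
                               caption="a dog on a beach")
    print(f"secret image identified after {n} guesses")

Under this framing, a better Alice should lower the returned guess count; the thesis's finding is that a version of Alice that lowers it when the questioner is a bot does not necessarily lower it when the questioner is a human.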

