Automated support for improving software quality of mobile applications before and after release
Mobile devices are becoming the platform of choice for a number of tasks, including news consumption, online shopping, and streaming content. Mobile applications (or simply apps) play an essential role in the success of mobile platforms and fundamentally affect our lives. To improve the quality of these apps before release, companies invest considerable resources in software verification, and in particular in testing. It is therefore crucial to define and use testing approaches that are both effective and efficient. At the same time, because testing techniques cannot generally reveal all bugs, companies release apps containing latent bugs. Moreover, the environment in which apps operate changes quickly, and these changes introduce new bugs. The ability to react effectively to reported latent bugs and to changes in the environment is therefore also essential, but support for these tasks is still limited and relies mostly on manual, human-intensive approaches.

The overarching goal of this dissertation is to improve software quality by devising automated testing and maintenance techniques that address these problems. To this end, I defined a family of testing and maintenance techniques: Barista, DiffDroid, Yakusu, and AppEvolve. Barista records, encodes, and runs platform-independent test cases to help developers test their apps. DiffDroid identifies inconsistencies in the behavior of an app running on different platforms and reports them to developers. Yakusu translates natural-language bug reports into test cases, so that developers can use the generated tests to debug failures caused by latent bugs and quickly fix their apps. AppEvolve addresses bugs caused by changes in the environment: it automatically updates API usages (i.e., interactions with the underlying environment) in an app, based on how developers of other apps performed the corresponding changes.
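To make the API-update idea concrete, the sketch below illustrates it on a well-known Android change: Resources.getDrawable(int) was deprecated in API level 22, and a common replacement is ContextCompat.getDrawable(Context, int). This is only a hedged, toy illustration of the concept, not AppEvolve's actual technique (which analyzes real example updates at the code-structure level rather than with textual patterns); the generalization step here is a deliberately simplified stand-in.

```python
import re

# Example update "mined" from another app: the same API usage before and
# after that app migrated away from the deprecated call. The getDrawable
# change is a real Android API update; the matching below is a toy sketch.
EXAMPLE_BEFORE = "res.getDrawable(R.drawable.icon)"
EXAMPLE_AFTER = "ContextCompat.getDrawable(context, R.drawable.icon)"

def generalize(before, after, hole="R.drawable.icon"):
    """Turn a concrete before/after example into a reusable rewrite rule
    by abstracting the example-specific argument into a capture group."""
    pattern = re.escape(before).replace(re.escape(hole), r"(\w+\.\w+\.\w+)")
    replacement = after.replace(hole, r"\1")
    return re.compile(pattern), replacement

def apply_update(code, before, after):
    """Apply the generalized rewrite rule to a target app's code."""
    pattern, replacement = generalize(before, after)
    return pattern.sub(replacement, code)

# Target app uses the deprecated API with a different resource id.
target = "Drawable d = res.getDrawable(R.drawable.logo);"
print(apply_update(target, EXAMPLE_BEFORE, EXAMPLE_AFTER))
# -> Drawable d = ContextCompat.getDrawable(context, R.drawable.logo);
```

The key point the sketch conveys is that a single concrete example of how one developer updated an API usage can be generalized and replayed on other apps that use the same API.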
To evaluate the effectiveness of my techniques, I implemented them as prototype tools and performed a series of empirical investigations on real-world apps. The evaluation shows that the techniques are not only effective but also efficient. These results provide evidence that the techniques can be used in practice to improve the software quality of mobile apps before and after release.