dc.contributor.advisor Tovey, Craig A.
dc.contributor.author Bailey, James P.
dc.date.accessioned 2018-01-22T21:08:12Z
dc.date.available 2018-01-22T21:08:12Z
dc.date.created 2017-12
dc.date.issued 2017-08-11
dc.date.submitted December 2017
dc.identifier.uri http://hdl.handle.net/1853/59189
dc.description.abstract Most social choice algorithms rely on private data from individuals to maximize a social utility. However, many algorithms are manipulable -- an individual can manipulate her reported data to obtain an outcome she prefers, often at the cost of social good. The literature addresses this by declaring an algorithm "manipulable" or "strategy-proof". However, for many decisions we are forced either to use a manipulable algorithm or to accept an algorithm with undesirable properties; for instance, a dictatorship is the only strategy-proof way to decide an election. Thus, we view it as unwise to take an all-or-nothing approach to manipulation, since we so often have to settle for nothing. In this dissertation, we focus on algorithmic design with strategic behavior in mind. Specifically, we develop a framework to examine the effect of manipulation on social choice using a game-theoretic model. We show that the standard Nash equilibrium solution concept is insufficient to capture human behavior as it relates to deception: the combinatorial nature of many problems results in Nash equilibria where individuals lie to their own detriment. To remove these absurd equilibria and better capture human behavior, we introduce the Minimal Dishonesty refinement. In our model, each individual tells the smallest possible lie to obtain the best possible result; if she were any more honest, she would obtain a worse outcome. Our model of human behavior is supported by extensive experimental evidence in psychology and economics and allows us to better understand the likely outcomes of strategic behavior.
We also introduce a measure of manipulation -- the Price of Deception -- that quantifies the impact of strategic behavior. Specifically, the Price of Deception is the worst-case ratio between the social utility when each individual is sincere and the social utility at a Nash equilibrium. With Minimal Dishonesty and the Price of Deception, we are able to identify algorithms that are negligibly impacted by manipulation, algorithms where strategic behavior leads to arbitrarily poor outcomes, and everything in between. We demonstrate the power of Minimal Dishonesty and the Price of Deception by answering open problems in assignments, facility location, elections, and stable marriages, including a 28-year-old open problem posed by Gusfield and Irving. Our results demonstrate that the Price of Deception, like computational complexity, provides finer distinctions of manipulability than between "yes" and "no".
dc.format.mimetype application/pdf
dc.language.iso en_US
dc.publisher Georgia Institute of Technology
dc.subject Game theory
dc.subject Price of deception
dc.subject Minimal dishonesty
dc.subject Social choice
dc.subject Voting
dc.subject Borda
dc.subject Plurality
dc.subject Copeland
dc.subject Majority judgment
dc.subject Veto
dc.subject Facility location
dc.subject Assignments
dc.subject Stable marriage
dc.subject Gale-Shapley
dc.subject College admission
dc.subject Student placement
dc.title The Price of deception in social choice
dc.type Dissertation
dc.description.degree Ph.D.
dc.contributor.department Industrial and Systems Engineering
thesis.degree.level Doctoral
dc.contributor.committeeMember Vempala, Santosh
dc.contributor.committeeMember Tetali, Prasad
dc.contributor.committeeMember Vazirani, Vijay
dc.contributor.committeeMember Griffin, Paul
dc.date.updated 2018-01-22T21:08:12Z