Show simple item record

dc.contributor.advisor: Tovey, Craig A.
dc.contributor.author: Bailey, James P.
dc.date.accessioned: 2018-01-22T21:08:12Z
dc.date.available: 2018-01-22T21:08:12Z
dc.date.created: 2017-12
dc.date.issued: 2017-08-11
dc.date.submitted: December 2017
dc.identifier.uri: http://hdl.handle.net/1853/59189
dc.description.abstract: Most social choice algorithms rely on private data from individuals to maximize a social utility. However, many algorithms are manipulable -- an individual can misreport her data to obtain an outcome she prefers, often at the cost of social good. The literature addresses this by declaring an algorithm either "manipulable" or "strategy-proof". However, for many decisions we are forced to choose between a manipulable algorithm and an algorithm with other negative properties; for instance, a dictatorship is the only strategy-proof way to decide an election. Thus, we view it as unwise to take an all-or-nothing approach to manipulation, since we so often have to settle for nothing. In this dissertation, we focus on algorithmic design with strategic behavior in mind. Specifically, we develop a framework to examine the effect of manipulation on social choice using a game-theoretic model. We show that the standard Nash equilibrium solution concept is insufficient to capture human behavior as it relates to deception: the combinatorial nature of many problems yields Nash equilibria in which individuals lie to their own detriment. To remove these absurd equilibria and better capture human behavior, we introduce the Minimal Dishonesty refinement. In our model, each individual tells the smallest possible lie that obtains her best possible result; any more-honest report would yield a worse outcome. This model of human behavior is supported by a large body of experimental evidence in psychology and economics, and it allows us to better understand the likely outcomes of strategic behavior. We also introduce a measure of manipulation -- the Price of Deception -- that quantifies the impact of strategic behavior. Specifically, the Price of Deception is the worst-case ratio between the social utility when each individual is sincere and the social utility at a Nash equilibrium.
With Minimal Dishonesty and the Price of Deception we can identify algorithms that are negligibly impacted by manipulation, algorithms where strategic behavior leads to arbitrarily poor outcomes, and everything in between. We demonstrate the power of Minimal Dishonesty and the Price of Deception by answering open problems in assignments, facility location, elections, and stable marriages, including a 28-year-old open problem posed by Gusfield and Irving. Our results demonstrate that the Price of Deception, like computational complexity, provides finer distinctions of manipulability than "yes" and "no".
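The Price of Deception defined verbally in the abstract can be sketched as a worst-case ratio. The notation below is illustrative rather than the dissertation's own: $\mathcal{I}$ ranges over problem instances, $U$ denotes the social utility of an outcome, and $\mathrm{NE}(I)$ denotes the (minimally dishonest) Nash equilibria of instance $I$.

```latex
\[
  \mathrm{PoD} \;=\; \max_{I \in \mathcal{I}}\;
  \frac{U\bigl(\text{sincere outcome of } I\bigr)}
       {\displaystyle \min_{\sigma \in \mathrm{NE}(I)} U\bigl(\sigma\bigr)}
\]
```

Under this reading, a Price of Deception of 1 means manipulation never degrades the social outcome, while an unbounded ratio means strategic behavior can make the outcome arbitrarily poor.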
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Georgia Institute of Technology
dc.subject: Game theory
dc.subject: Price of deception
dc.subject: Minimal dishonesty
dc.subject: Social choice
dc.subject: Voting
dc.subject: Borda
dc.subject: Plurality
dc.subject: Copeland
dc.subject: Majority judgment
dc.subject: Veto
dc.subject: Facility location
dc.subject: Assignments
dc.subject: Stable marriage
dc.subject: Gale-Shapley
dc.subject: College admission
dc.subject: Student placement
dc.title: The Price of deception in social choice
dc.type: Dissertation
dc.description.degree: Ph.D.
dc.contributor.department: Industrial and Systems Engineering
thesis.degree.level: Doctoral
dc.contributor.committeeMember: Vempala, Santosh
dc.contributor.committeeMember: Tetali, Prasad
dc.contributor.committeeMember: Vazirani, Vijay
dc.contributor.committeeMember: Griffin, Paul
dc.date.updated: 2018-01-22T21:08:12Z

