Naturally, I'm looking at this like an engineer. Engineers design systems to meet specifications. Those specifications often contradict each other, and some balance has to be found. There are two steps in the process of checking a design, equally critical. Verification asks whether the system meets its specifications. (Did you build the thing right?) Validation asks whether the specifications serve the needs of the user. (Did you build the right thing?)
So when it comes to elections, what are the needs of the user (the people)? From there, we can derive specifications to meet those needs.
I posit that the needs of the people in elections fall into five categories, with "good" usually somewhere on a spectrum between two extremes:
- Accuracy: Does the outcome accurately reflect the preferences and intent of the constituents? (universal good)
- Accessibility: Who is allowed to vote? And can all those nominally allowed to vote actually do so? (minimal case is oligarchy, maximal case is rule by uninformed mob of children)
- Step response: How long does the system take to respond to sudden changes in the preferences of the constituents? (minimal case is mob rule, maximal case is dictatorship)
- Trust: How much do constituents trust the system to operate according to their needs? (minimal case is social unrest and chaos, maximal case allows no detection of flaws)
- Cost efficiency: How many resources are spent on elections? (minimal case denies necessary resources, maximal case is wasteful)
With those needs established, the election process itself breaks down into a sequence of steps, each of which can be designed and evaluated against them:
- Define constituency
- Trigger election
- Fund election
- Define ballot
- Cast votes
- Count votes
- Determine outcome
Define the constituency
Minimize voters without clear preferences (children, the mentally incompetent... others?)
Beyond that, how is the constituency defined? Geographically? By votes cast? By pre-existing groups?
Does the definition of the constituency introduce bias into the system? (Gerrymandering)
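Districting bias can actually be quantified. One published metric is the "efficiency gap" (Stephanopoulos and McGhee), which compares the two sides' wasted votes. A sketch with fabricated vote counts:

```python
# The "efficiency gap" (Stephanopoulos & McGhee): wasted votes are all votes
# for a losing candidate, plus winning votes beyond the majority needed.
# District vote counts below are fabricated for illustration.
districts = [  # (votes for party X, votes for party Y)
    (55, 45), (60, 40), (43, 57), (47, 53), (80, 20),
]

def efficiency_gap(districts):
    wasted_x = wasted_y = total = 0
    for x, y in districts:
        n = x + y
        total += n
        need = n // 2 + 1  # votes needed to win the district
        if x > y:
            wasted_x += x - need  # surplus winning votes
            wasted_y += y         # all losing votes
        else:
            wasted_y += y - need
            wasted_x += x
    return (wasted_x - wasted_y) / total

print(f"efficiency gap: {efficiency_gap(districts):+.1%}")
```

A gap near zero suggests neither side's voters are being packed or cracked more than the other's.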
How quickly do elections respond to changes in the will of the people? Too slow is a dictatorship. Too fast is instability and chaos.
Compare to a control system's step response, and to sampling and aliasing effects.
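To make the sampling analogy concrete, here is a toy model (entirely my own construction): if public preference swings on a 3-year cycle but elections only sample it every 4 years, the results alias, appearing as a much slower 12-year swing.

```python
import math

# Toy model (my own illustration): true public preference oscillates with a
# 3-year period, but elections sample it only every 4 years. The Nyquist
# criterion would demand a sample at least every 1.5 years, so the sampled
# results alias: they appear to swing on a 12-year cycle instead.
def preference(t_years):
    return math.sin(2 * math.pi * t_years / 3)

election_period = 4  # years between elections
samples = [(t, preference(t)) for t in range(0, 25, election_period)]

for t, p in samples:
    print(f"year {t:2d}: sampled preference {p:+.2f}")
```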
Do elections have sufficient funding to function?
How many voting locations is too many?
Avoid recounts wherever possible
Minimize required human labor
Accuracy of preference: did the voter cast their vote to reflect their true preference? Consider tactical voting, and blackmail or coercion, which motivates the secret ballot.
Accuracy of intent: was the vote counted as cast? Letting the voter confirm this risks breaking the secret ballot.
Prevent votes from being created or destroyed.
Check for statistical anomalies, like massive shifts in voting patterns, numbers of votes cast vs. registered voters, or entire precincts favoring one candidate at 100%.
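The anomaly checks above are straightforward to automate. A minimal sketch, using fabricated precinct returns:

```python
# Fabricated precinct returns for illustration:
# each entry is (registered_voters, {candidate: votes}).
precincts = {
    "P1": (1000, {"A": 480, "B": 430}),
    "P2": (800,  {"A": 810, "B": 40}),   # more votes cast than registered
    "P3": (1200, {"A": 0,   "B": 950}),  # 100% for one candidate
}

def flag_anomalies(precincts):
    flags = []
    for name, (registered, votes) in precincts.items():
        total = sum(votes.values())
        if total > registered:
            flags.append((name, "turnout exceeds registration"))
        if total > 0 and max(votes.values()) == total:
            flags.append((name, "unanimous precinct"))
    return flags

result = flag_anomalies(precincts)
print(result)
```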
Voter relative weight: in some elections not all votes have equal weight, so how wide is the spread?
Maximum unrepresented fraction: in all elections some people are not represented in the outcome, both as a fraction of the constituency and as a fraction of the overall populace. Quantify this fraction. Consider that it changes over time as the constituency changes between elections.
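Both quantities are easy to compute once a convention is fixed. A sketch with fabricated numbers (treating abstainers as unrepresented, which is only one possible convention):

```python
# Fabricated numbers for illustration. Convention choices (e.g. counting
# abstainers as unrepresented) are assumptions, not the only option.

# Voter relative weight: voters per seat across districts.
voters_per_seat = {"D1": 500_000, "D2": 700_000, "D3": 650_000}
weight_spread = max(voters_per_seat.values()) / min(voters_per_seat.values())

# Unrepresented fraction in a single-winner race.
populace = 10_000                 # everyone affected by the outcome
constituency = 6_000              # those eligible to vote
votes = {"A": 2_600, "B": 2_100, "C": 300}   # remaining 1,000 abstained

winner = max(votes, key=votes.get)
unrepresented_constituency = (constituency - votes[winner]) / constituency
unrepresented_populace = (populace - votes[winner]) / populace

print(f"weight spread: {weight_spread:.2f}x")
print(f"{unrepresented_constituency:.2%} of constituency unrepresented")
print(f"{unrepresented_populace:.2%} of populace unrepresented")
```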
Candidate ballot access
Ballot ease of use
Ballot freedom from bias; e.g. randomize candidate order per voter
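Per-voter randomization of candidate order might look like the following sketch (the names and the per-ballot seeding scheme are my own assumptions; seeding deterministically per ballot ID keeps each ballot's shuffle reproducible for audits):

```python
import random

# Hypothetical sketch: shuffle the candidate list independently for each
# ballot so no candidate systematically benefits from first-position bias.
# The per-ballot seeding scheme is an assumption made for auditability.
candidates = ["Alice", "Bob", "Carol"]

def ballot_order(ballot_id, election_seed=0):
    rng = random.Random(election_seed * 1_000_003 + ballot_id)
    order = candidates[:]
    rng.shuffle(order)
    return order

for ballot_id in range(3):
    print(ballot_id, ballot_order(ballot_id))
```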
Cost of voting to the constituent
Point-of-voting incorrect admissions/rejections, and balance between them
Ballot confirmation by voter after casting
Translate votes to an outcome, many possible ways
Transparency at all steps
Public oversight of counting
Public recount possibility
Bayesian regret of voting systems
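Bayesian regret can be estimated by simulation: draw random voter utilities, run the voting system, and measure how far the winner's total utility falls short of the socially optimal candidate's. A toy Monte Carlo sketch for honest plurality voting (my own construction, not a standard implementation):

```python
import random

# Toy Monte Carlo estimate of Bayesian regret (my own sketch): voters draw
# random utilities for each candidate, vote honestly for their favorite
# under plurality, and regret is the total-utility shortfall between the
# socially best candidate and the actual winner, averaged over trials.
def bayesian_regret(n_voters=99, n_candidates=4, trials=500, seed=1):
    rng = random.Random(seed)
    total_regret = 0.0
    for _ in range(trials):
        utils = [[rng.random() for _ in range(n_candidates)]
                 for _ in range(n_voters)]
        tallies = [0] * n_candidates
        for u in utils:
            tallies[u.index(max(u))] += 1      # honest plurality vote
        winner = tallies.index(max(tallies))
        social = [sum(u[c] for u in utils) for c in range(n_candidates)]
        total_regret += max(social) - social[winner]
    return total_regret / trials

print(f"mean Bayesian regret (plurality, toy model): {bayesian_regret():.3f}")
```

Comparing this number across voting rules under the same utility model is exactly the kind of single numerical score the framework below suggests.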
I think the above covers every flaw in voting systems I've ever seen suggested, and then some. It defines a framework within which every possible voting system, good and bad, can be described and quantified, allowing conscious and purposeful selection of the tradeoffs between them. I could even imagine giving every aspect of an election a numerical score, translating to an overall goodness score for the system as a whole.
Edited by Omega, 05 January 2017 - 09:02 AM.