When districts are drawn by politicians for partisan advantage, this is known as partisan gerrymandering. No one denies that politicians in both parties practice it. But at what point does fiddling with district maps amount to a constitutional violation? The Court heard two cases of alleged partisan gerrymandering this year (*Gill v. Whitford* and *Benisek v. Lamone*). The justices, however, avoided answering the principal question at the core of both cases: is there a standard according to which the courts can rule a map to be unconstitutional?

In oral arguments in a case coming out of Wisconsin, Justice Alito described partisan gerrymandering as “distasteful,” but quickly cautioned that if the Court were to rule in the case, it would have to agree upon some standard — some measurement (ideally not subject to interpretive argument) — for deciding whether a given map represents voters fairly. Justice Gorsuch and Chief Justice Roberts likewise asked whether explicit criteria could be articulated; if not, Roberts feared, the “status and integrity” of the Court could be harmed by wading into the thicket of politics. The justices’ decision to send both cases back to lower courts is revealing.

¤

The challenges of partisan gerrymandering are not new, nor is the hope that mathematics can offer a cure.

Math has always, in fact, been implicated in how “fairness” is construed in American electoral politics. It undergirds representative democracy. The perennial challenge, though, is that quantification and its opaque rigors can all too easily be strategically deployed and conflated with the interpretive work of politics. Numbers and quantification — so often taken to be objective, unbiased, and merely descriptive — can actually end up formalizing political arguments.

Recently, mathematicians and computer scientists from Duke University, Tufts University, and Princeton University entered the fray. Though their approaches are different, they share a belief that advances in computer science and mathematics over the past decades have at long last made it possible to provide mathematical solutions to the gerrymandering problem.

One solution researchers are testing uses supercomputers to randomly generate thousands of possible maps for a given state, scoring them according to some measurement — for example, the number of seats a given political party would be expected to win under each map, or the so-called efficiency gap. The “efficiency gap” tallies each party’s “wasted votes”: a vote is considered “wasted” if it is cast for a candidate who loses the seat, or if it is cast for a winning candidate beyond the bare majority needed to win. If maps carved up by politicians are not drawn strategically, with the purpose of advancing one party over the other, then they should fall within the bell curve of all possible maps. A map drawn by legislators could thus be compared to randomly generated maps. Is it an outlier, lying at the tail of the bell curve, or does it fit squarely in the middle of the distribution? Such an approach was impossible a decade or two ago, when neither the expertise nor the computational capacity was available. If the courts decide to adopt some of these new measures, then future fights over gerrymandering will lean on a computational definition of fairness. But what would that even mean?
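The wasted-vote arithmetic behind the efficiency gap is simple enough to sketch. Below is a minimal illustration in Python; the vote totals are invented for the example, not drawn from any actual districting plan:

```python
def efficiency_gap(districts):
    """Efficiency gap for two parties: (A's wasted votes minus B's
    wasted votes) divided by total votes. A vote is 'wasted' if cast
    for the loser, or cast for the winner beyond the bare majority
    needed to win the seat."""
    wasted_a = wasted_b = total_votes = 0
    for a, b in districts:
        needed = (a + b) // 2 + 1      # bare majority in this district
        total_votes += a + b
        if a > b:                      # party A wins the seat
            wasted_a += a - needed     # surplus votes are wasted
            wasted_b += b              # all losing votes are wasted
        else:                          # party B wins the seat
            wasted_a += a
            wasted_b += b - needed
    return (wasted_a - wasted_b) / total_votes

# A stylized plan: party A narrowly wins three districts while party B's
# voters are packed into one. The negative score signals a plan favoring A.
plan = [(55, 45), (55, 45), (55, 45), (20, 80)]
print(efficiency_gap(plan))  # -0.33
```

Ensemble approaches then ask where a legislature's actual plan falls in the distribution of such a score computed over thousands of randomly generated maps.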

These latest approaches, while promising, still avoid the fundamental question: is fairness something you can mathematically optimize? While this might seem like a contemporary post-digital question, it has, in fact, a much longer history dating back to the origins of American representative democracy. The fact is that the role of mathematicians becomes apparent only during moments of disagreement when political, legal, and scientific rationales clash.

For example, during the 1920s, Congress was unable to reach an agreement, and apportionment remained based on the outdated 1910 census for the entire decade. The Constitution dictates that the seats in the House of Representatives be apportioned after each decennial census in proportion to the size of the population in each state. But in this time of great demographic change, it was in the interest of rural states with declining populations to maintain the status quo. By 1926, one congressman urged his colleagues to act; he pointed out that the district of Los Angeles, which had one representative, had more than a million inhabitants, while some other districts around the country had as few as 180,000 inhabitants. Surely, he insisted, it was crucial to redistribute the seats among the states to account for these changes.

Throughout the decade, various bills were drafted by Congress, but no consensus was reached. Some congressmen argued that the size of the House of Representatives should be increased to accommodate population growth (a common practice until then); others argued that only native or naturalized citizens should be counted for the purpose of apportionment (as a response to the flood of immigrants); a small number insisted that the disenfranchisement of African Americans in the South should be counted against the total population of Southern states (fulfilling Section Two of the 14th Amendment, which states that when the right to vote is denied to law-abiding citizens, the basis of representation should be reduced to account for this infringement). In the midst of these debates, a technical disagreement emerged, which revolved not around *how* to count the population or limit the size of the House, but rather around the *method* of apportionment — namely, how could state seats in the House be divided in practice?

It is this fight over method that is akin to current debates over gerrymandering. At its core, the issue was less about the ins and outs of the mathematical theory than it was about the possibility of objectively measuring fairness.

What exactly is the source of the problem? If the population of California is twice that of Colorado, then following the Constitution, California should have twice as many members of Congress. But unlike in mathematics textbooks, when it comes to people, numbers do not add up neatly — and that is the crux of the problem. In 1910, the population of the United States was 92,228,496 and the size of the House of Representatives was 435. That means that approximately every 212,019 (92,228,496/435) people in the United States were entitled to one representative. To figure out how many representatives Massachusetts would send to Congress, the next step was to look at the total population of the state, which was 3,366,416, and divide that number by 212,019. This results in the Massachusetts quota being 15.88. Similarly, Maine’s quota was 3.5, and Arkansas’s 7.42. What are you going to do with these remainders? Since you can’t send .5 of a representative from Maine, there needs to be a procedure for smoothing out these fractions. Should you ignore all fractions? Or should you round up and down depending on the fraction’s size? Before you settle on an answer, know that neither of these solutions works — you might end up with either too many or not enough representatives.

This problem was first recognized soon after the first US census in 1790, with many of the greatest American thinkers lending their minds to it. Thomas Jefferson, Alexander Hamilton, Daniel Webster, and John Quincy Adams tried to tackle it, each offering a methodology for apportionment. By 1850, Congress settled on the method known as the “Hamilton method”: once you allocate all the whole numbers in each state’s quota, you simply arrange the fractions in decreasing order and begin allocating the “remaining” seats until you reach the desired size of the House. This method would probably have remained in effect indefinitely, and the entire debate would have been avoided, if not for some curious anomalies, which statisticians in the Bureau of the Census began noticing toward the end of the 19th century. Most curious of all was the “Alabama Paradox”: all things being equal, increasing the number of seats in the House by one would result in Alabama *losing* a representative.
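The Hamilton method, and the paradox that undid it, can both be reproduced in a few lines. The three-state populations below are invented for illustration:

```python
from fractions import Fraction

def hamilton(populations, house_size):
    """Hamilton method: give each state the whole part of its quota,
    then hand out leftover seats in order of largest fractional remainder."""
    total = sum(populations)
    quotas = [Fraction(p * house_size, total) for p in populations]
    alloc = [int(q) for q in quotas]                  # whole numbers first
    by_remainder = sorted(range(len(quotas)),
                          key=lambda i: quotas[i] - alloc[i], reverse=True)
    for i in by_remainder[:house_size - sum(alloc)]:  # largest fractions win
        alloc[i] += 1
    return alloc

pops = [6_000, 6_000, 2_000]
print(hamilton(pops, 10))  # [4, 4, 2]
print(hamilton(pops, 11))  # [5, 5, 1]
```

Growing the House from 10 to 11 seats costs the smallest state a representative: its fractional remainder, largest at one House size, becomes the smallest at the next.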

In the 1920s, as Congress struggled to pass an apportionment bill, two scholars devised two new methods. Walter Willcox, a former statistician at the Census Bureau, advocated for the method of “major fractions.” Harvard mathematician Edward Huntington promoted that of “equal proportions.” Throughout the 1920s, both men testified in several hearings before the Committee on the Census, composed detailed reports outlining their respective positions, wrote letters to other scholars seeking allies and supporters, and brought their feud to the educated public in a series of papers published in *Science*. Both argued that their method represented the *fairest* solution to the problem of apportionment, but as their dispute wore on, it became clear that fairness was a slippery concept indeed, with more than one definition.

Since a perfect solution was now recognized as impossible — some states would inevitably be better represented than others — the question became how to measure disparities between states. Huntington argued that the correct measurement is the relative difference between average districts in each state. If the average district in Massachusetts is 212,000 (population divided by number of representatives) and in Missouri 223,000, then the goal is to ensure that the ratio between the latter and the former is as close to one as possible (if the two states were equally represented, the ratio would be exactly one). The optimal solution, according to Huntington, was the one that minimized this relative difference between any two states.

Willcox had a different idea of how disparity should be measured. The fairest solution, according to him, and the one he believed most closely followed the aims of the Constitution, would ensure that an individual’s “share” of a representative (number of representatives divided by population) in one state would be as close as possible to an individual’s share in another state. For example, if a New York resident’s share of a representative was .00000472, and a South Dakota resident’s share was .0000052, a fair apportionment, on Willcox’s telling, would be one that ensured that the absolute difference between the two numbers was as close to zero as possible.

At first glance, it isn’t obvious why the two measurements of disparity proposed by Huntington and Willcox would lead to different apportionments, but if you work through several examples, the differences become clear. The mathematics of relative and absolute differences is the key to the divergence in their approaches. In a sense, the two men disagreed on how to measure the “amount of inequality” between any two states. There was no way to eliminate all inequality, but it was also unclear which way of measuring and minimizing it was the most “just.”
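One way to see the divergence is to run both methods as priority lists, the way apportionments are still computed in practice. The sketch below uses two invented state populations, chosen so that the last seat falls differently under each rule:

```python
import math

def apportion(populations, seats, divisor):
    """Award seats one at a time to the state with the highest priority,
    where priority = population / divisor(seats already held)."""
    alloc = [1] * len(populations)       # every state gets at least one seat
    for _ in range(seats - len(populations)):
        priorities = [p / divisor(n) for p, n in zip(populations, alloc)]
        alloc[priorities.index(max(priorities))] += 1
    return alloc

major_fractions   = lambda n: n + 0.5                 # Willcox: arithmetic mean
equal_proportions = lambda n: math.sqrt(n * (n + 1))  # Huntington: geometric mean

pops = [10_500, 1_480]   # one large state, one small state (invented)
print(apportion(pops, 12, major_fractions))    # [11, 1]
print(apportion(pops, 12, equal_proportions))  # [10, 2]
```

The arithmetic mean n + 0.5 and the geometric mean sqrt(n(n + 1)) are nearly identical when n is large but diverge when n is small, which is exactly why equal proportions tends to favor the small state, as the 1920s combatants well knew.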

But the debate did not end here. Some methods tended to favor larger states and others smaller states. In particular, if Congress adopted the method of equal proportions, then over the decades smaller states would benefit from greater representation than larger ones; the method of major fractions, conversely, benefited larger states. Both Willcox and Huntington knew this, and yet each man fervently (and wrongly) argued that his method was unbiased. For example, when he testified before the Committee on the Census in January 1927, Willcox argued that one of the strongest arguments “in favor of the method of major fractions is that it seems to me to hold the balance between the large state and the small state.” Huntington feverishly disputed this claim in the pages of *Science* five months later: “The mathematical evidence, which was seriously misrepresented in the recent hearings, clearly indicated that the method of equal proportions is the one method which has no bias in favor of either the smaller or the larger states.”

The linguistic shift from “fairness” to “bias” is telling. At first, the argument focused on how to correctly measure the disparity between any two states, but as the dispute marched on, a more complex argument emerged. Balancing the impact of apportionment between the large and small states could be understood as a mathematical reading of the constitutional provision, but it inevitably smuggled political agendas into apportionment. The measurement of disparity and the measurement of bias were inherently connected, but they represented two different ideals about who should benefit from reapportionment.

Finally, Willcox and Huntington also disagreed on the nature of the problem itself and what sort of expertise was required to solve it. Throughout the decade, Huntington maintained that the problem of apportionment was purely about arithmetic. “However widely scholars may differ on political questions they surely should be able to present a united front on questions of arithmetic,” he exclaimed. From a purely mathematical point of view it was clear, to Huntington’s eye, how fairness should be measured; he was convinced that he had shown that his solution was the correct one. Many mathematicians supported him.

Willcox, however, did not concede. For him, the problem was as much about politics as arithmetic. The fair solution was not the one that satisfied the mathematical community, but the one that met the approval of Congress and the American people. It was impossible, he insisted, to isolate the mathematical element of the problem from its democratic context. The voices of legal and political scholars, Willcox argued, were just as important as those of the mathematicians. As he wrote in *Science* in 1929: “the choice of a method seems to me to be of little importance compared with the need of securing congressional compliance with the constitution. I would gladly abandon my preference for the method of major fractions if I thought another method had a better chance of acceptance by Congress and the country.” For him, fairness was contextual. The optimal solution from a mathematical standpoint was not necessarily the best political one.

¤

The current debate about gerrymandering follows a similar logic. The problem can be approached geometrically. According to federal law, districts need to be of equal population (not geographic) size and comply with the Voting Rights Act of 1965. Various states add further provisions: districts must be compact and contiguous, for example, and must respect communities of interest. For mathematicians, redistricting can thus be thought of as a geometric optimization problem. Given the various provisions of a given state, is there a way for geometers to find the optimal solution, where “optimal” accords with some mathematical measurement?

The problem, however, is not simply geometric, especially when it comes to partisan gerrymandering. The measurement of fairness in this case is both operative and political. One of the most widely accepted principles in the study of gerrymandering today is that of partisan symmetry, first advanced by Gary King and Robert Browning. The principle states that each party should be able to translate a given share of the votes into roughly the same share of seats. In other words, if the Democrats win 60 percent of the seats with 55 percent of the total votes, then the Republicans should also win roughly 60 percent of the seats if they receive 55 percent of the votes. If, on the other hand, 55 percent of the votes gives Republicans 80 percent of the seats, symmetry has been violated. Such a measurement, like the one about bias between large and small states in the 1920s debates, operates both as a mathematical and a political metric. Arguing for a geometrically optimized solution is thus insufficient.
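King and Browning's criterion reduces to a mirror-image condition on the seats-votes relationship: one party's seat share at vote share v, plus its seat share at vote share 1 - v, should equal one. A toy check, using the hypothetical percentages from the paragraph above:

```python
def is_symmetric(seats_votes, tol=0.01):
    """seats_votes: dict mapping one party's statewide vote share to its
    seat share. Partisan symmetry requires S(v) + S(1 - v) == 1 wherever
    both points of the seats-votes curve are known."""
    for v, s in seats_votes.items():
        mirror = round(1 - v, 6)
        if mirror in seats_votes:
            if abs(s + seats_votes[mirror] - 1) > tol:
                return False
    return True

# 55% of votes -> 60% of seats is symmetric only if the other party,
# with 55% of votes (i.e., this party at 45%), would also get 60%.
print(is_symmetric({0.55: 0.60, 0.45: 0.40}))  # True
print(is_symmetric({0.55: 0.80, 0.45: 0.40}))  # False
```

In practice the full seats-votes curve is estimated statistically from election returns; this sketch only checks the symmetry condition at the points supplied.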

Chief Justice Roberts’s remark during oral arguments that the measurement before the Court, known as the efficiency gap, amounts to “sociological gobbledygook” makes plain that the question of expertise is very much still with us today. Who, in other words, gets to decide what fairness is?

A new reapportionment bill was signed into law in June 1929. In its final formulation, the bill did not specify whether equal proportions or major fractions should be used. Instead, it instructed that if Congress failed to act, the method last used would be used again, and the Census Bureau was directed to provide Congress with the results of both methods. Luckily for Congress, after the 1930 census, Huntington’s and Willcox’s methods were in agreement, and so the debate on method was postponed until the 1940s, when Huntington prevailed.

¤

Clearly, there is no singular fair method of apportionment and no such thing as a fair redistricting map, at least not if by “fair” we mean some practical objective measurement rather than, say, a theoretical possibility.

This is not to say that the Supreme Court should not choose a measurement. The latest ruling by the Court did not close the judicial debate over partisan gerrymandering but merely postponed it. Quantification is useful, and the suggestions put before the Court are persuasive and rigorous. But neither should the Court argue against a quantitative straw man. Even if no objective standard exists, this does not imply that there are *no* standards. Trying to put a number on fairness is important, but it is an act of judgment, not interpretation. This means that the number is always open to debate and argumentation, as it should be. In other words: our attempts to quantify fairness are anything but futile, but they should be acknowledged and implemented as normative rather than descriptive. Conflating the two will only lead to greater confusion and potential abuse of democracy by the numbers. Perhaps the stakes have never been as high as they are today, when conservatives control the executive branch, and the Supreme Court may, after Justice Kennedy’s retirement, sway rightward for a generation. The fate of the legislative branch hangs in the balance, and who gets to decide fair representation will undoubtedly shape the US government for decades to come. Ironically, while mathematical solutions seem to promise universality, there may in the end not even be a national consensus on fairness. The decision may be left to the states to adjudicate.

¤

*Alma Steingart is a lecturer in the Department of the History of Science at Harvard University. She earned her PhD from MIT in the Program in History | Anthropology | Science, Technology, and Society (HASTS), and served as a junior fellow in the Harvard Society of Fellows.*