Tuesday, November 5, 2019

Too Much Democracy Is Bad for Democracy | The major American parties have ceded unprecedented power to primary voters. It's a radical experiment—and it's failing. [theatlantic]

https://www.theatlantic.com/magazine/archive/2019/12/too-much-democracy-is-bad-for-democracy/600766/

Jonathan Rauch and Ray La Raja | December 2019 Issue

Americans who tuned in to the first Democratic presidential debates this summer beheld a spectacle that would have struck earlier generations as ludicrous. A self-help guru and a tech executive, both of them unqualified and implausible as national candidates, shared the platform with governors, senators, and a former vice president. Excluded from the proceedings, meanwhile, were the popular Democratic governor of a reliably Republican state and a congressman who is also a decorated former marine.

If the range of participants seemed odd, it was because the party had decided to let small donors and opinion polls determine who deserved the precious national exposure of the debate stage. Those were peculiar metrics by which to make such an important decision, especially given recent history. Had the Democrats seen something they liked in the 2016 Republican primary? The GOP’s nominating process was a 17-candidate circus in which the party stood by helplessly as it was hijacked by an unstable reality-TV star who was not, by any meaningful standard, a Republican. The Democrats in 2016 faced their own insurgency, by a candidate who was not, by any meaningful standard, a Democrat. And yet, after the election, the Democrats changed their rules to reduce the power of the party establishment by limiting the role of superdelegates, who had been free to support the candidate of their choosing at the party convention, and whose ranks had been filled by elected officials and party leaders. Then, as the 2020 race began, the party deferred to measures of popular sentiment to determine who should make the cut for the debates, all but ensuring runs by publicity-hungry outsiders.

Americans rarely pause to consider just how bizarre the presidential nominating process has become. No other major democracy routinely uses primaries to choose its political candidates, nor did the Founders of this country intend for primaries to play a role in the republican system they devised. Abraham Lincoln did not win his party's nomination because he ran a good ground game in New Hampshire; rather, Republican elders saw in him a candidate who could unite rival factions within the party and defeat the Democratic nominee in the general election. Today's system amounts to a radical experiment in direct democracy, one without precedent even in America's own political history.

The two major parties made primaries decisive as recently as the early 1970s. Until then, primaries had been more like political beauty contests, which the parties' grandees could choose to ignore. But after Hubert Humphrey became the Democrats' 1968 nominee without entering a single primary, outrage in the ranks led the party to put primary voters in charge. Republicans soon followed suit.

The new system—consisting of primaries, plus a handful of caucuses—seemed to work: Most nominees were experienced politicians with impressive résumés and strong ties to their party. Starting in 1976, Democratic nominees included two vice presidents, three successful governors, and three prominent senators (albeit one with little national experience). Republican nominees included a vice president, three successful governors, and two prominent senators. All were acceptable to their party establishment and to their party's base.

What often went unnoticed, however, was why the nominees were so impressive: Professional gatekeeping had survived informally, in the parallel vetting process known as the invisible primary. Even after the rule changes of the 1970s, candidates still needed to prove their viability to the party's elected officials and insiders, which meant showing that they could win influential endorsements, command media attention, appeal to multiple constituencies, attract top campaign talent, and raise money. To be competitive, they had to run a gantlet of party bigwigs, factional leaders, and money brokers. As recently as 2008, four leading political scientists argued that, as the title of their book put it, The Party Decides.

Then came 2016. Neither Donald Trump nor Bernie Sanders changed the system all by himself. Rather, both saw and exploited the invisible primary's fragility. The electorate had come to view the establishment's seal of approval with skepticism or outright hostility, allowing outsider candidates to tout their lack of endorsements as a mark of authenticity. Candidates have learned to bypass traditional moneymen by reaping donations online, tapping deep-pocketed tycoons, or just financing themselves. And the media landscape has evolved to provide oxygen to rogue candidates.

Restoring the old era of smoke-filled rooms is neither possible nor desirable. Primaries bring important information to the nominating process. They test candidates' ability to excite voters and campaign effectively; they provide points of entry for up-and-comers and neglected constituencies; they force candidates to refine their messages and prove their stamina. But as 2016 made clear, primaries are only half of a functional nominating system. The other half is input from political insiders and professionals who can vet candidates, steer them to appropriate races, and, as a last resort, block them if they are unacceptable to the party or unfit to govern.

The Democrats' 2020 race may well avoid a descent into chaos. The desire to unseat Trump may encourage primary voters to coalesce around a candidate with a traditional résumé and broad appeal. The field appears to be narrowing to such candidates. While party elders can hope for a rational outcome, however, nothing in the process guarantees one. In 2016, mandarins in the Republican Party assumed, until it was too late, that the party's primary electorate would eventually reject Donald Trump. Both parties' presidential-nominating contests have reached a point where they cannot promise to choose nominees who are competent to govern or who represent a majority of the party's voters.

Political professionals—insiders such as county and state party chairs, elected officials such as governors and legislative leaders—are uniquely positioned to evaluate whether contenders have the skills, connections, and sense of responsibility to govern capably. Only they can do the brokering and bridge-building to form majorities and shape coalitions, and to ensure that the nominee is acceptable to a broad cross section of party factions. Only they can reduce the element of sheer randomness that plagues today's primaries, where a stroke of bad luck in a single state can sink a candidacy. The voters need their help.

This may seem like an argument for elitism over democracy, but the current system is democratic only in form, not in substance. Without professional input, the nominating process is vulnerable to manipulation by plutocrats, celebrities, media figures, and activists. As entertainment, America's current primary system works pretty well; as a way to vet candidates for the world's most important and difficult job, it is at best unreliable—and at worst destabilizing, even dangerous.

At the most fundamental level—the level of electoral math and cognitive bandwidth—primaries are an inherently flawed mechanism for registering voter preferences. Voters' views are often ambivalent and conflicted, their attention finite. When offered several options, individuals tend to lack sufficient information to make a choice that reflects their actual preferences. In fact, many presidential-primary voters mistakenly back candidates who do not reflect their views. One important study of the 2008 presidential primaries found that voters do barely better than chance at picking the candidate whose views most closely match their own. Summarizing abundant research, the political scientist Bruce Cain concludes, "The original sin of citizenship is our cognitive fallibility; namely, limitations in knowledge and motivation." In other words, the nomination process makes unrealistic demands on voters, not because they are lazy or stupid—they're not—but because they are human.

https://www.the-american-interest.com/2014/10/03/populist-illusions-and-pluralist-realities/

Even if all voters were cognitive virtuosos and informed to the hilt, distilling millions of individual preferences into a single candidate choice is much more difficult than most people assume. As long ago as the 1780s, the French mathematician and philosopher Nicolas de Condorcet showed that in any field offering multiple candidates, majorities may prefer Smith over Jones and Jones over Brown, yet Brown may nonetheless beat Smith. In the 1950s, the Nobel Prize–winning economist Kenneth Arrow took the argument further, proving mathematically that, no matter what system of voting is used or how rationally voters behave, the electorate can fail to arrive at any consistent majority choice, even when choosing from as few as three candidates.
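
Condorcet's cycle is easy to reproduce with a toy electorate. The following minimal sketch, in Python with purely hypothetical voter blocs, tallies the three head-to-head contests among Smith, Jones, and Brown:

    from itertools import combinations

    # Three hypothetical voter blocs, each ranking Smith, Jones, and Brown.
    # The sizes are invented purely to illustrate Condorcet's cycle.
    blocs = [
        (34, ["Smith", "Jones", "Brown"]),  # 34 voters: Smith > Jones > Brown
        (33, ["Jones", "Brown", "Smith"]),  # 33 voters: Jones > Brown > Smith
        (33, ["Brown", "Smith", "Jones"]),  # 33 voters: Brown > Smith > Jones
    ]

    def head_to_head(a, b):
        """Return (votes preferring a to b, votes preferring b to a)."""
        a_votes = sum(n for n, r in blocs if r.index(a) < r.index(b))
        return a_votes, sum(n for n, _ in blocs) - a_votes

    for a, b in combinations(["Smith", "Jones", "Brown"], 2):
        wins_a, wins_b = head_to_head(a, b)
        print(f"{a} vs. {b}: {wins_a}-{wins_b}, {a if wins_a > wins_b else b} wins")

    # Smith vs. Jones: 67-33, Smith wins
    # Smith vs. Brown: 34-66, Brown wins
    # Jones vs. Brown: 67-33, Jones wins

Each pairwise majority is genuine, yet no candidate beats both rivals; which nominee emerges can depend entirely on the order in which the contests happen to be fought.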

Theorists have thus long understood that there is no one right way to aggregate group preferences, nor is there any one representative majority. Candidates can emerge from the pack or be eliminated because of a random event or a fluke of election timing. Worse, there is a nontrivial likelihood that the plurality winner will turn out to be positively unwanted by a majority, as was the case in 2016, when Trump secured the Republican nomination without managing to garner 50 percent support within the party. When the number of candidates reaches double digits, elections can enter a world that we think of as Arrow's nightmare: The process, while observing the formalities of voting, is not particularly representative of anything.

The U.S. primary system compounds the problem by adding its own elements of randomness. For example, the front-loading of primaries in the 2020 race allows a candidate to wrap up the nomination based on a small share of the electorate voting in a handful of states. Moreover, because the ballot registers only each voter's first choice, it provides no additional information about the intensity of preferences for other candidates. If the electorate splits its vote among several candidates competing in the same political lane—say, two or three pragmatists facing an extreme partisan—the extremist can win even if she is the last choice of the majority. Under the old convention system, by contrast, party leaders would move to a more broadly representative second-choice candidate if the plurality candidate was unacceptable to the larger coalition. Party leaders did so on several occasions, the most famous being Lincoln's selection over front-runner William Seward, in 1860.
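
The lane-splitting problem can be made concrete with a similar sketch (again in Python; the vote shares are assumed for illustration, not drawn from any real contest). Three pragmatists divide 65 percent of the first-choice vote while an extremist holds a unified 35 percent faction:

    # Hypothetical first-choice vote shares: three pragmatists split one lane.
    first_choices = {
        "Extremist":    35,
        "Pragmatist A": 25,
        "Pragmatist B": 22,
        "Pragmatist C": 18,
    }

    winner = max(first_choices, key=first_choices.get)
    others = sum(v for c, v in first_choices.items() if c != winner)
    print(f"Plurality winner: {winner} with {first_choices[winner]}%")
    print(f"Voters whose first choice lost: {others}%")

    # Plurality winner: Extremist with 35%
    # Voters whose first choice lost: 65%

If the 65 percent who back a pragmatist would also rank the extremist dead last, the first-choice-only ballot never records that fact; a convention, by contrast, could move the majority to its consensus second choice.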

Even in near-optimal conditions, the process's vagaries afford dangerously good odds to fringe candidates. In a large field, enduring the early battles requires mobilizing a loyal faction rather than pulling together a coalition. The goal is not to win a majority but to survive and hope that luck pits you against a clutch of candidates who compete with one another in a different lane. Insurgents, extremists, and demagogues are good at pursuing factions, because they are not tethered to the realities of governing, which demand compromise and coalition-building.

The media might be expected to expose the flaws and limitations of such candidates. In reality, the media can be a powerful accomplice to fringe candidates who play their cards right. Extremism, outrage, and conflict are catnip for journalists. The Trump campaign spent a little over half as much money as Hillary Clinton's campaign did in the 2016 election, but according to the tracking firm mediaQuant, Trump got 50 percent more coverage than Clinton—coverage that was worth $5.6 billion, vastly more than the $195 million his campaign spent on paid media.

The current media landscape also encourages the kind of large fields that bedevil our primary system. With cable news eager to book prime-time town halls even for marginal candidates, and the parties willing to create double-bill debates, fringe figures have every incentive to throw their hats in the ring. Even if they can't win, joining the fray will drive social-media followers, book sales, and TV appearances. Paradoxically, the more candidates who enter, the greater the incentive for additional entrants, because each one reduces the number of votes needed to win.
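
The arithmetic behind that incentive is straightforward. Assuming, purely for illustration, that a field of n candidates splits the vote roughly evenly, the share needed for a plurality shrinks as the field grows:

    # Illustrative only: with n candidates splitting the vote about evenly,
    # a plurality requires just over 100/n percent. Real fields are lumpier.
    for n in (2, 5, 10, 17):
        print(f"{n:>2} candidates: a plurality needs just over {100 / n:.1f}%")

    #  2 candidates: a plurality needs just over 50.0%
    #  5 candidates: a plurality needs just over 20.0%
    # 10 candidates: a plurality needs just over 10.0%
    # 17 candidates: a plurality needs just over 5.9%

Which is one reason a 17-candidate field, like the Republicans' in 2016, is so hospitable to a candidate with a small but devoted faction.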

Americans thus have good reason to dislike the current nominating system—and indeed they do dislike it. In a March 2016 Pew Research Center poll, just 35 percent of voters said primaries are a good way of selecting the best-qualified nominees. Not surprisingly, a majority of all candidates' supporters, except Trump's, viewed the primaries negatively. Studies also show that large proportions of Americans favor reforming the presidential-nominating process, particularly in states that hold their contests in the later stages. Voters in both large and small states think Iowa and New Hampshire have unfair advantages and influence—which, of course, is true.

The primary became a fixture of the American political landscape during the Progressive Era. Reformers believed direct elections were more fair, honest, and democratic. By 1917, all but four states had adopted the primary for many statewide nominations.

One dissenter warned of unforeseen consequences. "The idea which commends the direct primary to the masses," wrote a Princeton University politics professor named Henry Jones Ford, in a 1909 issue of The North American Review, "is that it is a means of giving power to the people." The reality, he argued, would be quite different. Direct primaries would "take advantage and opportunity from one set of politicians and confer them upon another set." Some people, not the people, would gain. The party-boss system had its flaws, Ford argued, but it elevated candidates who could govern successfully, whereas primaries favored candidates who had the wherewithal to promote themselves. "When power is conditioned upon ability to finance costly electioneering campaigns, plutocratic rule is established." Ford's assessment was withering: Direct primaries would make politics "still more confused, irresponsible, and costly."

Ford could not have known, in 1909, that the direct primary would devolve into today's mess, but he was astute in challenging the assumption that democratization would remove elites' thumbs from the scales. As he noted, changing the rules may simply hand influence to a different kind of elite—the real-world result of democratization can be to reduce representativeness.

In 2012, the political scientists Dennis C. Spies and André Kaiser looked at data for 53 parties in nine Western European countries from 1970 to 1990. They examined parties that empowered different groups—party elites, activists, or rank-and-file voters—in the nomination process. The goal was to see which method picks candidates who are closest to the preferences of the average voter in a party. Their finding: "Parties in which party elites decide the nomination of candidates show slightly higher degrees of representation than parties with more inclusive selectorates." Other research shows that more inclusive methods tend to dampen the representation of women and may favor extremist candidates who fare poorly in general elections.

New research by the political scientists Byron Shafer and Regina Wagner considers what happens when traditional state-party organizations, with their focus on transactional politics, are displaced by a "volunteer" model. Volunteerism might sound like an unqualified good; the problem is that not everyone is equally likely to volunteer. Turnout in primaries is notoriously paltry, and those who do show up are more partisan, more ideological, and more polarized than general-election voters or the general population. They are also wealthier, better educated, and older.

When party insiders evaluate candidates, they think about appealing to overworked laborers, harried parents, struggling students, less politicized moderates, and others who do not show up on primary day—but whose support the party will need to win the general election and then to govern. Reducing the influence of party professionals has, as Shafer and Wagner observe, amplified the voices of ideological activists at the expense of rank-and-file voters. Political theorists sometimes refer to the gap between primary voters and the larger electorate as the problem of "unrepresentative participation." Whatever you call it, it has a perverse consequence: As Henry Jones Ford predicted, rather than disenfranchising political elites, primaries shift power from one set of elites (insiders who serve the party organizations) to another set (ideologues and interest groups with their own agendas).

No wonder these new elites favor ever more democratization, and less influence for party insiders. By contrast, the public is quite comfortable with insiders taking a role. In a survey, one of us (La Raja) asked voters to allocate 100 points among four groups, according to how much influence each should have over primary nominations. On average, respondents gave "party voters" a 42 percent influence share and "independent voters" 22 percent, but they allotted 19 percent of the influence to "party officials" and another 17 percent to "nonpartisan experts." In other words, survey respondents wanted to give professionals (party officials and experts) more than a third of the influence in picking the nominee. To judge by the survey results, Americans view a mixed system as a good idea. They want the electorate to do about two-thirds of the deciding and parties and professionals to do about one-third, proportions that strike us as quite reasonable.

The public is right. Reinstating a prominent role for political professionals—what the Brookings Institution's Elaine Kamarck refers to as peer review—would make the system more democratic and representative, not less so.

https://www.brookings.edu/research/re-inserting-peer-review-in-the-american-presidential-nomination-process/

Two filters are better than one. Electoral and professional perspectives check and improve each other. Each provides the other with vital information that otherwise would be missed. Among the reasons:

Professional vetting emphasizes competence in governing. Candidates now worry more about mobilizing die-hard activists and producing story lines for journalists than demonstrating their aptitude for governing. Voters, for their part, have a distorted sense of the powers of the president, subscribing to what the political scientist Brendan Nyhan calls the Green Lantern theory of the office: the belief that the president's sheer willpower can overcome obstacles to governance. In reality, political influence in our system comes primarily from the soft power of relationships and political debts. Professionals judge the strength of those relationships. What's more, they boost candidates' capacity to govern after the election by encouraging aspirants to meet with party leaders and elected officials throughout the country, forcing candidates to cultivate connections and build political capital.

Despite their flaws, smoke-filled rooms did a good job of identifying qualified people who could unify their party and also exert broad appeal in a general election. Lincoln, Theodore Roosevelt, Woodrow Wilson, Franklin D. Roosevelt, and Harry Truman all emerged in large part from the haze of those rooms.

Of course, the party elders did not always get it right. Warren Harding and Richard Nixon were both promising prospects who turned out to be poor choices. Again, the point is not that either filter is infallible but that both are necessary.

Professional vetting deters renegades. The professional filter also helps exclude candidates who are downright dangerous. The Harvard government professors Steven Levitsky and Daniel Ziblatt, in their recent book, How Democracies Die, make this point forcefully. In a democracy, they argue, party organizations' most crucial function is to act as gatekeepers against demagogues and charlatans who, once in power, undermine democratic institutions from within, as Donald Trump has done.

Parties have blocked antidemocratic candidates in the past. In 1924 Henry Ford, the legendary carmaker, contemplated a run for president. He promised to rid the nation of its corrupt and ineffective politics, electrifying voters with his claim to be someone "who could do things and do them quick." He also spouted anti-Semitic and racist rhetoric. In both respects, he was an antecedent to Trump. In Ford's day, however, the primaries controlled too few delegates to secure the nomination, and the grandees of both parties saw Ford as dangerous. Lacking a pathway around the hostile party establishment, Ford declined to enter the race and contented himself with offering to serve if the nation summoned him. "What Ford was saying, in effect," write Levitsky and Ziblatt, "was that he would only consider running if the gatekeeping system blocking his path were somehow removed. So, in reality, he never stood a chance."

Or consider a more recent example, recounted by Jon Ward in his book, Camelot's End. In 1976, Democrats in Florida were desperate to prevent Alabama Governor George Wallace from winning the state's presidential primary. A populist who had built his career on racism, Wallace was toxic to the party's base outside of the South and likely unacceptable to the general electorate. To stop him, party operatives leaned on other Democratic candidates to clear the field for Jimmy Carter, the only other southerner in contention. With the field narrowed, Carter defeated Wallace in Florida and ended Wallace's candidacy (and went on to the White House—which the Democratic operatives had not expected). Still more recently, in 1996, Democratic National Committee Chair Don Fowler was able to exclude the conspiracy-mongering Lyndon LaRouche from the Democratic Party's nominating process. LaRouche stood no chance of unseating the incumbent, Bill Clinton, but the refusal to allow him to compete signaled to other would-be disrupters that the party could and would stand in their way.

Professional vetting checks the power of donors and the media. Thanks to court decisions such as SpeechNow.org v. Federal Election Commission, political fundraising and spending by independent groups effectively have no limit. Formerly compelled to seek funds from many establishment donors, candidates can now be bankrolled by quirky billionaires with pet agendas. In the 2012 Republican nominating cycle, former House Speaker Newt Gingrich's campaign was kept alive with millions of dollars spent by the casino magnate Sheldon Adelson, even though Gingrich's support was weak among both the primary electorate and the party establishment. Billionaires can also bankroll themselves, buying their way around accountability to any party or constituency.

Among progressive candidates, it has become popular to abstain from raising money from deep-pocketed individuals and corporations, and instead to rely on small donations from grassroots supporters. Like many other observers, we value the participatory enthusiasm of small donors, yet there's a troubling downside. Academic research suggests that small donors are not representative of the electorate. They are as extreme and polarized as large donors, perhaps more so. They are also skewed demographically compared with the rest of America: They are wealthier, whiter, and older.

Extremist candidates tend to do better at raising small-donor money, because they get the media's attention by staking out bold (if unrealistic) positions and making attention-grabbing statements, many of which violate political norms. It is no coincidence that in the 2016 election, Trump shattered records, bringing in more money from small donors than Barack Obama did in his 2012 campaign and accumulating slightly more than both Clinton and Sanders combined.

Our point is not that small donations are necessarily bad or good. It is that small donations are more constructive in a system that provides professional vetting than in a free-for-all.

Then there are the media, which have completely different incentives from political professionals when they evaluate candidates. The media prefer the novel, the colorful, and the combative, qualities that drive compelling narratives, not ones that make for effective governing. Restoring insider influence in the nominating process will not vitiate the role of the media in covering campaigns, nor should it. But it will help ensure that candidates are vetted for competence.

Most of today's political reformers are focused on proposals to tinker with voting protocols (ranked-choice voting, proportional voting, multimember districts), or to increase participation (voting by mail, weekend voting, automatic registration), or to improve fairness (redistricting commissions, small-donor matches, donor transparency). But whatever their merits and demerits, all such process reforms fix the wrong filter. They ignore the more urgent and essential problem.

No mechanical changes in the electoral system can substitute for professional judgment. With so many candidates, so much strategic uncertainty, and so much confusing information, primary voters could not evaluate and organize the field by themselves even if they were inclined to try. Mixed systems ensure that the full spectrum of democratic values gets attention.

If the parties decide to give their professionals more clout, there are all kinds of ways to do it. For example, the role of superdelegates in the Democratic nomination process could be strengthened instead of weakened. Another route, suggested by Elaine Kamarck, is a pre-primary vote of confidence by party leaders. Members of Congress, governors, and party officials could meet with candidates before the first caucus or primary and issue a vote of confidence (or no confidence) on each, or they might rate candidates on measures such as political experience, ability to work with others, ethics, and so on. While not dispositive, insiders' judgments would encourage the news media and the public to focus on characteristics that matter for governing.

A more formal kind of early vetting might require candidates to obtain petition signatures from state and county party chairs and elected officials—not a radical idea, as candidates already need to obtain petition signatures from voters. And parties can and should consider candidates' track record and party service in allocating debate slots. This year, the Democrats opted for a participation test based on candidates' ability to attract small donors and garner poll ratings. Glaringly excluded was any reflection of candidates' experience in office or support among party leaders. Why not consider length of time in office, number and prominence of offices held, major endorsements, and efforts on behalf of other politicians in their party?

These ideas are merely examples. The ways in which party professionals could tighten the reins are many. Indeed, when the invisible primary worked best, it exerted influence through many channels, formal and informal.

Because professional vetting was the norm for all but the past decade or so of American political history, no one can say it is a radical, alien, or untried scheme. The challenge, then, is not altering the procedures; it's altering the politics. As peer review has lost public support and legitimacy, the parties have grown reluctant to provide it. They fear the kind of excoriation that Bernie Sanders's supporters unleashed against superdelegates. They worry about being blamed for unpopular and unsuccessful choices. Far easier to pass the buck to the voters.

The most important reform is thus conceptual, not mechanical: changing the mind-set that reflexively regards popular elections as the only legitimate way to choose nominees. Paradoxically, democratic fundamentalism—the insistence that the remedy for whatever ails democracy must always be more democracy—is dangerously undemocratic. Likewise, political consumerism—the idea that more choices are always better—is a recipe for confusion and chaos.

A healthier approach prizes institutional actors. They understand the party not simply as a vehicle for one candidate's ambitions, but as a national political association with a past, present, and future. Their empowerment does not banish populism—as if that were possible—but balances it, by restoring the Madisonian pillars of pluralism, checks on power, and deliberative institutions.

In December 2015, former Florida Governor Jeb Bush, then a leading candidate for the Republican presidential nomination, famously said of Trump, "He's a chaos candidate, and he'd be a chaos president." Without professional input, America's nominating system will be a chaos process, giving birth to chaos candidates and chaos presidents for years to come.


About the Authors

Jonathan Rauch is a contributing writer at The Atlantic and National Journal and a senior fellow at the Brookings Institution.
   
Ray La Raja is a political-science professor at the University of Massachusetts at Amherst.
