Stats Con

A new chapter in the census scandal

Illustration by Peter Ryan

There can be little doubt at this point that by cancelling the long-form census, the Conservative government destroyed our best source for the evidence it claims should guide policy decisions. Opposition from across the political spectrum cried foul, but perhaps the clearest sign is that the government acted against the advice of Canada’s chief statistician, Munir Sheikh, who resigned in protest. Now, heading into a much-diminished 2011 census, Sheikh’s replacement, former communications and operations assistant chief Wayne Smith, has announced that the federal government wants Statistics Canada to explore alternatives to the short form. Should we be worried?

The practice of counting population goes back about as far as we can trace any government action, to ancient Egypt, Babylonia, Palestine, and China. The Roman enumeration lasted 800 years, until, with the Middle Ages, census taking in the Western world essentially died out. Oddly enough, it was revived in New France in 1666, when intendant Jean Talon personally went door to door and tallied the colony’s 3,215 settlers.

Census methods have grown more sophisticated since then, but the principle remains the same. The advantage of accounting for every citizen is that you don’t need to worry about random error, which occurs when you make statistical inferences about the whole population based on a sample. But in recent years, there has been growing concern about another fundamental flaw.
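The trade-off described above can be made concrete with a small simulation. This is an illustrative sketch with made-up numbers (a hypothetical population of 100,000 households), not anything Statistics Canada does: a full enumeration recovers the true total exactly, while estimates scaled up from a random sample scatter around it.

```python
import random

# Hypothetical population: children per household in 100,000 households.
random.seed(42)
population = [random.choice([0, 1, 2, 3]) for _ in range(100_000)]
true_total = sum(population)  # a full enumeration has no random error

def estimate_total(pop, sample_size):
    """Scale a simple random sample up to a full-population estimate."""
    sample = random.sample(pop, sample_size)
    return sum(sample) * len(pop) / sample_size

# Repeat the 1% sample many times to see the spread of the random error.
estimates = [estimate_total(population, 1_000) for _ in range(200)]
errors = [abs(e - true_total) / true_total for e in estimates]

print(f"true total: {true_total}")
print(f"typical relative error of a 1% sample: {sum(errors) / len(errors):.2%}")
```

The individual estimates are each off by a few percent in one direction or the other, which is exactly the random error a head count avoids, at the cost of chasing every last household.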

In an early episode of the American political drama The West Wing, when President Bartlet’s staff tries to block a bill that would ban random sampling in the US census, adviser Sam Seaborn describes the head count as “staggeringly inaccurate.” The trouble, as real-world statisticians will confirm, is that perfect coverage is becoming increasingly difficult to achieve. “The response rates for household surveys by government are going down over time, and that is a worldwide phenomenon,” says Roeland Beerten of the UK’s Office for National Statistics. “If you want a full population count, you have to spend progressively more money on response chasing, which makes the census progressively more expensive.”

In the United States, where enumerators may visit reluctant households up to six times, the census cost nearly twice as much in 2010 as in 2000. The price tag for the UK census has almost doubled over the past decade as well. Increases have been less extreme in Canada—while the 2006 census cost $567 million, this year’s is budgeted at $660 million—but Canadians are generally becoming harder to count. We screen our calls; we are home at odd hours; and more of us live in apartment buildings, which are difficult for census staff to access.

And some people are more difficult to reach than others. On The West Wing, staff pointed out that black citizens are most likely to be missed and whites more likely to be counted twice, thereby framing the head count as a violation of civil rights. In fact, many marginalized socio-economic populations, as well as young men, are vulnerable to under-representation, and increases in such non-random error are coming just as we’re asking more of our census data. We want policy to respond to our immediate needs: schools to open where there are more children, transit systems to serve growing neighbourhoods. We need estimates that reflect a rapidly shifting demographic picture.

As a result, many countries are rethinking their censuses. Of the twenty-seven European nations that conducted a traditional census in 2000, seven had switched to some alternative by 2010. And the UK may be next. A project called Beyond 2011 is bringing together data users and methodologists to recommend alternatives: “All options are on the table,” says Beerten. While it’s much too soon to talk specifics, one possibility is France’s “rolling census,” which produces annual estimates by sampling large municipalities every year and counting small ones exhaustively once every five years; another is a permanent population register of the sort used in many European countries. In this model, citizens are required to report address changes to the register, which may also link to administrative data the government has obtained through, for instance, licensing or benefit plans.
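The rolling-census schedule can be sketched in a few lines. This is a simplified illustration, not France’s actual methodology (the real system, run by INSEE, is more elaborate), though the 10,000-inhabitant cutoff between “large” and “small” municipalities is the real French threshold; the town names and populations here are invented, and small municipalities are assigned to rotation groups by their position in the list purely for illustration.

```python
LARGE_THRESHOLD = 10_000  # France's actual cutoff between the two regimes

def plan_for_year(municipalities, year):
    """Return (sampled, fully_counted) municipality names for one year.

    `municipalities` is a list of (name, population) pairs. Large ones are
    surveyed by sample every year; small ones are enumerated exhaustively,
    one fifth of them in each year of a five-year cycle.
    """
    sampled, fully_counted = [], []
    small_seen = 0
    for name, pop in municipalities:
        if pop >= LARGE_THRESHOLD:
            sampled.append(name)  # sampled annually
        else:
            if small_seen % 5 == year % 5:
                fully_counted.append(name)  # exhaustive count this year
            small_seen += 1
    return sampled, fully_counted

# Hypothetical municipalities (names and populations invented).
towns = [("Ville-A", 250_000), ("Bourg-B", 2_000), ("Bourg-C", 4_500),
         ("Ville-D", 60_000), ("Bourg-E", 800), ("Bourg-F", 3_100),
         ("Bourg-G", 1_200)]
sampled, counted = plan_for_year(towns, year=0)
```

Over any five consecutive years, every small municipality is counted exactly once, so annual national estimates can be rolled up without ever mounting a single all-at-once head count.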

Ultimately, there is no ideal approach, only the best approach given the particularities of the population being measured. Interestingly, just three weeks before Wayne Smith’s announcement, one of his top demographers, André Cyr (joined by head of media relations Peter Frayne), seemed confident that the best way of measuring Canada’s population is the short form.

Our response rates are still the envy of the United States, Cyr explained over the phone, and we run our census every five years, frequent by international standards. In between, we’re able to update the count using exceptionally high-quality administrative data from federal social programs, so the numbers stay fresh. And because much of the country is sparsely populated, sampling would produce a high degree of random error. “A census is still, for us, the best way to collect information for every Canadian,” he concluded. “I don’t really see any issue for us now.”

So what happened? While Frayne now says Statistics Canada “independently initiated plans to examine options” last fall, it looks in many ways like another rash top-down decision, perhaps reflecting the opinion of prominent Fraser Institute economists that government simply shouldn’t be in the business of collecting data. Right out of the gate, Smith suggested that a population register is “not very likely,” and highlighted instead the American model, with a head count every ten years and a rolling survey, like the American Community Survey, between census years. But the ACS is more like our erstwhile long form, not meant to estimate population, which means Smith is merely advocating a less frequent head count.

Over the year-long review period, it may be difficult to distinguish between spin and the kind of legitimate inquiry into census methodology being implemented across Europe. In this context, it’s worth knowing that when statisticians from these countries talk about what’s been happening in Canada, they adopt the hushed, sympathetic tones of an acquaintance who knows you are grieving.

Portrait of a Nation

FDR’s (not so) great literary map of the United States

Franklin D. Roosevelt’s New Deal provided hundreds of millions in stimulus funds during the Great Depression, most notably for infrastructure projects. But roughly 1 percent of the total package went to the Federal Writers’ Project, a massive undertaking that employed over 6,500 writers, historians, and librarians at its peak. Participants, among them John Cheever, Studs Terkel, Saul Bellow, and Zora Neale Hurston, were tasked with creating a comprehensive portrait of America “as it was,” with a focus on regional histories, folklore, and a significant collection of slave narratives. From 1935 to 1943, they produced hundreds of volumes, including books of more than 500 pages on every state in the union. Despite its great ambition, the project drew much criticism: its focus on poverty was controversial, and director Henry Alsberg was forced to step down amid accusations of communist sympathies. Many readers lamented the inconsistent quality of the writing, not least Cheever himself, who said his work editing the New York guidebook consisted of “twisting into order the sentences written by some incredibly lazy bastards.” Today much of the series can be found in the Library of Congress, if you’re willing to sift through 1,086 boxes to find the good stuff.

Chris Berube

This appeared in the May 2011 issue.

Allison Martell