Qualitative Political Communication Research

"Methodology is intuition reconstructed in tranquility" – Paul Lazarsfeld

Month: January, 2014

The Virtualities of Political Technology: Some Reflections about the Northstar Campaign System

by Fenwick McKelvey

Canvassing for the Republican National Committee in the 2008 election, Andrew Northwall noticed that Obama staffers knew an awful lot about voters, even staffers flown into the campaign from elsewhere. He discovered that Democrats had better data about voters ready at hand through their digital infrastructure (Kreiss, 2012). A few extra details meant a better rapport on the doorstep and better odds of connecting with voters (Nielsen, 2012). The success of the Obama campaign lingered with Northwall and his fellow Republicans long after the election, enough that he decided to create better technologies for right-wing campaigns in the United States. Andrew Northwall’s efforts resulted in Northstar Campaign Systems (incorporated in 2011). It sells campaign and issue management software to Republican campaigns of all sizes. Its software has assisted over 70 campaigns in 17 states so far. In this post, I would like to use Northstar to talk about how software influences political campaigns. I will suggest that it is important to pay attention to the software itself in order to inform qualitative studies of technology in politics.

I will use the case of Northstar to explain what I mean by virtualities and how the concept opens new lines of inquiry into political campaigns’ adoption of technology. Where Anderson and Kreiss (2013) explain that campaigning practices are made durable through software, I would suggest this durability is not always wholly integrated from campaign to campaign. I call this gap between software’s contextual uses and its total set of features the virtualities of software (Deleuze, 2007) (or what Latour (2005) would call plasma). The durable, in other words, exceeds the actual. Studying software offers a way to consider the virtualities and the actualities of the campaign by considering what merges and what does not. Virtualities might actualize and become mundane, become the next “killer app”, or be ignored altogether. Software can create new problems just as readily as it solves old ones.

I’d like to use the case of Northstar, then, to test this method of investigating the virtualities of software. I’ll consider how its features and functions raise questions for its users. Three areas come to mind: its database, its network and its forms of voter contact. Questions about these areas seek to understand how the virtualities of the software either actualize or linger in the campaign assemblage.

An advertisement of Northstar Campaign Systems

Northstar promises “campaign chaos, controlled. precisely” as seen above. I couldn’t help but be curious about any software promising greater control. It suggests that software will help navigate the unfolding of a campaign by creating a system of control (such as ‘computational management’ or ‘managed citizenship’). What are the virtualities of this system of control?

Controlling campaign chaos involves better record keeping. Northstar arrives with a set record-keeping structure that must be merged into the campaign. Like many campaign management systems, Northstar is the front end of a database that logs, stores and retrieves campaign activity. The Northstar database works by organizing the campaign’s record keeping and, more abstractly, its memory. Campaign memory, however, also endures in many ways outside the database: in staff, on paper and in spreadsheets. What default records does the Northstar software include, and which have been merged into the campaign or discarded? How does the interface mediate access to this memory? How do people find data during everyday tasks? Do they succeed in finding the necessary data, or do bugs, quirks or obfuscated code thwart their inquiry?

Northstar mostly collects data related to voters. For example, it keeps track of where voters live, their past donations, and even whether they have a yard sign. Voter records also track campaign correspondence, as the software can record whether a voter requires feedback and who has been delegated to respond. Over the course of a campaign, workers populate a database with thousands of unique pieces of data about voters that can then be searched, aggregated and fed into other tasks, like walk books for canvassing. Which default fields get populated and which get ignored? I was curious about Northstar’s lawn-sign feature, for example. Each voter record includes a field to record whether a voter wants a yard sign, in addition to a map that tells workers where to put the sign and remembers its location. Does a campaign’s sign detail use this feature? Why or why not? Starting with the sign field opens up questions about what parts of the campaign operate within the purview of the software system and whether software features meet the needs of the campaign that adopts them.
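To make the idea of default fields concrete, here is a minimal, purely hypothetical sketch of the kind of voter record such a system might keep. The field names are my invention for illustration, not Northstar’s actual schema:

```python
# A hypothetical voter record for a campaign management system.
# Field names are illustrative only, not Northstar's actual schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VoterRecord:
    name: str
    address: str
    past_donations: List[float] = field(default_factory=list)
    wants_yard_sign: bool = False        # the yard-sign field discussed above
    sign_location: Optional[str] = None  # where the sign was placed, if anywhere
    needs_followup: bool = False         # does this voter require a response?
    assigned_staffer: Optional[str] = None

# A canvasser flags a sign request; the sign detail later records placement.
voter = VoterRecord(name="Pat Smith", address="12 Elm St")
voter.wants_yard_sign = True
voter.sign_location = "front lawn, 12 Elm St"
```

The ethnographic question is which of these default fields a given campaign actually fills in, and which sit empty as unused virtualities.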

The left side of the screen lists the types of information collected on voters.

Merging also means negotiating the Northstar software’s drive to record all campaign activity against the habits of the campaign itself. Northstar includes an imperative to consolidate and integrate data into a common system. It may seem commonplace, but systems like Northstar promise a convergence of all activity in one database. The question for the campaign, then, is whether this convergence is feasible or expedient. Does the drive toward convergence and integration in Northstar influence the campaign? Does it lead to an intensification of record keeping by promising to keep track of most campaign activity? When so much attention has been given to the influence of big data in politics, it would be helpful to know how campaigns themselves – especially local and smaller campaigns with limited budgets – resource their data collection.

In addition to the virtualities of the database, Northstar also creates its own ideal network that must be adapted to the campaign. The product connects VoIP phones, iPads, smartphones and browsers to a common database on servers run by Northstar. It allows a distributed organization to operate in synchronization by transmitting information across space and time. All these connections update the database in real time, attempting to synchronize information across all parts of the campaign. Data entered on a tablet by a canvasser travels across the Internet, populating a database that can then be accessed by the campaign office. How does the network of the software integrate into the campaign assemblage? Questioning these virtualities offers an interesting way to study the nature of the campaign’s organization as an intersection of technology and the many other cultural practices at work in a campaign.

Campaign staff enter voter feedback using a touchscreen phone.

Finally, Northstar includes certain modes of voter contact, most distinctively phone banking, or telephone-based voter contact. The web interface allows a campaigner to select and assign a chunk of voters to a phone bank and then assign a script to be read by callers. At the phone bank, these scripts appear on smart telephones with touch screens. Campaigners read the scripts and punch in the responses, which the phones send back to the database. Phone banks optimize voter contact to ensure the best data for the campaign – similar to how Yates (1989) describes the memo as a technique of organizational control. The scripts, buttons and inputs guide staff and volunteers in their interactions with voters and ensure clean data input.

Northstar has an interface to build scripts for phone banking. These scripts can include variables, such as the voter’s name, as well as branching trees, so that a voter’s responses lead to different follow-up questions.
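As a rough illustration of what such a branching script might look like as a data structure – the node names, prompts and response options here are hypothetical, not taken from Northstar – consider:

```python
# Hypothetical sketch of a branching phone-bank script: each node has a
# prompt (with template variables) and maps a voter's response to the next
# node. Field names are illustrative, not Northstar's actual format.
SCRIPT = {
    "intro": {
        "prompt": "Hi {first_name}, do you plan to vote in the election?",
        "branches": {"yes": "support", "no": "end"},
    },
    "support": {
        "prompt": "Great! Can our candidate count on your support?",
        "branches": {"yes": "yard_sign", "no": "end"},
    },
    "yard_sign": {
        "prompt": "Would you like a yard sign, {first_name}?",
        "branches": {"yes": "end", "no": "end"},
    },
    "end": {"prompt": "Thanks for your time!", "branches": {}},
}

def run_script(script, responses, voter):
    """Walk the branching tree, filling in variables and logging answers."""
    node, log = "intro", []
    while True:
        prompt = script[node]["prompt"].format(**voter)
        log.append((node, prompt))
        branches = script[node]["branches"]
        if not branches:  # a leaf node ends the call
            return log
        answer = responses.pop(0)
        log.append((node, "answer: " + answer))
        node = branches[answer]

contact_log = run_script(SCRIPT, ["yes", "yes", "no"], {"first_name": "Pat"})
```

The sketch shows how a scripting interface both personalizes the call (the name variable) and constrains it: the caller can only record the responses the tree anticipates, which is precisely what keeps the data clean.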

An example of a script being developed in the Northstar Campaign System.

Further research could observe how campaigns write phone scripts using the Northstar interface. Do they use its branching structure? Phone-banking scripts could then be understood both in their actual use in the campaign and in the virtualities of the scripting interface. Do phone scripts offer a different way to structure engagement? Could we compare this form of personalized communication to targeted web advertising or the A/B testing of website interaction?

These three virtualities of Northstar – its database, its network and its phone bank – hopefully demonstrate the value of attending to software itself as part of ethnographic work. Indeed, the questions I raise here invite application in a study of a campaign using Northstar or a similar product. Understanding these virtualities might offer a new line of inquiry into the political influence of software. How does software merge with a campaign and, more specifically, which features actualize in campaign practices and which linger as virtualities?

Attention to the virtualities of software offers a way of developing questions for more extended ethnographic studies of campaigns and technology. Taking a good look at the software itself will not, by itself, describe its influence on politics or explain the materiality of campaigning, but it will help researchers pose new questions about how certain features and functions of software influence the campaign after it has been installed. These questions are pertinent to my future work on the flow of American software to Canada. Does American software inject new virtualities into Canadian politics? The Conservative Party of Canada uses MailChimp, for example, so we might ask along these lines: what features do they adopt, adapt or avoid? Does MailChimp influence their campaign?

I would like to thank Andrew Northwall and Pete Botkin for their openness and generosity in answering my questions about Northstar Campaign Systems. Also, thanks to Daniel Kreiss and Jill Piebiak for comments on an earlier version of this post.

Bibliography

Anagnoson, T. (1986). Software Reviews: Political Campaign Software for Micros. Social Science Computer Review, 4(4), 520–522. doi:10.1177/089443938600400414.

Anderson, B. (1991). Imagined Communities: Reflections on the Origin and Spread of Nationalism. New York: Verso.

Anderson, C. W., & Kreiss, D. (2013). Black Boxes as Capacities for and Constraints on Action: Electoral Politics, Journalism, and Devices of Representation. Qualitative Sociology, 36(4), 365–382. doi:10.1007/s11133-013-9258-4

Beissinger, M. R. (2007). Structure and Example in Modular Political Phenomena: The Diffusion of Bulldozer/Rose/Orange/Tulip Revolutions. Perspectives on Politics, 5(2), 259–276.

Chartrand, R. L. (1972). Computers and Political Campaigning. New York: Spartan Books.

Deleuze, G. (2007). Dialogues II. (C. Parnet, Ed.). New York: Columbia University Press. Retrieved from http://www.loc.gov/catdir/toc/ecip071/2006031862.html

Fuller, M. (Ed.). (2008). Software Studies: A Lexicon. Cambridge: MIT Press.

Gillespie, T. (2010). The Politics of “Platforms.” New Media & Society, 12(3), 347–364.

Howard, P. N. (2006). New Media Campaigns and the Managed Citizen. Cambridge: Cambridge University Press.

Innis, H. A. (1951). The Bias of Communication (2nd ed.). Toronto: University of Toronto Press.

Karpf, D. (2012). The MoveOn Effect: The Unexpected Transformation of American Political Advocacy. New York: Oxford University Press.

Kreiss, D. (2012). Taking Our Country Back: The Crafting of Networked Politics from Howard Dean to Barack Obama. New York: Oxford University Press.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. New York: Oxford University Press.

McKelvey, F. (2011). A Programmable Platform? Drupal, Modularity, and the Future of the Web. Fibreculture, (18). Retrieved from http://eighteen.fibreculturejournal.org/2011/10/09/fcj-128-programmable-platform-drupal-modularity-and-the-future-of-the-web/

Meadow, R. G. (1985). New Communication Technologies in Politics. Washington: The Washington Program, Annenberg School of Communications.

Montfort, N., & Bogost, I. (2009). Racing the Beam: the Atari Video Computer System. Cambridge: MIT Press.

Nielsen, R. K. (2012). Ground Wars: Personalized Communication in Political Campaigns. Princeton: Princeton University Press.

Northwall, A. (2013, July 3). Interview with Andrew Northwall (CEO of Northstar Campaign Systems).

Russell, A. L. (2012). Modularity: An Interdisciplinary History of an Ordering Concept. Information & Culture: A Journal of History, 47(3), 257–287.

Tarrow, S. G. (1994). Power in Movement: Social Movements, Collective Action, and Politics. Cambridge: Cambridge University Press.

Yates, J. (1989). Control through Communication: The Rise of System in American Management. Baltimore: Johns Hopkins University Press.

“I think the quantitative scholar and I are both engaged in an allied effort at spotting patterns” – QualPolComm preview interview

by Rasmus Kleis Nielsen

Michael Serazio is working on a paper called “Producing Viral Politics: Technological Strategies, Cultural Production, and Campaign Consultants” for the ICA Preconference on Qualitative Political Communication Research. In it, he looks at political campaigns as a “media-making” industry through the lens of cultural production studies.

The full abstract is below the jump and on the conference page.

Here are questions and answers from an email interview I did with him about his research.

RKN: Your work examines the backstage work of campaign consultants in producing political communication. As you note in your abstract, this is not often done, as the focus has typically been on content and effects over production. How would you say your work connects to core concerns of political communication research? Are there particular researchers or schools of thought you see yourself as being in a dialogue with?

With this particular paper, I’m interested in exploring questions like how new communication technologies have impacted the pacing of political output as well as their use in practices like opposition surveillance and grassroots message seeding.  These, I think, do relate to long-standing topics in political communication like agenda setting, partisan mobilization, and reductive imagery and sound-bites, but, as you note, the project is more about the strategic logic of behind-the-scenes professionals than about causative outcomes on citizens and audiences.  To that end, the works of Philip Howard, Daniel Kreiss, and Kirsten Foot and Steven Schneider, among many others, have been helpful in orienting myself to the literature on new media and politics.

RKN: Your work is partly rooted in political communication research, but also seems to go beyond it—where else did you find theoretical and methodological inspirations?

I’ll be the first to acknowledge I’m something of a dilettante when it comes to the world of political communication research.  I come from a media studies background more oriented to popular culture; advertising was the subject of my first book and I’ve also published on social media, sports, and mash-up culture, among other subjects.  But I’ve studied those various subjects with the same qualitative methodological tools – and cultural production orientation – that I wound up applying to political campaigns (which are, frankly, as entertaining as any fictional text).  So Henry Jenkins, Mark Deuze, Stuart Ewen, Thomas Frank, Matt McAllister, and Joseph Turow – again, among others – have been inspirations that I’ve borrowed from on that front.  Moreover, a recent term that circulated, “critical media industry studies,” is, I find, a helpful way to define the aspirations of this kind of approach.

RKN: Few scholars challenge that qualitative research excels at depth, detail, and precision in terms of understanding particular cases or processes. But some would question whether findings based on, for example, interviews can be generalized. Do you see your own findings as generalizable? If so, how and under what conditions? If you don’t, does it matter to you, or do you think about the reach and validity of your work in different terms?

I do aim for generalizability, but I understand that the often flexible, heterogeneous means by which I assemble my “data” precludes that generalizability on purely quantitative terms.  Because I label this work “exploratory,” in theory, another researcher could perhaps come along after the fact and check out its validity using more closed-ended, numerically convertible questions.  But I’m not sure that such an approach could necessarily tackle the scope of themes contained in a more sprawling qualitative endeavor in as short a space as a single paper.  There are methods textbooks – by, say, Lindlof or Hammersley and Atkinson, as I usually rely upon – that probably make this case more convincingly and elegantly than I’m capable of here.  I think the quantitative scholar and I are both engaged in an allied effort at spotting patterns; it’s just a matter of whether you define those patterns before or after you sift through the social phenomena you’re interested in examining.  Put differently, I suppose, it’s a bottom-up versus top-down contrast in that regard.  And, honestly – and this is probably the former journalist in me – I’m still kind of a sucker for a powerful, revealing quote; when I start to hear echoes of such a quote across multiple interviewees, I feel like I’m approaching something generalizable.  But there is, again, an intuitive, improvisational aspect that precludes generalizability on quantitative terms.

RKN: Imagine you are talking to a colleague at a conference who does mainly fairly conventional forms of behavioralist, quantitative political communication research, i.e., studies agenda-setting in lab experiments or frame effects on attitudes through survey research. Is your research on campaign consultants relevant to this colleague? If so, how?

I’d certainly hope so.  I’d like to think that my work in this area explores the conditions that give rise to the political communication that those more traditional researchers then pursue from that behavioralist, effects perspective.  I’m cautious here not to over-assume I know or could characterize the nature of their work and, thus, mistakenly make any big claims about what the conventional approaches have been missing.  In that sense, I’ll again stress that I’m pretty new to this subfield.  But it did appear, in my reading up on the subject, that there was some opportunity to take stock of the perspectives and practices of campaign professionals who write the speeches, make the ads, and pitch the reporters – in other words, the folks (besides journalists) who create the political communication that is then assessed through those lab experiments and attitudes surveys.  The more conventional research that you describe seems concentrated on audiences; I’m also interested in those audiences, but through the lens of how consultants think about them.  It’s that thinking that then shapes the forms and discourse and I’m hoping to map that thinking.  Admittedly, these folks are not always the easiest to get access to (which is probably why there’s less research on them as opposed to other categories within political communication), but their position and power as elites makes them worth studying.

Full abstract below.

“It wouldn’t make sense to study news sharing quantitatively when we don’t even know what it is” –QualPolComm preview interview

by Rasmus Kleis Nielsen

Lucas Graves and Magda Konieczna are working on a paper called “Sharing the News: Specialization and Symbiosis in the Emerging Media Ecosystem”.

The paper presents fieldwork- and interview-based research into how contemporary versions of the age-old and often informal practice of “news sharing” amongst journalists are being professionalized and institutionalized, for example through fact-checking groups, in a changing media ecosystem. They argue a qualitative approach is necessary both because these practices are not always obvious and visible from the outside and because they are changing today – and we need to know how before we can even begin to approach the phenomenon in any other way. As they say, “It wouldn’t make sense to study news sharing quantitatively when we don’t even know what it is.”

The paper will be presented at the ICA Preconference on Qualitative Political Communication Research. The full abstract is below the jump.

Here are questions and answers from an email interview I did with them about their research.

RKN: You work on news sharing, how different journalistic actors produce news together. How would you say this connects to core concerns of political communication research? Are there particular researchers or schools of thought you see yourself as being in a dialogue with?

One reason we find news sharing as a behavior so interesting is that it draws attention to the relationships among news organizations, and so to the wider landscape or “ecology” of news which includes journalistic and political actors, and all manner of hybrids. A landmark here is Page’s Who Deliberates, which examined what he called the “totality” of coverage to follow public discourse around controversies like the Zoe Baird nomination of 1993 and the riots after the Rodney King verdict. Page argued that to understand the mediated public sphere in a holistic way you have to pay attention to the entire class of “professional communicators”—not just journalists but also politicians, pundits, PR specialists, and other organized political voices. (Walter Lippmann was less hopeful but probably would have agreed that it doesn’t make sense to study journalistic and political actors in isolation.) One way to understand what’s been changing since the 1980s might be to say that the fields (or institutions) of journalism and politics, always intertwined, are becoming less differentiated. This obviously has consequences for journalism studies and for political communication research.

RKN: Your work is explicitly rooted in journalism studies through reference to the work of for example Daniel Hallin. But did you feel you had to go beyond mainstream media and communication research for theoretical and methodological inspirations for your work, and if so where did you find it?

The other reason news sharing is interesting is that it cuts against both everyday and scholarly assumptions about how journalists behave. It reminds us not to use the New York Times as a proxy for the entire media landscape, because that landscape includes many different kinds of news organizations in economic and professional tension, and sometimes in a kind of symbiosis. The entire subject of “intermedia” relationships and effects seems to be hiding in plain sight in communications research: It makes an appearance in various forms in a lot of influential work (e.g. Herbert J. Gans) but rarely receives the spotlight. A few exceptions, based mostly on content analysis, are Reese and Danielian’s paper on coverage of the 1980s crack “epidemic,” Shaw and Sparrow’s article about cue-taking across newspapers, and Boczkowski’s work on “imitation”—and of course an abundance of research on the ecological links between bloggers and journalists.

RKN: Few scholars challenge that qualitative research excels at depth, detail, and precision in terms of understanding particular cases or processes. But some would question whether findings based on, for example, ethnography and interviews can be generalized. Do you see your own findings as generalizable? If so, how and under what conditions? If you don’t, does it matter to you, or do you think about the reach of your work in different terms?

A useful way to think about qualitative work is that in the best cases it provides a window onto phenomena or processes that can also be measured in the aggregate. Obviously we need both. You can’t generalize from one case study to an entire class of organizations; but neither can you generalize from an attenuated quantitative measure to a full account of organizational behavior or change. Very often what’s being measured in broad quantitative studies — in experiments and even surveys—is an intermediate analytical concept that shouldn’t be confused with something existing “in the wild.” Nina Eliasoph gets at this very well in Avoiding Politics: People don’t carry attitudes or beliefs around like change in their pockets, they produce (and reproduce) them in social contexts, including the context of being asked questions by a stranger on the phone.

RKN: Imagine you are talking to a colleague at a conference who does mainly fairly conventional forms of behavioralist, quantitative political communication research, i.e., studies agenda-setting in lab experiments or frame effects on attitudes through survey research. Is your research on news sharing relevant to this colleague? If so, how?

Qualitative research is especially useful for analyzing emergent phenomena and specifying concepts; that opens the door to asking the types of questions that quantitative studies can examine and explain. Framing took shape largely in qualitative work and helped to define a rich new area of quantitative research. Our paper on news sharing aims to nail down the concept; it wouldn’t make sense to study news sharing quantitatively when we don’t even know what it is. But there’s clearly an opportunity for scholars who are interested to start to measure this phenomenon through larger-scale network or content analysis.

Full paper abstract below the jump.

“The whole notion of framing contests and the behind-the-scenes strategies have been underexplored”–QualPolComm preview interview

by Rasmus Kleis Nielsen

Øyvind Ihlen, Tine Ustad Figenschou and Anna Grøndahl Larsen from the University of Oslo are working on a paper on how different political actors (like government agencies and NGOs) act as “frame sponsors” and compete to shape news media frames on immigration behind the scenes in Norway. As they say (below), “the whole notion of framing contests and the behind-the-scenes strategies have been underexplored.”

They will present their paper, “Behind The Framing Scenes: Qualitative Approaches to Analyze the NGO vs. Government Framing Strategies on Irregular Immigration,” at the ICA Preconference on Qualitative Political Communication Research. Here are questions and answers from an email interview I did with them about their research. (Over the next months, we will publish a series of email interviews with various people who will present at the preconference.) The full abstract is below the jump and on the conference page.

RKN: You work on frame sponsorship, how different political actors work behind the scenes to shape news frames. How would you say this connects to core concerns of political communication research? Are there particular researchers or schools of thought you see yourself as being in a dialogue with?

Our work is geared towards understanding the role played by communication when it comes to the issue of power. This is arguably at the heart of political communication research. We point to how different parties frame to benefit their particular cause, how they relate to the media and each other, and how they reflect on dilemmas, strengths and weaknesses. In this, it is particularly the community of scholars working on framing that we feel a kinship to, since issues of power have been brought up in relation to how certain readings are “naturalized”. At the same time, we feel that the whole notion of framing contests and the behind-the-scenes strategies have been underexplored.

RKN: In political communication research, there has traditionally been more work done on framing effects than on frame building. Did you feel you had to go beyond political communication for theoretical and methodological inspirations for your work, and if so where did you find it?

Indeed, and we have benefited from our diverse backgrounds in this sense. At the heart of our approach is a pragmatic orientation: It was our aim to examine how stakeholders in the field of immigration worked strategically to influence the media coverage from various starting points. We are trying to make sense of social phenomena and use the tools to think with that we feel are most useful, no matter which discipline they originated from. Theoretically, for example, we draw on our backgrounds in journalism, strategic communication and rhetoric. At the same time, we see research as a rhetorical operation in and of itself. We present our readings and try to qualify them with different empirical data and convincing theoretical arguments.

RKN: Few scholars challenge that qualitative research excels at depth, detail, and precision in terms of understanding particular cases or processes. But some would question whether findings based on, for example, ethnography and interviews can be generalized. Do you see your own findings as generalizable? If so, how and under what conditions?

The standard answer is that findings based on ethnography and interviews could have analytical generalizability and could also yield hypotheses that could be tested under other conditions and in other settings. We do not think of research as an endeavor where you arrive at final readings, and we discuss our findings in relation to later projects and research in other settings. We are happy if our readings come across as meaningful and become a part of the conversation about how social phenomena are understood. The paper for the pre-conference stems from a larger research project in which we have analyzed mediatization processes in the Norwegian immigration bureaucracy. Our conclusions in this regard have met with great interest and approval in other sectors of the bureaucracy, like health, suggesting that other actors recognize themselves in the processes we analyze.

RKN: Imagine you are talking to a colleague at a conference who does mainly fairly conventional forms of behavioralist, quantitative political communication research, i.e., studies agenda-setting in lab experiments or frame effects on attitudes through survey research. Is your research relevant to this colleague? If so, how?

Actually, survey research is also a part of this multi-method project on migration issues, along with ethnography, qualitative interviews, quantitative framing studies and rhetorical text analysis. As we pursued our research, we felt that a singular focus on quantitative approaches would limit the possibility of getting a deep understanding of the frame building processes we focus on. In other words, we firmly believe in the value of cross-disciplinary and multi-method approaches. We would suggest to our colleague that conclusions can be tested in “the real world” through the use of qualitative methods, and that these provide much thicker descriptions and a deeper understanding than what quantitative approaches typically yield. Moreover, we would underline that particularly our main focus in this paper—actors’ frame-building strategies and, even more so, their reflections around these processes—can only really be identified and studied qualitatively.

Full paper abstract below the jump.

On Elite Interviews and Thin Description (Or, What I Learned from “The Checkbox”)

by Dave Karpf

This is a post about the limitations of one of my most-preferred research methods: the elite-level interview.  In particular, I want to talk about the problems that can crop up when we construct illustrative case studies based solely on a few elite interviews.  It’s a common practice – you can find it even in some of the finest academic books.  But it’s a practice that often leads to hollow case studies.

I was thinking about this methodological issue last month, while I was reading Steven Schier’s 2000 book, By Invitation Only: The Rise of Exclusive Politics in the United States.  Schier offers an engaging and provocative argument about the difference between mobilization and activation.  For Schier, activation consists of “identifying and activating the small segments of citizens most likely to ‘get the message’ and vote or lobby government” (page 1).  This definition of activation sounds an awful lot like present-day mobilization to me, and indeed this is his point.  Political mobilization today involves far more targeting than the mass mobilization of 19th-century political parties.  The tools and techniques of mobilization have become more fine-grained.  In the process of developing more efficient techniques, we have lost some of the democratically-enriching value that comes from political participation.  It’s an important argument, even more salient today than when it was written, and Schier makes it well.

…But.

In chapter 5 of the book, titled “Interest Organizations and Government: Lobbying By Activation,” Schier provides some descriptive cases to show how interest groups are employing activation strategies in their work.  These case studies were constructed on the basis of elite interviews with multiple senior staff members at each organization, conducted in 1997.  And one of those cases was the Sierra Club.

In his five-paragraph description of the Sierra Club, I identified five (well, four and a half) errors.*

Some context: In 1997, I was entering my freshman year at Oberlin College and had just been elected Chair of the Sierra Student Coalition’s (SSC’s) national executive committee.  I had already spent two years within the vast volunteer bureaucracy of the Sierra Club.  I would go on to spend another 13 years collecting various titles within the organization, including 6 on the board of directors.  I was also an avid Sierra Club history buff in college.

Suffice it to say, these are not errors that the average reader would pick up on.  They are items where a quote from the interviewee is slightly misinterpreted.  They are descriptions of committee names and governance processes that don’t quite fit.  They are not errors that undermine his argument.

But therein lies the problem.  If you can make as many errors as you have paragraphs without changing the contribution to your argument, then how important can the case example really be to your thesis?  What’s more, when I think back to those years, a better case example jumps immediately to mind:

In January 1998, I attended my first “winter gathering” with the SSC executive committee.  We spent much of that weekend retreat discussing the unfolding saga of “The Checkbox.”

In the late 1990s, most Sierra Club member recruitments and renewals occurred through direct mail.  For a while, that mail included a “student” checkbox.  Between 10,000 and 30,000 people would check that box in a given year, thus paying the (lower) student rate and ostensibly joining the SSC as members.  But recent mail tests had shown that removing the checkbox increased direct mail response rates.  Member Services removed it without telling the SSC.  All of a sudden, our “membership” rolls crumbled.

The SSC also had a network of active groups at high school and college campuses.  These were our “activists,” rather than our “members.”  But our annual funding was linked to membership, not activists.  So we either had to get the checkbox back or get the funding mechanism changed.  This was a matter of organizational life-and-death.  We spent years working on it.  And it was all a byproduct of the very “activation” strategies that Schier is calling our attention to.

The Checkbox would have made an excellent illustrative example for Schier’s chapter. By 1997, membership in groups like Sierra had become a thin and transactional relationship.  These groups also supported political activism, but they did so by cultivating a small core group of devoted participants.  Yet the single sentence that Schier devotes to the SSC – “Sierra also maintains a ‘student coalition’ of some 10,000 members that are ‘somewhat active.’” – conveys none of this information.  Instead, it is barely recognizable to me as a former participant.

This is too common an occurrence in qualitative, case-based research.  Brief case examples, based on a few elite interviews, fill out pages without telling us anything of much substance.  The problem isn’t the interviews (Schier interviewed the right people).  The problem is that it is just interviews.

If we want to understand how Sierra, or the NRA, or AARP, or any other organization engages their supporters, then we need to triangulate from multiple reference points.  Read the minutes from board meetings.  Analyze newsletter content.  Read the magazines and listservs that organizational activists participate in.  Then draw upon these reference points in your elite interviews.  You can also share preliminary findings with them, to find out what you’re getting not-quite-right and dig deeper into key terms and concepts. This triangulation (a variant on what Richard Fenno called “soak and poke” research) helps us move from thin description to thick description.

Now, Schier probably wouldn’t have stumbled upon The Checkbox as his guiding example of activation in practice.  We were a bunch of scrappy undergrads, and probably would’ve been intimidated if a political science professor had wanted to interview us anyway.  But he would have found some other robust campaign or controversy.  That controversy could serve to demonstrate the normative problems of “activation” strategies, while simultaneously rendering Sierra in more recognizable terms.  Five paragraphs of surface description, touching on governance committees, ballot initiatives, membership levels and the SSC tells the reader barely anything at all.  The interviews fill pages, but shed little light.

By Invitation Only is serving as my example here because Schier happened to study my organization, but also because of the considerable strength of his book.  Thin illustrative case examples are everywhere in the literature, including dozens of lesser books and articles far more deserving of critique. But my point here is that even otherwise-excellent research often stumbles through this methodological pothole.

The measure of a descriptive case study should be just that: how much does it describe?  Thin description, based on a few interviews, can rarely reach the complexity and nuance that we aim for in qualitative political communication research.

Elite interviews are a necessary tool for producing rich case studies.  But they are hardly sufficient.


—–

*The four and a half errors were:

(1) “A smaller set of 5,000 members, known as the ‘core group’…” A staff member may have used this term, but it was not in common usage at that time.

(2) “One-fifth of the organization’s membership is in California, where the socially oriented Sierra Singles attracts many members.” The membership is 1/5th Californian because the organization was founded in California (by John Muir, who is memorialized on the California quarter) and grew in diasporic fashion in the 1960s, 70s, and 80s.  Sierra Singles has never been a particularly popular program.

(3) “The governance committee of Sierra’s national board of directors has formal authority over the organization’s policy direction.”  The Board delegated authority to six governance committees in 1997 (Conservation, Organizational Effectiveness, Training, Finance, Outdoor Activities, and Communication & Education).  These committees passed recommendations up to the Board, which maintained formal authority and frequently challenged govcom recommendations.

(4) “If 2,000 members sign in favor of a referendum on an issue, a ballot goes out to all members.” A ballot goes to all members every year.  The number of signatures required for a referendum fluctuates, depending on the number of ballots cast in the previous year’s election.

(1/2) “Sierra also maintains a ‘student coalition’ of some 10,000 members that are ‘somewhat active.’”  See above.