Subject: Re: Weighing in on the "Buying Time" Controversy
From: Rick Hasen
Date: 5/24/2003, 8:02 PM
To: Rick Hasen
CC: election-law <election-law@majordomo.lls.edu>

I understand that for some readers of the list, large chunks of my post were cut off. You can find the entire post on my blog at:
http://electionlaw.blogspot.com/2003_05_01_electionlaw_archive.html#200337464
Rick

Rick Hasen wrote:
Introduction

Controversy has recently swirled around two studies on "sham issue advocacy" done by the Brennan Center for Justice (Buying Time 1998 and Buying Time 2000), relied upon by some of the Congressional sponsors of the BCRA (McCain-Feingold) and discussed in detail in the opinions of the three-judge court.

The principal published attacks on the studies have been this op-ed by George Will (originally appearing in the Washington Post) and this article in the Weekly Standard by David Tell. Tom Mann published this response in Roll Call, leading to further criticisms of the report on the election law listserv by Bob Bauer (see also a further response here) and by Joe Sandler (with replies by Trevor Potter on Bauer and Potter on Sandler).

There has been some press coverage of the controversy as well (see this Milwaukee Journal-Sentinel article and this Roll Call article).

The criticisms fall into three categories: (1) that the data were somehow falsified or miscoded to achieve a particular political result; (2) that the studies constituted poor social science; and (3) that the studies do not prove the constitutionality of the major provisions of the BCRA.

1. I saw no evidence that data were falsified or miscoded to achieve a particular result.

I begin with a disclosure---one that I think makes my comments on this aspect of the controversy particularly relevant. Back in 2000, I received a small ($3,000) grant from the Brennan Center to use the CMAG database in my own work (the database is what formed the basis for the Buying Time studies). The only condition on the grant was that I use the data for research I intended to publish. I ultimately did so, publishing my Minnesota Law Review article on overbreadth, an article whose conclusions I think were a mixed bag for the Brennan Center's reform agenda. Never did I feel any pressure to reach a particular result.

Indeed, it was quite the opposite. The goal was to produce the best social science research possible, regardless of result. I worked closely with Craig Holman and Luke McLoughlin of the Brennan Center on understanding their data and their coding. We exchanged dozens and dozens of e-mails (most of which, I think, were turned over as part of the discovery in the BCRA litigation). Some of those e-mails went to my trying to understand the methodology of the 1998 study---a study led at the Brennan Center by Jon Krasno, not by Craig and Luke. I believe it was my questions to Craig that led to some of the exchanges of e-mails between Craig and Josh Rosenkranz (then head of the Center) that have been used by George Will and David Tell to make it seem as though there was deliberate falsification going on.

Hardly. The issues involved coding questions (for example, what about ads that ran in numerous markets: should they be counted as "unique" advertisements if run in different markets?) and conceptual questions (how should overbreadth be measured?). I took one view (explained on pages 1789-91 of my article), and I think Craig shared that view, while others at the Brennan Center took another view. The debate was open and honest. As best I can recall, most of the recoding of student responses moved responses into the category of genuine issue advocacy, leading to worse numbers from the point of view of reform. (See my extended discussion of the "Citizens for Better Medicare" advertisements run in 2000, on pages 1797-99 of my article.) In sum, I saw absolutely no bias or falsification whatsoever with Buying Time 2000. I had nothing to do with Buying Time 1998, but I have no reason to believe there were such problems there either.

2. The studies do not constitute poor social science.

As the co-editor of a peer-reviewed journal and as a law professor and political scientist, I have seen my share of poor social science (though, happily, I usually get to see very good social science). The Buying Time studies are good social science.

Imagine that you are back in 1998 or 2000 and you are trying to measure the following phenomenon: more and more political advertisements are being run during elections that lack express words of advocacy but appear to be intended to influence campaigns. These advertisements are currently unregulated. One legislative suggestion has been a "bright line" test to regulate such advertisements, counting as electioneering all advertisements that run within a certain time period before an election and feature a clearly identified candidate for office but lack words of express advocacy. Such a test, if adopted, might be attacked as "substantially overbroad," because it might capture advertising not intended to influence campaigns. To answer the question of how overbroad such a law might be, one would need to know what percentage of advertisements not intended to influence campaigns would be captured by the bright line test. (I call these "false positives" in my article.) Having students view the advertisements and code them as electioneering or genuine issue advocacy is a sensible way to get at this problem.

Of course, the study can be attacked. Why students? What exactly was the wording of the questions posed to the students? Judge Henderson latched on to these and other criticisms and concluded that the study lacked credibility. But the majority of judges on the BCRA court (Judges Leon and Kollar-Kotelly) disagreed. Judge Leon agreed the study was entitled to "some evidentiary weight" on the overbreadth question, and Judge Kollar-Kotelly offered an extended defense of the study's methodology. One can disagree on the edges, but to argue that the Buying Time 2000 study failed to follow usual principles of social science is wrongheaded.

3. Do the studies prove the constitutionality of the BCRA provisions covering "electioneering communications"?

The BCRA imposes the bright line test for two purposes: disclosure, and a ban on corporate- and union-funded advertising (except through a separate segregated fund) during a 60-day period before the election (or a 30-day period before a primary).

Do the studies show that the bright line test is constitutional for both purposes, i.e., that the test is not substantially overbroad? This is the big question, and one about which I believe reasonable minds can disagree. Judge Kollar-Kotelly concluded that they did, and Judge Leon concluded that they did not. Judge Henderson rejected the bright line test without relying on the Brennan Center data.

The dispute over this question may take place on many levels.

(1) As I detail in my article, there are a number of ways of conceiving of the question of "overbreadth." Is it an empirical test? Should there be balancing? I conclude in my article (though others disagree) that Supreme Court case law establishes that "substantial overbreadth" is primarily an empirical test but that some balancing is inevitable. Thus, it would not surprise me if the Supreme Court upheld the bright line test for some purposes (such as disclosure) but not others (such as the corporate-union ban).

(2) Even accepting that substantial overbreadth is primarily an empirical test, there are different ways of conceiving of the empirical measurement. (This is the "denominator" issue referred to in Tom Mann's article.) There are other questions as well, such as whether the empirical inquiry should use the total number of unique ads, the total number of airings, or the total dollars spent on the ads; a rough numerical sketch of that choice follows below.
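To make the measurement concrete, here is a minimal sketch, entirely hypothetical and not drawn from the Buying Time data or from my article, of how an overbreadth percentage could be computed under each of those three denominators, assuming each coded ad carries a student judgment (genuine issue advocacy versus electioneering), a flag for whether the bright line test would capture it, a count of airings, and a dollar figure:

    # Hypothetical coded ads: (genuine_issue_ad, captured_by_bright_line_test, airings, dollars)
    ads = [
        (True,  True,   40,  30000),   # a "false positive": genuine issue ad swept in by the test
        (True,  False,  10,   8000),   # genuine issue ad the test would not reach
        (False, True,  300, 250000),   # electioneering ad captured by the test
        (False, True,  150, 120000),   # electioneering ad captured by the test
    ]

    def overbreadth(ads, weight):
        # Share of the ads captured by the bright line test (weighted by the
        # chosen measure) that the coders judged to be genuine issue advocacy.
        captured = [(genuine, weight(airings, dollars))
                    for genuine, caught, airings, dollars in ads if caught]
        total = sum(w for _, w in captured)
        false_positives = sum(w for genuine, w in captured if genuine)
        return false_positives / total if total else 0.0

    # The same coded data yields different overbreadth figures under each denominator:
    print(overbreadth(ads, lambda airings, dollars: 1))        # by unique ads captured
    print(overbreadth(ads, lambda airings, dollars: airings))  # by airings
    print(overbreadth(ads, lambda airings, dollars: dollars))  # by dollars spent

The only point of the sketch is that the choice of measure, not the coding itself, can move the resulting percentage substantially.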

The bottom line is that to the extent the Court views the overbreadth question as an empirical one, the Brennan Center data (and my own take on the data in the Minnesota article) are the most extensive empirical examinations of the question. There is nothing else out there. Even if the studies are not perfect, they are better than leaving the Court to go on sheer intuition.

Consider the following few paragraphs from my article (footnotes omitted), discussing the Court's analysis in a case called Massachusetts v. Oakes, where the Court did go with intuition alone:


The Court would do a lot worse without the Buying Time studies than it will do with them, and that is a good bottom line.
-- 
Rick Hasen
Professor of Law and William M. Rains Fellow
Loyola Law School
919 South Albany Street
Los Angeles, CA  90015-1211
(213)736-1466
(213)380-3769 - fax
rick.hasen@lls.edu
http://www.lls.edu/academics/faculty/hasen.html
http://electionlaw.blogspot.com
  
