Introduction
Controversy has recently swirled around two studies on "sham issue advocacy"
done by the Brennan Center for Justice (Buying Time 1998 and Buying Time
2000), relied upon by some of the Congressional sponsors of the BCRA (McCain-Feingold)
and discussed in detail in the opinions of the three-judge court.
The principal published attacks on the study have been this
op-ed by George Will (originally appearing in the Washington Post)
and this article in the Weekly Standard by David Tell. Tom Mann
published this response
in Roll Call, leading to further criticisms of the report on the election
law listserv by Bob Bauer (see also a further response here)
and by Joe Sandler (with replies by Trevor
Potter on Bauer and Potter on Sandler).
There has been some press coverage of the controversy as well (see this Milwaukee
Journal-Sentinel article and this
Roll Call article).
The criticisms fall into three categories: (1) that the data were somehow
falsified or miscoded to achieve a particular political result; (2) that
the studies constituted poor social science; and (3) that the studies do
not prove the constitutionality of the major provisions of the BCRA.
1. I saw no evidence that data were falsified or miscoded to achieve a
particular result.
I begin with a disclosure---one that I think makes my comments on this aspect
of the controversy particularly relevant. Back in 2000, I received a small
($3,000) grant from the Brennan Center to use the CMAG database in my own
work (the database is what formed the basis for the Buying Time studies).
The only condition on the grant was that I use the data for research I intended
to publish. I ultimately did so, publishing my Minnesota
Law Review article on overbreadth, an article whose conclusions I
think were a mixed bag for the Brennan Center's reform agenda. Never did
I feel any pressure to reach a particular result.
Indeed, it was quite the opposite. The goal was to produce the best social
science research possible, regardless of result. I worked closely with Craig
Holman and Luke McLoughlin of the Brennan Center on understanding their data
and coding decisions. We exchanged dozens and dozens of e-mails (most of which I think
were turned over as part of the discovery in the BCRA litigation). Some of
those e-mails went to my trying to understand the methodology of the 1998
study---a study led at the Brennan Center by Jon Krasno, not Craig and Luke.
I believe it was my questions to Craig that led to some of the exchanges of
e-mails between Craig and Josh Rosenkranz (then head of the Center) that have
been used by George Will and David Tell to make it seem as though there was
deliberate falsification going on.
Hardly. The issues involved coding questions (for example, what about ads
that ran in numerous markets: should they be counted as "unique" advertisements
if run in different markets?) and conceptual questions (how should overbreadth
be measured?). I took one view (explained on pages 1789-91 of my article)
and I think Craig shared that view, while others at the Brennan Center took
another view. The debate was open and honest. As best I can recall, most of the
recoding of student responses moved responses into the category of genuine
issue advocacy, leading to worse numbers from the point of view of reform.
(See my extended discussion of the "Citizens for Better Medicare" advertisements
run in 2000 (pages 1797-99 of my article)). In sum, I saw absolutely no bias
or falsification whatsoever with Buying Time 2000. I had nothing to do with
Buying Time 1998 but have no reason to believe there were such problems there
either.
2. The studies do not constitute poor social science.
As the co-editor of a peer-reviewed journal and as a law professor and political
scientist, I have seen my share of poor social science (though, happily, I
usually get to see very good social science). The Buying Time studies are
good social science.
Imagine that you are back in 1998 or 2000 and you are trying to measure the
following phenomenon: More and more political advertisements are being run
during elections that lack express words of advocacy but appear to be intended
to influence campaigns. These advertisements are currently unregulated. One
political suggestion has been a "bright line" test to regulate such advertisements:
counting as electioneering all advertisements that run within a certain
time period before an election, feature a clearly identified candidate for
office, and lack words of express advocacy. Such a test, if adopted, might
be attacked as "substantially overbroad," because it might capture advertising
not intended to influence campaigns. To answer the question about how overbroad
such a law might be, one would need to know what percentage of advertisements
not intended to influence campaigns would be captured by the bright line
test. (I call these "false positives" in my article.) Having students view
the advertisements and code them as electioneering or genuine issue advocacy
is a sensible way to get at this problem.
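To make the measurement concrete, here is a minimal sketch of the arithmetic the coding exercise boils down to. Every ad and count below is invented for illustration; none of it comes from the actual Buying Time data.

```python
# Hypothetical sketch of the false-positive measurement; all entries
# are invented examples, not drawn from the Buying Time studies.

# Each ad gets two labels: whether the bright-line test would capture it,
# and how the student coders classified its purpose.
ads = [
    # (captured_by_bright_line_test, coder_judgment)
    (True,  "electioneering"),
    (True,  "electioneering"),
    (True,  "genuine_issue"),   # a "false positive" for the test
    (False, "genuine_issue"),   # not captured, so irrelevant to overbreadth
    (True,  "electioneering"),
]

# Keep only the ads the bright-line test would sweep in.
captured = [judgment for is_captured, judgment in ads if is_captured]

# A false positive is a captured ad that coders deemed genuine issue advocacy.
false_positives = sum(1 for judgment in captured if judgment == "genuine_issue")

# Overbreadth measure: share of captured ads that are genuine issue advocacy.
rate = false_positives / len(captured)
print(f"False-positive rate among captured ads: {rate:.0%}")  # prints 25%
```

The legal question is then whether a rate like this counts as "substantial" overbreadth, which is where the conceptual disputes discussed below come in.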
Of course, the study can be attacked. Why students? What exactly was the
wording of the questions posed to the students? Judge Henderson latched on
to these and other criticisms and concluded that the study lacked credibility.
But the majority of judges on the BCRA court (Leon and Kollar-Kotelly) disagreed.
Leon agreed the study was entitled to "some evidentiary weight" on the overbreadth
question. Kollar-Kotelly offered an extended defense of the study's methodology.
One can disagree on the edges, but to argue that the Buying Time 2000 study
failed to follow usual principles of social science is wrongheaded.
3. Do the studies prove the constitutionality of the BCRA provisions covering
"electioneering communications"?
The BCRA imposes the bright line test for two purposes, disclosure and a
ban on corporate and union funded advertising (except through a separate
segregated fund) for a 60 day period before the election (or 30 days before
a primary).
Do the studies show that the bright line test is constitutional for both
purposes, i.e., that the test is not substantially overbroad? This is the
big question, and one about which I believe reasonable minds can disagree.
Judge Kollar-Kotelly concluded that they do, and Judge Leon concluded that they do not. Judge
Henderson rejected the bright-line test without relying on the Brennan Center
data.
The dispute over this question may take place on many levels.
(1) As I detail in my article, there are a number of ways of conceiving
of the question of "overbreadth." Is it an empirical test? Should there be
balancing? I conclude in my article (though others disagree) that Supreme
Court case law establishes that "substantial overbreadth" is primarily an
empirical test but that some balancing is inevitable. Thus, it would not
surprise me if the Supreme Court upheld the bright line test for some purposes
(such as disclosure) but not others (such as the corporate-union ban).
(2) Even accepting that substantial overbreadth is primarily an empirical
test, there are different ways of conceiving the empirical measurement. (This
is the "denominator" issue referred to in Tom Mann's article.) There are
other questions as well, such as whether we should use the total number of
unique ads, total number of airings, or total dollar amount spent on the
ads to engage in the empirical inquiry.
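The choice of denominator is not a technicality: the same coded data can yield very different overbreadth percentages. Here is a hypothetical illustration in which every figure (the ads, airings, and dollar amounts) is invented for the example, not taken from the studies.

```python
# Hypothetical illustration of the "denominator" problem; every figure
# below is invented for the example, not drawn from the actual studies.

# Suppose the bright-line test captures three unique ads, coded as follows.
captured_ads = [
    # (coder_judgment, airings, dollars_spent)
    ("electioneering", 500, 400_000),
    ("electioneering", 450, 350_000),
    ("genuine_issue",   50,  20_000),  # the lone false positive
]

def overbreadth(ads, weight):
    """Share of the chosen denominator attributable to genuine issue ads."""
    total = sum(weight(ad) for ad in ads)
    genuine = sum(weight(ad) for ad in ads if ad[0] == "genuine_issue")
    return genuine / total

by_unique = overbreadth(captured_ads, lambda ad: 1)       # 1 of 3 unique ads
by_airing = overbreadth(captured_ads, lambda ad: ad[1])   # 50 of 1,000 airings
by_dollar = overbreadth(captured_ads, lambda ad: ad[2])   # $20k of $770k

print(f"unique ads: {by_unique:.1%}, "
      f"airings: {by_airing:.1%}, dollars: {by_dollar:.1%}")
# prints: unique ads: 33.3%, airings: 5.0%, dollars: 2.6%
```

On these invented numbers, the same data look substantially overbroad by one measure (a third of unique ads) and trivially so by another (a few percent of airings or dollars), which is why the choice of denominator is contested.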
The bottom line is that to the extent the Court views the overbreadth question
as an empirical test, the Brennan Center data (and my own take on the data
in the Minnesota article) are the most extensive empirical examinations of
the question. There is nothing else out there. Even if the study is not perfect,
it is better than the Court going on sheer intuition.
Consider the following few paragraphs from my article, discussing the Court's
analysis in a case called Massachusetts v. Oakes (footnotes omitted
from my article), where the Court did go with intuition alone:
Oakes involved a criminal defendant accused of violating a Massachusetts
law that prohibited adults from posing or exhibiting nude minors for purposes
of visual representation in any book, magazine, pamphlet, motion picture,
photograph, or picture. The defendant was convicted under the law for taking
ten color photographs of his partially nude fourteen-year-old stepdaughter.
The Massachusetts Supreme Judicial Court had reversed the conviction. The
majority of that court "concluded that [the statute] 'criminalize[d] conduct
that virtually every person would regard as lawful,' and would make a 'criminal
of a parent who takes a frontal view picture of his or her naked one-year-old
running on a beach or romping in a wading pool.'"
A four-justice plurality of the United States Supreme Court refused to apply
the overbreadth doctrine in the case because the Massachusetts legislature
had in the interim repealed the relevant portion of the statute. Five justices
disagreed that the statute's amendment mooted the overbreadth challenge,
but those five justices then split two-three on the question whether the
statute was substantially overbroad.
Justice Scalia, writing for himself and Justice Blackmun, believed the statute
was not substantially overbroad. Referring to the photograph of a naked one-year-old
running on the beach, as hypothesized by the lower court, Justice Scalia wrote:
"Assuming that it is unconstitutional (as opposed to merely foolish) to prohibit
such photography, I do not think it so common as to make the statute substantially
overbroad. We can deal with such a situation in the unlikely event some prosecutor
brings an indictment."
Justice Brennan, in contrast, writing for himself and Justices Marshall and
Stevens, had a different view of the empirical evidence:
The abundance of baby and child photographs taken every day without full
frontal covering, not to mention the work of artists and filmmakers and nudist
family snapshots, allows one to say, as the Court said in Houston v. Hill,
that "[t]he ordinance's plain language is admittedly violated scores of times
daily, yet only some individuals—those chosen by the police in their unguided
discretion—are arrested."
Justice Scalia stated that it is the burden of the litigant challenging a
statute on grounds of substantial overbreadth to present the empirical evidence,
but it is difficult to see how in a case like Oakes a litigant could gather
such evidence. Short of commissioning a social scientist to do a survey (which
would require asking potentially embarrassing questions of respondents, among
other problems), a judicial guess may be all that is available.
Professor Fallon has criticized the Court's empirical approach in this area
as requiring "uncabined judicial speculation in areas that are, at best,
on the outer fringes of the courts' practical competence." He and others
have called on the Court to abandon, or at least modify, this approach in
favor of a balancing approach that looks at the importance of the competing
interests at stake. In Part III, I add my voice to this criticism, showing
that even with empirical evidence, balancing of interests and harms
is inevitable and should be done in the open. I begin, however, with the empirical
evidence, in particular with data generated from the newly available dataset
from the Brennan Center and Professor Goldstein. The dataset allows replacing
the judicial guess with empirical evidence in the case of bright-line tests
regulating sham issue advocacy. Such evidence does not reveal the constitutionally
relevant proportion of false positives, but it does allow a court to make
determinations of constitutionality in this area with a good handle on the
likely results of its holding.
The Court would do a lot worse without the Buying Time studies than it will
do with them, and that is a good bottom line.
--
Rick Hasen
Professor of Law and William M. Rains Fellow
Loyola Law School
919 South Albany Street
Los Angeles, CA 90015-1211
(213)736-1466
(213)380-3769 - fax
rick.hasen@lls.edu
http://www.lls.edu/academics/faculty/hasen.html
http://electionlaw.blogspot.com