Clash of the Titan Egos, Part II
As part of settling count two of the Lott v Levitt lawsuit, Levitt
sent this letter to John McCall. Recall that Levitt told McCall
that the Conference Issue was NOT peer-refereed (even though
Levitt was one of the referees); Levitt acted as if Lott had done
something improper by paying the conference issue expenses;
and Levitt told McCall that Lott included only articles that supported
him, even though Lott invited Levitt to submit an article.
July 26, 2007
John B. McCall, Ph.D.
576 Rocky Branch
Coppell, Texas 75019
Dear Mr. McCall:
You may recall that I sent you emails on May 25 and 26, 2005,
which made certain statements about the Conference Issue of the
Journal of Law and Economics ("JLE") which was dated October 2001
(the "Conference Issue"). I now want to clarify and correct some
of the statements I made in, and impressions I may have created
by, those emails.
In those emails, I did not mean to suggest that Dr. John R.
Lott, Jr., or anyone acting on his behalf, engaged in bribery or
exercised improper influence in the editorial process with
respect to the preparation and publication of the Conference
Issue. I acknowledge that the articles that were published in the
Conference issue were reviewed by referees engaged by the editors
of the JLE. In fact, I was one of the peer referees. As far as I
know, all papers published in the JLE are refereed.
At the time of my May 2005 emails to you, I knew that scholars
with varying opinions had been invited to participate in the 1999
conference and had been informed that their papers would be
considered for publication in what became the Conference Issue.
Along with other people, I received an email from Dr. Lott
inviting my own participation in that conference. I also was
aware at the time of the May 2005 emails to you that in
connection with the preparation of conference issues for the JLE,
that the organizer of each conference issue needs to provide
funding to JLE to cover publication and mailing expenses. I did
not mean to suggest that Dr. Lott did anything unlawful or
improper in arranging for the payment of the publication expenses
for the Conference Issue. I have discussed the wording of this
letter with my counsel and am willingly signing it.
I hope the foregoing clarifies and corrects my statements
contained in the emails which I sent only to you.
Very truly yours,
(signed)
Steven D. Levitt.
But the fat lady ain't sung yet. Lott's new lawyer wants to revive the
first part, on the issue of replication, which was dismissed while the
Chicago lawyer handled the case.
In more than one of my postings on the separate Lott 98% issue,
I have claimed that exact statistics and sociology do not belong in
the same sentence and cited "The Numerical Reliability of Econometric
Software" by B. D. McCullough and H. D. Vinod, Journal of Economic
Literature, Vol. XXXVII (June 1999), pp. 633-655.
The replication issue is related to the Lott-Mustard econometric
regression, as applied to 1977-1992 county-level Uniform Crime Report
data. The Lott-Mustard 1997 journal article and John Lott's
book More Guns, Less Crime (1998) claim to show measurable crime
reduction attributable to passage of Right-To-Carry (RTC) laws.
In re-reading McCullough and Vinod 1999 recently, I found this
passage especially striking:
Even in rare instances when a software package is
identified in an article, and the package is later discovered
to be defective in a way which affects the article's results,
updating the results with a reliable software package is
problematic. The reason is that virtually no journals require
authors to archive either their data or their code, and this
constitutes an almost insurmountable barrier to replication
in the economic science. Scientific content is not dependent
merely on writing up a summary of results. Just as important
is showing the precise method by which the results were
obtained and making this method available for public scrutiny.
To our knowledge, only the journal Macroeconomic Dynamics (MD)
requires both data and code, while the Journal of Applied
Econometrics (JAE) requires data and encourages code, and the
Journal of Business and Economic Statistics (JBES) and
The Economic Journal require data; all four journals have
archives which can be accessed via the worldwide web. In the
context of replicability and the advancement of science,
the advantage of requiring code in addition to data is obvious.
While it may be trivial to use the archived code to replicate
the results in a published article, only if the code is available
for inspection will other researchers have the opportunity to
find errors in the code. Just as commercial software needs to be
checked, so does the code which underlies published results.
(This was five years before the publication of Freakonomics
and six years before the Lott v Levitt lawsuit.)
In discussion of the accuracy of econometric software in June
1999, authors McCullough and Vinod used "replication in the
economic science" in the same sense that John Lott claims is the
"objective and factual meaning in the world of academic research".
This is the meaning that I derived while doing computer typesetting
for a major economics journal from 1974-2003. To quote myself:
"Replication is part of the peer-review or referee process used by
academic journals. Replicate means to run an author's data through
the author's math to verify that the published results were not
miscalculated or falsified." When the Lott v Levitt lawsuit came
up, I contacted the managing editor of the journal asking for the
usual uses of "replicate" in economics and I was informed:
I have usually heard "replicate" in economics to mean either
you use the author's model and data and get the same results OR
you use the model with different data and verify the result -
the same thing is happening with different data.
I had always read using a model with a different data set as
"testing the robustness" of an econometric model. So there are
two common uses of replicate in economics: the peer-referee
sense of verifying published results and the testing sense of
using different data to test the same model.
I would like to point out that John Lott has made his data and
his code publicly available, even before publication, regardless
of the data and code requirements of the journals. Lott critic
David Hemenway rejected Lott's results because they seemed
"counter-intuitive" to him. At least Lott's data and code can be
tested, unlike Hemenway's intuition. In fact, if Lott did not
openly share his data and code, if Lott kept his data and code
deeply closeted, like Arthur Kellermann or David Hemenway, neither
replicating nor testing of his results would be possible.
There is a movement in the social sciences promoted by
Prof. Gary King of Harvard to make the social sciences more
respectable as sciences by requiring stricter discipline, in
particular making "replicate" and other terms used in science
have the same strict definitions in the social sciences as used
in the physical sciences.
So, in the "soft" social sciences, there has traditionally been
a laxer approach to issues of replication of results and publishing
of data and code than in the "hard" physical sciences. McCullough,
Vinod, King, Lott and other social scientists independently
advocate the stricter approach.
Steven Levitt claimed in Freakonomics (HarperCollins, 2005):
"When other scholars have tried to replicate [Lott's] results, they
found that right-to-carry laws simply don't bring down crime."
In the peer-referee sense, even Lott critics Ayres and Donohue
claimed to have successfully replicated Lott's published results
from his data and math in their Table 9, Line 1. The National
Academy of Sciences claimed they replicated Lott's published
results in their Table 6.1 Line 2. Given that Levitt cited the
Ayres-Donohue article in Freakonomics, it is hard to see
how he missed that. I guess that happened the same way he
missed the fact that the conference issue had been peer refereed.
In the "test the robustness" sense of replicate, the finding
that "right-to-carry laws simply don't bring down crime" is not
universal. By using different data or different variables in the
math, some scholars have shown that RTC laws:
1. don't bring down crime,(*)
2. bring crime down about as much as Lott claimed, or
3. bring crime down even more than claimed by Lott.
When other scholars have tried to replicate Lott's results, they
have found results all over the map: less, similar or more.
To claim that Lott's results cannot be replicated in either the
peer-review or test-the-robustness sense is simply not true.
Levitt's use of "replication" in his court papers answering
Lott's lawsuit appears so soft as to be formless. Ted Frank, with
very little searching, found six instances where Steven Levitt used
"replicate" in the sense that Lott claims is the "objective and
factual" use of the term.
Lastly, Lott did not "sue out of the blue" as some have claimed:
2006 Jan 11 - Lott wrote Levitt requesting a correction on Levitt's
claim that other scholars had been unable to replicate Lott's results.
2006 Mar 17 - Lott's lawyer wrote Levitt and his publisher requesting
a correction and a retraction.
2006 Apr 10 - Civil Action 06C 2007 (defamation) filed by John Lott
against Steven Levitt and his publisher.
{ None of this addresses the issue of whether a court of law is
the proper venue to resolve an academic dispute. In dismissing
Count 1 over the Freakonomics/replicate dispute, Judge Ruben Castillo
quoted an earlier court ruling: "judges are not well equipped to
resolve academic controversies, ... , and scholars have their own
remedies for unfair criticisms of their work--the publication of a
rebuttal."
Dilworth v Dudley, 75 F.3d 307, 310 (7th Cir. 1996).
On Count 2 over Levitt's comments about Lott to economist John McCall,
Judge Castillo opined "Levitt made a string of defamatory assertions...
(which) ... cannot be reasonably interpreted as innocent or mere
opinion." None of this seems to advance the cause of elevating the
soft standards of the social sciences to something worthy of the
name science. }