Correlation between violent crime and Brady Rank


antsi

Member
Joined
Dec 25, 2002
Messages
1,398
Just for fun, I ran a Pearson correlation between states' Brady rankings and their rates of overall violent crime (according to FBI statistics).

The correlation is positive, meaning that higher Brady rankings are associated with higher violent crime rates. The association is not statistically significant (p=.518).

So, basically, it is what we all would have guessed. The laws advocated by the Bradys have no real relationship with crime rates, and if anything, the apparent relationship is that such laws have the opposite of their intended effect.

Thanks to the BluesMan for helping me to find the FBI statistics.
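
For anyone who wants to reproduce this kind of calculation, here is roughly what it looks like in Python (just a sketch - the numbers below are placeholders I made up, not the actual Brady scores or FBI figures; you would plug in the real per-state values):

Code:
# Sketch of the calculation described above, using SciPy.
# NOTE: placeholder values only -- substitute the real per-state
# Brady rankings and FBI violent crime rates (all 50 states).
from scipy.stats import pearsonr

brady_rank = [10, 54, 22, 38, 79, 15, 63, 41, 27, 50]  # hypothetical
violent_crime_per_100k = [446, 398, 512, 430, 287, 495, 310, 455, 472, 365]  # hypothetical

r, p = pearsonr(brady_rank, violent_crime_per_100k)
print(f"r = {r:.3f}, p = {p:.3f}")  # sign of r gives direction; p tests H0: no correlation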
 
If I'm not mistaken, the NSF (or was it the CDC?) did a study that said the same thing. No correlation could be found between any gun control measure and crime.
 
That's why, for the most part, the anti-gun crowd has not been beating the more guns = more crime drum lately. They know the facts aren't there. Oh sure, if you ask, they'll express a belief that guns cause crime, but that isn't their first tactic anymore.

Now they go at it from a product liability standpoint, screaming "it's for the children." For that reason, I wonder if any of them have bathtubs at home.
 
geekWithA.45 said:
If I'm not mistaken, the NSF (or was it the CDC?) did a study that said the same thing. No correlation could be found between any gun control measure and crime.

They did say that, but to keep a victory from the pro-gun crowd, they said the study was inconclusive and that more research was needed.:rolleyes:
 
Technosavant said:
That's why, for the most part, the anti-gun crowd has not been beating the more guns = more crime drum lately. They know the facts aren't there. Oh sure, if you ask, they'll express a belief that guns cause crime, but that isn't their first tactic anymore.

Now they go at it from a product liability standpoint, screaming "it's for the children." For that reason, I wonder if any of them have bathtubs at home.


Swimming pools. Statistically speaking, it's swimming pools. Far more dangerous than firearms.
 
antsi said:
Just for fun, I ran a Pearson correlation between states' Brady rankings and their rates of overall violent crime (according to FBI statistics).

The correlation is positive, meaning that higher Brady rankings are associated with higher violent crime rates. The association is not statistically significant (p=.518).

So, basically, it is what we all would have guessed. The laws advocated by the Bradys have no real relationship with crime rates, and if anything, the apparent relationship is that such laws have the opposite of their intended effect.

Thanks to the BluesMan for helping me to find the FBI statistics.

Well, technically, a correlation can't "prove" anything.

Statistics Rule #1: Correlation does NOT equal causation.

For example, if you had the data, you could potentially find a highly statistically significant correlation (not all that hard with a large enough sample size) between chocolate consumption and IQ scores. However, it would be presumptuous (and possibly foolish) to conclude that chocolate consumption CAUSES one to have a high IQ.
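
You can make the point concrete with a quick simulation (a sketch only - the variables are invented): let a hidden third factor drive both measures, and they will correlate even though neither causes the other.

Code:
# Sketch: a hidden common cause produces a "significant" correlation
# between two variables that have no causal link to each other.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                     # hidden confounder
chocolate = 0.5 * z + rng.normal(size=n)   # z drives "chocolate consumption"
iq = 0.5 * z + rng.normal(size=n)          # z also drives measured "IQ"

r, p = pearsonr(chocolate, iq)
print(f"r = {r:.2f}, p = {p:.3g}")  # r ~ 0.2 and highly significant, yet no causation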
 
Wright, Rossi & Daly, in their "Under The Gun" (U of Fla Press, 1985) concluded among other things that there was no correlation between any gun control law passed by the Florida legislature and rates of violent crime.

Darn those boring ol' statisticians.

Note that Prof. Gary Kleck is also a statistician...

Real-world numbers keep messing up the maunderings of such as the Brady Bunch.

Art
 
ReadyontheRight said:
But they COULD! And it would be very scary. And they could shoot down a helicopter while doing it!:p

Just like in Vice City! Oh my! :rolleyes: There is no saying what will happen if those evil .50-cal, anti-747 rifles are not regulated :rolleyes:
 
KONY said:
Statistics Rule #1: Correlation does NOT equal causation.

You are partially right, but it doesn't apply in this case.
The presence of a correlation does not prove causation - correct.

But the absence of a correlation certainly does rule out causation.

To continue your example:

If I had data from the relevant sample showing that there is no correlation between chocolate consumption and IQ, it would certainly constitute evidence against the hypothesis that chocolate consumption raises IQ.

Or if the data showed an inverse correlation between chocolate consumption and IQ, then that would constitute evidence against the hypothesis that eating chocolate raises IQ.

You're right that correlation is not a sufficient condition for causation, but it is a necessary condition for causation.
 
antsi said:
You're right that correlation is not a sufficient condition for causation, but it is a necessary condition for causation.

Well, not exactly. If you do NOT have a correlation between two variables, you could still potentially have a more complex causal relationship if you fail to account for a third variable (moderator) that interacts with your primary variable in predicting your criterion. Remember, in the real world, almost nothing is ever predicted by just one other variable. Thus, causal explanations are almost never as simple as Pearson bivariate correlations make them out to be.
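
Here is a toy simulation of that point (a sketch, with invented variables): the outcome is entirely determined by the predictor and a moderator, yet the bivariate correlation between the predictor and the outcome is essentially zero.

Code:
# Sketch: y is fully caused by x1 and a moderator x2 (y = x1 * x2),
# yet the bivariate correlation between x1 and y is ~0.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 10_000
x1 = rng.choice([-1.0, 1.0], size=n)   # primary predictor
x2 = rng.choice([-1.0, 1.0], size=n)   # moderator, independent of x1
y = x1 * x2                            # pure interaction, no main effects

r, p = pearsonr(x1, y)
print(f"r = {r:.3f}, p = {p:.3f}")  # near zero and nonsignificant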
 
KONY said:
Remember, in the real world, almost nothing is ever predicted by just one other variable.

Right, but that's exactly what the Brady people won't get through their heads. They want to assert that there is a direct causal relationship between (lack of) gun control laws and violent crime. Demonstrating that there is no statistically significant relationship between those two things does constitute evidence contrary to the model they are promoting.

Pearson's r does not assume that any criterion variable is entirely predicted by any one predictor, and correlational studies do have scientific value. If you determine that there is a strong correlation between smoking and lung cancer, then it is true that you haven't proven causation. But you have demonstrated that there is an association, which justifies further study to determine if a causal relationship exists.

I think you are misremembering the nature of moderators and interactions. An interaction is present when the strength of the association between a predictor and a criterion depends on the value of a third (moderator) variable. An example would be genetic predisposition to cancer moderating the association between smoking and lung cancer. There is still an association between the predictor and the criterion, even when moderation is present.

You're right about the complexity of causation in real life. But usually this works the other way around. What appears to be a strong association turns out to be much weaker when the extra variables are accounted for. The classic example of this is early studies linking coffee drinking to lung cancer. In fact, coffee drinking was associated with smoking, which does cause lung cancer. This is an example of a case where correlation "fools you" into thinking there is a causal relationship when in fact none exists.

But it doesn't really work the other way around. It is not the case that variables which have no statistical association turn out to be causally related due to the hidden effect of moderators or confounds. If "a" has a causal relationship to "b," then there is going to be a statistical association between a and b. But even if a is only indirectly causally related to b, through a relationship with a third variable "x," then there will still be a correlation between a and b. Therefore, the absence of correlation does constitute strong evidence against the existence of a causal relationship.
 
antsi said:
But it doesn't really work the other way around. It is not the case that variables which have no statistical association turn out to be causally related due to the hidden effect of moderators or confounds. If "a" has a causal relationship to "b," then there is going to be a statistical association between a and b. But even if a is only indirectly causally related to b, through a relationship with a third variable "x," then there will still be a correlation between a and b. Therefore, the absence of correlation does constitute strong evidence against the existence of a causal relationship.

Again, I will disagree. One way you can have a true causal relationship not show up in correlation is measurement error. If your variables are not measuring what they are supposed to measure (i.e., not valid) then you might get an entirely different correlation coefficient. Another way might be a low sample size where you lack the power to detect a significant relationship that does indeed exist. In the end, you're only as good as your measures.

As for the moderation example, I am thinking of main effects vs interaction in regression techniques. You can have nonsignificant main effects and still have a significant interaction effect. Moreover, if you do have a significant interaction, it is usually customary to ignore interpreting main effects. However, I do know some statisticians that advocate interpreting both.
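
A regression version of the same toy case (a sketch; assumes statsmodels is installed): fit y ~ x1 * x2 on pure-interaction data, and the main effects come out nonsignificant while the x1:x2 term is strongly significant.

Code:
# Sketch: OLS with an interaction term on pure-interaction data.
# Main effects ~0 and nonsignificant; the x1:x2 term is significant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "x1": rng.choice([-1.0, 1.0], size=n),
    "x2": rng.choice([-1.0, 1.0], size=n),
})
df["y"] = df.x1 * df.x2 + rng.normal(scale=0.5, size=n)

fit = smf.ols("y ~ x1 * x2", data=df).fit()  # expands to x1 + x2 + x1:x2
print(fit.summary())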
 
KONY said:
Again, I will disagree. One way you can have a true causal relationship not show up in correlation is measurement error. If your variables are not measuring what they are supposed to measure (i.e., not valid) then you might get an entirely different correlation coefficient. Another way might be a low sample size where you lack the power to detect a significant relationship that does indeed exist. In the end, you're only as good as your measures.

As for the moderation example, I am thinking of main effects vs interaction in regression techniques. You can have nonsignificant main effects and still have a significant interaction effect. Moreover, if you do have a significant interaction, it is usually customary to ignore interpreting main effects. However, I do know some statisticians that advocate interpreting both.

Okay, give me an example. What is an example of a causal relationship where there is no correlation?

And more to the point, in this example: you are arguing that the Bradys might be right, that gun control laws do decrease violent crime, despite the absence of a statistical association. How does your proposed model work? What are the other variables? What are the relationships between the variables?

Sample size and measurement error don't really play in this example. Or if they do, they play on my side.

A larger sample size allows one to detect smaller effects. I doubt the Bradyites would be pleased to accept the assertion that the effect of gun control laws on crime is so small that a larger sample than the United States is necessary to demonstrate it. If we are really talking about such a small effect size, then we have basically won our argument. In this case, a vanishingly small effect size is essentially the same thing as no relationship.
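
For a sense of scale, the standard Fisher-z approximation gives the sample size needed to detect a correlation of a given magnitude (a sketch; the formula is the usual one, and the r values are just illustrative):

Code:
# Sketch: approximate n needed to detect a correlation r at alpha = .05
# (two-sided) with 80% power, via the Fisher z transformation.
import math
from scipy.stats import norm

def n_needed(r, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

for r in (0.5, 0.3, 0.1, 0.05):
    print(f"r = {r}: n ~ {n_needed(r)}")
# tiny effects need samples far larger than 50 states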

And I think the FBI statistics are a valid enough measure for the purposes of this discussion. I'm sure there is some error in their statistics, but again, if the Bradyites are stuck arguing that the effect of the laws they propose is so small as to be swamped by the +/- of FBI crime stats, then again we're talking about an effect so small as to win the gun owners side of the argument.

You could perhaps argue that the Brady "grading" system has excessive instrumentation error - in this case, a total absence of construct validity - but actually, that's kind of the point I was trying to make in the first place. And again, I don't think the Bradyites would be thrilled with that explanation.
 
antsi said:
Okay, give me an example. What is an example of a causal relationship where there is no correlation?

And more to the point, in this example: you are arguing that the Bradys might be right, that gun control laws do decrease violent crime, despite the absence of a statistical association. How does your proposed model work? What are the other variables? What are the relationships between the variables?

Sample size and measurement error don't really play in this example. Or if they do, they play on my side.

Hmm, I just gave you some examples. If you want a concrete one, then let's do so ... In terms of measurement error, say you run an experiment whereby you were able to systematically control subject net caloric intake (calories consumed - calories burned) and measure their weight gain/loss. In reality, we do know that net caloric intake will have a causal relationship with weight loss. Thus, we fully expect very high positive correlations. However, if we use a flawed weight scale, we could get very different correlations ranging from significantly positive to significantly negative. This, of course, includes the possibility of no relationship. How so? Say that this scale is broken and records a subject's weight as 150 lbs no matter what their net caloric intake is. Thus, you now have NO correlation because you have NO variance in your DV as a function of your IV. Instead, you have a systematic measurement error effect (it happens in each weighing) due to a broken scale (flawed measure). What say you?
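
The broken-scale case is easy to show in a couple of lines (a sketch): with a constant reading, the DV has zero variance, and the correlation is simply undefined (NumPy returns nan).

Code:
# Sketch: a broken scale that always reads 150 lbs gives the DV zero
# variance, so its correlation with caloric intake is undefined (nan).
import numpy as np

rng = np.random.default_rng(3)
net_calories = rng.normal(0, 500, size=100)   # IV: net caloric intake
true_weight = 150 + 0.01 * net_calories       # real causal relationship
measured_weight = np.full(100, 150.0)         # broken scale: constant reading

print(np.corrcoef(net_calories, true_weight)[0, 1])      # exactly 1.0
print(np.corrcoef(net_calories, measured_weight)[0, 1])  # nan (zero variance)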

antsi said:
A larger sample size allows one to detect smaller effects. I doubt the Bradyites would be pleased to accept the assertion that the effect of gun control laws on crime is so small that a larger sample than the United States is necessary to demonstrate it. If we are really talking about such a small effect size, then we have basically won our argument. In this case, a vanishingly small effect size is essentially the same thing as no relationship.

And I think the FBI statistics are a valid enough measure for the purposes of this discussion. I'm sure there is some error in their statistics, but again, if the Bradyites are stuck arguing that the effect of the laws they propose is so small as to be swamped by the +/- of FBI crime stats, then again we're talking about an effect so small as to win the gun owners side of the argument.

You could perhaps argue that the Brady "grading" system has excessive instrumentation error - in this case, a total absence of construct validity - but actually, that's kind of the point I was trying to make in the first place. And again, I don't think the Bradyites would be thrilled with that explanation.

Remember I'm on your side! ;) ... But the gun-owning community wouldn't be happy with your findings either. They would want a significant positive correlation. This is the trend you found, only not significant.:)
 
Kony, yes the infamous Type I and Type II errors. The problem is many people don't understand Stats, sampling, error and confidence levels. This leads people to accept junk science because "numbers don't lie".
 
dpesec said:
Kony, yes the infamous Type I and Type II errors. The problem is many people don't understand Stats, sampling, error and confidence levels. This leads people to accept junk science because "numbers don't lie".

Very true! For instance, in the info Antsi gave us, we were not given a sample size nor an effect size coefficient (we know the direction, but not the magnitude). Both are necessary whenever you report the relationships between variables. I'm not saying this is junk science, just giving an example.
 
KONY said:
Very true! For instance, in the info Antsi gave us, we were not given a sample size nor an effect size coefficient (we know the direction, but not the magnitude). Both are necessary whenever you report the relationships between variables. I'm not saying this is junk science, just giving an example.

It is not conventional to report effect sizes for nonsignificant relationships. That would be like saying "there is no unicorn, and the unicorn weighs one thousand pounds." [I did admittedly somewhat violate this logic by citing the positive sign, a bit like saying "there is no unicorn, and even if there were, it would be white."]

I assumed that most people know how many states there are in the U.S., and I wasn't really intending to write this up to scientific publication standards. But, if it will make you happier, (n = 50). To really do the descriptives justice, I should probably plot a frequency distribution of the Brady rankings and the FBI violent crime stats, but honestly, I'm getting a little tired of this.

Your bringing in measurement error is a red herring here. Any statistical test always assumes reliability of measures. This is a limitation of both univariate and multivariate statistical models, not a limitation particular to Pearson correlations.

Earlier, you were talking about moderating variables accounting for a causal relationship between a noncorrelated predictor and criterion. (For what it's worth, that's what I was asking for an example of. I understand the concept of measurement error - the concept you are proposing that I'm having a hard time with is noncorrelated causation). Now, you're retreating into a discussion of measurement error and statistical power - which are limitations of any statistical argument, and as I pointed out, not really in play in this example.

I will restate my assertion: "Given reliable measures and appropriate statistical power (which are always assumptions of any statistical test), the absence of a correlation constitutes strong evidence against a causal relationship."

This does argue on the side of what gun owners usually assert - namely, that crime is caused by other factors than those addressed by gun control laws. I tabulated the crime data and the Brady rankings, and found no correlation. This is evidence against the assertions of the Bradyites, namely, that the laws they advocate will cause lower crime rates.
 
antsi said:
To really do the descriptives justice, I should probably plot a frequency distribution of the Brady rankings and the FBI violent crime stats, but honestly, I'm getting a little tired of this.

Antsi, hope I didn't offend. I was merely stating the dangers of interpreting statistics. I actually think that the results you have provided are quite interesting and would be worthy of further inquiry with covariates to strengthen the interpretation.

antsi said:
Your bringing in measurement error is a red herring here. Any statistical test always assumes reliability of measures. This is a limitation of both univariate and multivariate statistical models, not a limitation particular to Pearson correlations.

Earlier, you were talking about moderating variables accounting for a causal relationship between a noncorrelated predictor and criterion. (For what it's worth, that's what I was asking for an example of. I understand the concept of measurement error - the concept you are proposing that I'm having a hard time with is noncorrelated causation). Now, you're retreating into a discussion of measurement error and statistical power - which are limitations of any statistical argument, and as I pointed out, not really in play in this example.

I will restate my assertion: "Given reliable measures and appropriate statistical power (which are always assumptions of any statistical test), the absence of a correlation constitutes strong evidence against a causal relationship."

OK, sir. I know you are getting tired of all of this, so I will submit a final attempt at explaining a statistical situation where one could have causation with a low correlation. Take a situation where one has a criterion (d) that is caused by a distal variable (a), with the effect mediated by two more proximal variables (b & c). Thus, our model would look like this:

a -> b -> c -> d

Thus, (a) causes (d) via its more proximal impact on variables (b) and (c). This is a mediated relationship. Now, let's say that each of the links above represents a significant correlation of r=0.3. What is the expected correlation between predictor variable (a) and criterion (d)?
 
KONY said:
a -> b -> c -> d

Thus, (a) causes (d) via its more proximal impact on variables (b) and (c). This is a mediated relationship. Now, let's say that each of the links above represents a significant correlation of r=0.3. What is the expected correlation between predictor variable (a) and criterion (d)?

In your example the correlation coefficient between a and d would be whatever .3 x .3 x .3 is (i.e., small), but there would still be a correlation there. The ability to detect this correlation would depend on the effect size and the sample size.

Again, I don't think the Brady people would be happy if their model of the link between guns and crime wound up looking like that. They want something along the lines of a -> b. That their view is oversimplistic at best and most likely downright wrong is something we both seem to agree on.

I really wasn't meaning to make an iron-clad statistical argument with this project - I just did it on a lark as a way to quantitate the silliness of the Bradyites (which probably defies calculation anyway).
 
antsi said:
In your example the correlation coefficient between a and d would be whatever .3 x .3 x .3 is (i.e., small), but there would still be a correlation there. The ability to detect this correlation would depend on the effect size and the sample size.

Correct. That would put the correlation at 0.027. If you had a large enough sample size, you would likely find a significant correlation, so it would be more appropriate to judge practical significance via your effect size coefficient.
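
You can check that arithmetic by simulation (a sketch with standardized simulated variables): build the chain so each adjacent link has r = 0.3, and corr(a, d) comes out near 0.3^3 = 0.027.

Code:
# Sketch: simulate the mediation chain a -> b -> c -> d with each
# adjacent link at r = 0.3; the implied corr(a, d) is 0.3**3 = 0.027.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

a = rng.normal(size=n)
b = 0.3 * a + np.sqrt(1 - 0.3**2) * rng.normal(size=n)  # corr(a, b) = 0.3
c = 0.3 * b + np.sqrt(1 - 0.3**2) * rng.normal(size=n)  # corr(b, c) = 0.3
d = 0.3 * c + np.sqrt(1 - 0.3**2) * rng.normal(size=n)  # corr(c, d) = 0.3

print(np.corrcoef(a, d)[0, 1])  # ~0.027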

antsi said:
Again, I don't think the Brady people would be happy if their model of the link between guns and crime wound up looking like that. They want something along the lines of a -> b. That their view is oversimplistic at best and most likely downright wrong is something we both seem to agree on.

I agree. I still maintain that these data are useful, though, and as more complex models are drawn, their usefulness will increase. Thanks for posting this.

antsi said:
I just did it on a lark as a way to quantitate the silliness of the Bradyites (which probably defies calculation anyway).

+1! :)
 
Attempting to correlate facts is a logic exercise that can and often does become misleading. For example:
Fact: The city of Minneapolis has more gourmet coffee shops than the city of St. Paul.
Fact: The city of Minneapolis has a higher murder rate than the city of St. Paul.
There is no causal relationship between the two facts, yet for the sake of argument you could use them to promote the cause of closing gourmet coffee shops in Minneapolis.
I am leery of statistical values when presented to support either side of an argument, due in most part to the "correlation" of two unrelated facts.
Robert Heinlein said, "There are three kinds of lies: white lies, damn lies, and statistics."
 