September Surprise:

Industry Group Inducted Into the Junk Science Hall of Shame!



Over the last several months, I've been beating the drum about how the U.S. Environmental Protection Agency has (surreptitiously) proposed to delete statistical significance as a required criterion for determining cause-and-effect relationships from its cancer risk assessment guidelines.

The public comment period on the EPA proposal has closed, and here's where we stand. And for one major industry group, all I can say is shame... shame... shame...

Before going into that, however, a chronological update on this issue.

On May 6, 1996, EPA held a dog-and-pony show about the proposed cancer risk assessment guidelines. During a question-and-answer period, I asked Jeanette Wiltse, EPA's point person for the proposed guidelines, why statistical significance had been deleted as a required criterion. She flatly denied that it had been deleted.

Soon thereafter, I sent William Farland, director of EPA's National Center for Environmental Assessment (and Wiltse's boss), a letter with my analysis, requesting clarification on this issue.

On May 7, 1996, Investor's Business Daily printed my op-ed on this issue, titled "EPA's Power Grab." The article described how, by deleting statistical significance as a required criterion, EPA was in effect expanding its discretion to label whatever it wanted as cancer-causing, regardless of the science.

On July 17, 1996, the chairmen of two key Congressional committees wrote to EPA requesting clarification on whether statistical significance had been deleted in the proposed guidelines.

On August 8, 1996, The Wall Street Journal printed another op-ed on this issue, entitled "The EPA's Houdini Act." The article described how EPA was escaping from the shackles of good science without anyone being wise to its trick.

On August 12, 1996, Robert J. Huggett, assistant administrator for EPA's Office of Research and Development (and Farland's boss), responded to the congressional letter. Huggett flatly denied that statistical significance had been deleted. He also pointed out that statistical significance was discussed in sections 2.2.1 and 2.6.1 of the proposed guidelines. (Discussed? Yes. Included as a required criterion? NO!)

On August 21, 1996, the public comment period on the proposed guidelines closed.

On August 29, 1996, Farland responded to my May letter. Not only did Farland deny that statistical significance had been deleted from the proposed guidelines, he denied that it was even a requirement in the current guidelines!

When the public comment period closed, I went to the EPA rulemaking docket to see what other commenters thought about the statistical significance issue. Was I the only one who saw this? Was I the only one who cared? Was I wrong about this issue?

Hardly. Here are some sample comments.

The American Forest and Paper Association stated:

We urge EPA to require statistical significance as a prerequisite to use of a study to infer causation.... While the guidelines mention statistical significance in a number of places, they do not firmly establish that the elimination of random chance as an explanation is imperative before drawing conclusions about causation.

The Utility Health Sciences Group stated:

These criteria do not address the role of statistical significance in evaluating epidemiologic data... The omission of the statistical significance standard...would allow EPA to make a finding of [causality] on the basis of studies that do not meet the accepted standard of statistical significance used by scientists throughout the world for cancer research. Any such findings will lack scientific credibility and public credibility.

The Chlorine Chemistry Council stated:

The addition of statistical significance as a further criteria for causality is recommended.

The Edison Electric Institute stated:

The criteria for determining carcinogenicity from epidemiologic data are unclear as to the importance of statistical significance in evaluating epidemiologic data.

The American Automobile Manufacturers Association stated:

The proposed guidelines are silent on [statistical significance in ascertaining] if there is any significant association between exposure and effects.

The American Water Works Association stated:

[T]here is an issue concerning treatment of non-statistically significant increase in incidence. There is a danger of even reporting increases where there is a lack of significance. The role of significance levels is to aid in hypothesis testing. If the Agency says that a prescribed confidence level has not been reached, then this means the null hypothesis (no increase) cannot be rejected. So there is not any evidence of an increase, or at least the evidence is not sufficiently reliable to use. This raises the question of whether it should be reported as an increase at all. The Agency should state clearly how it intends to treat instances of non-statistically significant increases.

Even the state of California's EPA (Cal-EPA), hardly an industry group, recommended that EPA discuss in detail how statistical tests are to be used.

Yes, contrary to EPA's symphony in obfuscation, there is a legitimate issue here.
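
The AWWA comment lays out the textbook logic. To make it concrete, here is a minimal sketch, using hypothetical incidence counts of my own invention (a standard two-proportion test, not anything prescribed by EPA or by any commenter):

```python
# One-sided two-proportion z-test: does the exposed group show a genuine
# increase in incidence over controls, or can chance alone explain it?
# The counts below are hypothetical, purely for illustration.
from scipy.stats import norm

def increase_p_value(cases_exposed, n_exposed, cases_control, n_control):
    """One-sided p-value for the null hypothesis of no increase."""
    p1 = cases_exposed / n_exposed
    p2 = cases_control / n_control
    pooled = (cases_exposed + cases_control) / (n_exposed + n_control)
    se = (pooled * (1 - pooled) * (1 / n_exposed + 1 / n_control)) ** 0.5
    return 1 - norm.cdf((p1 - p2) / se)

# Hypothetical bioassay: 12 tumors among 500 exposed animals vs. 7 among 500 controls.
p = increase_p_value(12, 500, 7, 500)
print(f"one-sided p = {p:.3f}")  # about 0.12, well above the usual 0.05 cutoff
```

Because the null hypothesis of no increase cannot be rejected, the raw difference (12 versus 7) is not reliable evidence of an increase at all. That is precisely the danger AWWA warns about.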

Now for the sad part of this story.

The American Industrial Health Council submitted the following comment [my point-by-point comments are in brackets]:

AIHC supports the use of the Bradford Hill criteria in assessing causality of an association. [Great! The Bradford Hill criteria make statistical significance a prerequisite for a finding of causality.]

AIHC acknowledges that the Agency has received comments regarding whether statistical significance should be another criterion. [Yes, they have!]

AIHC notes that EPA, in discussing the Hill criteria for "strength (magnitude) of the evidence," has already added a passage on the precision of the risk estimate, which in turn addresses the same issue of statistical significance. [False. "Precision" is not statistical significance. Never has been. Never will be. Statistical significance is about ruling out random chance as the cause of the data: is the data reliable enough to use? Once you're sure the data has not been caused by chance, THEN you can move on to worrying about its precision, among other issues. A numerical illustration follows after these comments.]

AIHC agrees with EPA that large, precise estimates are more suggestive of a causal association than large, imprecise estimates. [Of course. But what does this have to do with statistical significance?]

AIHC does not regard statistical significance as a prerequisite for judging whether a causal relationship exists. [Didn't they just say that they were in favor of the Bradford Hill criteria — where statistical significance IS a prerequisite?]

Furthermore, statistical significance is judged on a study-by-study basis, while causality is judged based on an entire body of evidence. AIHC does regard statistical significance testing, when applied properly, as appropriate in determining the role of chance for a given finding in a study. [In the real world, no one (not even EPA) makes a determination of causality without what they feel is strong epidemiologic evidence. Strong epidemiologic evidence, to date, usually has consisted of one or more statistically significant studies. Why would anyone consider nonsignificant (unreliable) epidemiologic data as part of a "body of evidence"?]
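
As promised above, here is a numerical illustration of why precision is not statistical significance, again with hypothetical numbers of my own:

```python
# Two hypothetical studies report the identical relative risk (RR = 1.5).
# Precision is the width of the confidence interval; significance is
# whether that interval excludes 1.0 (i.e., whether chance is ruled out).
import math

def relative_risk_ci(cases_exp, n_exp, cases_ctl, n_ctl, z=1.96):
    """Relative risk with a 95% confidence interval (log-normal approximation)."""
    rr = (cases_exp / n_exp) / (cases_ctl / n_ctl)
    se_log = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_ctl - 1 / n_ctl)
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

small = relative_risk_ci(15, 1000, 10, 1000)       # RR 1.5, CI roughly (0.68, 3.32)
large = relative_risk_ci(150, 10000, 100, 10000)   # RR 1.5, CI roughly (1.17, 1.93)
print("small study: RR = %.2f, 95%% CI (%.2f, %.2f)" % small)
print("large study: RR = %.2f, 95%% CI (%.2f, %.2f)" % large)
```

The small study's interval spans 1.0, so chance has not been ruled out; the large study's interval excludes 1.0, so the finding is statistically significant. Only after clearing that bar does it make sense to ask how precise the estimate is.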

Although "it ain't over 'til it's over," I'm afraid that AIHC has just aided and abetted EPA in making statistical significance sleep with the fishes, thereby making it possible for junk science to flourish.

For this deed, AIHC has achieved Superstardom in the world of junk science. Visit The Junk Science Hall of Shame to see the September 1996 inductee.

Material presented on this home page constitutes the opinion of the author.



Copyright © 1996 Steven J. Milloy. All rights reserved. Site developed and hosted by WestLake Solutions, Inc.
