Public health, social science, and the scientific method (Part II)

Journal/Website: 
Surgical Neurology
Article Type: 
Article
Published Date: 
Thursday, March 1, 2007
Source: 
http://www.sciencedirect.com/science/article/pii/S0090301906010792

In Part I, we discussed in general terms some of the shortcomings I encountered in many of the grant proposals submitted during my stint as a grant reviewer for the Centers for Disease Control and Prevention's National Center for Injury Prevention and Control (NCIPC) in the years 2002-2004 [6]. There is no reason to believe that these epidemiologic and scientific shortcomings have been addressed and corrected in subsequent years. And from the outset, let me state that the problems do not lie with the methodology of the peer review process, but rather with the misuse of statistics and the lack of science in many, if not most, of the grant proposals. The methodology of grant review calls for the evaluation of research aims and long-term objectives, significance and originality, as well as research design. These are appropriate criteria, but perhaps, for improvement, an additional criterion should be added: Show me the science!

In Part I, we also stressed that statistics are not science and cannot prove cause-and-effect relationships [6]. Yet statistics are a very useful tool of science that, when properly applied, establish correlations in disease processes. And we were highly critical that such simple statistical tools as p-values were frequently missing from "scientific" grant proposals submitted for funding, although p-values are important in establishing whether scientific findings reach statistical significance or are due to chance. We also discussed relative risks (RR) and confidence intervals (CI) as essential components of epidemiologic research, and we described shortcomings in strategic long-term proposals in agenda-driven public (social) health research.

Many of the proposals submitted in response to the public health establishment's call for research papers under Healthy People 2010 have more to do with "social justice," expanding socialism, and perpetuating the welfare state than with making genuine advances in medicine and improving the health of humanity. In some cases, these grant proposals use or misuse statistics, collect data of dubious validity, and even employ "legal strategies" to reach social goals and formulate health care policies that the public health researchers believe may achieve "social justice." Many of these studies aim at nothing less than the gradual radicalization of society using "health" as a vehicle [2,3,7,8]. Healthy People 2010, in short, is a veritable bureaucrat's dream, an overflowing cornucopia of public (social) health goals geared toward the social and economic reconstruction of American society along socialistic lines.

I also mentioned in Part I of this paper that the reader might be surprised to learn that I found probably as many lawyers and social workers as practicing physicians applying for public health "scientific" research grants! No wonder the science is lacking in many of these proposals, and, frankly, the peer review process has been too lenient.

Before proceeding, let us once again recall, as we did in Part I, the words of Bruce G. Charlton, M.D., of the University of Newcastle upon Tyne in his magnificent article "Statistical Malpractice." His words, although excerpted, are worth repeating:

Science versus Statistics:

There is a worrying trend in academic medicine, which equates statistics with science, and sophistication in quantitative procedures with research excellence. The corollary of this trend is a tendency to look for answers to medical problems from people with expertise in mathematical manipulation and information technology, rather than from people with an understanding of disease and its causes.

Epidemiology [is a] main culprit, because statistical malpractice typically occurs when complex analytical techniques are combined with large data sets… Indeed, the better the science, the less the need for complex analysis, and big databases are a sign not of rigor but of poor control. Basic scientists often quip that if statistics are needed, you should go back and do a better experiment… Science is concerned with causes but statistics is concerned with correlations.

Minimization of bias is a matter of controlling and excluding extraneous causes from observation and experiment so that the cause of interest may be isolated and characterized… Maximization of precision, on the other hand, is the attempt to reveal the true magnitude of a variable which is being obscured by the "noise" [from the] limitations of measurement.

Science by Sleight-of-Hand

The root of most instances of statistical malpractice is the breaking of mathematical neutrality and the introduction of causal assumptions into the analysis without scientific grounds…

Practicing Science without a License

…Medicine has been deluged with more or less uninterpretable "answers" generated by heavyweight statistics operating on big databases of dubious validity. Such numerical manipulations cannot, in principle, do the work of hypothesis testing. Statistical analysis has expanded beyond its legitimate realm of activity… From the standpoint of medicine, this is a mistake: statistics must be subordinate to the requirements of science, because human life and disease are consequences of a historical process of evolution which is not accessible to abstract mathematical analysis [4].

To reiterate: relative risks (RR) can be useful when the value is well above 2.0, but, as previously stated, RRs between 0.5 and 2.0 should not be considered significant in statistical studies because this range strongly suggests that there is no real difference in the rate of injuries or interventions between the two study populations. Yet this basic rule is routinely ignored in pilot studies and citations of previous research.

As with strong p-values, I have found that in the rare event that a relative risk is significant, public health researchers invariably mention it. If it is not (i.e., if the value falls between 0.5 and 2.0), it goes unmentioned, whether in the present research, in previous work, or in the pilot studies they cite to bolster their grant proposals, proposals which they then use as stepping stones to apply for more grant money and further investigation.

Again, disclosing the p-value reveals whether the statistical difference found between the two populations under study may be due to chance. It is becoming more and more infrequent for public health researchers to report p-values in their grant proposals, particularly when the p-value exceeds 0.05 and thus fails to reach the 95 percent level of confidence. Approval, nevertheless, is frequently achieved, further projects are funded, and the money keeps rolling in.
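To make the arithmetic concrete, the p-value for a difference in injury rates between two groups can be computed with a simple two-proportion z-test. The counts below are hypothetical, chosen only to illustrate the calculation; the test relies on the normal approximation, which assumes reasonably large samples.

```python
import math

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for the difference between two proportions
    (normal approximation with a pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # Two-sided tail probability under the standard normal distribution
    return math.erfc(z / math.sqrt(2))

# Hypothetical data: 45 injuries among 200 exposed, 30 among 200 unexposed
p = two_proportion_p_value(45, 200, 30, 200)
print(f"p = {p:.4f}")  # just above 0.05: not significant at the 95% level
```

A proposal that omits this one number conceals exactly the case illustrated here: an apparent difference (22.5 vs. 15 percent) that fails to clear the 95 percent level of confidence.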

To further increase the chances of approval in the peer review process, public health investigators now all but routinely ignore the basic traditional rules of epidemiology, such as RR, the p-value, and the CI test. These are tough tests that would disqualify many low-caliber research proposals, so it is no wonder that in the competitive world of grant funding many epidemiologists claim they are not needed. If high p-values (p > 0.05), RRs between 0.5 and 2.0, and confidence intervals too wide for comfort were disclosed, funding for health (social) programs might not be granted. Instead, epidemiologists and other public health researchers parade complex numerical computer manipulations - e.g., regression models, stratified multivariate analysis, etc. - designed to eliminate confounding variables. Nevertheless, confounding variables persist and junk science is the result [11].

Here again is Dr. Charlton writing on the subject of statistical elimination of confounding variables: [Science by sleight of hand] commonly happens when statistical adjustments are performed to remove the effects of confounding variables. These are manoeuvres by which data sets are recalculated (e.g., by stratified or multivariate analysis) in an attempt to eliminate the consequences of uncontrolled "interfering" variables which distort the causal relationship under study… There are, however, no statistical rules by which confounders can be identified, and the process of adjustment involves making quantitative causal assumptions based upon secondary analysis of the data base… but is illegitimate as part of a scientific enquiry because it amounts to a tacit attempt to test two hypotheses using only a single set of observations [4].
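Charlton's point can be illustrated with a small numerical example. The counts below are hypothetical and deliberately constructed: within each stratum of a confounder, the exposed and unexposed groups have identical risks (stratum-specific RR = 1.0), yet the crude, unstratified RR suggests a substantial effect. Stratified (Mantel-Haenszel) adjustment removes the artifact here only because the right confounder happened to be known and measured; supplying that causal assumption is exactly what the statistics themselves cannot do.

```python
def risk_ratio(cases_exp, n_exp, cases_unexp, n_unexp):
    """Simple relative risk from cumulative-incidence counts."""
    return (cases_exp / n_exp) / (cases_unexp / n_unexp)

def mantel_haenszel_rr(strata):
    """Mantel-Haenszel summary RR across strata of a confounder.
    Each stratum is (cases_exp, n_exp, cases_unexp, n_unexp)."""
    num = sum(a * n0 / (n1 + n0) for a, n1, b, n0 in strata)
    den = sum(b * n1 / (n1 + n0) for a, n1, b, n0 in strata)
    return num / den

# Hypothetical strata of a confounder (a high-risk and a low-risk group);
# in both strata the risk is identical for exposed and unexposed subjects.
strata = [
    (40, 100, 8, 20),   # high-risk stratum: 40% risk in both groups
    (2, 20, 10, 100),   # low-risk stratum: 10% risk in both groups
]

crude = risk_ratio(40 + 2, 100 + 20, 8 + 10, 20 + 100)
adjusted = mantel_haenszel_rr(strata)
print(f"crude RR = {crude:.2f}")      # ~2.33: a spurious association
print(f"adjusted RR = {adjusted:.2f}")  # 1.00: no effect once stratified
```

The same adjustment machinery applied to an unmeasured or wrongly chosen "confounder" would just as mechanically produce a different, equally confident answer.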

The scientific process, like Koch's Postulates of Pathogenicity, indeed calls for a much simpler methodology: (1) Observe a natural phenomenon. (2) Develop a hypothesis as a tentative explanation of what is occurring. (3) Test the validity of the hypothesis by performing a laboratory study and collecting pertinent data; experimental trials (with randomization, control groups, etc.) are best. (4) Refine or reject the hypothesis based on the data collected in the experiment or trial. (5) Retest the refined hypothesis until it explains the phenomenon. A hypothesis that becomes generally accepted as explaining the phenomenon becomes a theory [11].

Let us return to the nuts and bolts of statistics and remember: for a relative risk greater than 1.0 to be statistically meaningful, the lower bound of the confidence interval must itself be greater than 1.0 (an RR of 1.0 means no difference at all between the groups). The inclusion of the value 1.0 within the interval invalidates the 95 percent level of confidence.

Likewise, for a relative risk less than 1.0 to satisfy the 95 percent level of confidence, the entire interval must lie below 1.0; that is, the upper bound must be less than 1.0. Again, the inclusion of the value 1.0 or higher invalidates the 95 percent statistical confidence in the study. (Next time you read a scientific study, check the statistics and make sure the p-values, RR, and CI are disclosed in the scientific discussion.) [11]
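These rules can be checked mechanically. The sketch below computes a relative risk and its 95 percent confidence interval on the log scale (a standard large-sample approximation), then applies both tests from the text: whether the interval excludes 1.0, and whether the RR itself falls inside the inconclusive 0.5-2.0 band. The counts are hypothetical.

```python
import math

def rr_with_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Relative risk with a 95% CI (log-scale normal approximation)."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    se_log = math.sqrt(1 / cases_exp - 1 / n_exp + 1 / cases_unexp - 1 / n_unexp)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# Hypothetical study: 45/200 injured among the exposed, 30/200 among the unexposed
rr, lo, hi = rr_with_ci(45, 200, 30, 200)
print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

# Rule 1: a CI that includes 1.0 invalidates the 95% level of confidence
significant = not (lo <= 1.0 <= hi)
# Rule 2: an RR between 0.5 and 2.0 should be viewed as inconclusive
inconclusive_band = 0.5 <= rr <= 2.0
print(f"excludes 1.0: {significant}; inside 0.5-2.0 band: {inconclusive_band}")
```

Here the RR of 1.5 fails both tests at once: the interval straddles 1.0 and the point estimate sits squarely in the inconclusive band, which is precisely the kind of result that tends to go undisclosed.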

Unfortunately, the public health establishment has gone along with relaxing these inconvenient basic rules of statistics in order to continue justifying studies of little scientific validity but great social engineering potential. After all, the money keeps rolling in, year after year, for further research.

It is not surprising that, even though Injury Research Grant Review Committee (IRGRC; now the Initial Review Group [IRG]) members are asked to review the proposals for "scientific merit," little attention is paid in these proposals to the scientific method. They begin with standard methodologies that rely largely on population-based epidemiology, expand databases of dubious validity, and apply complex statistical analyses subject to random errors, biases, and numerical manipulations that are not mathematically neutral, and they end with "health" goals. More attention, in fact, is paid to social results and preordained, result-oriented, or feel-good "research" than to real, hard science. Science becomes a casualty in the politicized research war.

1. Establishing Public Health Consensus

As of 2004, although the standing Initial Review Group (IRG) is composed of 21 members, there are so many grant proposals and so many members who apparently do not attend the meetings that the CDC contracts for its own additional reviewers, "ad hoc" IRG consultant members, to "assist" with the grant review process. Many of these consultants are former IRG members revolving back to the CDC who tend to be sympathetic to the methodology, as well as the social and political goals, of the entrenched public health establishment. Thus, there is an underlying conflict of interest intrinsic to this process. Moreover, many of these ad hoc members are statisticians, bureaucrats, epidemiologists, or public health personnel themselves, who have a direct or an indirect vested interest in approving the statistical studies of their colleagues rather than promoting medical science. Many of them are epidemiologists who believe that statistics are science and can prove cause-and-effect relationships without the corroborating findings of clinical medicine. In other words, Koch's Postulates of Pathogenicity, proving whether an organism causes a disease, have been thrown out the window, and with them, of course, the required steps of the scientific method.

Many public health researchers in this milieu have come to believe erroneously that pure statistics are medical science, which they are not. Statistics are a helpful tool of science, but they do not prove cause-and-effect relationships. Public health, as many workers have come to accept, has become a confusing mesh of social programs bridging science and politics. Science establishes scientific facts; statistics establish correlations. It is no wonder, then, that these public health proposals are fraught with methodological errors and are subject to confounding variables that resist elimination despite complex numerical computer manipulations. Moreover, biases enter the computations that cannot be corrected because of poor data collection (data dredging), despite so-called mathematically neutral statistical "corrections." Someone had to step forward in this milieu and proclaim that the emperor has no clothes: science is lacking in many of these public (social) health proposals.

Many IRG members and consultants have become entrenched bureaucrats in their ivory towers, who would rather go along to get along with their public health paperwork than serve as clinicians in the trenches of medical care delivery.

The Department of Health and Human Services (HHS) should appoint to the Initial Review Group (IRG) more clinicians, particularly practicing physicians, and more hard biological scientists - microbiologists, biochemists, physiologists, pathologists, anatomists - and fewer social "scientists," administrators, and bureaucrats. Although I met a few physicians, I did not meet a basic scientist in any of the above specialties, not one scientist involved in genuine research in the basic biological sciences! This is a policy that must be established from above, from HHS, and implemented by the CDC as soon as possible.

Of course there are exceptions. I met many dedicated academicians and other fellow reviewers with whom I had the pleasure of working, both IRG members and consultants. Take, for instance, Dr. Daniel O'Leary of the State University of New York, Dr. James F. Malec of the Mayo Clinic, and Dr. Patricia Findley of Rutgers University, who consistently placed science above politics in the panels' scientific discussions and conferences. There were others. But in the end, most reviewers readjust their views if they wish to be reappointed as consultants. They must ultimately conform to the basic work milieu of the NCIPC and work to establish the much desired public health consensus in the approval process.

Thus, the influence of the CDC staff in the review process should be reduced by curtailing or eliminating the need for ad hoc members, who are appointed at the discretion of that staff. These members are not only largely public health and social scientists, rather than basic biological scientists or clinicians, but are contracted or appointed by the CDC and outnumber the standing IRG members at least three or four to one! This proposed reform would give HHS the opportunity to bring much-needed new blood and fresh ideas into the program.

Furthermore, there is in the public health milieu a vested interest in promoting unreliable population-based statistical proposals because they lend themselves to the social study of spousal abuse, domestic violence, and adolescent crime that the NCIPC of the CDC is so fond of funding. And so, unfortunately, in the public health (social) research arena there is an overwhelming and disproportionate number of observational studies rather than experimental investigations. Clinical trials (i.e., randomized, controlled, prospective trials), the most reliable of scientific research investigations, are few and far between. The vast majority of proposals are observational studies, which include, in decreasing order of reliability, cohort studies (largely prospective but uncontrolled), case-control studies (retrospective and uncontrolled), and ecologic studies (population-based). Ecologic studies are so prone to error and so utterly unreliable as to have given rise to the epidemiologic term ecologic fallacy, a fallacy to which, in fact, all population-based studies are subject [2,11].

There is also subliminal peer pressure to be lenient in the grant review process in accepting these proposals from colleagues in the field because, although specific conflict-of-interest forms are signed and re-signed, many of the reviewers themselves receive federal money, and their own turn for review and approval will come sooner or later. To sum it up: CDC grant review committees, whether of ad hoc or standing members, should include more clinicians and more hard basic biological scientists who are not receiving federal money.

Another problem is the intrusive role played by a few CDC staffers and liaison officers working for the CDC in conjunction with the various injury control centers and schools of public health supported by the CDC. One liaison officer I worked with exerted considerable pressure on committee members to approve a center with which he had a liaison. This happened specifically at a major center that I personally inspected with a team of reviewers. When the remote possibility of the center losing its funding came up, he stated that this "is an excellent IRC (Injury Research Center)" and that "a score of 1.5 was necessary to assure funding of that center." That may be one of the reasons that center received one of the highest scores of all the centers reviewed, despite the fact that the entire panel initially considered all of the large and small proposals (except for one) mediocre in methodology and lacking innovation and originality. The CDC staffer was supposed to be an observer and not discuss merit or budgeting (funding) with us. His job was only to make sure that we, the actual reviewers, considered and discussed the "scientific merits" of the program for referral or not to the entire committee.

At the same time, I want to single out for praise among the CDC staff Gwendolyn Haile Cattledge, PhD, Deputy Associate Director for Science/Scientific Review Administrator at the CDC in 2004, who was the embodiment of professionalism and competence throughout the time I worked with her. Likewise, the new Director of the NCIPC, Ileana Arias, PhD, whom I met only briefly before her appointment, should hold some promise for the future.

Next, I would like to make the following observation: congressional prohibition of the CDC's use of funds for political lobbying and for gun (control) research has been effective in reducing politicized, result-oriented gun research in the area of violence prevention. Yet the temptation to resume this area of pseudoscientific, politically oriented research is simmering just under the surface. Therefore, HHS should remain vigilant in this area - i.e., making sure this prohibition, wisely ordained by Congress in 1996, is obeyed and followed so as to preserve the integrity of scientific research [2,3,7,8]. This vigilance on the part of HHS is important. When I specifically asked the director of a prominent IRC whether her center planned to do gun [control] research in the future, she stated that "no decision had been made but that they considered it a legitimate area of research and could very well resume doing so in the future."

On the other hand, I have no direct criticisms of the largely good work the CDC has done in the proper application of epidemiology to medical and scientific research in the area of prevention and control of infectious and contagious diseases. That work should continue, particularly with the potential threat of bioterrorism looming in the foreseeable future.[1]

To this day, Congress has expanded the budget of the National Center for Injury Prevention and Control of the CDC and continues to commit increasing amounts of taxpayers' money to more injury and "violence" research, a large part of which is of dubious validity and merely supports a burgeoning bureaucracy expanding and duplicating itself into other areas of research, health policy, and politics. With all this available funding, it is no wonder that more and more young people are going into the paperwork fields of epidemiology and the social "sciences" rather than into true biologic scientific research (i.e., the basic sciences: microbiology, biochemistry, physiology, etc.) and direct, clinical patient care, where they will more certainly be needed by an increasingly growing and aging population.

If this trend toward the public (social) health field continues, we will have more young people attending law schools, social science programs, and schools of public health than nursing and medical schools. They will not be doing basic research or clinical medicine but will be working on purely epidemiologic studies, carrying out armchair multivariate analyses with complex numerical manipulations and regression models, rather than preparing themselves for the challenges of basic (biological) science research and attending medical and nursing schools, where there has been a perpetual need for young applicants for many years. Yes, biological scientists, nurses, and physicians will be desperately needed by the rapidly aging population of Americans, the baby boomers, who are already retiring and will soon swell the ranks of septuagenarians and octogenarians. We need more nurses, clinicians, physicians at the bedside, and public health workers in the arena of medical and health care delivery, rather than solely at the computer terminals!

Frankly, money is being squandered on politicized, preconceived public health research directed toward collectivist agendas, while the government (with the insurance companies following suit) keeps cutting reimbursement for the physicians and nurses who are actually ministering to patients with real, individual medical problems. It is not only a question of squandering money and misallocating finite health care resources but also, in the end, a question of population-based ethics versus the reinstitution of the individual-based ethics of Hippocrates [5,9,10].

Again, a contributing factor to the growing problem of pseudoscientific research is, frankly, too much available money allocated to a narrow area of research (injury prevention and promotion of the Healthy People 2010 agenda). I believe that cost-ineffective research has been approved for the maintenance and expansion of budgets and, in some cases, the propagation of social welfare-type programs supported by junk science.

Bluntly speaking, if I may be so bold, radical surgery is needed from the top to end politicized public health "injury research" (including the wealth redistribution, socialistic goals of Healthy People 2010) and return to the field of science and genuine scientific investigation. Taxpayers' money should be transferred from injury prevention and the social sciences masquerading as public health to the genuine good work that the CDC is doing in the field of infectious disease control and prevention as originally intended. There are frankly too many social "scientists" and researchers milking the government cow at the expense of the over-burdened taxpaying public.

References

1. Alibek K, Handelman S. Biohazard. New York (NY): A Delta Book (a division of Random House); 2000. p. 270-292.

2. Arnett JC. Book review: Junk Science Judo by Steven J. Milloy. Medical Sentinel 2002;7(4):134-135.

3. Bennett JT, DiLorenzo TJ. From Pathology to Politics - Public Health in America. New Brunswick (NJ): Transaction Publishers; 2000. p. 21-115.

4. Charlton BG. Statistical Malpractice. Journal of the Royal College of Physicians, March-April 1996:112-114.

5. Faria MA. Managed care - corporate socialized medicine. Medical Sentinel 1998;3(2):45-46. www.haciendapub.com.

6. Faria MA. Part 1: Public Health, Social Science, and the Scientific Method. Surgical Neurology 2007;67(2):211-214.

7. Faria MA. Public Health - From Science to Politics. Medical Sentinel 2001;6(2):46-49. www.haciendapub.com.

8. Faria MA. The perversion of science and medicine (Parts I-IV). Medical Sentinel 1997;2(2):46-53 and Medical Sentinel 1997;2(3):81-86. www.haciendapub.com.

9. Faria MA. The transformation of medical ethics through time (Part I): medical oaths and statist controls. Medical Sentinel 1998;3(1):19-24. www.haciendapub.com.

10. Faria MA. The transformation of medical ethics through time (Part II): medical ethics and organized medicine. Medical Sentinel 1998;3(2):53-56. www.haciendapub.com.

11. Milloy SJ. Junk Science Judo - Self-Defense Against Health Scares and Scams. Washington (DC): Cato Institute; 2001. p. 41-114. http://www.cato.org.

Miguel A. Faria Jr, MD
Milledgeville, GA 31061 USA
Clinical Professor of Surgery (Neurosurgery, ret.) and Adjunct Professor of Medical History (ret.), Mercer University School of Medicine; Editor emeritus, Medical Sentinel (now the Journal of American Physicians and Surgeons) of the Association of American Physicians and Surgeons (AAPS); Author, Vandals at the Gates of Medicine: Historic Perspective on the Battle Over Health Care Reform (1995); Medical Warrior: Fighting Corporate Socialized Medicine (1997); and Cuba in Revolution: Escape From a Lost Paradise (2002); Member, Editorial Board of Surgical Neurology, An International Journal of Clinical Neurosurgery

(This article was published in Surgical Neurology 2007;67(3):318-322.)
