
Public health, social science, and the scientific method (Part I)

1. Introduction

During the years 2002-2004, I served on the Injury Research Grant Review Committee (IRGRC, more recently the “Initial Review Group”) of the Centers for Disease Control and Prevention (CDC) — more specifically, its National Center for Injury Prevention and Control (NCIPC).

I participated not only in the major meetings in Atlanta but also in on-site reviews and inspections of Injury Research Centers, reviewing thousands of pages of grant applications requesting funding for medical and public health research proposals. I have deliberately let some time elapse before writing this analysis so that I could take a step back and write from a distance, objectively.

I should also inform the reader that I must write in generalities, for CDC rules prohibit me from disclosing specific details of any grant proposal requesting funding, or from discussing the content of the review of any specific grant application in which I participated or that came to my knowledge while working at the CDC in the capacity of grant reviewer. This secrecy seems, in retrospect, even more stringent than the rules that were in place at Los Alamos during the Manhattan Project! So my discussion will necessarily lack specific examples to illustrate the thread of my arguments. Nevertheless, I ask you to bear with me, gentle reader, for enough general material discussing points of scientific interest will, I think, make it worth your while — that is, if you have an interest in the present interrelationship between public health and social science, and in the purported relationship these disciplines bear today to medicine, including neuroscience, and to the scientific method.

Before proceeding, as a further introduction, I would like to quote several excerpts from a magnificent article entitled “Statistical Malpractice” by Bruce G. Charlton, M.D., of the University of Newcastle upon Tyne. It is perhaps no coincidence that Dr. Charlton is associated with the same great university that gave us Dr. John Snow, the illustrious physician who in 1849 successfully applied the scientific method to epidemiology. (In the process, Dr. Snow proved that cholera is a waterborne disease, a discovery that led to the conquest of epidemic diseases such as dysentery and typhoid fever.) Dr. Charlton’s comments that follow cite the growing misuse of pure epidemiology and statistics as science. As my narrative unfolds, the relevance of these momentous passages will become obvious.

Science versus Statistics

There is a worrying trend in academic medicine, which equates statistics with science, and sophistication in quantitative procedures with research excellence. The corollary of this trend is a tendency to look for answers to medical problems from people with expertise in mathematical manipulation and information technology, rather than from people with an understanding of disease and its causes.

Epidemiology [is a] main culprit, because statistical malpractice typically occurs when complex analytical techniques are combined with large data sets. Indeed, the better the science, the less the need for complex analysis, and big databases are a sign not of rigor but of poor control. Basic scientists often quip that if statistics are needed, you should go back and do a better experiment. Science is concerned with causes but statistics is concerned with correlations.

Minimization of bias is a matter of controlling and excluding extraneous causes from observation and experiment so that the cause of interest may be isolated and characterized. Maximization of precision, on the other hand, is the attempt to reveal the true magnitude of a variable which is being obscured by the “noise” [from the] limitations of measurement.

Science by Sleight-of-Hand

The root of most instances of statistical malpractice is the breaking of mathematical neutrality and the introduction of causal assumptions into the analysis without scientific grounds.

Practicing Science without a License

Medicine has been deluged with more or less uninterpretable “answers” generated by heavyweight statistics operating on big databases of dubious validity. Such numerical manipulations cannot, in principle, do the work of hypothesis testing.

Statistical analysis has expanded beyond its legitimate realm of activity. From the standpoint of medicine, this is a mistake: statistics must be subordinate to the requirements of science, because human life and disease are consequences of a historical process of evolution which is not accessible to abstract mathematical analysis.[4]

The following discussion describes some of the problems I encountered in my honorary service as a grant reviewer, as specified above, along with a few critical observations that I hope will improve the quality of grant proposals at the CDC.

Moreover, the appropriateness and cost-effectiveness of the research in these grant proposals are included in this discussion because the allocation of public health research funding is of concern to the American public (taxpayers), who already shoulder a significant burden of their own rising health care costs and are asked each year to shoulder an ever-greater burden in tax dollars for social and public health research programs that may yield relatively little in terms of cost-effectiveness.

Despite definite improvements in the last several years, it is my opinion, after working in the belly of the NCIPC beast, so to speak, that much work still needs to be done. Much public health research in the area of injury control and prevention is duplicative, expensive, and not cost-effective. Many of the grant proposals for the study of spousal abuse, domestic violence, inner city crime, juvenile delinquency, etc., belong more properly in the sociologic and criminologic disciplines than in the realm of public health and medicine. Given the lack of science in many of these proposals, it is not surprising that I found (and many reviewers privately acknowledged) that the scientific method, and for that matter the public health model itself, is found wanting when applied to the study of violence and crime. But funding continues to roll into the NCIPC, where it continues to be squandered and misapplied by public health researchers.

2. Congressional authorization

Perhaps the biggest problem of all has been created and promoted by Congress in the allocation of ever-increasing amounts of taxpayer dollars to public health “research” in the area of injury control that, frankly, in many dismaying instances is of questionable scientific validity and even less cost-effectiveness. Oversight, accountability, and demonstration of cost-effectiveness have been clearly lacking. And yet the Department of Health and Human Services (HHS) shares some of the blame, for as the executive federal agency it has considerable leeway in how these tax dollars are spent by the CDC and the public health establishment.

The unfortunate reality is that in the areas of injury (and “violence”) research, much money is being wasted on programs of questionable medical value and scientific validity. Many of the grant proposals submitted in the name of “violence prevention” and other goals of the Healthy People 2010 agenda are generally geared toward promoting social engineering and enlarging the scope and collective role of government in the lives of citizens. For example, I numerous times opposed and voted against proposals that required intrusive home visits to families by social workers (“home observations”), raising serious concerns about violations of privacy. Ditto for proposals establishing databases of dubious validity in which no hypotheses were tested. Behind many of the health proposals is a compulsive need to protect people from themselves, as if they were children requiring the helping hand of academic health bureaucrats for their own survival. All of these patronizing, collectivist social programs, rolled out year after year, come at the expense of personal freedom, not to mention taxpayers’ pocketbooks.

Likewise, many of the proposals made in response to Healthy People 2010, year after year, sponsored by the CDC’s NCIPC, have more to do with “social justice,” expanding socialistic agendas, and perpetuating the welfare state than with making genuine scientific advances and improving the health of humanity. In some cases, these grant proposals (many of which are actually funded) use or misuse statistics, collect data of dubious validity, and even use “legal strategies” to reach social goals and formulate health care policies that the public health researchers believe may achieve “social justice.” Many of these studies aim at nothing less than the gradual radicalization of society using “health” as a vehicle.[1,2,5,6] “Scientific” peer review in many instances, frankly, is not working or is nonexistent. The reader will be surprised to learn that I found probably as many lawyers and social workers as practicing physicians and nurses applying for public health “scientific” research grants!

Healthy People 2010 and injury control programs (particularly “Grants for Violence”) have become vehicles for the creation of ever-increasing social programs (and the remodeling of society in the image of public health researchers) through the back door. “Health” and “science” are used as covers because, to a significant extent, many of these proposals have relatively little to do with improving the general health of the public and even less with scientific merit.

Frequently, proposals under these Grant Announcements (perhaps with some notable exceptions for Traumatic Brain Injury and Acute Care and Rehabilitation) are submitted to dovetail with previous social “research” in which statistical significance was never established. But the money keeps rolling in, and more shoddy research is funded, year after year.

3. Simple statistical tools frequently missing

From the scientific point of view, a most troubling trend is the misuse or nonuse of the very simple but very helpful traditional statistical tools in the statistician’s armamentarium. I refer to the useful methodology of relative risks (RR), confidence intervals (CI), and the increasingly ignored p-values. These traditional statistical parameters are essential in determining the strength of statistical associations. These tools are, in fact, tough tests applied to statistical studies to ascertain their validity. Although they do not establish cause-and-effect relationships, they are essential in the process of establishing statistical associations, particularly in the social “science” (sociological) investigations now carried out routinely under the aegis of public health research. Their time-proven places in statistics are being usurped by complex, inscrutable methodologies, such as regression models and multivariate stratified computer analyses, which, divorced from the required clinical experience, befuddle everyone, including the statisticians themselves!

4. Fishing expeditions in search of social problems

Not infrequently, I found it difficult to discern in these ever-proliferating health (social) proposals any strong statistical associations leading to groundbreaking scientific research. Fishing expeditions in hypothesis searching and solutions in search of social problems are frequent, while hypothesis testing is poorly formulated. One universally ignored reason for this misuse of statistics is that epidemiology is best applied to rare diseases that occur at high rates in a defined segment of the population, preferably over a short, defined period of time (e.g., lung cancer deaths in smokers or, even better, an outbreak of salmonella or shigella poisoning at a convention hall on a specified date), and not to low rates of disease or injury extending over long periods of time and vast populations (e.g., ongoing studies of juvenile crime in the inner city, health consequences of environmental toxins over a large population, etc.). Moreover, investigation of these diseases or injuries should be carried out in conjunction with clinical investigation.

Koch’s Postulates of Pathogenicity seem to have been completely forgotten by the public health establishment. Koch’s Postulates, as the reader will recall, are the steps a medical researcher follows in proving that a specific microorganism causes a specific disease: (1) The organism must be found in diseased individuals and not in healthy ones. (2) The organism is then grown and isolated in the laboratory in pure culture. (3) The organism from such a culture should cause the disease in susceptible individuals upon inoculation. (4) The pathogenic organism must be re-isolated from the diseased individuals and cultured again in the laboratory.[3]

Epidemiological studies, whether case-control or ecologic, are unreliable population-based investigations that should not be routinely used to investigate the etiology and course of common diseases, much less to prevent injuries and violence among the general population. And yet, despite these objections, epidemiological studies are funded routinely to investigate the pathophysiology, course, and outcome of common diseases and injuries that occur slowly or repeatedly over extended populations.

5. Premature disclosure of “scientific findings”

It is no wonder that the media pick up on these reports prematurely and sensationalize conclusions that often contradict one another as soon as they are published, sometimes in the same issue of the same journal! Disconcertingly, we have learned from researchers and the media that coffee can cause cancer as well as prevent it, that silicone breast implants are harmless and that they are not, etc.! We continue to be bombarded with prematurely reported, headline-grabbing studies, day after day, to the detriment of the trust the public has vested in science and medicine. Preventing the premature disclosure to the media of scientific findings from taxpayer-funded research is a problem that the Directors of the CDC and the NCIPC may be able to do something about.

The public, befuddled and confused, loses faith in “science” and looks elsewhere for answers, conducting its own research on the internet, not always with salutary results. Public health proposals commonly tout their complex statistical analyses (e.g., multivariate and regression analyses to eliminate confounding variables), yet important statistical tools, such as p-values, relative risks, and confidence intervals, are frequently not even mentioned in the body of the research!

6. Relative risk

Although relative risk (RR) does not establish cause-and-effect relationships, it is an invaluable tool in statistics. RR is used to determine whether there is a difference in the rate of a disease process or injury between two given populations. An RR of 1.0 signifies no difference. An RR of 2.0 means that the exposed population has twice (a 100 percent increase) the rate of disease of the comparison population (a positive association). Statistics are not science, and a 100 percent increase in this context is a very small effect that could be explained solely by the quality of the data, thus denoting a weak statistical association.

Likewise, an RR of 0.5 means the exposed population carries half (50 percent) the risk of the comparison population (a negative association) and likewise conveys a weak statistical association. Thus, experienced statisticians discount the statistical significance of RR values between 0.5 and 2.0, which denote no significant difference in the rate of disease or injury between the two populations under study. Remember, RR values between 0.5 and 2.0 do not reach statistical significance. These figures should be noted, but they can be safely ignored (no cause-and-effect relationship) by the epidemiologist. To find out whether an RR value outside those limits truly carries any statistical significance, two other tools are used: the p-value, the killer test that determines whether the difference between the two study groups is due to chance, and the confidence interval (CI).[7]
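
For concreteness, the RR arithmetic just described can be expressed in a few lines of Python. This is a minimal sketch using invented counts in a hypothetical 2x2 table (it reflects no actual grant data), illustrating the 0.5-2.0 screen discussed above.

```python
# Minimal sketch: relative risk (RR) from a hypothetical 2x2 table.
# The counts are invented for illustration; they come from no actual study.

def relative_risk(a, b, c, d):
    """RR from a 2x2 table:
                  disease   no disease
      exposed        a          b
      unexposed      c          d
    """
    risk_exposed = a / (a + b)      # disease rate in the exposed group
    risk_unexposed = c / (c + d)    # disease rate in the unexposed group
    return risk_exposed / risk_unexposed

# Hypothetical counts: 30/1000 exposed vs. 20/1000 unexposed are injured.
rr = relative_risk(30, 970, 20, 980)
print(f"RR = {rr:.2f}")  # RR = 1.50, i.e., a 50 percent increase

# The screen described above: RR values between 0.5 and 2.0 are treated
# as weak associations that do not by themselves warrant further claims.
if 0.5 <= rr <= 2.0:
    print("Weak association: within the 0.5-2.0 band; note but discount.")
```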

A p-value of 0.05 corresponds to the 95 percent level of confidence that scientists have traditionally set as the gold standard for epidemiological research. A 95 percent confidence interval represents a range of values within which we are certain, with 95 percent confidence, that the true value of the relative risk (RR) is found. One must be aware that many public health researchers want to lower this standard to 90 percent (increasing uncertainty) to improve their chances of reaching statistical significance and confirming the validity of their research findings. Other researchers do not disclose p-values at all, neglecting the possibility that their findings are due to chance. Researchers whose studies pass the p-value test (p<0.05) usually disclose it; those who ignore it make their studies suspect, for their conclusions may never have reached the statistical significance they profess. As we shall see, the 95 percent confidence interval (CI) corresponds to a p-value of 0.05, but low p-values, even as low as p=0.006 or p=0.009, do not establish causation; they only denote strong statistical associations not due to chance. Statistics are not science and cannot prove cause-and-effect relationships. For an amusing yet informative review of this subject for the busy clinician, neurosurgeon, and neuroscientist, I strongly recommend Steven J. Milloy’s book, Junk Science Judo, published by the Cato Institute in 2001.[7]
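
To make the p-value test concrete, here is a hedged sketch applying Fisher's exact test (via SciPy) to the same hypothetical table; again, the counts are invented, and Fisher's test is simply one standard way to obtain a p-value for a 2x2 table, not necessarily the method any given proposal would use.

```python
# Illustrative p-value for the hypothetical 2x2 table, using Fisher's
# exact test; the counts are invented and come from no actual study.
from scipy.stats import fisher_exact

table = [[30, 970],   # exposed:   disease / no disease
         [20, 980]]   # unexposed: disease / no disease

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}")

# The traditional gold standard: p < 0.05 (the 95 percent confidence level).
if p_value < 0.05:
    print("Difference unlikely to be due to chance alone (p < 0.05).")
else:
    print("Difference may be due to chance; significance not established.")
```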

The next step in testing an RR for statistical significance is the 95 percent confidence interval, the range within which the true RR value lies with a 95 percent level of confidence. When the relative risk is greater than 1.0, the entire range (both the lower and upper boundaries) must be greater than 1.0 to satisfy the CI test. Conversely, when the RR is less than 1.0, the entire range of CI values must fall below 1.0 (again, both the lower and upper reported boundaries included). In neither case may the interval include 1.0 within its range; if 1.0 is included, statistical significance is not established.
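
As a sketch of this CI test, a 95 percent confidence interval for an RR can be approximated with the standard log (Katz) method; the counts remain hypothetical, and this is one common textbook approximation, not necessarily the method used in any particular proposal.

```python
# Minimal sketch: approximate 95 percent CI for RR by the log (Katz)
# method, then apply the "does the interval include 1.0?" test above.
import math

def rr_confidence_interval(a, b, c, d, z=1.96):
    """RR and its approximate 95% CI from a 2x2 table (z=1.96 for 95%)."""
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of ln(RR) under the log method.
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

rr, lo, hi = rr_confidence_interval(30, 970, 20, 980)
print(f"RR = {rr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

# Significance requires the whole interval to lie on one side of 1.0.
if lo > 1.0 or hi < 1.0:
    print("CI excludes 1.0: a statistically significant association.")
else:
    print("CI includes 1.0: statistical significance not established.")
```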

Frequently, I found that one or more of these traditional tools (RR, p-value, CI) were not mentioned, or were incompletely disclosed, in public health research proposals, apparently to avoid revealing the inconvenient fact that the conclusions drawn did not reach the level of statistical significance. Weak statistical associations, if honestly reported, would have disqualified them from reaching their preordained conclusions. And therein lies the rub: full reporting of values reaching no statistical significance may lead to no funding of further research grants! Thus, many grant proposals in the violence prevention area of public health pad their numbers, increase uncertainty (90% CI), ignore p-values, and tout relative risks that have no statistical significance, all in order to apply and re-apply for funding of more shoddy research. Remember, RR values between 0.5 and 2.0 denote no difference in the rate of disease or injury between the two tested populations and carry no statistical significance; the difference may be due solely to the quality of the data.

Another situation arises when relative risks (cohort studies) or odds ratios (case-control studies) are mentioned, but the discussion then proceeds as if statistical significance had been reached (it had not), and on the basis of statistically insignificant findings, presto, further social programs are proposed masquerading as ongoing public health studies. The money keeps rolling in. Broad hypotheses that should have been refined or discarded are kept and tested and re-tested. And, in subsequent studies, efforts at hypothesis searching are amplified (i.e., fishing expeditions), eventually yielding data of poor quality (i.e., data dredging) but conducive to “positive” results. In the process, databases of dubious validity are compiled for further social research! Large data sets are hardly ever verified for accuracy and are used by some researchers in the hope of finding statistical associations where none exist (i.e., data mining).

In other words, much of this work entails finding solutions in search of problems, proposals in which narrow hypotheses are lost in nebulous seas of data. Immense sets of data are collected, tested, and re-tested, and the statistics are tortured until they finally confess, yielding the predetermined social objectives. These social goals are then disseminated to health policy makers, who believe the conclusions have arisen via the arduous trials of the scientific method. More funding is allocated to implement public policy, and for more public health (social) research. And the beat goes on!

In Part II, we will conclude this special commentary article for the readers of Surgical Neurology.

References

1. Arnett JC. Book review: Junk Science Judo by Steven J. Milloy. Medical Sentinel 2002;7(4):134-135.

2. Bennett JT, DiLorenzo TJ. From Pathology to Politics: Public Health in America. New Brunswick (NJ): Transaction Publishers; 2000. p. 21-115.

3. Brock TD. Biology of Microorganisms. Englewood Cliffs (NJ): Prentice Hall, Inc.; 1970. p. 9-12.

4. Charlton BG. Statistical Malpractice. Journal of the Royal College of Physicians, March-April 1996:112-114.

5. Faria MA. The perversion of science and medicine (Parts I-IV). Medical Sentinel 1997;2(2):46-53 and Medical Sentinel 1997;2(3):81-86.

6. Faria MA. Public Health — From Science to Politics. Medical Sentinel 2001;6(2):46-49.

7. Milloy SJ. Junk Science Judo: Self-Defense Against Health Scares and Scams. Washington (DC): Cato Institute; 2001. p. 41-114.

Miguel A. Faria Jr, MD is Clinical Professor of Surgery (Neurosurgery, ret.) and Adjunct Professor of Medical History (ret.), Mercer University School of Medicine; Editor emeritus, Medical Sentinel; Author, Vandals at the Gates of Medicine: Historic Perspective on the Battle Over Health Care Reform (1995); Medical Warrior: Fighting Corporate Socialized Medicine (1997); and Cuba in Revolution: Escape From a Lost Paradise (2002); Member, Editorial Board of Surgical Neurology, An International Journal of Clinical Neurosurgery

This article was published in Surgical Neurology and may be cited as: Faria MA. Public health, social science, and the scientific method (Part I). Surgical Neurology 2007;67(2):211-214. Available from: https://haciendapublishing.com/public-health-social-science-and-the-scientific-method-part-i.

Copyright ©2007 Miguel A. Faria, Jr., M.D.

