Understanding Scientific Research [2]: Experimental vs. Observational Studies

Not all scientific studies are created equal. There are several types of studies, and the first distinction to make is between experimental and observational evidence.

Previously I posted about how to read a study and how a study is structured into different sections. Certain features should be present, and clearly stated, in each section; for example, in the discussion section the results should be put into the context of the broader or similar literature and weighed against it.

Scientific evidence should be used to figure out what is more likely to be true, and not misused to defend what we want to be true, for whatever reason.

In this day and age, scientific beliefs and (provisional) conclusions must be based on solid evidence. But what constitutes solid evidence? This can be a tricky question, because we have several kinds of evidence with different strengths and weaknesses. This alone makes the evidence more difficult to interpret.

We must be able to recognize what we are looking at and how to distinguish between different types of scientific evidence. Some studies carry more weight than others.

1. Types of scientific studies

Analytic study designs are sub-classified as observational or experimental study designs (1).



1.1 Experimental studies

Experimental studies are designed to control as many variables as possible in order to measure a specific outcome. In other words, a variable is isolated so that we can determine specific outcomes.

Randomized, controlled trials (RCTs) were introduced into clinical medicine in 1948, when streptomycin was evaluated in the treatment of pulmonary tuberculosis (2,3). That trial brought the method of randomly allocating treatments to patients into therapeutic research, and its introduction is often seen as the beginning of the modern era of clinical trials (4); it may rightly be called a scientific paradigm (5). Since then, RCTs have become the gold standard for assessing the effectiveness of therapeutic agents (6,7).
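To make the idea of random allocation concrete, here is a minimal sketch in Python; the participant IDs and arm names are invented for illustration and are not tied to any particular trial.

```python
import random

# Minimal sketch of simple balanced randomization (illustrative only).
# Each participant is assigned to "treatment" or "placebo" by chance, so
# known and unknown confounders tend to balance across the two arms.
participants = [f"patient_{i:02d}" for i in range(1, 21)]  # hypothetical IDs

random.seed(1)  # fixed seed only so the sketch is reproducible
arms = ["treatment"] * 10 + ["placebo"] * 10
random.shuffle(arms)  # shuffle a balanced allocation list

allocation = dict(zip(participants, arms))
for patient, arm in allocation.items():
    print(patient, "->", arm)
```

Real trials add safeguards such as allocation concealment and blinding, which a sketch like this does not capture.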

It was estimated in 1995 that approximately 9,000 randomized clinical trials are performed every year (8).

The strengths of experimental studies:
1.  Controlling and isolating variables.
2.  Quantitative: measures a specific feature or outcome.
3.  Statistical in nature because there are comparison groups.

Weaknesses of experimental studies:
1.  Artifacts.
2.  Interfering with a system may change its behavior.
3.  May not be representative of real-world experiences.
4.  May not be practical: certain kinds of experimental studies simply cannot be performed, for example for ethical reasons.

Well-designed randomized controlled trials (RCTs) have held the preeminent position in the hierarchy of Evidence-Based Medicine as level I evidence. However, well-designed observational studies, recognized as level II or III evidence, can also play an important role in deriving evidence (1).

Levels of Evidence Based Medicine

Level I: High-quality, multicenter or single-center, randomized controlled trial with adequate power; or systematic review of these studies
Level II: Lesser-quality randomized controlled trial; prospective cohort study; or systematic review of these studies
Level III: Retrospective comparative study; case-control study; or systematic review of these studies
Level IV: Case-series
Level V: Expert opinion; case report or clinical example; or evidence based on physiology, bench research, or “first principles”


Each category is considered methodologically superior to those below it, and this model has been promoted widely in individual reports, meta-analyses, consensus statements, and educational materials for clinicians (9).

1.2 Observational studies

Observational studies ideally do not intervene; they observe the world without a specific intervention. The investigator simply “observes” and assesses the strength of the relationship between, for example, an exposure and a disease variable (1). These studies can be very useful for identifying correlations, and such correlations can then be tested experimentally. Several sciences rely largely on observational evidence, such as paleontology, archeology, and astronomy, and observational evidence can also be combined with experimental evidence.

Strengths of observational studies:
1. Large amounts of data can be obtained by observing what already exists.
2. Also allow group comparisons.
3. There is minimal intervention in the natural behavior of the system.

Weaknesses of observational studies:
1. Do not control many variables.
2. Always subject to unknown variables.
3. Demonstrate correlation but cannot definitively establish cause and effect.

Three types of observational studies include cohort studies, case-control studies, and cross-sectional studies (1).

- A cohort is a “group of people with defined characteristics who are followed up to determine incidence of, or mortality from, some specific disease, all causes of death, or some other outcome”.

- Case-control and cohort studies offer specific advantages: they measure disease occurrence and its association with an exposure, and they provide a temporal dimension (i.e., a prospective or retrospective study design).

- Cross-sectional studies, also known as prevalence studies, examine the data on disease and exposure at one particular time point. Because the temporal relationship between disease occurrence and exposure cannot be established, cross-sectional studies cannot assess the cause and effect relationship. 

In a cohort study, an outcome or disease-free study population is first identified by the exposure or event of interest and followed in time until the disease or outcome of interest occurs (1).


Because exposure is identified before the outcome, cohort studies have a temporal framework to assess causality and thus have the potential to provide the strongest scientific evidence.
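As a rough illustration of how a cohort study turns this temporal framework into numbers, the sketch below computes the incidence in exposed and unexposed groups and their ratio (the relative risk). The counts are invented purely for illustration.

```python
# Hypothetical cohort counts (invented for illustration):
# number who developed the outcome / total followed, per group.
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

incidence_exposed = exposed_cases / exposed_total        # risk among the exposed
incidence_unexposed = unexposed_cases / unexposed_total  # risk among the unexposed

relative_risk = incidence_exposed / incidence_unexposed
print(f"Incidence (exposed):   {incidence_exposed:.3f}")
print(f"Incidence (unexposed): {incidence_unexposed:.3f}")
print(f"Relative risk:         {relative_risk:.1f}")
```

A relative risk of 3.0 here would mean the exposed group developed the outcome three times as often as the unexposed group over the follow-up period.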

An important distinction lies between cohort studies and case-series: the distinguishing feature is the presence of a control, or unexposed, group. In contrast to epidemiological cohort studies, case-series are descriptive studies that follow one small group of subjects; in essence, they are extensions of case reports. Unless a second, comparative group serving as a control is present, such studies are defined as case-series (1,10).


Advantages and Disadvantages of the Case-Control Study (1):

Advantages:
- Good for examining rare outcomes or outcomes with long latency
- Relatively quick to conduct
- Relatively inexpensive
- Requires comparatively few subjects
- Existing records can be used
- Multiple exposures or risk factors can be examined

Disadvantages:
- Susceptible to recall bias or information bias
- Difficult to validate information
- Control of extraneous variables may be incomplete
- Selection of an appropriate comparison group may be difficult
- Rates of disease in exposed and unexposed individuals cannot be determined, so the association is summarized as an odds ratio (see the sketch below)
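Because cases and controls are sampled on the outcome, disease rates cannot be computed directly; the association is instead summarized as an odds ratio. A minimal sketch, with counts invented for illustration:

```python
# Hypothetical 2x2 case-control table (invented counts):
#                exposed  unexposed
#   cases:          40        60
#   controls:       20        80
exposed_cases, unexposed_cases = 40, 60
exposed_controls, unexposed_controls = 20, 80

# Odds of exposure among cases divided by odds of exposure among controls.
odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(f"Odds ratio: {odds_ratio:.2f}")  # ~2.67 with these invented counts
```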
 
Prospective and retrospective studies

Cohort studies can be prospective or retrospective. Prospective studies are carried out from the present time into the future; because they are designed with specific data-collection methods in mind, they can be tailored to collect specific exposure data, and the resulting data may be more complete.

The disadvantage of a prospective cohort study may be the long follow-up period while waiting for events or diseases to occur. This makes the design especially inefficient for investigating diseases with long latency periods, and it is vulnerable to a high loss-to-follow-up rate (1).

For such purposes, retrospective cohort studies are better suited, given the timeliness and inexpensive nature of the design. Also known as historical cohort studies, they look to the past to examine medical events or outcomes (1). However, the primary disadvantage of this study design is the limited control the investigator has over data collection: the existing data may be incomplete, inaccurate, or inconsistently measured between subjects because of the potential for multiple biases (1).



The “restricted cohort” design is a method used to strengthen observational studies (11). It adapts principles from the design of randomized, controlled trials to the design of an observational study as follows (9); a rough code sketch of these steps is given after the example below:

- It identifies a “zero time” for determining a patient's eligibility and baseline features;
- It uses inclusion and exclusion criteria similar to those of clinical trials;
- It adjusts for differences in baseline susceptibility to the outcome, and it uses statistical methods (e.g., intention-to-treat analysis) similar to those of randomized, controlled trials.

For example, the use of a restricted cohort (11) produced results consistent with the corresponding findings from a multicenter, randomized, double-blind, placebo-controlled trial (12).
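As a rough, hypothetical sketch of how those restricted-cohort principles might look in an analysis script (the record fields, dates, and eligibility rules below are invented, not taken from reference 11):

```python
from datetime import date

# Hypothetical patient records; all field names and values are illustrative.
patients = [
    {"id": 1, "first_exposure": date(2020, 3, 1),  "age": 64, "prior_event": False, "outcome": 0},
    {"id": 2, "first_exposure": date(2019, 11, 5), "age": 58, "prior_event": False, "outcome": 1},
    {"id": 3, "first_exposure": date(2021, 1, 10), "age": 91, "prior_event": True,  "outcome": 1},
]

ZERO_TIME = date(2020, 1, 1)  # "zero time": eligibility and baseline fixed at this point

def eligible(p):
    # Trial-like inclusion/exclusion criteria applied at zero time (illustrative).
    return (
        p["first_exposure"] >= ZERO_TIME   # exposure begins at or after zero time
        and 18 <= p["age"] <= 80           # age window, as a trial protocol might specify
        and not p["prior_event"]           # exclude those who already had the outcome
    )

# Every eligible patient is analyzed in the group they entered at zero time,
# mirroring the intention-to-treat principle of randomized trials.
restricted_cohort = [p for p in patients if eligible(p)]
print([p["id"] for p in restricted_cohort])  # -> [1]
```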

1.3 Observational vs. Experimental

Observational studies have several advantages over randomized, controlled trials, including lower cost, greater timeliness, and a broader range of patients (2,13). Observational studies are used primarily to identify risk factors and prognostic indicators and in situations in which randomized, controlled trials would be impossible or unethical (2,14).

Well-designed observational studies have been shown to provide results similar to those of randomized controlled trials, challenging the belief that observational studies are second-rate (1). Contrary to prevailing assumptions, comparable results between observational studies and RCTs have been demonstrated (2,9), and observational studies usually do provide valid information (2).

In one investigation, results of well-designed observational studies (with either a cohort or a case–control design) did not systematically overestimate the magnitude of the effects of treatment as compared with those in randomized, controlled trials on the same topic (9). Another investigation comparing randomized, controlled trials and observational case-control studies of screening mammography found similar results (15). These results challenge the current consensus about a hierarchy of study designs in clinical research.

RCTs can also produce conflicting results, as exemplified by a review of more than 200 RCTs on 36 clinical topics (16). Even meta-analyses of RCTs can be discordant with the results of large, simple trials on the same clinical topic (17). Because of such heterogeneous results, a single randomized trial (or a single observational study) cannot be expected to provide a gold-standard result that applies to all clinical situations (9).

Research design should not be treated as a rigid hierarchy, as some propose. Many experts of the ‘classical EBM ideology’ claimed that an RCT was entirely bias-free, stating: “If you find that [a] study was not randomized, we'd suggest that you stop reading it and go on to the next article” (18). However, over time it became clear that this was not the case. According to the currently accepted ‘new EBM ideology’, RCTs may minimize, but do not eliminate, bias (19).

Observational studies may be less prone to heterogeneity in results than RCTs (9). One explanation may turn out to be that each observational study is more likely to include a broad representation of the population at risk, and there is less opportunity for differences in the management of subjects among observational studies (9). In contrast, each RCT may have a distinct group of patients as a result of specific inclusion and exclusion criteria regarding coexisting illnesses and severity of disease, and the experimental protocol for therapy may not be representative of clinical practice (9).

When observational studies are weak, for example trials using historical controls, unblinded clinical trials, or clinical trials without randomly assigned control subjects (20,21,22), recommendations derived from overviews of such trials are also much weaker than recommendations derived from RCTs. But when observational studies are strong, their results can be similar to those of RCTs, as mentioned above.

Data from such “weaker” forms of observational studies are often mistakenly used to criticize all observational research. At the same time, results of poorly done observational studies are indeed used inappropriately, for example to promote ineffective alternative medicine therapies (23).

Features of poorly controlled observational studies:
- Cohort studies with historical controls;
- Clinical trials with nonrandom assignment of interventions;
- Results are not reported in the format of point estimates (e.g., relative risks or odds ratios) and confidence intervals. 

Features of well-controlled observational studies:
- Cohort design (i.e., with concurrent selection of controls);
- Case–control;
- Restricted cohort;
- Results are reported in the format of point estimates (e.g., relative risks or odds ratios) and confidence intervals (illustrated in the sketch below).
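To show what a point estimate with a confidence interval looks like in practice, the sketch below computes a relative risk and an approximate 95% confidence interval using the common log-transform method; the counts are invented for illustration.

```python
import math

# Hypothetical cohort counts (invented): outcome cases / total followed, per group.
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

rr = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# Approximate 95% CI computed on the log scale (a standard large-sample method).
se_log_rr = math.sqrt(
    1 / exposed_cases - 1 / exposed_total
    + 1 / unexposed_cases - 1 / unexposed_total
)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"Relative risk: {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

Reporting the interval alongside the point estimate makes the precision of the estimate explicit, which is why it is listed above as a feature of well-controlled observational studies.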

The popular belief that only randomized, controlled trials produce trustworthy results and that all observational studies are misleading does a disservice to patient care, clinical investigation, and the education of health care professionals (9).

However, the results of a single randomized, controlled trial, or of only one observational study, should be interpreted cautiously. These two types of evidence, experimental and observational, can and should be combined: they provide different kinds of information with different strengths and weaknesses, and together they paint a fuller picture, can even help triangulate a cause-and-effect relationship, establish questions for future RCTs, and define clinical conditions.

Evidence from both RCTs and from well-designed cohort or case–control studies can and should be used to find the right answers.

You might also be interested in this piece by Jose Antonio, PhD, FNSCA, FISSN: http://www.theissnscoop.com/hierarchy-of-evidence


Would you like to know more? Subscribe.




References:

1. Song JW, Chung KC. Observational studies: Cohort and case-control studies. Plast Reconstr Surg. 2010;126(6):2234–2242.
2. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N. Engl. J. Med. 2000;342:1878–1886.
3. Streptomycin treatment of pulmonary tuberculosis: a Medical Research Council investigation. BMJ. 1948;2:769–82.
4. Feinstein AR. Current problems and future challenges in randomized clinical trials. Circulation 1984; 70: 767–774.
5. Horwitz RI. The experimental paradigm and observational studies of cause-effect relationships in clinical medicine. J Chron Dis 1987; 40: 91–99
6. Byar DP, Simon RM, Friedewald WT, et al. Randomized clinical trials: perspectives on some recent ideas. N Engl J Med. 1976;295:74–80.
7. Feinstein AR. Current problems and future challenges in randomized clinical trials. Circulation. 1984;70:767–74.
8. Olkin I. Statistical and theoretical considerations in metaanalysis. J Clin Epidemiol 1995; 48: 133–146.
9. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N. Engl. J. Med. 2000;342:1887–1892.
10. Jenicek M. Foundations of Evidence-Based Medicine. Parthenon Pub. Group; Boca Raton: 2003. pp. 1–542
11. Horwitz RI, Viscoli CM, Clemens JD, Sadock RT. Developing improved observational methods for evaluating therapeutic effectiveness. Am J Med. 1990;89:630–8.
12. A randomized trial of propranolol in patients with acute myocardial infarction. I. Mortality results. JAMA. 1982;247:1707–14.
13. Feinstein AR. Epidemiologic analyses of causation: the unlearned scientific lessons of randomized trials. J Clin Epidemiol 1989;42:481-489
14. Naylor CD, Guyatt GH. Users' guides to the medical literature. X. How to use an article reporting variations in the outcomes of health services. JAMA 1996;275:554-558
15. Demissie K, Mills OF, Rhoads GG. Empirical comparison of the results of randomized controlled trials and case-control studies in evaluating the effectiveness of screening mammography. J Clin Epidemiol. 1998;51:81–91.
16. Horwitz RI. Complexity and contradiction in clinical trial research. Am J Med. 1987;82:498–510.
17. LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F. Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med. 1997;337:536–42.
18. Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based medicine: How to practice and teach EBM. New York: Churchill Livingstone, 1997. p. 108.
19. Sami T, Sedwick P. Do RCTs provide better evidence than observational studies? Opticon 1826. 2011;11:1–10.
20. Chalmers TC, Celano P, Sacks HS, Smith H., Jr Bias in treatment assignment in controlled clinical trials. N Engl J Med. 1983;309:1358–61.
21. Sacks HS, Chalmers TC, Smith H., Jr Sensitivity and specificity of clinical trials: randomized v historical controls. Arch Intern Med. 1983;143:753–5.
22. Kunz R, Oxman AD. The unpredictability paradox: review of empirical comparisons of randomised and non-randomised clinical trials. BMJ. 1998;317:1185–90.
23. Angell M, Kassirer JP. Alternative medicine — the risks of untested and unregulated remedies. N Engl J Med. 1998;339:839–41.