Testing Drugs in People

by Ken Flieger
FDA Consumer special report

Most of us understand that drugs
intended to treat people have to be tested in people. These tests,
called clinical trials, determine if a drug is safe and effective, at
what doses it works best, and what side effects it causes–information
that guides health professionals and, for nonprescription drugs,
consumers in the proper use of medicines.

Clinical testing isn’t the only way to
discover what effects drugs have on people. Unplanned but alert
observation and careful scrutiny of experience can often suggest drug
effects and lead to more formal study. But such observations are usually
not reliable enough to serve as the basis for important, scientifically
valid conclusions. Controlled clinical trials, in which results observed
in patients getting the drug are compared to the results in similar
patients receiving a different treatment, are the best way science has
come up with to determine what a new drug really does. That’s why
controlled clinical trials are the only legal basis for FDA to conclude
that a new drug has shown “substantial evidence of
effectiveness.”

Does It Work?

It’s important to test drugs in the
kind of people they’re meant to help. It’s also important to design
clinical studies that ask, and answer, the right questions about
investigational drugs. And that’s no easy task.

The process starts with a drug sponsor,
usually a pharmaceutical company, seeking to develop a new drug it hopes
will find a useful and profitable place in the market. Before clinical
testing begins, researchers analyze the drug’s main physical and
chemical properties in the laboratory and study its pharmacologic and
toxic effects in laboratory animals. If the laboratory and animal study
results show promise, the sponsor can apply to FDA to begin testing in
people.

Once FDA has seen the sponsor’s plans
and a local institutional review board–a panel of scientists,
ethicists, and non-scientists that oversees clinical research at medical
centers throughout the country–approves the protocol for clinical
trials, experienced clinical investigators give the drug to a small
number of healthy volunteers or patients. These phase 1 studies assess
the most common acute adverse effects and examine the size of doses that
patients can take safely without a high incidence of side effects.
Initial clinical studies also begin to clarify what happens to a drug in
the human body–whether it’s changed (metabolized), how much of it (or a
metabolite) gets into the blood and various organs, how long it stays in
the body, and how the body gets rid of the drug and its effects.

If phase 1 studies don’t reveal major
problems, such as unacceptable toxicity, the next step is to conduct a
clinical study in which the drug is given to patients who have the
condition it’s intended to treat. Researchers then assess whether the
drug has a favorable effect on the condition.

Three Phases of Testing in Humans

Phase 1: 20-100 patients; lasts several months; tests mainly safety;
about 70 percent of drugs are successfully tested*

Phase 2: up to several hundred patients; lasts several months to 2
years; tests some short-term safety, but mainly effectiveness; about
33 percent of drugs are successfully tested

Phase 3: several hundred to several thousand patients; lasts 1-4
years; tests safety, dosage, and effectiveness; about 25-30 percent of
drugs are successfully tested

* For example, of 100 drugs for which investigational new drug
applications are submitted to FDA, about 70 will successfully complete
phase 1 trials and go on to phase 2; about 33 of the original 100 will
complete phase 2 and go to phase 3; and 25 to 30 of the original 100
will clear phase 3 (and, on average, about 20 of the original 100 will
ultimately be approved for marketing).
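The funnel described in the footnote can be sketched as simple
arithmetic. The per-phase rates below are rough figures read from the
table (taking 27 as a midpoint of the 25-30 range for phase 3), not
exact statistics:

```python
# Approximate cumulative success rates from the table, converted to
# per-phase rates (illustrative figures only).
PHASE_SUCCESS = {
    "phase 1": 0.70,     # ~70 of 100 complete phase 1
    "phase 2": 33 / 70,  # ~33 of the original 100 complete phase 2
    "phase 3": 27 / 33,  # ~25-30 of the original 100 clear phase 3
}

def drugs_remaining(starting=100):
    """Apply each phase's approximate success rate in turn."""
    remaining = starting
    for phase, rate in PHASE_SUCCESS.items():
        remaining = round(remaining * rate)
    return remaining

drugs_remaining(100)  # returns 27
```

Starting with 100 candidate drugs, about 27 survive all three phases
under these assumed rates, consistent with the table's 25-30 range.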

Usually, No Miracles

Again, the process appears
straightforward–simply recruit groups of patients to participate in a
clinical trial, administer the drug to those who agree to take part, and
see if it helps them. Sounds easy enough, and sometimes it is. In what
may be medicine’s most celebrated clinical trial, Louis Pasteur treated
patients exposed to rabies with an experimental anti-rabies vaccine. All
the treated patients survived. Since scientists knew that untreated
rabies was 100 percent fatal, it wasn’t hard to conclude that Pasteur’s
treatment was effective.

But that was a highly unusual case.
Drugs do not usually reverse fatal illness miraculously. More often
they reduce the risk of death without entirely eliminating it, or they
relieve the symptoms of an illness, such as pain, anxiety, heart
failure, or angina. Or a drug may alter a clinical
measurement–reduce blood pressure or lower the cholesterol level, for
example–in a way that physicians hope will be valuable. Drug effects
like these can be a good deal harder to detect and evaluate than a
result as dramatic as Pasteur’s rabies cure.

This is mainly because diseases don’t
follow a predictable path. Many acute illnesses or conditions–viral
ailments like colds or the flu, minor injuries, insomnia–can usually be
counted on to go away spontaneously without treatment. Some chronic
conditions like arthritis, multiple sclerosis, depression, or asthma
often follow a varying course–better for a time, then worse, then
better again, usually for no apparent reason. And heart attacks and
strokes, for example, have widely variable death rates depending on
treatment, age, and other factors, so that the “expected”
mortality for an individual patient can be hard to predict.

A further difficulty in gauging the
effectiveness of an investigational drug is that in some cases
measurements of disease are subjective, relying in part on what is
essentially a matter of interpretation by the physician or patient. Such
measurements can be imprecise, influenced by a patient’s or physician’s
expectations or hopes. In those circumstances, it’s difficult to tell
whether treatment is having a favorable effect, no effect, or even an
adverse effect. The way to answer this critical question about an
investigational drug is to subject it to a controlled clinical trial.

New Drug Development Timeline

The phases of new drug development,
from preclinical testing, research, and development through
postmarketing surveillance, are illustrated in a 6K
PDF chart.

Understanding Controls

In a controlled trial, patients in one
group receive the investigational drug. Those in a comparable group–the
controls–get either no treatment at all, a placebo (an inactive
substance that looks like the investigational drug), a drug known to be
effective, or a different dose of the drug under study.

Usually the test and control groups are
studied at the same time. In fact, a single group of patients is often
divided in two, with each subgroup getting a different treatment. That
is the best way to be sure the groups are similar.

In some special cases, a study uses a
“historical control,” in which patients given the
investigational drug are compared with similar patients treated with the
control drug at a different time and place. “Historical
control” can also refer to a comparison of groups of patients
treated at about the same time but at different institutions.

Sometimes patients are followed for a
time after treatment with an investigational drug, and investigators
compare their status before and after treatment. Here, too, the
comparison is historical. It is based on an estimate of what would have
happened without treatment. The historical control design is
particularly useful when the disease being treated has high and
predictable death or illness rates. Then investigators can be reasonably
sure what would have happened without treatment.

It’s important that treatment and
control groups be as similar as possible in characteristics that can
affect treatment outcome. For instance, all patients in both groups
must have the disease the drug is meant to treat, or the same stage of
the disease. In a clinical trial of a drug to treat angina (chest pain
associated with cardiovascular disease), for example, if one group of
patients being studied actually had sore ribs rather than angina, their
differing response to the drug could not be assumed to be due to its
effectiveness or lack thereof.

Treatment and control groups should
also be of similar age, weight, and general health status, and be
similar in other characteristics that could affect the outcome of the
study, such as other treatment being received at the same time.

Two principal methods have been used to
achieve this all-important comparability. One is to carefully pair each
person in the treatment group with a control patient who has closely
matching characteristics. This method is rarely used today because even
in the best of circumstances, it’s difficult to match pairs of patients
for the myriad factors that could have a bearing on results.

In the more common approach, called
randomization, patients are randomly assigned to either the treatment or
control group, rather than deliberately selected for one group or the
other. When the study population is large enough and the criteria for
participation are carefully defined, randomization yields treatment and
control groups that are similar in important characteristics. Because
assignment to one group or another is not under the control of the
investigator, randomization also eliminates the possibility of
“selection bias,” the tendency to pick healthier patients to
get the new treatment.
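The random-assignment step can be sketched in a few lines. This is a
hypothetical illustration, not an actual trial protocol; the function
name and patient labels are invented:

```python
import random

def randomize(patient_ids, seed=None):
    """Shuffle patients and split them evenly between treatment and
    control, so that no one deliberately chooses who gets the new
    drug -- eliminating selection bias."""
    rng = random.Random(seed)
    shuffled = list(patient_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

groups = randomize(["P01", "P02", "P03", "P04", "P05", "P06"], seed=1)
```

With a large enough study population, this kind of chance assignment
tends to balance both measured and unmeasured patient characteristics
across the two groups.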

When It Helps to Be ‘Blind’

In clinical trials, bias (a
“tilt” in favor of a treatment) can operate like a
self-fulfilling prophecy. The hope for a good outcome can skew patient
selection so that the treatment group includes a disproportionate number
of patients likely to do well whatever their treatment. The same kind of
inadvertent bias can lead both patients and investigators to overrate
positive results in the treatment group and negative findings among
controls, and cause data analysts to make choices that favor treatment.
Clinical trials that include such biases are likely to be incapable of
assessing drug effect.

In conjunction with randomization, a
design feature known as “blinding” helps ensure that bias
doesn’t distort the conduct of a study or the interpretation of its
results. Single-blinding consists of keeping patients from knowing
whether they are receiving the investigational drug or a placebo. In a
double-blind study, neither the patients, the investigators, nor the
data analysts know which patients got the investigational drug. Only
when the closely guarded assignment code is broken to identify treatment
and control patients do the people involved in the study know which is
which.
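The "closely guarded assignment code" can be pictured as a sealed
mapping from coded labels to actual treatments. The sketch below is
purely illustrative; the kit-label scheme and function name are
invented:

```python
import random

def double_blind(patient_ids, seed=None):
    """Give each patient a coded kit label. The sealed code mapping
    labels to 'drug' or 'placebo' is held apart and not revealed to
    patients, investigators, or analysts until the study ends."""
    rng = random.Random(seed)
    # Alternate drug/placebo, then shuffle so assignment is random.
    treatments = ["drug", "placebo"] * ((len(patient_ids) + 1) // 2)
    treatments = treatments[:len(patient_ids)]
    rng.shuffle(treatments)
    labels = {pid: f"KIT-{i:03d}" for i, pid in enumerate(patient_ids, 1)}
    sealed_code = {labels[pid]: t for pid, t in zip(patient_ids, treatments)}
    return labels, sealed_code

labels, sealed_code = double_blind(["P01", "P02", "P03", "P04"], seed=7)
```

During the study, everyone sees only `labels`; `sealed_code` is broken
only at the end, when treatment and control patients are identified.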

Ethical Questions

Testing experimental drugs in people
inevitably presents ethical questions. For example, is it ethical to
give patients a placebo when effective treatment is available? Not all
authorities agree on the answer. But the generally accepted practice in
the United States–and one increasingly being adopted abroad–is that
well and fully informed patients can consent to take part in a
controlled, randomized, blinded clinical trial, even when effective
therapy exists, so long as they are not denied therapy that could alter
survival or prevent irreversible injury. They can voluntarily agree to
accept temporary discomfort and other potential risks in order to help
evaluate a new treatment.

In any trial in which a possible effect
on survival is being assessed, it’s important to monitor results as they
emerge. That way, if a major effect is seen–positive or negative–the
trial can be stopped. This happened in the first clinical study of the
AIDS drug zidovudine (AZT), when a clear survival advantage for patients
receiving zidovudine was seen well before the trial was scheduled to
end. The trial was then ended early, and within a week FDA authorized a
protocol allowing more than 4,000 patients to receive zidovudine before
it was approved for marketing under the brand name Retrovir. This is an
example of the ethical principle that if a lifesaving or life-extending
treatment for a disease does exist, patients cannot be denied it.

In some cases, a new treatment can be
compared with established treatment, so long as the effectiveness of the
latter can readily be distinguished from placebo and the study is large
enough to detect any important difference.

It is also possible to evaluate new
drugs in this situation in “add-on” studies. In this kind of
trial, all participants receive standard therapy approved for treating
the disease, but those in the treatment group also get the
investigational drug. The control group gets either no added treatment
or placebo. Any difference in results between the treatment and control
groups can be attributed to the investigational drug. It is common to
study new anti-seizure drugs in this way, as well as new agents intended
to reduce mortality after a heart attack.
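The logic of an add-on comparison can be sketched numerically. The
counts below are invented purely for illustration, not real trial
data:

```python
# Hypothetical add-on trial: all patients receive standard therapy;
# the treatment group also receives the investigational drug.
standard_plus_drug = {"patients": 120, "responders": 66}
standard_plus_placebo = {"patients": 118, "responders": 47}

def response_rate(group):
    """Fraction of patients in a group who responded to treatment."""
    return group["responders"] / group["patients"]

# Both groups got standard therapy, so any difference in response
# rates is attributable to the investigational drug.
drug_effect = response_rate(standard_plus_drug) - response_rate(standard_plus_placebo)
```

A positive `drug_effect` (here about 15 percentage points) would
suggest the investigational drug adds benefit beyond standard therapy,
though a real trial would also test whether the difference is
statistically significant.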

Testing in Women and Children

In recent years there has been growing
interest at FDA and by the public in drug testing in patient populations
that have been relatively neglected in clinical trials, especially women
and children. Children are generally not included in trials at all until
the drug has been fully evaluated in adults, unless the drug is intended
for a pediatric disease, such as acute lymphocytic leukemia. When a
drug is one children are unlikely to use frequently (for example, a
drug to treat high blood pressure), it often has not been studied in
children at all.

Without pediatric studies or other
sources of scientific information, labeling cannot include guidance
about dosage, side effects, and when a drug should or should not be used
in children. In October 1992, FDA proposed changes in its regulations
governing drug labeling for “pediatric use.” The proposal is
aimed at encouraging drug sponsors to develop pediatric
information–through clinical trials in children or by extrapolation of
findings in adults–that can be included in drug labeling.

Although both sexes now are generally
represented in clinical trials in proportions that reflect gender
patterns of disease, FDA and women’s health advocates agree that less
care has been taken to develop information about significant differences
in the ways men and women respond to drugs.

A new FDA guideline on the study and
evaluation of gender differences in clinical drug trials, issued in July
1993, encourages drug companies to include appropriate numbers of women
in drug development programs and to pay particular attention to factors
that can affect drug behavior, such as phases of the menstrual cycle,
menopause, and the use of oral contraceptives or estrogens. Another
focus is discovering gender-related differences in how a drug is
absorbed, metabolized or excreted, and how it works.

The guideline also does away with an
FDA policy dating from 1977 that excluded women of childbearing
potential from participation in early clinical studies. The agency
believes that institutional review boards, as well as clinical
investigators and women themselves, can gauge whether women’s
participation in clinical trials is appropriate and make sure that
fetuses are not unduly exposed to potentially toxic agents.

Studying drugs in people will probably never be an exact science. But
steady progress in the methodology and, in a way, the philosophy of
clinical trials is making the process more productive, more reliable,
and more beneficial for us all.

Ken Flieger is a writer in
Washington, D.C.