What do we look for in new drugs?

Amid all the hoopla about a new drug approval, we sometimes lose sight of what we are actually gaining. Words like "significant improvement," "benefit," and "new treatment option" all sound like important milestones and lead to the logical conclusion that if the situation applies, the new drug should now be used.

The measure that tells us how well a drug works is called an "endpoint," and different endpoints are used for different trials. Most companies that develop a new drug are primarily interested in getting approval from the Food and Drug Administration, and in getting it as quickly as possible. The FDA does not have firm rules on which endpoint is required for drug approval, although it does have guidelines. The endpoint of survival--measured as how long patients live after being started on a drug--is the gold standard. But it is also the hardest to prove, because it requires the most patients and the most time (and expense) to study. Time to disease progression (the time from the start of treatment until scans show the cancer has grown by a specified amount) is another endpoint that will sometimes lead to an approval, especially if the available treatment options have more side effects, or if there is no effective standard therapy. The flimsiest endpoint is response rate--the percentage of patients whose tumors shrink.

Some argue that time to progression and response rate are "surrogates" for survival, since many studies show that all these endpoints tend to move in the same direction in a given trial. However, this is not always the case. For less common cancers, it is not possible to enroll enough patients to demonstrate a survival difference attributable to a new drug, so the FDA must rely on other endpoints. In early-stage cancers that are highly curable (like hormone receptor-positive breast cancer), the number of deaths is so low that, again, time to recurrence must be used as a surrogate.
For common cancers, survival advantages must usually be proven, especially for first-line therapy (the first treatment given for advanced disease). However, this was not the case when Avastin was approved for breast cancer, in part because time to disease progression was nearly doubled even though no survival difference was seen.

There is no clear answer to the debate over which endpoint should suffice. One solution is to streamline the research process and increase the number of centers able to conduct trials, so that large numbers of patients can be enrolled quickly and the gold standard of a survival improvement can be proven or disproven. The downside is that this "quick and dirty" approach might overlook important safety issues. Another solution is to "validate" surrogate markers, even lab tests, such as one that measures the number of circulating tumor cells within weeks of giving an experimental therapy. Such studies are ongoing, and they may represent the best route, although the validation studies themselves can be time-consuming, difficult to interpret, and not generalizable to different types of cancers or stages of disease.

In this era of personalized therapies and new drugs given to smaller subsets of patients, the challenge of enrolling high numbers is even greater. At the same time, the hope is that the differences in outcome will be more striking, so that survival differences can be shown with fewer patients--as was the case with PARP inhibitors in "triple-negative" breast cancer (see "Targeting the Triple Threat" in the Fall issue of CURE). The argument over which endpoint is best will continue for some time, from the halls of academia to corporate boardrooms to FDA hearing rooms.