Thanks for stopping by to read this week's Biotech Stock Mailbag. Brief discussions about the upcoming FDA advisory panels for Sarepta Therapeutics (SRPT) and Acadia Pharmaceuticals (ACAD) are below. First, some tips on how to spot red flags in clinical trial results.
PM16 writes, "Thank you for doing the call with Dr. [Mark] Ratain, although I had to sign off early. Can you write an article recapping the clinical trial tips you and Dr. Ratain talked about? Any help you can offer about how to analyze clinical trial results would be great and very much appreciated."
Apologies again for the technical glitch that delayed the start of my conference call with Ratain on Tuesday. I know many registered listeners had trouble hearing the call or couldn't access it at all. Slingshot Insights, the research outfit sponsoring the call, ran into some unfortunate and unforeseen problems caused by too many people trying to dial in.
That's called a "good" bad problem, but still not ideal. Joe McCann, Slingshot's co-founder, tells me his engineers are working on a plan to prevent this from happening again. As I said in last week's Mailbag, I like Slingshot's business model -- providing affordable access to industry experts for smaller institutional and retail investors -- so I hope to work with McCann again on future research calls.
Here, then, is a summary of my call with Ratain, including a checklist of things to think about when a biotech or drug company announces clinical trial results:
Generally speaking, single-arm clinical trials are difficult to interpret because they lack a comparator arm. A single-arm study could produce credible evidence of drug activity in certain circumstances -- a study measuring response rate in deeply refractory cancer patients, for example. Be very skeptical, however, of single-arm studies with progression-free survival or overall survival endpoints. Likewise, the use of a historical control is suspect.
Ratain, conversely, very much likes randomized, controlled studies (preferably blinded as well), because the comparator arm supplies exactly the context a single-arm study lacks.
When the company's press release hits the tape, do these things:
Read the press release. The whole thing. If the headline of the press release states results are "positive," "outstanding" or uses some other superlative, assume there are problems hidden somewhere in the closing paragraphs until proven otherwise. Read the press release again.
The design of the clinical trial, including the number of patients enrolled, the type of patient enrolled and the endpoints (primary, secondary), should be explained clearly. There will be a paragraph discussing the study results. Look for these reassuring words/terms: intent-to-treat analysis, statistical significance, achieved the primary endpoint. More detail and more specific data are better than less.
Be wary of companies that describe "positive" clinical trial results using wishy-washy adjectives. A recent favorite was CymaBay (CBAY), which described responses to its cholesterol-lowering drug as "broad" and having "potential utility." It shouldn't surprise you to learn that CymaBay's drug actually performed quite badly.
Other red-flag words/terms to watch for: per protocol, retrospective analysis, responder analysis, subgroup(s), modified intent-to-treat, trend, grade five toxicity (that means a patient died).
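If you read a lot of press releases, you can automate the first pass. Here's a minimal sketch that scans press-release text for the reassuring and red-flag phrases above; the phrase lists and the sample text are my own illustrations, not an official taxonomy, so adjust them to taste:

```python
# Illustrative first-pass screen for press-release language.
# Phrase lists are examples drawn from the checklist above.
RED_FLAGS = [
    "per protocol", "retrospective analysis", "responder analysis",
    "subgroup", "modified intent-to-treat", "trend", "grade five toxicity",
]
REASSURING = [
    "intent-to-treat", "statistical significance", "achieved the primary endpoint",
]

def flag_phrases(text):
    """Return (reassuring, red_flag) phrases found in press-release text."""
    lower = text.lower()
    good = [p for p in REASSURING if p in lower]
    bad = [p for p in RED_FLAGS if p in lower]
    return good, bad

# Hypothetical press-release sentence:
sample = ("The drug showed a positive trend in a retrospective analysis "
          "of a pre-specified subgroup.")
good, bad = flag_phrases(sample)
print(good)  # []
print(bad)   # ['retrospective analysis', 'subgroup', 'trend']
```

A hit on the red-flag list doesn't prove the trial failed, and clean language doesn't prove it succeeded; the scan just tells you where to start reading more carefully.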
Were any changes made to the trial design, including endpoints? Did the company fully enroll the study as planned? Don't rely on the company to tell you. Compare the company's description of the trial with information listed on ClinicalTrials.gov. You'd be amazed how many times they don't match up.
Earlier this week, Oncolytics Biotech (ONCY) claimed positive results in a randomized study of ovarian cancer patients based on the fourth, secondary efficacy endpoint listed in the study's ClinicalTrials.gov description. The primary endpoint and all of the other secondary endpoints failed. Overall survival trended in the wrong direction, lower for patients treated with the Oncolytics Biotech drug.
Assuming the study was enrolled fully, can you track all the patients from beginning to end? I always pay attention to the "Ns" -- slang for the number of patients in each arm of the study. Look out for patients missing from analyses for inexplicable reasons. It's not unusual for patients to drop out of a study, but they still need to be accounted for in whatever efficacy analysis is being used. Patients who "disappear" are a red flag.
A lot of clinical trials fail. This is the bane of biotech. Kudos to companies that acknowledge failure openly and directly. Unfortunately, too many companies spin failed trials with heaping shovels of bullshit. I'm speaking, of course, of the classic data-mined subgroup analysis. You've read a variant of the following bamboozlement before:
Our drug didn't prolong survival compared to a placebo, except in a subset of left-handed patients who ate meatloaf three days prior to enrolling. This substantial evidence of efficacy in left-handed, meatloaf-eating patients will be studied further in our planned phase III study. (I'm writing this column while eating meatloaf leftovers, in case you're wondering.)
Don't fall for this trick, unless you enjoy funding the CEO's outrageous salary and padded expense account for another three years. But do eat meatloaf. It's delicious.
Not every subgroup analysis is fraudulent. But getting comfortable with the potential credibility of a subgroup analysis requires asking a lot more questions and research. The press release will only be the start of that journey.
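One reason for the skepticism is plain probability. If a drug does nothing at all, but you slice the trial into enough subgroups and test each one at the conventional p < 0.05 threshold, a "significant" result somewhere becomes likely by chance alone. A back-of-the-envelope calculation, assuming 20 independent subgroup tests for illustration:

```python
# Illustrative multiple-comparisons arithmetic. Assumes 20 independent
# subgroup tests of a drug with no true effect, each at alpha = 0.05.
n_subgroups = 20
alpha = 0.05

# Chance that at least one subgroup comes up "significant" by luck:
p_at_least_one = 1 - (1 - alpha) ** n_subgroups
print(round(p_at_least_one, 2))  # 0.64
```

In other words, under these assumptions the left-handed meatloaf eaters have roughly a two-in-three chance of showing up even when the drug is worthless. That's why a credible subgroup claim needs to be pre-specified, biologically plausible and replicated.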
This is an incomplete list, but it should get you started. I'll end with a depressing truth. You can work through this checklist and do a lot more in-depth research that validates your investment thesis. Despite all that work, you can still end up being totally wrong. This is why we love and curse biotech stocks.
Katherine B. writes, "Adam, I found your article on the doctors supporting Sarepta Therapeutics (SRPT) interesting, but I didn't get a sense if you think the letter they wrote will convince FDA to approve the drug [eteplirsen]. I think the letter will be persuasive, so I'm asking if you agree with me."
I don't know.
I can't see how the doctor letter defending the eteplirsen data hurts Sarepta's case in front of the Food and Drug Administration, but it may have a negligible effect on the ultimate approval (or rejection) decision.
The FDA should re-post briefing documents for the Sarepta advisory panel on April 21, two business days ahead of the April 25 panel. Look to see if the FDA includes the doctors' letter as an addendum or exhibit in its briefing document, as they requested. That could be a signal that the doctors' arguments are being taken seriously by the agency.
Of course, you and I will also be reading the FDA's clinical review of eteplirsen very closely to see if it incorporates updated eteplirsen data submitted by Sarepta. The FDA's initial eteplirsen review, disclosed in January, was quite damning and pretty much signaled the agency's intent to reject the drug. Will the FDA soften its criticism this time around? Is the agency writing an entirely new clinical review to include the updated eteplirsen data, or will it just tack on an addendum?
These are questions to think about as April 21 approaches.
Cautiously yes, as long as there aren't any nasty surprises uncovered when the Nuplazid briefing documents are released tomorrow.
I discussed Nuplazid and the April 29 advisory panel meeting here. I will be live-blogging the panel.
Adam Feuerstein writes regularly for TheStreet. In keeping with company editorial policy, he doesn't own or short individual stocks, although he owns stock in TheStreet. He also doesn't invest in hedge funds or other private investment partnerships. Feuerstein appreciates your feedback; click here to send him an email.