Efficacy and safety in randomized controlled clinical trials are typically evaluated by comparing an active (treatment) group with a control group. A limitation of this comparison is that the treatment effect may vary across subjects, so the average effect can mask heterogeneity. Statistical approaches have therefore been developed to identify subgroups in which the treatment effect is larger.
Threshold-based methods, such as CART (Classification and Regression Trees; Breiman et al., 1984), have been applied to subgroup identification. Dusseldorp & Van Mechelen (2013) proposed the QUINT method, which uses Cohen's effect size as its evaluation criterion. However, tree-structured methods partition subjects into subgroups with similar treatment effects and are therefore not necessarily well suited to identifying responders.
The present study instead focuses on the PRIM method (Patient Rule Induction Method; Friedman & Fisher, 1999). Previously reported PRIM-based methods for survival data (SPRIM; Kehl & Ulm, 2006) assume proportional hazards between the active (treatment) and control groups.
We consider a PRIM method for survival data that does not require the proportional hazards assumption. It uses the restricted mean survival time (RMST) as the measure of treatment effect and searches for subgroups that maximize (or minimize) the difference in treatment effect between groups. The effectiveness of this method is illustrated through a practical example, and a small-scale simulation suggests that it identifies more appropriate subgroups than the SPRIM (Kehl & Ulm, 2006) and SIDES (Lipkovich et al., 2011) methods.
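The RMST-based evaluation of treatment effect described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it computes a Kaplan-Meier curve per arm and takes the between-arm difference in the area under the curve up to a horizon tau, which is the kind of criterion a PRIM-style search could score candidate subgroups (boxes) with. All function names and the toy data are assumptions introduced here.

```python
# Illustrative sketch (not the paper's code): RMST difference between arms
# as a subgroup-evaluation criterion. Only NumPy is used.
import numpy as np

def km_survival(time, event):
    """Kaplan-Meier estimate; returns distinct event times and S(t) at them."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    uniq = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in uniq:
        at_risk = np.sum(time >= t)               # subjects still at risk at t
        d = np.sum((time == t) & (event == 1))    # events occurring at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return uniq, np.array(surv)

def rmst(time, event, tau):
    """Restricted mean survival time: area under the KM curve up to tau."""
    t, s = km_survival(time, event)
    grid = np.concatenate(([0.0], t[t < tau], [tau]))  # step-function breakpoints
    vals = np.concatenate(([1.0], s[t < tau]))         # S(t) on each interval
    return np.sum(vals * np.diff(grid))

def rmst_difference(time, event, arm, tau):
    """Treatment effect in a (sub)group: RMST(treated) - RMST(control)."""
    return (rmst(time[arm == 1], event[arm == 1], tau)
            - rmst(time[arm == 0], event[arm == 0], tau))
```

With uncensored data the RMST reduces to the mean of min(T, tau), which gives a quick sanity check; a PRIM-type procedure would repeatedly peel covariate regions and re-evaluate `rmst_difference` on the remaining subjects.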