Rapid advances in modern technology have allowed scientists to collect data of unprecedented size and complexity. This is particularly the case in genomics applications. One type of statistical problem in such applications is concerned with modeling an output variable as a function of a small subset of a large number of features, based on relatively small samples that may, moreover, come from multiple subpopulations. As such, selecting the correct predictive features (variables) for each subpopulation is key. To address this issue, we consider the problem of feature selection in finite mixture of sparse normal linear (FMSL) models in large feature spaces. We propose a two-stage procedure to overcome the computational difficulties and large false discovery rates caused by the large model space. First, to deal with the curse of dimensionality, a likelihood-based boosting procedure is designed to effectively reduce the number of candidate features. This is the key thrust of our new method. The greatly reduced set of features is then subjected to a sparsity-inducing procedure via a penalized likelihood method. A novel scheme is also proposed for the difficult problem of finding good starting points for the expectation-maximization estimation of the mixture parameters. We use an extended Bayesian information criterion to determine the final FMSL model. Simulation results indicate that the procedure is successful in selecting the significant features without including a large number of insignificant ones. A real data example on gene transcription regulation is also presented.
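The screen-then-select idea behind the two-stage procedure can be illustrated with a minimal sketch. This is not the authors' FMSL boosting: for brevity it ignores the mixture structure and uses plain componentwise L2-boosting as the screening stage, followed by a least-squares refit on the screened features in place of the penalized likelihood stage; all names, the step size `nu`, and the step count are illustrative assumptions.

```python
import numpy as np

# Simulated sparse linear model: n samples, p >> n candidate features,
# only three features carry signal (indices and coefficients are made up).
rng = np.random.default_rng(0)
n, p = 120, 300
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[3, 47, 190]] = [2.0, -1.5, 1.0]
y = X @ beta + 0.5 * rng.standard_normal(n)

def boosting_screen(X, y, n_steps=50, nu=0.1):
    """Stage 1 sketch: componentwise L2-boosting.

    At each step, fit every single feature to the current residual,
    keep the one that reduces the residual sum of squares most, and
    shrink its contribution by the step size nu. The union of features
    ever picked is the greatly reduced candidate set.
    """
    resid = y - y.mean()
    selected = set()
    for _ in range(n_steps):
        # univariate least-squares coefficient for every feature
        coefs = X.T @ resid / (X ** 2).sum(axis=0)
        # residual sum of squares after each candidate's full update
        scores = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(np.argmin(scores))
        resid = resid - nu * coefs[j] * X[:, j]
        selected.add(j)
    return sorted(selected)

screened = boosting_screen(X, y)

# Stage 2 sketch: refit on the screened set only (a penalized
# likelihood method would be used here in the actual procedure).
Xs = X[:, screened]
coef, *_ = np.linalg.lstsq(Xs, y - y.mean(), rcond=None)
```

Because the screening stage only ever examines one feature at a time, its cost grows linearly in p, which is what makes the subsequent sparsity-inducing fit feasible on the much smaller screened set.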