Many applications, such as healthcare and education, call for effective methods for constructing predictive models from high-dimensional time series data in which the relationships between variables can be complex and vary over time. In such settings, the underlying system undergoes a sequence of unobserved transitions among a finite set of hidden states. Furthermore, the relationships between the observed variables and their temporal dynamics may depend on the hidden state of the system. To further complicate matters, the hidden state sequences underlying the observed data from different individuals may not be aligned relative to a common frame of reference. Against this background, we consider the novel problem of jointly learning the state-dependent inter-variable relationships and the pattern of transitions between hidden states from multi-dimensional time series data. To solve this problem, we introduce the State-Regularized Vector Autoregressive Model (SrVARM), which combines a state-regularized recurrent neural network, which learns the dynamics of transitions between discrete hidden states, with an augmented autoregressive model, which captures the inter-variable dependencies in each state using a state-dependent directed acyclic graph (DAG). We propose an efficient algorithm for training SrVARM that leverages a recently introduced reformulation of the combinatorial problem of optimizing a DAG structure with respect to a scoring function as a continuous optimization problem. We report results of extensive experiments on simulated data as well as a real-world benchmark, showing that SrVARM outperforms state-of-the-art baselines in recovering the unobserved state transitions and in discovering the state-dependent relationships among variables.
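The continuous reformulation of DAG-structure search that the abstract refers to is, in the NOTEARS line of work, an acyclicity penalty h(W) = tr(e^{W∘W}) − d that equals zero exactly when the weighted adjacency matrix W encodes a DAG, letting the structure be optimized by smooth methods. A minimal sketch of that penalty (the function name and the use of SciPy are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.linalg import expm


def notears_acyclicity(W: np.ndarray) -> float:
    """Smooth acyclicity score h(W) = tr(e^{W o W}) - d.

    h(W) == 0 iff W is the weighted adjacency matrix of a DAG;
    h(W) > 0 quantifies how strongly W's weighted cycles violate
    acyclicity, so it can be used as an equality constraint or
    penalty in continuous structure learning.
    """
    d = W.shape[0]
    # W * W is the elementwise square, so the trace of its matrix
    # exponential counts weighted closed walks of every length.
    return float(np.trace(expm(W * W)) - d)


# A 2-node DAG (edge 0 -> 1) scores zero; a 2-cycle scores positive.
W_dag = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
W_cyc = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
print(notears_acyclicity(W_dag))  # ~0.0
print(notears_acyclicity(W_cyc))  # > 0
```

In the continuous formulation, this penalty is typically enforced with an augmented Lagrangian while minimizing the scoring function over W.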