Combinatorial Optimization (CO) plays a crucial role in addressing many significant problems, among them the challenging Maximum Independent Set (MIS) problem. In light of recent advances in deep learning, efforts have been directed towards data-driven learning approaches, typically rooted in supervised learning and reinforcement learning, to tackle the NP-hard MIS problem. However, these approaches rely on labeled datasets, exhibit weak generalization, and often depend on problem-specific heuristics. Recently, ReLU-based dataless neural networks were introduced to address combinatorial optimization problems. This paper introduces a novel dataless quadratic neural network formulation, featuring a continuous quadratic relaxation for the MIS problem. Notably, our method eliminates the need for training data by treating the given MIS instance as a trainable entity. More specifically, the graph structure and constraints of the MIS instance are used to define the structure and parameters of the neural network such that training it on a fixed input provides a solution to the problem, thereby setting it apart from traditional supervised or reinforcement learning approaches. By employing a gradient-based optimization algorithm such as Adam and leveraging an efficient off-the-shelf GPU parallel implementation, our straightforward yet effective approach demonstrates competitive or superior performance compared to state-of-the-art learning-based methods. Another significant advantage is that, unlike exact and heuristic solvers, the running time of our method scales only with the number of nodes in the graph, not the number of edges.
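To make the high-level description above concrete, the following is a minimal PyTorch sketch of a dataless quadratic formulation for MIS, assuming the common relaxation min over x in [0,1]^n of -1^T x + (gamma/2) x^T A x, where A is the adjacency matrix of the instance. The function name, the penalty weight gamma, the sigmoid reparametrization, and the threshold-and-repair rounding are illustrative assumptions and may differ from the paper's exact construction.

```python
import torch

def mis_quadratic_relaxation(adj, gamma=2.0, steps=3000, lr=0.1, seed=0):
    """Minimize f(x) = -sum(x) + (gamma/2) * x^T A x over x in [0, 1]^n with Adam,
    then round and repair. gamma, the step count, and the rounding scheme are
    assumptions for this sketch, not necessarily the paper's exact choices."""
    torch.manual_seed(seed)
    n = adj.shape[0]
    # The graph instance itself defines the "network": the only trainable object
    # is one logit per node; a sigmoid keeps the relaxed assignment inside [0, 1].
    theta = torch.nn.Parameter(0.01 * torch.randn(n))
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = torch.sigmoid(theta)
        # Reward selecting nodes; penalize selecting both endpoints of any edge.
        loss = -x.sum() + 0.5 * gamma * x @ (adj @ x)
        loss.backward()
        opt.step()
    # Threshold the relaxed solution, then drop one endpoint of any violated edge.
    x = (torch.sigmoid(theta) > 0.5).float()
    for i, j in (adj.triu(1) > 0).nonzero():
        if x[i] == 1 and x[j] == 1:
            x[j] = 0.0
    return x

# Toy usage on a 4-cycle (edges 0-1, 1-2, 2-3, 3-0); a maximum independent set has size 2.
A = torch.tensor([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=torch.float32)
x = mis_quadratic_relaxation(A)
print("independent set:", x.nonzero().flatten().tolist(), "size:", int(x.sum().item()))
```

In this sketch the only trainable parameters are the n node logits, and each Adam step reduces to dense matrix-vector products that parallelize readily on a GPU, which is consistent with the abstract's point that the running time depends on the number of nodes rather than the number of edges.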