Optical and hybrid convolutional neural networks (CNNs) have recently attracted increasing interest for low-latency, low-power image classification and computer vision tasks. However, implementing optical nonlinearity is challenging, and omitting the nonlinear layers of a standard CNN comes at the cost of a significant reduction in accuracy. In this work, we use knowledge distillation to compress a modified AlexNet into a single linear convolutional layer and an electronic backend (two fully connected layers). We obtain performance comparable to that of a purely electronic CNN with five convolutional layers and three fully connected layers. We implement the convolution optically by engineering the point spread function of an inverse-designed meta-optic. Using this hybrid approach, we estimate a reduction in multiply-accumulate operations from 17M in the conventional electronic modified AlexNet to only 86K in the hybrid compressed network enabled by the optical frontend. This constitutes over two orders of magnitude reduction in latency and power consumption. Furthermore, we experimentally demonstrate that the classification accuracy of the system exceeds 93% on the MNIST dataset.
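
The abstract describes a student network consisting of one linear (activation-free) convolution, implemented optically, followed by two electronic fully connected layers, trained by knowledge distillation from a larger teacher. Below is a minimal PyTorch sketch of such a student and a standard soft-target distillation loss. The kernel size, stride, channel count, hidden width, MNIST input size (28x28), and the temperature/weighting of the loss are illustrative assumptions, not values taken from the paper; the sizes are chosen only so that the electronic backend's multiply-accumulate count is of the same order as the quoted 86K.

```python
# Minimal sketch of the compressed student network and a generic knowledge
# distillation objective. All layer dimensions and hyperparameters below are
# assumptions for illustration; they are not the paper's actual design values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompressedStudent(nn.Module):
    """Single linear convolution (the part realized optically via the
    meta-optic's point spread function) followed by an electronic backend
    of two fully connected layers."""

    def __init__(self, num_classes: int = 10, channels: int = 4, hidden: int = 100):
        super().__init__()
        # Linear convolution with no activation, so it can be mapped onto an
        # incoherent optical convolution. Stride/channels are assumptions.
        self.conv = nn.Conv2d(1, channels, kernel_size=7, stride=2, padding=3, bias=False)
        # 28x28 MNIST input, stride 2 -> 14x14 feature maps per channel.
        self.fc1 = nn.Linear(channels * 14 * 14, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv(x)            # optical frontend: linear, no nonlinearity
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))     # electronic backend
        return self.fc2(x)


def distillation_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.7):
    """Generic soft-target distillation: KL divergence between the softened
    teacher and student outputs, mixed with hard-label cross entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

With the assumed sizes, the electronic backend performs roughly 784 x 100 + 100 x 10 ≈ 79K multiply-accumulate operations, consistent in order of magnitude with the 86K figure, while the convolution itself contributes no electronic MACs because it is carried out by the meta-optic.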