Spiking Neural Networks (SNNs) can be implemented with power-efficient digital as well as analog circuitry. However, in Resistive RAM (RRAM) based SNN accelerators, synapse weights programmed into the crossbar can deviate from their ideal values due to defects and programming errors, degrading inference accuracy. In addition, circuit nonidealities within analog spiking neurons that alter the neuron spiking rate (modeled as variations in the neuron firing threshold) can degrade SNN inference accuracy when the number of inference time steps (ITSteps) is set to the critical minimum that maximizes network throughput. We first develop a recursive linearized check that detects synapse weight errors with high sensitivity. Detected errors trigger a correction methodology that sets out-of-range synapse values to zero. To correct for the effects of firing-threshold variations, we develop a test methodology that calibrates the extent of such variations; the calibration result is then used to proportionally increase the inference time steps for chips with higher variation. Experiments on a variety of SNNs demonstrate the viability of the proposed resilience methods.
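For concreteness, the following minimal sketch illustrates the two correction ideas summarized above: zeroing synapse values detected as out of range, and scaling the inference time steps in proportion to the calibrated firing-threshold variation. It assumes NumPy arrays for the crossbar weights; the helper names `clip_out_of_range_weights` and `scale_itsteps`, and the specific range and variation parameters, are hypothetical and do not come from the paper.

```python
import numpy as np

# Illustrative sketch only (not the paper's implementation). Hypothetical
# helpers showing the two correction mechanisms described in the abstract.

def clip_out_of_range_weights(weights, w_min=-1.0, w_max=1.0):
    """Set synapse weights flagged as out of the programmable range to zero."""
    return np.where((weights < w_min) | (weights > w_max), 0.0, weights)

def scale_itsteps(base_itsteps, measured_threshold_std, nominal_threshold_std):
    """Proportionally increase inference time steps for a chip whose calibrated
    firing-threshold variation exceeds the nominal (design) value."""
    ratio = max(1.0, measured_threshold_std / nominal_threshold_std)
    return int(np.ceil(base_itsteps * ratio))

# Example: out-of-range weights are zeroed, and a chip with twice the nominal
# threshold variation runs with twice the baseline ITSteps.
weights = np.array([0.3, 1.7, -0.2, -2.5])
print(clip_out_of_range_weights(weights))  # [ 0.3  0.  -0.2  0. ]
print(scale_itsteps(32, measured_threshold_std=0.08, nominal_threshold_std=0.04))  # 64
```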