The focus of this study is a family of hybrid architectures for feed-forward multi-layer neural networks and the issues that arise in their design. The main objective in the design of this family has been to reduce hardware complexity, and hence make the implementation of larger networks for practical applications possible, through two main ideas: trading time for circuit complexity by means of a multiplexing scheme, and a modular structure that allows multi-chip realizations without a prohibitive number of interconnections. In this paper, we bring together the various forms of this architecture, which are currently scattered across the literature. After presenting the main points of its operation, we proceed to permutations and trade-offs, some of which have not previously appeared in accessible literature. We start by introducing the basic architecture. We then present modifications and discuss some I/O issues. Matching neural transfer characteristics is important to the performance of the system, and we address this problem with a set of second-order improvements. Another version of the architecture, with external weight memory, is introduced that allows interaction with a host computer, and finally, a pipelined version of the architecture is presented that improves system speed with a small increment in overall complexity.
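To make the time-versus-complexity trade concrete, the following minimal sketch (not the authors' circuit) evaluates one feed-forward layer with a single time-multiplexed multiply-accumulate (MAC) step per synapse, so hardware cost stays roughly constant while evaluation time grows with the number of inputs; the function name, the tanh transfer characteristic, and the example weights are illustrative assumptions only.

```python
import math

def shared_mac_layer(inputs, weights, biases):
    """Evaluate one layer using a single time-multiplexed MAC per neuron.

    Illustrative sketch: each (input, weight) pair is processed in its own
    'cycle' on one shared multiplier, instead of using one multiplier per
    synapse as in a fully parallel design.
    """
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        acc = bias
        # One shared MAC unit stepped over the inputs:
        # n_inputs cycles in time, but only one multiplier in hardware.
        for x, w in zip(inputs, neuron_weights):
            acc += x * w
        # Placeholder neuron transfer characteristic (assumed tanh).
        outputs.append(math.tanh(acc))
    return outputs

# Example: 3 inputs, 2 neurons -> 3 multiplexed cycles per neuron,
# versus 3 parallel multipliers per neuron in a fully parallel realization.
print(shared_mac_layer([0.5, -1.0, 0.25],
                       [[0.2, 0.4, -0.1], [-0.3, 0.1, 0.6]],
                       [0.0, 0.1]))
```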