
A family of hybrid neural networks


Basic Information

DOI:
10.1109/mwscas.1994.519281
Publication date:
1994
Journal:
Proceedings of 1994 37th Midwest Symposium on Circuits and Systems
Impact factor:
--
Corresponding author:
M. Shridhar
CAS division:
--
Document type:
--
Authors: Aria Nosratinia; N. Yazdi; M. Ahmadi; M. Shridhar
Research area: --
MeSH terms: --
Keywords: --
Source link: PubMed details page

Abstract

The focus of this study is on a family of hybrid architectures for feed-forward multi-layer neural networks and issues that arise in their design. The main objective in the design of this family has been to reduce the complexity of hardware, and hence make possible the implementation of larger networks for practical applications, by two main ideas: trading time for circuit complexity by a multiplexing scheme and a modular characteristic that allows multi-chip realizations without a prohibitive number of interconnections. In this paper, we propose to bring the various forms of this architecture together, which are at this time scattered in the literature. After presenting the main points in its operation, we will proceed to permutations and trade-offs, some of which have not been published in accessible literature so far. We start with the introduction of the basic architecture. We then present modifications and discuss some I/O issues. Matching neural transfer characteristics is important to the performance of the system and we address this problem with a set of second order improvements. Another version of the architecture, with external weight memory, is introduced which allows interaction with a host computer, and finally, a pipelined version of the architecture is presented that improves system speed with a small increment in overall complexity.
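The core hardware idea in the abstract is trading time for circuit complexity: instead of one multiplier per synapse, a single multiply-accumulate (MAC) unit is time-multiplexed over all connections of a layer. The sketch below is a minimal software analogue of that idea, not the paper's actual circuit; the function name, the sigmoid transfer function, and all variable names are illustrative assumptions.

```python
# Minimal sketch (not from the paper) of time-multiplexing one shared
# MAC unit over every synapse of a feed-forward layer: hardware cost
# stays constant while evaluation time grows with the connection count.

import math

def shared_mac_layer(inputs, weights, biases):
    """Evaluate one feed-forward layer using a single shared accumulator.

    inputs  : activations from the previous layer
    weights : weights[j][i] is the weight from input i to output neuron j
    biases  : one bias per output neuron
    """
    outputs = []
    for j, row in enumerate(weights):
        acc = biases[j]                 # the lone accumulator register
        for i, x in enumerate(inputs):  # one MAC operation per time step
            acc += row[i] * x
        outputs.append(1.0 / (1.0 + math.exp(-acc)))  # sigmoid transfer
    return outputs

if __name__ == "__main__":
    x = [0.5, -1.0, 0.25]
    w = [[0.2, -0.4, 0.1],
         [0.7, 0.3, -0.6]]
    b = [0.0, 0.1]
    print(shared_mac_layer(x, w, b))
```

In this toy analogue, the inner loop plays the role of successive clock cycles on one multiplier, so per-layer latency scales with (inputs × outputs) while the "circuit" needs only a single MAC; the pipelined variant mentioned in the abstract would overlap these steps across layers to recover speed at a small complexity cost.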
References (1)
Cited by (1)
Learning internal representations by error propagation
DOI:
10.7551/mitpress/5236.001.0001
Publication date:
1985-01-01
Journal:
Tech. Rep
Impact factor:
0
Authors:
Rumelhart, D. E.; Hinton, G. E.; Williams, R. J.
Corresponding author:
Williams, R. J.


M. Shridhar
Corresponding address:
--
Affiliation:
--
Email address:
--