SPX: Collaborative Research: FASTLEAP: FPGA based compact Deep Learning Platform
Basic Information
- Award Number: 2333009
- Principal Investigator:
- Amount: $848,700
- Host Institution:
- Institution Country: United States
- Grant Type: Standard Grant
- Fiscal Year: 2022
- Funding Country: United States
- Project Period: 2022-10-01 to 2024-11-30
- Status: Completed
- Source:
- Keywords:
Project Abstract
With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely adopted for their high accuracy, excellent scalability, and self-adaptiveness. Many applications employ DNNs as their core technology, such as face detection, speech recognition, and scene parsing. To meet the high accuracy requirements of various applications, DNN models are becoming deeper and larger, and are evolving at a fast pace. They are computation- and memory-intensive and pose significant challenges to the conventional Von Neumann architecture used in computing. The key problem addressed by the project is how to accelerate deep learning -- not only inference, but also training and model compression, which have not received enough attention in prior research. This endeavor has the potential to enable the design of fast and energy-efficient deep learning systems, whose applications are found in our daily lives -- ranging from autonomous driving, through mobile devices, to IoT systems -- thus benefiting society at large. The outcome of this project is FASTLEAP, a Field Programmable Gate Array (FPGA)-based platform for accelerating deep learning. The platform takes a dataset as input and outputs a model that is trained, pruned, and mapped onto the FPGA, optimized for fast inference. The project will utilize emerging FPGA technologies that have access to High Bandwidth Memory (HBM) and include floating-point DSP units. From a vertical perspective, FASTLEAP integrates innovations from multiple levels of the system stack, from algorithm and architecture down to efficient FPGA hardware implementation. From a horizontal perspective, it embraces systematic DNN model compression and the associated FPGA-based training, as well as FPGA-based inference acceleration of compressed DNN models. The platform will be delivered as a complete solution, with both a software tool chain and a hardware implementation, to ensure ease of use.
At the algorithm level of FASTLEAP, the proposed Alternating Direction Method of Multipliers for Neural Networks (ADMM-NN) framework will perform unified weight pruning and quantization, given the training data, target accuracy, and target FPGA platform characteristics (performance models, inter-accelerator communication). The training procedure in ADMM-NN is performed on a platform with multiple FPGA accelerators, dictated by architecture-level optimizations on communication and parallelism. Finally, the optimized FPGA inference design is generated from the trained, compressed DNN model, accounting for FPGA performance modeling. The project will address the following SPX research areas: 1) Algorithms: bridging the gap between deep learning developments in theory and their system implementations, cognizant of the platform's performance model. 2) Applications: scaling deep learning for domains such as image processing. 3) Architecture and Systems: automatic generation of deep learning designs on FPGAs, optimizing area, energy efficiency, latency, and throughput. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
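The pruning side of the ADMM-NN idea follows the classic ADMM splitting: minimize the training loss subject to a hard sparsity constraint by alternating an (approximate) gradient minimization, a Euclidean projection onto the sparse set, and a dual update. The NumPy sketch below illustrates that loop on a toy quadratic loss; the function names (`project_sparse`, `admm_prune`) and all hyperparameters are illustrative assumptions, not the project's actual ADMM-NN implementation.

```python
import numpy as np

def project_sparse(w, k):
    """Euclidean projection onto {vectors with at most k nonzeros}: keep the k largest magnitudes."""
    z = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    z[idx] = w[idx]
    return z

def admm_prune(w0, grad_loss, k, rho=1.0, lr=0.1, outer=50, inner=20):
    """ADMM weight pruning sketch: min f(w) s.t. w has <= k nonzeros.
    Split w = z and alternate:
      1) gradient steps on f(w) + rho/2 * ||w - z + u||^2  (approximate w-minimization)
      2) z-update: projection of w + u onto the sparse constraint set
      3) dual ascent on u."""
    w = w0.copy()
    z = project_sparse(w, k)
    u = np.zeros_like(w)
    for _ in range(outer):
        for _ in range(inner):
            g = grad_loss(w) + rho * (w - z + u)
            w -= lr * g
        z = project_sparse(w + u, k)   # z-update
        u += w - z                     # dual update
    return project_sparse(w, k)        # final hard projection to enforce the constraint

# Toy quadratic loss f(w) = 0.5 * ||w - t||^2, so grad f(w) = w - t
t = np.array([3.0, -0.2, 0.1, 2.5, -0.05])
w = admm_prune(np.zeros_like(t), lambda w: w - t, k=2)
print(w)  # the surviving weights concentrate on the two largest-magnitude targets
```

In ADMM-NN the same structure is applied per layer with the DNN training loss in place of the quadratic, and quantization is handled analogously by swapping the projection set from "k nonzeros" to a discrete level set.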
Project Outcomes
Journal articles (2)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Dynasparse: Accelerating GNN Inference through Dynamic Sparsity Exploitation
- DOI: 10.1109/ipdps54959.2023.00032
- Publication date: 2023-05
- Journal:
- Impact factor: 0
- Authors: Zhang, Bingyi; Prasanna, Viktor
- Corresponding author: Prasanna, Viktor
A Framework for Monte-Carlo Tree Search on CPU-FPGA Heterogeneous Platform via on-chip Dynamic Tree Management
- DOI: 10.1145/3543622.3573177
- Publication date: 2023-02
- Journal:
- Impact factor: 0
- Authors: Meng, Yuan; Kannan, Rajgopal; Prasanna, Viktor
- Corresponding author: Prasanna, Viktor
Other Publications by Xuehai Qian
GoSPA: An Energy-efficient High-performance Globally Optimized SParse Convolutional Neural Network Accelerator
- DOI: 10.1109/isca52012.2021.00090
- Publication date: 2021-06-01
- Journal:
- Impact factor: 0
- Authors: Chunhua Deng; Yang Sui; Siyu Liao; Xuehai Qian; Bo Yuan
- Corresponding author: Bo Yuan
DNNGuard: An Elastic Heterogeneous DNN Accelerator Architecture against Adversarial Attacks
- DOI: 10.1145/3373376.3378532
- Publication date: 2020-03-09
- Journal:
- Impact factor: 0
- Authors: Xingbin Wang; Rui Hou; Boyan Zhao; Fengkai Yuan; Jun Zhang; Dan Meng; Xuehai Qian
- Corresponding author: Xuehai Qian
Investigation on ablative process of CFRP laminates under laser irradiations
- DOI: 10.1016/j.optlastec.2024.110687
- Publication date: 2024-09-13
- Journal:
- Impact factor: 0
- Authors: Qingfeng Chai; Yongkang Luo; Xuehai Qian; Yu Zhang; Lv Zhao
- Corresponding author: Lv Zhao
A Case for Asymmetric Non-Volatile Memory Architecture
- DOI: 10.48550/arxiv.2210.05211
- Publication date: 2018-09-25
- Journal:
- Impact factor: 0
- Authors: Teng Ma; Mingxing Zhang; Kang Chen; Xuehai Qian; Yongwei Wu
- Corresponding author: Yongwei Wu
pLock: A Fast Lock for Architectures with Explicit Inter-core Message Passing
- DOI: 10.1145/3297858.3304030
- Publication date: 2019-04-04
- Journal:
- Impact factor: 0
- Authors: Xiongchao Tang; Jidong Zhai; Xuehai Qian; Wenguang Chen
- Corresponding author: Wenguang Chen
Other Grants by Xuehai Qian
CAREER: Algorithm-Centric High Performance Graph Processing
- Award Number: 2331038
- Fiscal Year: 2022
- Amount: $848,700
- Grant Type: Continuing Grant

SHF: Small: High Performance Graph Pattern Mining System and Architecture
- Award Number: 2333645
- Fiscal Year: 2022
- Amount: $848,700
- Grant Type: Standard Grant

SHF: Small: High Performance Graph Pattern Mining System and Architecture
- Award Number: 2127543
- Fiscal Year: 2021
- Amount: $848,700
- Grant Type: Standard Grant

SPX: Collaborative Research: FASTLEAP: FPGA based compact Deep Learning Platform
- Award Number: 1919289
- Fiscal Year: 2019
- Amount: $848,700
- Grant Type: Standard Grant

CAREER: Algorithm-Centric High Performance Graph Processing
- Award Number: 1750656
- Fiscal Year: 2018
- Amount: $848,700
- Grant Type: Continuing Grant

CSR: Small: Collaborative Research: GAMBIT: Efficient Graph Processing on a Memristor-based Embedded Computing Platform
- Award Number: 1717984
- Fiscal Year: 2017
- Amount: $848,700
- Grant Type: Standard Grant

CRII: SHF: Improving Programmability of GPGPU/NVRAM Integrated Systems with Holistic Architectural Support
- Award Number: 1657333
- Fiscal Year: 2017
- Amount: $848,700
- Grant Type: Standard Grant

SHF: Small: Accelerating Graph Processing with Vertically Integrated Programming Model, Runtime and Architecture
- Award Number: 1717754
- Fiscal Year: 2017
- Amount: $848,700
- Grant Type: Standard Grant

Student Travel Support for the 2017 International Conference on Architecture Support for Programming Languages and Operating Systems (ASPLOS)
- Award Number: 1720467
- Fiscal Year: 2017
- Amount: $848,700
- Grant Type: Standard Grant
Similar NSFC Grants

Research on the Dynamic Coupling of Inter-organizational Collaboration in Engineering Projects Based on the Heterogeneity of Transacting Parties
- Award Number: 72301024
- Award Year: 2023
- Amount: ¥300,000
- Grant Type: Young Scientists Fund

Research on Institutional Innovation for Strategic Purchasing by Medical Insurance Funds to Promote Value Co-creation in Telemedicine Collaboration Networks
- Award Number:
- Award Year: 2022
- Amount: ¥450,000
- Grant Type: General Program

Research on Key Technologies for Guaranteeing the Timeliness of Information Dissemination in Cooperative-Perception Vehicular Networks
- Award Number:
- Award Year: 2022
- Amount: ¥300,000
- Grant Type: Young Scientists Fund

Research on the Reliability of Cooperative NOMA Systems for 5G Ultra-High-Definition Mobile Video Transmission
- Award Number:
- Award Year: 2022
- Amount: ¥300,000
- Grant Type: Young Scientists Fund

Research on Human-Machine Collaboration-Confrontation Hybrid Intelligent Control Based on Autonomy Boundaries
- Award Number:
- Award Year: 2022
- Amount: ¥300,000
- Grant Type: Young Scientists Fund
Similar Overseas Grants

SPX: Collaborative Research: Scalable Neural Network Paradigms to Address Variability in Emerging Device based Platforms for Large Scale Neuromorphic Computing
- Award Number: 2401544
- Fiscal Year: 2023
- Amount: $848,700
- Grant Type: Standard Grant

SPX: Collaborative Research: Intelligent Communication Fabrics to Facilitate Extreme Scale Computing
- Award Number: 2412182
- Fiscal Year: 2023
- Amount: $848,700
- Grant Type: Standard Grant

SPX: Collaborative Research: Automated Synthesis of Extreme-Scale Computing Systems Using Non-Volatile Memory
- Award Number: 2408925
- Fiscal Year: 2023
- Amount: $848,700
- Grant Type: Standard Grant

SPX: Collaborative Research: NG4S: A Next-generation Geo-distributed Scalable Stateful Stream Processing System
- Award Number: 2202859
- Fiscal Year: 2022
- Amount: $848,700
- Grant Type: Standard Grant

SPX: Collaborative Research: Cross-stack Memory Optimizations for Boosting I/O Performance of Deep Learning HPC Applications
- Award Number: 2318628
- Fiscal Year: 2022
- Amount: $848,700
- Grant Type: Standard Grant