Collaborative Online Optimization for Efficient Model-Based Learning

Basic Information

Project Abstract

One of the grand challenges in Artificial Intelligence (AI) and Machine Learning (ML) is building intelligent systems that can learn from data in real time. Learning from streaming data calls for novel approaches to online optimization and prediction. Current methods assume sequential availability of gradients (or losses), which poses a practical hurdle to implementation. We propose two approaches that address this gap using model-based learning. These approaches are aimed at exploiting, respectively, a distributed computing architecture (to divide the required computational effort) and a communications network (to efficiently aggregate disparate data). The collaborative online optimization algorithms and theoretical extensions introduced in this work have a broad range of application domains, such as speech recognition and computer vision, autonomous vehicles, transportation, neuroscience, and business analytics.

Most classical ML algorithms have been developed under the assumption that data sets are already available in batch form. Transitioning from offline to online learning faces a major practical hurdle in many application domains where the closed form of the objective function is unknown to the learner. When dealing with streaming data, this black-box property leads to a natural trade-off between delays (due to data or computation) and the speed and accuracy with which a model can be identified. A distributed computing architecture provides a way to reduce delays and obtain reasonably accurate models on the necessary timescale. We propose to study fast distributed asynchronous stochastic gradient approaches for online learning in which the coordination between multiple workers (processors) interacting asynchronously is carefully engineered. Improved accuracy and speed may also be jointly achieved by a network of learners receiving different streams of data. Thus, we also consider decentralized models of online learning with multiple learning agents that communicate over a network. With the ability to share predictions or estimates with other agents in the network, the collective can aggregate disparate information in a way that outperforms (in terms of accuracy and speed) any individually identified model. Finally, we consider the case in which data streams have graph structure. Streaming graph-structured data arises in diverse application domains such as transportation networks, social networks, and other networks found in biology, where the graph captures the correlation in the data. The proposal includes the development of a new graduate course aimed at providing engineering students with working knowledge of state-of-the-art distributed online optimization techniques.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
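To make the decentralized-learning idea above concrete, the following is a minimal illustrative sketch, not the project's algorithms: each agent mixes its neighbors' estimates through a fixed doubly stochastic matrix W and then takes a local online gradient step on its own streaming least-squares loss. The ring topology, the synthetic data model, and the constant step size eta are assumptions made only for this example.

```python
import numpy as np

# Illustrative sketch of decentralized online learning (consensus + local
# gradient step). Assumptions for the example: a ring network of agents, a
# fixed doubly stochastic mixing matrix W, synthetic streaming least-squares
# losses f_{i,t}(x) = 0.5 * (a_{i,t}^T x - b_{i,t})^2, and a constant step.

rng = np.random.default_rng(0)
n_agents, dim, T, eta = 8, 5, 500, 0.05

# Doubly stochastic mixing matrix for a ring: each agent averages itself and
# its two neighbors with equal weight 1/3.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1.0 / 3.0

x_true = rng.normal(size=dim)     # ground-truth model generating the streams
X = np.zeros((n_agents, dim))     # each row is one agent's current estimate

for t in range(T):
    # Each agent observes its own data point (disparate streams).
    A = rng.normal(size=(n_agents, dim))
    b = A @ x_true + 0.1 * rng.normal(size=n_agents)

    # Local gradient of each agent's instantaneous least-squares loss.
    grads = (np.sum(A * X, axis=1) - b)[:, None] * A

    # Consensus step (mix neighbor estimates), then local gradient step.
    X = W @ X - eta * grads

print("mean estimation error:", np.linalg.norm(X - x_true, axis=1).mean())
```

In this toy run, every agent's estimate approaches the common ground-truth model even though each agent observes only its own noisy stream, illustrating how sharing estimates over a network lets the collective outperform a model identified from any single stream.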

Project Outcomes

Journal articles (8)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
On Online Optimization: Dynamic Regret Analysis of Strongly Convex and Smooth Problems
  • DOI:
    10.1609/aaai.v35i8.16858
  • Publication date:
    2021-05
  • Journal:
  • Impact factor:
    0
  • Authors:
    Ting-Jui Chang;Shahin Shahrampour
  • Corresponding author:
    Ting-Jui Chang;Shahin Shahrampour
Distributed Networked Real-Time Learning
  • DOI:
    10.1109/tcns.2020.3029992
  • Publication date:
    2020-09
  • Journal:
  • Impact factor:
    4.2
  • Authors:
    Alfredo García;Luochao Wang;Jeff Huang;Lingzhou Hong
  • Corresponding author:
    Alfredo García;Luochao Wang;Jeff Huang;Lingzhou Hong
Decentralized Riemannian Gradient Descent on the Stiefel Manifold
  • DOI:
  • Publication date:
    2021-02
  • Journal:
  • Impact factor:
    0
  • Authors:
    Shixiang Chen;Alfredo García;Mingyi Hong;Shahin Shahrampour
  • Corresponding author:
    Shixiang Chen;Alfredo García;Mingyi Hong;Shahin Shahrampour
Distributed Online Linear Quadratic Control for Linear Time-invariant Systems
  • DOI:
    10.23919/acc50511.2021.9483391
  • Publication date:
    2020-09
  • Journal:
  • Impact factor:
    0
  • Authors:
    Ting-Jui Chang;Shahin Shahrampour
  • Corresponding author:
    Ting-Jui Chang;Shahin Shahrampour
Distributed Mirror Descent With Integral Feedback: Asymptotic Convergence Analysis of Continuous-Time Dynamics
  • DOI:
    10.1109/lcsys.2020.3040934
  • Publication date:
    2020-11
  • Journal:
  • Impact factor:
    3
  • Authors:
    Youbang Sun;Shahin Shahrampour
  • Corresponding author:
    Youbang Sun;Shahin Shahrampour

Other Publications by Shahin Shahrampour

Switching to learn
  • DOI:
    10.1109/acc.2015.7171178
  • Publication date:
    2015
  • Journal:
  • Impact factor:
    0
  • Authors:
    Shahin Shahrampour;M. Amin Rahimian;A. Jadbabaie
  • Corresponding author:
    A. Jadbabaie
Tracking Dynamic Gaussian Density with a Theoretically Optimal Sliding Window Approach
  • DOI:
  • Publication date:
    2024
  • Journal:
  • Impact factor:
    0
  • Authors:
    Yinsong Wang;Yu Ding;Shahin Shahrampour
  • Corresponding author:
    Shahin Shahrampour
On Optimal Generalizability in Parametric Learning
N-Dimensional Distributed Network Localization with Noisy Range Measurements and Arbitrary Anchor Placement
  • DOI:
    10.23919/acc.2019.8814820
  • Publication date:
    2019
  • Journal:
  • Impact factor:
    0
  • Authors:
    P. P. V. Tecchio;Nikolay A. Atanasov;Shahin Shahrampour;George Pappas
  • Corresponding author:
    George Pappas
Regret Analysis of Distributed Online Control for LTI Systems with Adversarial Disturbances
  • DOI:
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Ting-Jui Chang;Shahin Shahrampour
  • Corresponding author:
    Shahin Shahrampour

Other Grants by Shahin Shahrampour

Collaborative Research: Consensus and Distributed Optimization in Non-Convex Environments with Applications to Networked Machine Learning
  • Award number:
    2240788
  • Fiscal year:
    2023
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant
Collaborative Online Optimization for Efficient Model-Based Learning
  • Award number:
    2136206
  • Fiscal year:
    2021
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant

Similar NSFC (National Natural Science Foundation of China) Grants

Design and Optimization of Sustainable Delivery Systems for Online Retail
  • Award number:
    72310107001
  • Approval year:
    2023
  • Funding amount:
    CNY 1.9 million
  • Project type:
    International (Regional) Cooperation and Exchange Program
Online Incentives and Fairness Optimization for Federated Learning in Edge Intelligence
  • Award number:
    62372343
  • Approval year:
    2023
  • Funding amount:
    CNY 500,000
  • Project type:
    General Program
Online Product Assortment Optimization Based on Nonparametric Choice Models
  • Award number:
    72371255
  • Approval year:
    2023
  • Funding amount:
    CNY 410,000
  • Project type:
    General Program
Online Energy-Efficiency Optimization of GPU Clusters Based on Dynamic Voltage and Frequency Scaling
  • Award number:
    62302126
  • Approval year:
    2023
  • Funding amount:
    CNY 300,000
  • Project type:
    Young Scientists Fund
Control of T-S Fuzzy Systems Based on Online Optimization of Membership Functions
  • Award number:
  • Approval year:
    2022
  • Funding amount:
    CNY 540,000
  • Project type:
    General Program

Similar Overseas Grants

Collaborative Research: NeTS: Medium: Black-box Optimization of White-box Networks: Online Learning for Autonomous Resource Management in NextG Wireless Networks
  • Award number:
    2312835
  • Fiscal year:
    2023
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant
Collaborative Research: NeTS: Medium: Black-box Optimization of White-box Networks: Online Learning for Autonomous Resource Management in NextG Wireless Networks
  • Award number:
    2312836
  • Fiscal year:
    2023
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant
Collaborative Research: NeTS: Medium: Black-box Optimization of White-box Networks: Online Learning for Autonomous Resource Management in NextG Wireless Networks
  • Award number:
    2312834
  • Fiscal year:
    2023
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant
Collaborative Research: NeTS: Medium: Black-box Optimization of White-box Networks: Online Learning for Autonomous Resource Management in NextG Wireless Networks
  • Award number:
    2312833
  • Fiscal year:
    2023
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant
Collaborative Research: CPS Medium: Learning through the Air: Cross-Layer UAV Orchestration for Online Federated Optimization
  • Award number:
    2313110
  • Fiscal year:
    2023
  • Funding amount:
    $500,000
  • Project type:
    Standard Grant