
Near-optimal reinforcement learning framework for energy-aware sensor communications

Basic Information

DOI:
10.1109/jsac.2005.843547
Publication date:
2005-04-01
Impact factor:
16.4
Corresponding author:
Liu, KJR
CAS journal division:
Computer Science, Q1
Document type:
Article; Proceedings Paper
Authors: Pandana, C; Liu, KJR
Research area: --
MeSH terms: --
Keywords: --

Abstract

We consider the problem of maximizing average throughput per total consumed energy in packetized sensor communications. Our study results in a near-optimal transmission strategy that chooses the optimal modulation level and transmit power while adapting to the incoming traffic rate, buffer condition, and channel condition. We investigate the point-to-point and multinode communication scenarios. Many previous solutions require the state transition probability, which may be hard to obtain in a practical situation. Therefore, we are motivated to propose and utilize a class of learning algorithms [called reinforcement learning (RL)] to obtain the near-optimal policy in point-to-point communication and a good transmission strategy in the multinode scenario. For comparison purposes, we develop stochastic models to obtain the optimal strategy in point-to-point communication. We show that the learned policy is close to the optimal policy. We further extend the algorithm to solve the optimization problem in a multinode scenario via independent learning. We compare the learned policy to a simple policy, where the agent chooses the highest possible modulation and selects the transmit power that achieves a predefined signal-to-interference ratio (SIR) given that particular modulation. The proposed learning algorithm achieves more than twice the throughput per energy of the simple policy, particularly in the high packet-arrival regime. Besides its good performance, the RL algorithm provides a simple, systematic, self-organized, and distributed way to decide the transmission strategy.
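The abstract describes an RL agent that observes the buffer and channel state and picks a (modulation, power) pair to maximize throughput per unit energy. The following is a minimal illustrative sketch of such an agent using tabular Q-learning with epsilon-greedy exploration; the state/action discretization, reward model, and toy environment dynamics here are hypothetical simplifications, not the paper's actual formulation.

```python
import random

# Hypothetical discretizations (not from the paper):
BUFFER_LEVELS = range(4)          # discretized buffer occupancy
CHANNEL_STATES = range(3)         # discretized channel quality
MODULATIONS = [1, 2, 4]           # bits per symbol (e.g., BPSK/QPSK/16-QAM)
POWERS = [1.0, 2.0]               # transmit power levels (arbitrary units)
ACTIONS = [(m, p) for m in MODULATIONS for p in POWERS]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

# Q-table indexed by (buffer, channel) state, one value per action.
Q = {(b, c): [0.0] * len(ACTIONS) for b in BUFFER_LEVELS for c in CHANNEL_STATES}

def choose_action(state, rng):
    """Epsilon-greedy selection over (modulation, power) pairs."""
    if rng.random() < EPSILON:
        return rng.randrange(len(ACTIONS))
    qs = Q[state]
    return qs.index(max(qs))

def step(state, action_idx, rng):
    """Toy environment: reward is successfully delivered bits per unit energy.

    Higher power and a better channel raise the success probability; higher
    modulation lowers it (a hypothetical model, not the paper's)."""
    modulation, power = ACTIONS[action_idx]
    _, channel = state
    p_success = min(1.0, 0.3 * power * (channel + 1) / modulation)
    sent = modulation if rng.random() < p_success else 0
    reward = sent / power                      # throughput per consumed energy
    next_state = (rng.randrange(4), rng.randrange(3))  # random traffic/fading
    return reward, next_state

def q_update(state, action_idx, reward, next_state):
    """Standard Q-learning update toward the bootstrapped target."""
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action_idx] += ALPHA * (target - Q[state][action_idx])

rng = random.Random(0)
state = (0, 0)
for _ in range(20_000):
    a = choose_action(state, rng)
    r, nxt = step(state, a, rng)
    q_update(state, a, r, nxt)
    state = nxt
```

Because the update needs only sampled transitions, no state-transition probabilities are required, which is exactly the practical advantage the abstract highlights over model-based solutions.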
References (15)
Cited by (0)

