dx.doi.org/10.1109/ICONS62911.2024.00057

Preview meta tags from the dx.doi.org website.

Linked Hostnames: 2

Search Engine Appearance

Google

https://dx.doi.org/10.1109/ICONS62911.2024.00057

Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks

By some metrics, spiking neural networks (SNNs) are still far from performing as well as other artificial neural networks on continuous control tasks. Even though SNNs are efficient by design and structure, they lack many of the optimizations known from deep reinforcement learning (DeepRL) algorithms. Researchers have therefore combined SNNs with DeepRL algorithms to tackle this challenge. However, the question remains how scalable these DeepRL-based SNNs are in practice. In this paper, we introduce SpikeRL, a scalable and efficient framework for DeepRL-based SNNs for continuous control. In the SpikeRL framework, we first incorporate population encoding from the Population-coded Spiking Actor Network (PopSAN) method for our SNN model. Then, we implement the Message Passing Interface (MPI) through its widely used Python bindings, mpi4py, to achieve distributed training across models and environments. We further optimize model training by using mixed precision for parameter updates. Our findings demonstrate the scalability and efficiency potential of spiking reinforcement learning methods for continuous control environments.
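
The population-encoding step can be pictured concretely. Below is a minimal sketch of the Gaussian receptive-field scheme that PopSAN-style encoders use; POP_SIZE, the value range, and the encode helper are illustrative assumptions, not SpikeRL's actual code.

    # Minimal population-encoding sketch (PopSAN-style; details assumed):
    # each observation dimension activates a population of neurons whose
    # Gaussian receptive fields tile the observation range, and the
    # resulting activations are sampled into binary spikes.
    import numpy as np

    POP_SIZE = 10              # neurons per observation dimension (assumed)
    V_MIN, V_MAX = -1.0, 1.0   # assumed normalized observation range
    CENTERS = np.linspace(V_MIN, V_MAX, POP_SIZE)
    SIGMA = (V_MAX - V_MIN) / (POP_SIZE - 1)

    def encode(obs):
        """Map an observation of shape (D,) to spikes of shape (D, POP_SIZE)."""
        dist = obs[:, None] - CENTERS[None, :]
        act = np.exp(-0.5 * (dist / SIGMA) ** 2)   # receptive-field response
        return (np.random.rand(*act.shape) < act).astype(np.float32)

    spikes = encode(np.array([0.2, -0.7, 0.9]))    # -> (3, 10) binary array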

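The mpi4py-based distribution described in the abstract typically follows a data-parallel pattern: every rank steps its own environment instance, and gradients are averaged across ranks so all model replicas stay synchronized. A minimal sketch of that pattern, with NumPy arrays standing in for a real model's gradients:

    # Data-parallel gradient averaging with mpi4py (sketch; the gradient
    # arrays below are placeholders, not SpikeRL's actual training state).
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()

    def average_gradients(grads):
        """In-place average of per-rank gradients across all MPI ranks."""
        for g in grads:
            buf = np.empty_like(g)
            comm.Allreduce(g, buf, op=MPI.SUM)  # sum each gradient over ranks
            g[:] = buf / size                   # divide to get the mean

    # Each rank computes gradients from its own rollouts, then syncs:
    local_grads = [np.random.randn(4, 4), np.random.randn(4)]
    average_gradients(local_grads)
    # Launch with, e.g.: mpirun -n 4 python this_script.py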

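Mixed-precision parameter updates are commonly implemented with PyTorch's AMP utilities (autocast plus GradScaler). The sketch below shows that standard pattern with a stand-in network and loss; whether SpikeRL uses exactly this API is an assumption, and a CUDA device is required:

    # Standard mixed-precision update step (sketch; model/loss are stand-ins).
    import torch

    model = torch.nn.Linear(8, 2).cuda()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()

    x = torch.randn(32, 8, device="cuda")
    target = torch.randn(32, 2, device="cuda")

    opt.zero_grad()
    with torch.cuda.amp.autocast():     # forward pass runs in float16
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()       # scale loss to avoid fp16 underflow
    scaler.step(opt)                    # unscale grads, update params in fp32
    scaler.update()                     # adjust the loss scale for next step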

Bing and DuckDuckGo render the same title, URL, and description.

  • General Meta Tags (12)
    • title
      Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks | IEEE Conference Publication | IEEE Xplore
    • google-site-verification
      qibYCgIKpiVF_VVjPYutgStwKn-0-KBB6Gw4Fc57FZg
    • Description
      By some metrics, spiking neural networks (SNNs) are still far from performing equally well as some of the other artificial neural networks for continuous control…
    • Content-Type
      text/html; charset=utf-8
    • viewport
      width=device-width, initial-scale=1.0
  • Open Graph Meta Tags (3)
    • og:image
      https://ieeexplore.ieee.org/assets/img/ieee_logo_smedia_200X200.png
    • og:title
      Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks
    • og:description
      By some metrics, spiking neural networks (SNNs) are still far from performing equally well as some of the other artificial neural networks for continuous control tasks. …
  • Twitter Meta Tags (1)
    • twitter:card
      summary
  • Link Tags (9)
    • canonical
      https://ieeexplore.ieee.org/document/10766542/
    • icon
      /assets/img/favicon.ico
    • stylesheet
      https://ieeexplore.ieee.org/assets/css/osano-cookie-consent-xplore.css
    • stylesheet
      /assets/css/simplePassMeter.min.css?cv=20250701_00000
    • stylesheet
      /assets/dist/ng-new/styles.css?cv=20250701_00000

Links: 17