dx.doi.org/10.1109/ICONS62911.2024.00057
Preview meta tags from the dx.doi.org website.
Linked Hostnames (2)
Thumbnail

Search Engine Appearance
Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks
By some metrics, spiking neural networks (SNNs) are still far from performing equally well as some of the other artificial neural networks for continuous control tasks. Even though SNNs are efficient by their design and structure, they lack many of the optimizations known from deep reinforcement learning (DeepRL) algorithms. Hence, researchers have combined SNNs with DeepRL algorithms to tackle this challenge. However, the question remains as to how scalable these DeepRL-based SNNs are in practice. In this paper, we introduce SpikeRL, a scalable and efficient framework for DeepRL-based SNNs for continuous control. In the SpikeRL framework, we first incorporate population encoding from the Population-coded Spiking Actor Network (PopSAN) method for our SNN model. Then, we implement Message Passing Interface (MPI) through its widely used Python bindings in mpi4py to achieve distributed training across models and environments. We further optimize our model training by using mixed precision for parameter updates. Our research findings demonstrate the scalability and efficiency potential of spiking reinforcement learning methods for continuous control environments.
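The population encoding the abstract refers to maps each continuous state dimension onto a group of neurons with Gaussian receptive fields, whose graded responses then drive stochastic spiking. The sketch below illustrates that general idea in numpy; it is an assumption-laden toy, not the paper's or PopSAN's actual implementation, and the function names, the receptive-field range, and the Bernoulli spike generation are all illustrative choices.

```python
import numpy as np

def population_encode(state, neurons_per_dim=10, mean_range=(-1.0, 1.0), std=0.15):
    """Encode a continuous state vector as per-neuron stimulation strengths.

    Each state dimension gets `neurons_per_dim` neurons whose preferred values
    (Gaussian means) are spread evenly over `mean_range`. A neuron responds
    most strongly when the state value is near its preferred value.
    Returns a flat array of shape (len(state) * neurons_per_dim,) in [0, 1].
    """
    state = np.asarray(state, dtype=float)
    means = np.linspace(mean_range[0], mean_range[1], neurons_per_dim)
    # Gaussian response of every neuron in a dimension's population
    responses = np.exp(-0.5 * ((state[:, None] - means[None, :]) / std) ** 2)
    return responses.ravel()

def stimulation_to_spikes(stimulation, timesteps=5, seed=None):
    """Turn stimulation strengths into a Bernoulli {0,1} spike train over T steps."""
    rng = np.random.default_rng(seed)
    return (rng.random((timesteps, stimulation.size)) < stimulation).astype(np.int8)

# Example: a 2-D state becomes 20 stimulation values, then a (5, 20) spike train.
encoded = population_encode([0.0, 0.5])
spikes = stimulation_to_spikes(encoded, timesteps=5, seed=0)
```

Under this sketch, the downstream spiking actor network would consume `spikes` timestep by timestep; how SpikeRL decodes the output populations back into continuous actions is not described by the abstract and is omitted here.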
General Meta Tags (12)
- title: Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks | IEEE Conference Publication | IEEE Xplore
- google-site-verification: qibYCgIKpiVF_VVjPYutgStwKn-0-KBB6Gw4Fc57FZg
- Description: By some metrics, spiking neural networks (SNNs) are still far from performing equally well as some of the other artificial neural networks for continuous contro
- Content-Type: text/html; charset=utf-8
- viewport: width=device-width, initial-scale=1.0
Open Graph Meta Tags (3)
- og:image: https://ieeexplore.ieee.org/assets/img/ieee_logo_smedia_200X200.png
- og:title: Towards Scalable and Efficient Spiking Reinforcement Learning for Continuous Control Tasks
- og:description: (the paper abstract, identical to the Search Engine Appearance text above)
Twitter Meta Tags (1)
- twitter:card: summary
Link Tags (9)
- canonical: https://ieeexplore.ieee.org/document/10766542/
- icon: /assets/img/favicon.ico
- stylesheet: https://ieeexplore.ieee.org/assets/css/osano-cookie-consent-xplore.css
- stylesheet: /assets/css/simplePassMeter.min.css?cv=20250701_00000
- stylesheet: /assets/dist/ng-new/styles.css?cv=20250701_00000
Links (17)
- http://www.ieee.org/about/help/security_privacy.html
- http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html
- https://dx.doi.org/Xplorehelp
- https://dx.doi.org/Xplorehelp/overview-of-ieee-xplore/about-ieee-xplore
- https://dx.doi.org/Xplorehelp/overview-of-ieee-xplore/accessibility-statement